
AI Reads Politics: LLMs Match Experts on Party Positions and Coalitions

Tags: Machine Learning, Natural Language Understanding, Text Analysis, Political Methodology, Party Positions, Government Formation, Methodology@AJPS, Datasets, Dataverse

Why This Matters

Political scientists rely on reading political texts to infer party positions, policy priorities, and likely coalition behavior. Human coding, however, is costly and slow, while automated text-as-data methods rest on strong assumptions. Kenneth Benoit, Scott De Marchi, Conor Laver, Michael Laver, and Jinshuai Ma test whether large language models (LLMs) can provide a scalable, interpretable middle ground by performing Natural Language Understanding (NLU) on political documents.

What the Authors Do

The authors develop a systematic, replicable workflow that uses ensembles of LLM outputs to interpret political texts as meaningful statements about actors and issues, rather than treating text purely as token counts. They apply this method to estimate party positions on six key issue dimensions and to classify the content of coalition policy declarations.

How the Test Was Set Up

  • LLM-generated position estimates were aggregated into ensemble means for each party and issue dimension.
  • Those ensemble estimates were compared to mean ratings provided by country specialists (human experts) to assess convergent validity.
  • LLM-derived readings of coalition policy declarations were also compared with hand-coded declarations to evaluate how well each approach aligns with standard models of government formation.
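The first two steps above can be sketched in code. This is an illustrative reconstruction, not the authors' replication code: the data values, the `("party", "dimension")` keying, and the use of Pearson correlation as the convergent-validity measure are assumptions made for the example.

```python
from statistics import mean

def ensemble_means(llm_runs):
    """Average repeated LLM estimates per (party, dimension) key."""
    return {key: mean(vals) for key, vals in llm_runs.items()}

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

# Hypothetical inputs: several LLM runs per party on one issue dimension,
# and the corresponding mean ratings from country specialists.
llm_runs = {
    ("Party A", "econ"): [3.1, 2.9, 3.0],
    ("Party B", "econ"): [7.8, 8.1, 8.0],
    ("Party C", "econ"): [5.2, 5.0, 4.9],
}
expert_means = {
    ("Party A", "econ"): 3.2,
    ("Party B", "econ"): 7.9,
    ("Party C", "econ"): 5.1,
}

ens = ensemble_means(llm_runs)
keys = sorted(ens)
r = pearson_r([ens[k] for k in keys], [expert_means[k] for k in keys])
print(round(r, 3))
```

A high `r` between the ensemble means and the expert means is what the paper reports as evidence of convergent validity; the actual study does this across multiple parties, countries, and six issue dimensions.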

Key Findings

  • Ensemble means of LLM-generated estimates correlate highly with equivalent mean ratings from country specialists, indicating strong agreement with expert human judgment.
  • When applied to coalition policy declarations, LLM-based estimates align more closely with standard models of government formation than hand-coded estimates do, suggesting improved external validity for certain political outcomes.
  • The proposed LLM workflow is presented as scalable and replicable, offering a practical extension to both qualitative NLU and quantitative text-as-data approaches.

Implications for Political Science

Benoit et al. argue that modern LLMs can reduce the trade-off between the depth of human qualitative coding and the scalability of statistical text methods. Their results suggest LLMs can reliably approximate expert judgment and may improve measurement for research on party positions and coalition formation. The paper concludes with a discussion of methodological opportunities and cautionary notes about limitations and future validation needs.

Using Large Language Models to Analyze Political Texts through Natural Language Understanding was authored by Kenneth Benoit, Scott De Marchi, Conor Laver, Michael Laver, and Jinshuai Ma. It was published in the American Journal of Political Science (AJPS, Wiley) in 2025.