Insights from the Field

There’s No Magic Formula: Expert Reliability Falls Flat Across Many Characteristics


expert reliability
latent traits estimation
ordinal IRT model
coder-level data
Methodology
R&P
1 PDF file
2 archives
Dataverse
"What Makes Experts Reliable? Expert Reliability and the Estimation of Latent Traits" was authored by Kyle Marquardt, Daniel Pemstein, Brigitte Seim, and Yi-ting Wang. It was published by Sage in Research & Politics (R&P) in 2019.

Coding latent concepts in political science often relies on expert judgment, yet scholars have not thoroughly examined which expert traits affect coding reliability. This study tests a template for doing so, using coder-level data for six variables from a cross-national panel dataset. It aggregates these data with an ordinal item response theory (IRT) model that explicitly estimates each expert's reliability.
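To fix ideas, a generic ordinal (probit) IRT specification with expert-specific reliability parameters looks roughly like the following. This is a standard sketch of the model class, not necessarily the exact specification the authors use.

$$
\Pr(y_{ctr} = k) \;=\; \Phi\big(\gamma_{k} - \beta_r z_{ct}\big) \;-\; \Phi\big(\gamma_{k-1} - \beta_r z_{ct}\big),
$$

where $y_{ctr}$ is expert $r$'s ordinal rating of case $c$ at time $t$, $z_{ct}$ is the latent trait, the $\gamma_k$ are category thresholds (with $\gamma_0 = -\infty$ and $\gamma_K = +\infty$), and $\beta_r$ is the expert-specific discrimination, interpretable as reliability: a larger $\beta_r$ means the expert's ratings track the latent trait more closely.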

Regressing these reliability estimates on expert demographic characteristics and coding-behavior patterns, the authors find minimal evidence that most traits, including gender, are linked to consistent differences in reliability. Intuitive factors such as contextual knowledge improve performance slightly, but the largely null findings suggest that expert demographics alone are weak predictors of coding quality.
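In equation form, this second stage amounts to regressing the estimated reliabilities on expert attributes. The covariate vector here is an illustrative placeholder (e.g., gender, contextual knowledge, coding behavior), not the paper's full specification.

$$
\hat{\beta}_r \;=\; \alpha + \mathbf{x}_r^{\top}\boldsymbol{\delta} + \varepsilon_r,
$$

where $\hat{\beta}_r$ is expert $r$'s estimated reliability from the IRT stage and $\mathbf{x}_r$ collects the demographic and coding-behavior covariates. The study's null results correspond to most elements of $\boldsymbol{\delta}$ being statistically indistinguishable from zero.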

These results reinforce item response theory models as a robust approach for aggregating expert-coded data across a range of political science contexts.

Data
Find on Google Scholar
Find on JSTOR
Find on Sage Journals
Research & Politics