
When List-Experiment Estimators Break Down: Small Errors, Big Biases

Subfield: Methodology

🔎 How the simulations tested list design and respondent error

Monte Carlo experiments vary list-experiment design choices and introduce small amounts of non-strategic respondent error to evaluate two estimators: the item count technique maximum likelihood estimator (ICT-MLE) and the simple difference-in-means. The simulations focus on conditions created by common best practices that reduce strategic misrepresentation and thus produce very few respondents at the extrema (choosing no items or all items).
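A minimal sketch of a Monte Carlo setup in this spirit: simulate a J-item list experiment, inject a small amount of non-strategic response error, and recover the sensitive-item prevalence with the difference-in-means. All parameter values and function names here are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_list_experiment(n=2000, J=4, prevalence=0.15, error_rate=0.02):
    """Return (treatment indicator, reported item counts). Hypothetical setup."""
    treated = rng.integers(0, 2, size=n).astype(bool)
    control_count = rng.binomial(J, 0.5, size=n)        # J innocuous control items
    sensitive = rng.random(n) < prevalence              # true sensitive trait
    true_count = control_count + (treated & sensitive)  # treatment list adds the item
    # Non-strategic error: a small fraction of counts shift by +/-1 at random.
    noise = rng.random(n) < error_rate
    shift = rng.choice([-1, 1], size=n)
    reported = np.clip(true_count + noise * shift, 0, J + 1)
    # Control respondents can report at most J items.
    reported[~treated] = np.minimum(reported[~treated], J)
    return treated, reported

treated, y = simulate_list_experiment()
# Difference-in-means estimate of sensitive-item prevalence.
diff_in_means = y[treated].mean() - y[~treated].mean()
```

Repeating this over many draws while varying `error_rate` and the list composition is the basic logic of the simulations: the difference-in-means degrades gracefully, whereas a likelihood-based ICT estimator also has to contend with near-empty extrema cells.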

📊 What the analysis compares

  • Estimators: ICT-MLE (item count technique) versus simple difference-in-means.
  • Manipulations: list composition and the frequency of extreme responses in the treatment group.
  • Errors: small, non-strategic respondent measurement error (not intentional misreporting).

📈 Key findings

  • Performance depends on list design: ICT-MLEโ€™s accuracy and stability are sensitive to how lists are constructed; difference-in-means is less list-dependent.
  • Both estimators are vulnerable to measurement error, but ICT-MLE suffers more severely because it relies on the โ€œno liarsโ€ identification assumption.
  • As the number of treatment-group respondents who report all items falls (a common outcome of best practices that deter strategic misreporting), the ICT-MLE's identification weakens and its numerical optimization becomes unstable, producing unreliable or extreme estimates.

โš ๏ธ When estimators break down

  • Sparse extrema (very few respondents reporting no/all items) undermine both identification and numerical estimation for ICT-MLE.
  • Even small, non-strategic errors can induce substantial bias and instability in ICT-MLE estimates; difference-in-means is affected too but to a lesser degree.

๐Ÿ› ๏ธ Practical guidance and next steps

  • Results indicate the need for careful list design and assessment of extrema frequencies before relying on ICT-MLE.
  • Applied researchers should check sensitivity to measurement error and consider simpler estimators when extrema are rare.
  • Directions for future research include methodological adjustments to ICT estimators and diagnostic tools to detect when list-experiment conditions make ICT-MLE unreliable.
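The extrema check recommended above can be sketched as a simple pre-estimation diagnostic: before fitting the ICT-MLE, tabulate how many treatment-group respondents sit at the floor (zero items) or ceiling (all J+1 items). The function name and return format are illustrative assumptions.

```python
import numpy as np

def extrema_report(treated, reported, J):
    """Count floor/ceiling responses in the treatment group of a J-item list."""
    t = np.asarray(reported)[np.asarray(treated, dtype=bool)]
    floor = int((t == 0).sum())
    ceiling = int((t == J + 1).sum())  # ceiling responses reveal the sensitive item
    return {"n_treated": len(t), "floor": floor, "ceiling": ceiling}
```

If `ceiling` is at or near zero, which is exactly the situation best practices encourage, the "no liars" likelihood carries little information at the boundary, and a simpler difference-in-means may be the safer choice.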

Why this matters: Common survey practices intended to limit strategic lying can unintentionally create conditions that invalidate or destabilize the ICT-MLE, and reanalyses of real-world applications show that these problems do occur in practice.

List Experiment Design, Non-Strategic Respondent Error, and Item Count Technique Estimators was authored by John S. Ahlquist and published by Cambridge University Press in Political Analysis in 2018.
Find on Google Scholar
Find on JSTOR
Find on CUP