
📊 How the simulations tested list design and respondent error
Monte Carlo experiments vary list-experiment design choices and introduce small amounts of non-strategic respondent error to evaluate two estimators: the item count technique maximum likelihood estimator (ICT-MLE) and the simple difference-in-means. The simulations focus on conditions created by common best practices that reduce strategic misrepresentation and thus produce very few respondents at the extrema (choosing no items or all items).
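A minimal sketch of one such Monte Carlo draw, under an illustrative data-generating process (the function name, Binomial control counts, and all parameter values are assumptions for illustration, not the paper's exact design): control-item counts are drawn, treated respondents holding the sensitive trait add one item, and a small share of respondents make a non-strategic ±1 counting error.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_list_experiment(n=2000, j=4, prevalence=0.15, error_rate=0.02):
    """One simulated list experiment: J control items plus one sensitive item.

    Hypothetical setup: control counts ~ Binomial(J, 0.5); treated respondents
    who hold the sensitive trait endorse one extra item; a small fraction of
    respondents make a non-strategic +/-1 counting error (kept within bounds).
    """
    treat = rng.integers(0, 2, size=n)              # random assignment to treatment
    control_count = rng.binomial(j, 0.5, size=n)    # control items endorsed
    sensitive = (rng.random(n) < prevalence).astype(int)
    y = control_count + treat * sensitive           # observed item count
    err = (rng.random(n) < error_rate).astype(int)  # non-strategic mis-count
    y = np.clip(y + err * rng.choice([-1, 1], size=n), 0, j + treat)
    return treat, y

treat, y = simulate_list_experiment()
```

Varying `prevalence`, `error_rate`, and the control-count distribution across repeated draws is what lets the simulations map out when each estimator holds up.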
🔍 What the analysis compares
- Estimators: ICT-MLE (item count technique) versus simple difference-in-means.
- Manipulations: list composition and the frequency of extreme responses in the treatment group.
- Errors: small, non-strategic respondent measurement error (not intentional misreporting).
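The simpler of the two estimators above is just the treatment–control gap in mean item counts: because only treated respondents see the sensitive item, that gap estimates the prevalence of the sensitive trait. A minimal sketch with toy data (values illustrative):

```python
import numpy as np

def diff_in_means(treat, y):
    """Prevalence estimate: mean item count under treatment minus under control."""
    treat, y = np.asarray(treat), np.asarray(y)
    return y[treat == 1].mean() - y[treat == 0].mean()

# toy example: control counts average 2.0, treated counts average 2.5
treat = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y     = np.array([2, 1, 3, 2, 2, 3, 3, 2])
est = diff_in_means(treat, y)  # 0.5
```

The ICT-MLE instead models the full joint distribution of item counts, which is why it can be more efficient but also depends on stronger assumptions.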
📌 Key findings
- Performance depends on list design: ICT-MLE's accuracy and stability are sensitive to how lists are constructed; difference-in-means is less list-dependent.
- Both estimators are vulnerable to measurement error, but ICT-MLE suffers more severely because it relies on the "no liars" identification assumption.
- As the number of treatment-group respondents who report all items falls (a common outcome of best practices that deter strategic misreporting), ICT-MLE identification and its numerical optimization become difficult to sustain, and estimation can fail to converge or produce extreme estimates.
⚠️ When estimators break down
- Sparse extrema (very few respondents reporting no/all items) undermine both identification and numerical estimation for ICT-MLE.
- Even small, non-strategic errors can induce substantial bias and instability in ICT-MLE estimates; difference-in-means is affected too but to a lesser degree.
🛠️ Practical guidance and next steps
- Results indicate the need for careful list design and assessment of extrema frequencies before relying on ICT-MLE.
- Applied researchers should check sensitivity to measurement error and consider simpler estimators when extrema are rare.
- Directions for future research include methodological adjustments to ICT estimators and diagnostic tools to detect when list-experiment conditions make ICT-MLE unreliable.
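One simple diagnostic in the spirit of the guidance above is to count treated respondents at the extrema before fitting anything. The helper below is a hypothetical sketch (name and output format are assumptions): with J control items, treated counts of 0 and J + 1 are the cells the ICT-MLE leans on.

```python
import numpy as np

def extrema_report(treat, y, j):
    """Count treated respondents at the floor (0 items) and ceiling (J + 1 items).

    ICT-MLE identification leans on these cells; when the ceiling cell is
    (near-)empty, a simpler estimator such as the difference in means may
    be the safer choice.
    """
    yt = np.asarray(y)[np.asarray(treat) == 1]
    return {"floor": int((yt == 0).sum()),
            "ceiling": int((yt == j + 1).sum()),
            "n_treated": int(yt.size)}

# toy example with J = 4 control items
report = extrema_report([1, 1, 1, 1], [0, 2, 5, 3], j=4)
# report == {"floor": 1, "ceiling": 1, "n_treated": 4}
```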
Why this matters: Common survey practices intended to limit strategic lying can unintentionally create conditions that invalidate or destabilize the ICT-MLE, and real-world applications show that these problems do occur.