Sensitive survey topics are hard to study because truthful responses cannot be directly observed when some respondents misreport. This paper introduces a statistical method that makes it possible to identify respondents who report the sensitive item in a list experiment but answer differently when asked the same question directly.
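To fix ideas, a list experiment estimates the prevalence of a sensitive trait from the difference in mean item counts between a treatment group (whose list includes the sensitive item) and a control group (whose list does not). A minimal sketch of that standard estimator, using made-up counts:

```python
import numpy as np

# Toy list-experiment data (all numbers are illustrative).
# Control respondents see J = 4 innocuous items; treatment respondents
# see the same 4 items plus the sensitive one. Each value is the number
# of items a respondent says apply to them.
control   = np.array([2, 1, 3, 2, 1, 2, 3, 0])
treatment = np.array([2, 2, 3, 2, 1, 3, 3, 0])

# Standard difference-in-means estimate of sensitive-trait prevalence.
prevalence = treatment.mean() - control.mean()
print(f"Estimated prevalence: {prevalence:.2f}")  # 0.25 with these counts
```

Because respondents report only a count, no individual's answer to the sensitive item is revealed, which is what makes truthful reporting more likely than under direct questioning.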
🧭 How misreporting is identified
A method is developed to model discrepancies between indirect (list experiment) responses and direct-question responses within a multivariate regression framework. Key features include:
- Models whether a respondent reports the sensitive item in the list experiment but denies or changes that response on a direct question
- Incorporates covariates in a regression setting so that misreporting can be predicted while controlling for other factors (a simplified likelihood sketch follows this list)
- Designed for use with typical list-experiment designs and standard survey covariates
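A stylized version of such a joint model can be sketched as follows: a latent sensitive trait depends on covariates, respondents who hold the trait may misreport on the direct question with a covariate-dependent probability, and the list count mixes the trait with a binomial count of control items. This is a simplified illustration under those assumptions, not the paper's exact estimator; all names (`negloglik`, `gamma`, `delta`) and the simulated data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import binom

rng = np.random.default_rng(0)

# --- Simulate a stylized list experiment plus a direct question ---
n, J = 5000, 4                       # respondents; control items on the list
x = rng.normal(size=n)               # one covariate (e.g. an ideology score)
X = np.column_stack([np.ones(n), x])
T = rng.integers(0, 2, size=n)       # 1 if the sensitive item is on the list

gamma_true = np.array([-0.5, 0.8])   # latent sensitive trait ~ covariates
delta_true = np.array([-1.0, 1.2])   # misreporting (given trait) ~ covariates

z = rng.binomial(1, expit(X @ gamma_true))      # true sensitive status
u = rng.binomial(1, expit(X @ delta_true)) * z  # 1 if respondent misreports
c = rng.binomial(J, 0.5, size=n)                # count of control items held
y = c + T * z                                   # observed list count
d = z * (1 - u)                                 # observed direct answer

# --- Joint log-likelihood, marginalizing over the latent trait z ---
def negloglik(theta):
    g, dl, p = theta[:2], theta[2:4], expit(theta[4])
    pz = expit(X @ g)                        # P(z = 1 | x)
    pu = expit(X @ dl)                       # P(misreport | z = 1, x)
    pd_z1 = np.where(d == 1, 1 - pu, pu)     # direct answer given z = 1
    pd_z0 = np.where(d == 1, 0.0, 1.0)       # no false confessions assumed
    py_z1 = binom.pmf(y - T, J, p)           # list count given z = 1
    py_z0 = binom.pmf(y, J, p)               # list count given z = 0
    lik = pz * pd_z1 * py_z1 + (1 - pz) * pd_z0 * py_z0
    return -np.sum(np.log(np.clip(lik, 1e-300, None)))

fit = minimize(negloglik, np.zeros(5), method="BFGS")
print("gamma_hat:", fit.x[:2])   # should recover roughly (-0.5, 0.8)
print("delta_hat:", fit.x[2:4])  # should recover roughly (-1.0, 1.2)
```

The key design idea this sketch captures is that the direct answer and the list count are modeled jointly, so the same latent trait links both responses: a respondent whose list behavior makes the trait likely, yet who denies it directly, is probabilistically a misreporter.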
📊 Applied test: a large-scale list experiment on prejudice
The method is applied to an original, large-scale list experiment that compares indirect and direct measures of prejudiced beliefs. The empirical application asks whether respondents on the ideological left are more likely than those on the right to misreport their answers about prejudice; a simple descriptive comparison of the two measures by group is sketched below.
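As a purely descriptive companion to the model, one can compare the list-experiment estimate with the direct-question rate within each ideological group; a larger list-minus-direct gap is consistent with more underreporting in that group. The column names and numbers below are invented solely to show the computation:

```python
import pandas as pd

# Hypothetical survey extract: 'y' is the list count, 'treat' flags the
# treatment list, 'direct' is the direct-question answer (0/1).
df = pd.DataFrame({
    "treat":    [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1],
    "y":        [2, 2, 2, 2, 3, 3, 2, 1, 2, 2, 2, 3],
    "direct":   [0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1],
    "ideology": ["left"] * 6 + ["right"] * 6,
})

for side, g in df.groupby("ideology"):
    list_est = g.loc[g.treat == 1, "y"].mean() - g.loc[g.treat == 0, "y"].mean()
    direct_est = g["direct"].mean()
    # A positive gap suggests underreporting on the direct question.
    print(f"{side:>5}: list={list_est:.2f}  direct={direct_est:.2f}  "
          f"gap={list_est - direct_est:.2f}")
```

Unlike the joint model above, this comparison cannot control for covariates or identify which respondents misreport; that is the gap the regression framework fills.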
🛠️ Tools for researchers
The estimation approach is released as open-source software so that other researchers can apply the method to similar survey designs.
🔎 Why it matters
This approach addresses a core measurement problem in surveys of sensitive attitudes: distinguishing truthful reporting from misreporting within standard regression analyses. That ability enables sharper tests of who conceals stigmatized beliefs and improves inference about attitudes, beliefs, and behaviors that are vulnerable to social desirability bias.