Surveys often use list experiments to reduce strategic misreporting on sensitive items, but this technique can introduce other kinds of error. This study provides the first empirical test of the trade-off between strategic and nonstrategic misreporting in practice, using validated measures of true turnout.
📊 How turnout was measured and compared
The authors fielded list experiments on election turnout in two countries while also collecting independent, validated measures of each respondent’s true turnout. These true scores enable a direct comparison of list-experiment responses and direct-question responses against the ground truth.
🔍 How partition validation identifies reporting errors
The study details and applies a partition validation method that uses true scores to classify list-experiment responses into four categories: true positives, false positives, true negatives, and false negatives. This classification isolates nonstrategic reporting errors that standard diagnostics miss.
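The core logic of partition validation can be illustrated with simulated data. The sketch below is not the paper's implementation; it assumes hypothetical error rates and a standard difference-in-means list-experiment estimator, and shows how splitting the sample by true turnout exposes false positives (a nonzero estimate among true nonvoters) and false negatives (an estimate below one among true voters).

```python
import random

random.seed(0)

def diff_in_means(rows):
    """Standard list-experiment estimator: mean reported item count
    in the treatment group minus that in the control group."""
    t = [r["count"] for r in rows if r["treated"]]
    c = [r["count"] for r in rows if not r["treated"]]
    return sum(t) / len(t) - sum(c) / len(c)

# --- synthetic respondents (hypothetical rates, for illustration only) ---
rows = []
for _ in range(20000):
    true_vote = random.random() < 0.6          # assumed true turnout rate
    controls = sum(random.random() < 0.5 for _ in range(3))  # 3 control items
    treated = random.random() < 0.5            # random assignment
    # Nonstrategic misreporting (assumed): true voters omit the turnout
    # item 10% of the time; true nonvoters add it 15% of the time.
    reports_item = (random.random() < 0.90) if true_vote else (random.random() < 0.15)
    count = controls + (1 if (treated and reports_item) else 0)
    rows.append({"true": true_vote, "treated": treated, "count": count})

# Partition validation: run the estimator separately within each
# true-score partition. Absent reporting error, the estimate should be
# ~1 among true voters and ~0 among true nonvoters.
voters = [r for r in rows if r["true"]]
nonvoters = [r for r in rows if not r["true"]]
est_voters = diff_in_means(voters)       # ~0.90 -> ~10% false negatives
est_nonvoters = diff_in_means(nonvoters) # ~0.15 -> ~15% false positives
print(f"true-voter estimate:    {est_voters:.3f}")
print(f"true-nonvoter estimate: {est_nonvoters:.3f}")
```

Note that a conventional whole-sample estimate can look plausible even when both error types are present, because false positives and false negatives partially offset; the partition step is what makes them separately visible.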
📈 Key findings
- For both list experiments, partition validation uncovers nonstrategic misreporting that standard diagnostics and conventional validation fail to detect.
- The magnitude of nonstrategic error is larger than assumed in existing simulation studies.
- The nonstrategic errors are large enough that, despite strategic misreporting on direct turnout questions, direct questions show lower overall reporting error than the list experiments.
🧭 Why this matters
These results show that the choice between list experiments and direct questions depends on the balance between strategic and nonstrategic errors for a given topic and survey context. When true scores are available, the partition validation approach gives researchers a practical tool for assessing that trade-off and making better-informed survey design choices.