
Published findings can be compromised when researchers fail to report and adjust for multiple testing. Preregistration and preanalysis plans are proposed remedies, but critics worry these rules could hinder inductive learning. Without clear knowledge of how often researchers underreport design features, it is difficult to weigh the costs and benefits of such institutional reforms.
🔍 What Was Compared:
The study examines published survey experiments run through the Time-sharing Experiments in the Social Sciences (TESS) program. Because TESS questionnaires are publicly available, the planned design features recorded in each questionnaire can be compared against what appears in the corresponding published paper.
📊 Key Findings:
A substantial share of published papers report fewer experimental conditions and fewer outcome variables than their TESS questionnaires contain, indicating widespread selective underreporting of design features.
⚠️ Why It Matters:
These rates of underreporting indicate that published statistical tests likely understate the probability of Type I errors. The evidence quantifies the extent of selective reporting in survey experiments and thereby informs the debate over preregistration and preanalysis plans: without knowing how often design features go unreported, assessing the trade-offs of reforms that limit post hoc exploration remains difficult.
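The Type I error inflation described above can be made concrete with a short simulation. The sketch below is illustrative and not from the paper: it assumes a researcher measures several outcomes under a true null, runs an uncorrected test on each at the nominal 5% level, and reports a "finding" if any test is significant. All function names here are hypothetical.

```python
import random

def any_significant(n_outcomes, alpha=0.05, rng=random):
    """Simulate one null experiment with several measured outcomes.

    Under the null hypothesis, each p-value is uniform on [0, 1],
    so each test 'succeeds' with probability alpha. Returns True if
    at least one uncorrected test comes out significant.
    """
    return any(rng.random() < alpha for _ in range(n_outcomes))

def false_positive_rate(n_outcomes, trials=200_000, seed=0):
    """Estimate the chance of reporting at least one false positive."""
    rng = random.Random(seed)
    hits = sum(any_significant(n_outcomes, rng=rng) for _ in range(trials))
    return hits / trials

# One outcome: the nominal 5% rate holds.
# Five outcomes with only the 'best' result reported: the effective
# rate rises toward 1 - 0.95**5, roughly 22.6%.
print(false_positive_rate(1))
print(false_positive_rate(5))
```

The simulated rates closely track the analytic value 1 - (1 - alpha)^k, which is why a paper that silently drops non-significant outcomes understates its true false-positive risk.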

| Underreporting in Political Science Survey Experiments: Comparing Questionnaires to Published Results was authored by Annie Franco, Neil Malhotra, and Gabor Simonovits. It was published by Cambridge University Press in Political Analysis in 2015. |
