Experiments are often treated as a straightforward way to estimate causal effects, but experimental analyses frequently go wrong. Scholars can distort treatment effect estimates by conditioning on posttreatment variables, that is, variables measured after (and potentially affected by) the treatment.
What scholars do wrong:
* Controlling for posttreatment variables in statistical models
* Eliminating observations based on posttreatment criteria
* Subsetting data using posttreatment measures
The problem:
These practices can bias treatment effect estimates, because randomization guarantees balance only on pretreatment characteristics; conditioning on a variable affected by the treatment reintroduces confounding. The paper shows this isn't a minor issue; posttreatment conditioning is widespread, appearing frequently even in top political science journals.
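The mechanism can be illustrated with a small simulation. In the hypothetical data-generating process below (my own example, not from the paper), treatment `T` is randomized and has a true effect of 1.0 on outcome `Y`, while a posttreatment variable `M` is affected both by `T` and by an unobserved factor `U` that also affects `Y`. The simple difference in means recovers the true effect; "controlling for" `M` does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process; the true treatment effect is 1.0.
U = rng.normal(size=n)             # unobserved factor affecting both M and Y
T = rng.integers(0, 2, size=n)     # randomized binary treatment
M = T + U + rng.normal(size=n)     # posttreatment variable (affected by T)
Y = 1.0 * T + U + rng.normal(size=n)

# Correct analysis: simple difference in means, unbiased under randomization.
naive = Y[T == 1].mean() - Y[T == 0].mean()

# Flawed analysis: regressing Y on T while "controlling for" posttreatment M.
X = np.column_stack([np.ones(n), T, M])
coefs, *_ = np.linalg.lstsq(X, Y, rcond=None)
adjusted = coefs[1]

print(f"difference in means:      {naive:.2f}")     # close to the true 1.0
print(f"posttreatment-adjusted:   {adjusted:.2f}")  # biased (about 0.5 here)
```

Conditioning on `M` opens a backdoor path between `T` and `U`, so the adjusted coefficient mixes the treatment effect with the influence of the unobserved factor; in this setup it lands near 0.5 instead of 1.0.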
How we found out:
The authors demonstrate the severity of the problem analytically and document the magnitude of the potential distortions using visualizations and reanalyses of real experimental data.
What you should do instead:
The paper concludes with practical recommendations for best practice in the design and analysis of experiments that avoid conditioning on posttreatment variables.