
Why Method Evaluation Matters
Jeff Harden, Anand Sokhey, and Hannah Wilson address a growing practical problem in quantitative political science: how to assess whether a new statistical or computational method actually advances knowledge rather than producing misleading results in particular settings. As methods proliferate, simple replications that mechanically rerun code are necessary but not sufficient for judging whether a method is appropriate across different substantive questions, data environments, and researcher choices.
What the Framework Does
The authors propose a clear, principled way to evaluate new quantitative methods that emphasizes context. Rather than treating replication as a single yes/no exercise, the framework treats replication as a set of tests about a method’s reliability, robustness, and substantive fit—asking when, why, and for which research tasks a method can be trusted.
How the Framework Works
The paper lays out concrete criteria and steps for evaluating how a method performs across contexts, rather than in a single replication exercise.
Evidence and Approach
Harden, Sokhey, and Wilson build their argument by synthesizing methodological literature and drawing illustrative examples from common quantitative workflows. The paper emphasizes practical tools—checklists, diagnostics, and reporting practices—that researchers, reviewers, and editors can adopt to make evaluations systematic and comparable.
Why This Matters for Political Science
This framework reframes replication from a single procedure into a disciplined, context-sensitive process that better captures the strengths and limits of new methods. It offers actionable guidance for authors introducing methods, for reviewers judging methodological claims, and for instructors training students in best practices—helping the field adopt innovations while safeguarding inferential credibility.

Replications in Context: A Framework for Evaluating New Methods in Quantitative Political Science, by Jeff Harden, Anand Sokhey, and Hannah Wilson, was published in Political Analysis (Cambridge University Press) in 2019.