
What Dressel and Farid Reported
Dressel and Farid (2018) collected recidivism predictions from Amazon Mechanical Turk participants and argued that these nonexperts can match algorithmic approaches in both predictive accuracy and fairness. Their claim has been widely read as casting doubt on the value of algorithmic recidivism prediction more broadly.
How the Same Data Was Reassessed
The reanalysis applies additional evaluation techniques to the original dataset from the Dressel and Farid study, comparing the outputs of statistical learning procedures against the MTurkers' assessments. It focuses on the quality of the predicted probabilities produced by the models versus the judgments provided by the nonexpert respondents.
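One standard way to evaluate the quality of predicted probabilities, rather than just hit rates, is a proper scoring rule such as the Brier score. The sketch below is purely illustrative (the data, variable names, and scoring choice are assumptions, not the paper's actual code or dataset); it shows why a model's graded probabilities can score better than yes/no judgments even when raw accuracy looks similar.

```python
import numpy as np

def brier_score(y_true, p_pred):
    """Mean squared error between binary outcomes and predicted probabilities.
    Lower is better; rewards both accuracy and honest uncertainty."""
    y_true = np.asarray(y_true, dtype=float)
    p_pred = np.asarray(p_pred, dtype=float)
    return float(np.mean((p_pred - y_true) ** 2))

# Hypothetical outcomes: 1 = recidivated, 0 = did not.
outcomes = np.array([1, 0, 1, 1, 0, 0, 1, 0])

# A statistical model emits graded probabilities; a yes/no human
# judgment is effectively a forecast of exactly 0.0 or 1.0.
model_probs = np.array([0.8, 0.3, 0.6, 0.7, 0.2, 0.4, 0.9, 0.1])
human_votes = np.array([1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0])

print(brier_score(outcomes, model_probs))  # 0.075
print(brier_score(outcomes, human_votes))  # 0.25
```

In this toy example the hard 0/1 forecasts pay a large penalty for each miss, while the graded probabilities absorb their errors more gracefully, which is the kind of difference a headline accuracy comparison can hide.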
Why This Matters
These reanalyses clarify that careful evaluation of probabilistic outputs can reveal advantages of statistical models that headline comparisons miss. For policy and research on recidivism prediction, comparing the full distributional and calibration properties of model predictions, not just raw accuracy, changes the inference about whether algorithms are meaningfully inferior to nonexperts.
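Calibration is one of the distributional properties mentioned above: among cases assigned, say, a 70% risk, roughly 70% should actually reoffend. A minimal binning sketch of a calibration check follows; the data and function name are hypothetical, offered only to make the concept concrete.

```python
import numpy as np

def calibration_table(y_true, p_pred, n_bins=5):
    """Group predictions into probability bins and compare the mean
    predicted probability with the observed outcome rate in each bin.
    A well-calibrated predictor has the two roughly equal per bin."""
    y_true = np.asarray(y_true, dtype=float)
    p_pred = np.asarray(p_pred, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Include the right edge only for the final bin.
        mask = (p_pred >= lo) & ((p_pred < hi) if hi < 1.0 else (p_pred <= hi))
        if mask.any():
            rows.append((float(p_pred[mask].mean()),
                         float(y_true[mask].mean()),
                         int(mask.sum())))
    return rows  # (mean predicted, observed rate, count) per non-empty bin

# Hypothetical data: slightly miscalibrated low- and high-risk groups.
outcomes = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 0])
probs = np.array([0.1] * 5 + [0.9] * 5)
for mean_pred, obs_rate, n in calibration_table(outcomes, probs):
    print(f"predicted {mean_pred:.2f} vs observed {obs_rate:.2f} (n={n})")
```

The same table can be computed for the nonexperts' judgments, making visible a gap in probability quality that a single accuracy number would obscure.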

"Can Non-Experts Really Emulate Statistical Learning Methods?" was authored by Kirk Bansak and published by Cambridge University Press in Political Analysis in 2019.
