
📰 What Was Tested
A large, representative sample (N = 5,750) of American respondents was exposed to fabricated videos of public officials synthesized with deep learning ("deepfakes") and to comparable misinformation in other formats (text headlines and audio). Two experiments used a novel collection of deepfakes, created in collaboration with tech-industry partners, to compare credibility judgments, affective reactions, and detection accuracy across media types.
🔬 How Exposure and Detection Were Measured
📊 Key Findings
💡 Why It Matters
Deepfakes are widely believable to many viewers, but they are not uniquely persuasive compared with other forms of misinformation. The strongest mitigators of mistaken belief are broader political and digital literacy rather than short informational prompts, and partisan bias shapes detection of authentic political video more than it shapes detection of fabricated video. Because these results come from two randomized experiments using industry-produced deepfakes, they carry direct implications for misinformation policy and media-literacy interventions.

| *Political Deepfakes Are as Credible as Other Fake Media and (Sometimes) Real Media* was authored by Soubhik Barari, Christopher Lucas, and Kevin Munger. It was published in the *Journal of Politics* (University of Chicago Press) in 2025. |