Recent advances in generative large language models raise immediate questions about how political science is taught, learned, and assessed. This article examines those questions by analyzing two novel surveys conducted through the discipline’s major professional body, the American Political Science Association (APSA), and closes with targeted recommendations.
🔎 How the Evidence Was Gathered
- Two novel surveys administered by APSA
- Surveys designed to capture experiences, attitudes, and reported practices related to generative large language models in political science education
📌 Topics Explored in the Surveys
- Teaching practices and classroom use of generative AI
- Student learning, skills, and interaction with large language models
- Academic assessment, grading, and concerns about academic integrity
🧾 What the Article Does
- Presents the full results of the two APSA surveys and analyzes patterns across responses
- Synthesizes implications for pedagogy, assessment design, and departmental policy
- Concludes with recommendations for instructors, departments, and professional associations on adapting teaching and assessment in the era of generative AI
⚖️ Why This Matters
- Findings inform immediate classroom decisions and longer-term curricular planning
- Results offer guidance for balancing pedagogical innovation with standards for academic integrity
- Evidence from a major disciplinary body helps shape collective responses across institutions and the profession