🔎 What's the measurement problem?
Multiple-choice and open-ended formats are both widely used to measure political knowledge. Conventional wisdom holds that multiple-choice items provoke guessing, which can lead to underestimated item-difficulty parameters and biased estimates of respondents' political knowledge.
🧩 How guessing is reconceptualized
The paper argues that a successful guess is not purely random but requires a level of knowledge that interacts with item difficulty: the probability of guessing correctly depends on both respondent knowledge and item characteristics, rather than being a flat chance rate shared by all respondents.
🛠️ What was modeled and how
The paper proposes a Bayesian item response theory (IRT) model that explicitly accommodates a guessing component in item responses. The model lets guessing vary with respondent knowledge and item features instead of treating guessing as uniform noise.
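To make the idea concrete, here is a minimal sketch of a conditional-guessing response probability. This is an illustrative assumption, not the paper's actual specification: the logistic forms, the parameter names `a`, `b`, `g0`, `g1`, and the decomposition into a "know" path and a "guess" path are all hypothetical, chosen only to show how a knowledge-dependent guessing term differs from the flat floor of a standard 3PL model.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def p_know(theta, a, b):
    """Probability of answering from actual knowledge (2PL core):
    theta = respondent knowledge, a = discrimination, b = difficulty."""
    return sigmoid(a * (theta - b))

def p_guess_success(theta, g0, g1):
    """Assumed form: the chance of guessing correctly rises with
    knowledge theta (g0, g1 are illustrative guessing parameters)."""
    return sigmoid(g0 + g1 * theta)

def p_correct(theta, a, b, g0, g1):
    """Correct = knows the answer, or doesn't know but guesses right."""
    pk = p_know(theta, a, b)
    return pk + (1.0 - pk) * p_guess_success(theta, g0, g1)

def guess_contribution(theta, a, b, g0, g1):
    """Share of correct responses attributable to successful guessing."""
    pk = p_know(theta, a, b)
    return (1.0 - pk) * p_guess_success(theta, g0, g1)
```

Under this toy parameterization, `guess_contribution` is largest at intermediate values of `theta`: barely informed respondents rarely guess correctly, well-informed respondents rarely need to guess, and partially informed respondents gain the most from guessing, which mirrors the pattern the model is designed to capture.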
📊 Applied to survey data from Taiwan
The model is fit to Taiwanese survey responses to political knowledge items. The analysis finds that the proposed model better captures guessing behavior by linking it to respondents' knowledge levels and item characteristics.
✨ Key findings
- Partially informed respondents are most likely to make a successful guess.
- Well-informed respondents rarely need to guess.
- Barely informed respondents are highly susceptible to attractive distractors and thus less likely to guess correctly.
- Accounting for guessing does not erase the gender gap: men remain more knowledgeable than women about political affairs, consistent with existing literature.
⚖️ Why it matters
Results imply that multiple-choice guessing is structured by knowledge and item difficulty, so measurement models that ignore this structure can misestimate item difficulty and respondent knowledge. Incorporating conditional guessing provides a clearer picture of who benefits from multiple-choice formats and how to interpret gender and other group differences in political knowledge.