When Subgroup Comparisons Mislead: The Problem With Conditional AMCEs
Topics: Insights from the Field · Conjoint · AMCE · Interactions · Marginal Means · F-test · Methodology · Political Analysis
Materials: 5 text files · 5 datasets · Dataverse
Measuring Subgroup Preferences in Conjoint Experiments was authored by Thomas Leeper, Sara B. Hobolt, and James Tilley. It was published by Cambridge University Press in Political Analysis in 2020.

๐Ÿ” What This Paper Shows

Conjoint analysis disentangles how the different features of multidimensional objects (such as candidates or policies) affect respondent support. The average marginal component effect (AMCE), estimated from fully randomized conjoint designs, has a clear causal interpretation: the average effect of a feature value relative to a baseline value. However, using conditional AMCEs to describe levels of subgroup favorability can be misleading: the regression interactions used to produce subgroup AMCEs are sensitive to the choice of reference category, so the resulting subgroup differences can have arbitrary sign, magnitude, and statistical significance.

🧾 How Preferences Are Typically Measured

  • Conjoint designs randomize profile features and estimate AMCEs to show how a given feature value increases or decreases support relative to a baseline.
  • AMCEs are averaged across respondents and other profile features and are commonly reported as both causal effects and descriptive levels of favorability.
  • Many published studies compare AMCEs across respondent subgroups by estimating conditional AMCEs via regression interactions.
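The estimation logic in the bullets above can be sketched with simulated data (the attribute name, effect size, and sample size here are invented for illustration): under full randomization, the AMCE of a feature level is simply the difference in mean support between profiles showing that level and profiles showing the baseline level.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fully randomized conjoint: one binary candidate
# attribute ("female" vs. the "male" baseline), binary support outcome.
n = 10_000
female = rng.integers(0, 2, size=n)  # randomized feature assignment
# True data-generating process (illustrative): the feature level
# raises support probability by 0.10 over a 0.45 base rate.
support = (rng.random(n) < 0.45 + 0.10 * female).astype(float)

# With full randomization, the AMCE is just a difference in means
# between the feature level and the baseline level.
amce = support[female == 1].mean() - support[female == 0].mean()
print(round(amce, 3))  # should land near the true effect of 0.10
```

In real applications the same quantity is usually obtained as a dummy-variable coefficient in a linear regression of the outcome on all feature levels, which reduces to this difference in means under full randomization.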

โš ๏ธ The Problem With Conditional AMCEs

  • Regression interaction estimates change with the choice of reference category in the analysis.
  • This sensitivity can produce subgroup comparisons with arbitrary sign, size, and statistical significance, undermining the descriptive claims researchers intend to make about subgroup agreement or disagreement.
  • The issue is demonstrated with examples drawn from published articles, showing how interpretations can vary solely because of coding choices rather than substantive differences.
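A stylized numerical example (all subgroup means are hypothetical) makes the problem concrete: both subgroups give level B identical support, yet the subgroup difference in conditional AMCEs for B, which is what a regression interaction coefficient estimates, flips between apparent disagreement and apparent agreement depending on which level serves as the baseline.

```python
# Hypothetical mean support for each attribute level within two
# respondent subgroups (population values, no sampling noise).
means = {
    "group0": {"A": 0.5, "B": 0.6, "C": 0.5},
    "group1": {"A": 0.2, "B": 0.6, "C": 0.5},
}

def amce_diff(level, baseline):
    """Difference in conditional AMCEs (group1 minus group0) for
    `level` relative to `baseline` -- the quantity a regression
    interaction coefficient estimates."""
    g0 = means["group0"][level] - means["group0"][baseline]
    g1 = means["group1"][level] - means["group1"][baseline]
    return g1 - g0

# Both subgroups give level B the same marginal mean (0.6), yet the
# estimated "subgroup difference" for B depends on the baseline:
print(round(amce_diff("B", baseline="A"), 2))  # 0.3: apparent disagreement
print(round(amce_diff("B", baseline="C"), 2))  # 0.0: apparent agreement
```

The subgroups genuinely differ only in how they rate level A; a baseline of A pushes that difference into every interaction term, which is why the marginal means themselves are the safer descriptive quantity.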

✅ Better Ways To Report and Test Subgroup Differences

  • Use marginal means (and their confidence intervals) to describe levels of favorability for subgroups rather than relying on interaction coefficients alone.
  • Use an omnibus F-test to assess whether profiles or AMCEs differ across subgroups before interpreting pairwise contrasts.
  • Report coding choices and conduct sensitivity checks to show robustness of subgroup comparisons.
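A minimal sketch of the omnibus idea, using simulated data and hand-rolled OLS rather than the authors' own software (all names, effect sizes, and sample sizes are illustrative): test all feature-by-subgroup interactions jointly instead of interpreting any single interaction coefficient. Unlike individual coefficients, the resulting F statistic does not depend on which level is chosen as the reference category.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated conjoint: an attribute with levels A/B/C, two respondent
# subgroups, binary support. Subgroup 1 rates level A lower by 0.15;
# otherwise the groups agree.
n = 6_000
levels = np.array(["A", "B", "C"])
feat = rng.choice(levels, size=n)
group = rng.integers(0, 2, size=n)
base_p = {"A": 0.4, "B": 0.5, "C": 0.5}
p = np.array([base_p[f] - 0.15 * (f == "A") * g for f, g in zip(feat, group)])
y = (rng.random(n) < p).astype(float)

# Marginal means by subgroup (here for level "A") describe favorability
# directly and involve no reference category at all.
mm_A = [y[(group == g) & (feat == "A")].mean() for g in (0, 1)]

def design(baseline):
    """Full interaction design matrix with `baseline` as reference:
    [intercept, dummy, dummy*group, dummy, dummy*group, group]."""
    cols = [np.ones(n)]
    for lev in levels:
        if lev == baseline:
            continue
        d = (feat == lev).astype(float)
        cols += [d, d * group]
    cols.append(group.astype(float))
    return np.column_stack(cols)

def rss(X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

def omnibus_F(baseline):
    """F-test that all feature-by-group interactions are jointly zero."""
    X_full = design(baseline)
    X_restr = X_full[:, [0, 1, 3, 5]]  # drop the two interaction columns
    q = X_full.shape[1] - X_restr.shape[1]
    df_resid = n - X_full.shape[1]
    return ((rss(X_restr) - rss(X_full)) / q) / (rss(X_full) / df_resid)

# The omnibus statistic is identical under either coding, even though
# the individual interaction coefficients are not.
print(round(omnibus_F("A"), 2), round(omnibus_F("C"), 2))
```

Because the full and restricted models span the same column spaces under either coding, the residual sums of squares, and hence the F statistic, are invariant to the reference category.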

📌 Why This Matters

  • Conjoint designs are increasingly popular in political science. Clear guidance on presentation and inference helps avoid misleading claims about subgroup preferences.
  • Best practices include emphasizing marginal means, reporting uncertainty, running omnibus tests for subgroup differences, and making coding decisions transparent so readers can judge the stability of reported subgroup effects.