🔍 The Challenge
When samples are small or events are rare, an explanatory variable can perfectly predict the binary outcome, a problem known as separation. In those situations, maximum likelihood estimation yields implausible or undefined estimates, and researchers often turn to priors to stabilize inference.
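To make the failure concrete, here is a minimal, self-contained toy sketch (the data are invented for illustration and are not from the paper): when the sign of a single predictor perfectly sorts the outcomes, the logistic log-likelihood keeps improving as the slope grows, so no finite maximum likelihood estimate exists.

```python
# Toy illustration of complete separation: the log-likelihood never stops
# improving as the slope grows, so the MLE is not finite.
import numpy as np

x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([0, 0, 0, 1, 1, 1])           # x > 0 perfectly predicts y = 1
X = np.column_stack([np.ones_like(x), x])  # intercept + slope

def log_lik(beta):
    eta = X @ beta
    # logistic log-likelihood, written with logaddexp to avoid overflow
    return float(y @ eta - np.sum(np.logaddexp(0.0, eta)))

# Larger and larger slopes keep raising the log-likelihood toward 0:
for slope in (1.0, 5.0, 20.0, 100.0):
    print(slope, log_lik(np.array([0.0, slope])))
```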
🧠 Common Fix and Its Limits
Jeffreys’ invariant prior is widely used because it is automatic and stabilizes estimates. However, Jeffreys’ prior can inject more information than intended: it frequently produces smaller point estimates and narrower confidence intervals than even highly skeptical priors.
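For context, the sketch below shows one standard way such a prior stabilizes the fit: maximizing the Jeffreys-penalized (Firth-type) log-likelihood, log L(beta) + 0.5 * log|I(beta)|, on the same kind of separated toy data. The data and optimizer settings are illustrative assumptions, not the paper's setup.

```python
# Jeffreys-prior (Firth-type) penalized logistic regression on separated
# toy data: maximize log L(beta) + 0.5 * log|I(beta)|, I(beta) = X' W X.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([0, 0, 0, 1, 1, 1])           # perfectly separated toy data
X = np.column_stack([np.ones_like(x), x])

def neg_penalized_log_lik(beta):
    eta = X @ beta
    p = expit(eta)
    log_lik = y @ eta - np.sum(np.logaddexp(0.0, eta))
    W = p * (1.0 - p)                        # logistic variance weights
    info = X.T @ (W[:, None] * X)            # Fisher information X' W X
    _, logdet = np.linalg.slogdet(info)
    return -(log_lik + 0.5 * logdet)         # Jeffreys penalty = 0.5 log|I|

fit = minimize(neg_penalized_log_lik, x0=np.zeros(2), method="BFGS")
print(fit.x)  # finite intercept and slope despite the separation
```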
🛠️ A New Diagnostic: Partial Prior Distribution
To help assess how much information a prior contributes, the concept of a partial prior distribution is introduced. This concept and accompanying tools make it possible to:
- Compute the partial prior distribution of quantities of interest
- Re-estimate the logistic model with that partial prior in place
- Summarize how much the prior shifts estimates and uncertainty (a rough sketch of the overall idea follows this list)
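The paper supplies the exact definition and tooling; as a purely hypothetical sketch of the general idea, one could draw the coefficient of interest from each candidate prior, hold the remaining coefficients at fixed values, and summarize the implied (partial) prior for a quantity of interest such as a change in predicted probability. Every number below (the fixed intercept, the candidate priors, the x = 0 versus x = 1 comparison) is an assumption for illustration only.

```python
# Hypothetical sketch: the prior on one coefficient implies a prior on a
# quantity of interest; summarizing that implied distribution shows how much
# information each candidate prior contributes.
import numpy as np

rng = np.random.default_rng(0)

beta0 = -0.5  # illustrative fixed value for the intercept
candidate_priors = {
    "skeptical N(0, 1)": rng.normal(0.0, 1.0, size=100_000),
    "diffuse   N(0, 10)": rng.normal(0.0, 10.0, size=100_000),
}

def inv_logit(eta):
    return 1.0 / (1.0 + np.exp(-eta))

# Quantity of interest: change in Pr(y = 1) when x moves from 0 to 1.
for name, beta1_draws in candidate_priors.items():
    qi = inv_logit(beta0 + beta1_draws) - inv_logit(beta0)
    lo, med, hi = np.percentile(qi, [5, 50, 95])
    print(f"{name}: median {med:+.2f}, 90% interval [{lo:+.2f}, {hi:+.2f}]")
```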
📌 Why It Matters
The partial prior distribution gives a practical way to measure and communicate the amount of prior information injected into separated logistic regressions. This allows researchers to judge whether an "automatic" prior like Jeffreys’ is overly informative for a given application and to compare its influence with more skeptical priors.