
Why Long-Run Multipliers Are Hard to Trust
Many empirical projects aim to estimate long-run multipliers (LRMs): the total effect of a change in an independent variable on a dynamically evolving dependent variable. But when outcomes follow autoregressive or integrated processes, conventional inference about LRMs becomes fragile: short time series and ambiguous tests for stationarity make standard confidence intervals unreliable and leave long-run relationships unclear.
A Bayesian Bounded-Prior Solution
Mark Nieman and David Peterson propose a practical Bayesian framework that directly targets the LRM and its uncertainty. Their approach places a bounded prior on the autoregressive coefficient on the lagged dependent variable, constraining the dynamic parameter to a plausible range that accommodates both stationary and integrated series. Posterior draws are then transformed into the LRM, yielding a credible region that reflects uncertainty about dynamics even when samples are short.
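The core transformation is easy to illustrate. In a minimal ADL(1,0) model, y_t = a·y_{t-1} + b·x_t + e_t, the LRM is b/(1−a), so each posterior draw of (a, b) maps directly to a draw of the LRM. The sketch below is a hypothetical toy version, not the authors' code: it assumes a known error standard deviation, a uniform bounded prior on the autoregressive coefficient (a in [0, 1)), a flat prior on b, and a simple random-walk Metropolis sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a short ADL(1,0) series: y_t = a*y[t-1] + b*x_t + e_t
T, a_true, b_true, sigma = 60, 0.8, 1.0, 0.5
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = a_true * y[t - 1] + b_true * x[t] + rng.normal(scale=sigma)

y_lag, y_cur, x_cur = y[:-1], y[1:], x[1:]

def log_post(a, b):
    # Bounded (uniform) prior on the AR coefficient: a must lie in [0, 1).
    # This accommodates near-integrated dynamics while keeping the LRM finite.
    if not (0.0 <= a < 1.0):
        return -np.inf
    resid = y_cur - a * y_lag - b * x_cur
    return -0.5 * np.sum(resid**2) / sigma**2  # flat prior on b

# Random-walk Metropolis over (a, b)
draws, (a, b) = [], (0.5, 0.0)
lp = log_post(a, b)
for _ in range(20000):
    a_new, b_new = a + 0.05 * rng.normal(), b + 0.05 * rng.normal()
    lp_new = log_post(a_new, b_new)
    if np.log(rng.uniform()) < lp_new - lp:
        a, b, lp = a_new, b_new, lp_new
    draws.append((a, b))

draws = np.array(draws[5000:])           # drop burn-in
lrm = draws[:, 1] / (1.0 - draws[:, 0])  # transform each draw: LRM = b/(1-a)
lo, hi = np.percentile(lrm, [2.5, 97.5])
print(f"posterior median LRM: {np.median(lrm):.2f}, 95% CI: [{lo:.2f}, {hi:.2f}]")
```

Because the credible interval is computed on the transformed draws themselves, uncertainty about the dynamic coefficient propagates into the LRM without any stationarity pretest.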
Evidence: Simulations and Replication
What This Means for Researchers
The proposed method supplies direct, interpretable estimates of long-run multipliers with calibrated credible intervals and works particularly well when time series are short or unit-root tests are ambiguous. Nieman and Peterson provide a feasible alternative for scholars who need reliable inference about cumulative dynamic effects without depending on fragile stationarity decisions.

"Long-Run Confidence: Estimating Uncertainty when using Long-Run Multipliers," by Mark Nieman and David Peterson, is published in AJPS (Wiley, 2026).