Bayesian Splines – Motivating Use in Excess Mortality Estimation in Time Series Analysis

Tags: biostatistics, inference, mortality, splines, time-series

I'm reading this paper estimating excess deaths induced by the pandemic. Roughly, it constructs a model to estimate how many deaths (from all causes) would have occurred if the pandemic had not happened, using historical mortality data. It then extrapolates this forward into 2021 and compares that prediction to the actual number of deaths that occurred, where the pandemic did happen, of course. The model used for expected all-cause mortality is a Bayesian spline. Now, I'm an undergraduate who knows what Bayesian inference is, and what a spline is, but not what a Bayesian spline is. I'll outline the model and just want to know why it is appropriate for this setting, as opposed to a simple spline (piecewise polynomial regression) fit to deaths over time historically.

The model in the paper has deaths, $d$, given by $d \sim \text{Poisson}(\mu)$, where $\mu = \exp(\log p + \text{spline}(t))$, $p$ is the population of the country, and $t$ is time. This, to me, is a hierarchical model: the number of deaths per week (say time is indexed weekly) is a random event, Poisson distributed (unclear why we don't just fit the spline to deaths directly). The Poisson rate parameter is then obtained by exponentiating (unclear to me why); including $\log p$ seems sensible, as deaths will be proportional to population and we want to correct for that, but I don't quite understand how the spline inside the exponential works to determine the rate parameter. Could someone explain why this is a sensible, motivated model for deaths at a particular point in time? Or just broadly motivate what I guess are "Bayesian splines"? Thank you!
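To write the hierarchy out explicitly for week $t$ (my notation, not the paper's):

$$
d_t \sim \operatorname{Poisson}(\mu_t), \qquad \mu_t = \exp\bigl(\log p + f(t)\bigr),
$$

where $f$ is the spline in time.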

Best Answer

The death rate can't be negative (the pandemic was bad, but it wasn't zombie-apocalypse bad), so a natural way to enforce that is to fit an additive/linear model on the log scale (hence the offset is $\log p$ and not simply $p$), and then map back to $(0, \infty)$ via the inverse of the log, the exponential function.

This is the standard GLM formulation, where $\log$ is the link function and $\exp$ its inverse.
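To make this concrete, here is a minimal sketch of the non-Bayesian version of this model: a Poisson GLM with a log link, a spline in time, and $\log(\text{population})$ as an offset. The data are simulated, and all the names (`deaths`, `pop`, `t`, the basis size `df=8`) are made up for illustration; this is not the paper's code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated weekly data: deaths, population, and a week index t
rng = np.random.default_rng(0)
t = np.arange(260)  # five years of weeks
pop = np.full(t.shape, 1e7)
rate = np.exp(-7.5 + 0.1 * np.sin(2 * np.pi * t / 52))  # per-capita death rate
df = pd.DataFrame({"deaths": rng.poisson(pop * rate), "t": t, "pop": pop})

# Poisson GLM with the (default) log link. log(pop) enters as an offset,
# i.e. a regressor whose coefficient is fixed at 1, so the spline models
# the log of the per-capita death rate.
fit = smf.glm(
    "deaths ~ bs(t, df=8)",            # unpenalized B-spline basis (patsy)
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["pop"]),
).fit()
print(fit.summary())
```

The fitted values `fit.fittedvalues` are the estimated expected deaths $\mu_t$. Note the spline here is unpenalized, which is exactly what the Bayesian version changes.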

The authors don't really explain what they mean by a Bayesian spline. Typically, in this kind of framework, we choose a spline basis of a given size and then estimate the coefficients by minimising a penalized fit criterion, where the penalty acts on the wiggliness of the estimated spline. In a Bayesian context this penalty can be thought of as a prior on the wiggliness of the spline, which in turn corresponds to Gaussian priors on the spline coefficients (IIRC).
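For the Bayesian version, here is a minimal PyMC sketch of the same model with Gaussian priors on the spline coefficients. Everything here (the simulated data, the basis size, the choice of i.i.d. Normal priors with a half-Normal scale) is my own illustrative assumption, not the paper's specification; a proper P-spline would instead put the Gaussian prior on differences of adjacent coefficients.

```python
import numpy as np
import pymc as pm
from patsy import dmatrix

# Simulated weekly data, as in the GLM sketch above
rng = np.random.default_rng(1)
t = np.arange(260)
pop = np.full(t.shape, 1e7)
deaths = rng.poisson(pop * np.exp(-7.5 + 0.1 * np.sin(2 * np.pi * t / 52)))

# B-spline design matrix; the basis size (df=8) is a modelling choice
B = np.asarray(dmatrix("bs(t, df=8, include_intercept=True) - 1", {"t": t}))

with pm.Model():
    # Gaussian priors on the coefficients play the role of the wiggliness
    # penalty: a small prior scale forces a smooth fit, a large one allows
    # a wiggly fit. A hyperprior on that scale lets the data choose.
    tau = pm.HalfNormal("tau", 1.0)
    beta = pm.Normal("beta", mu=0.0, sigma=tau, shape=B.shape[1])

    # log mu = log p + spline(t), exactly the structure in the question
    log_mu = np.log(pop) + pm.math.dot(B, beta)
    pm.Poisson("deaths", mu=pm.math.exp(log_mu), observed=deaths)

    idata = pm.sample()  # posterior over the spline, hence over mu_t
```

Sampling gives a posterior over the coefficients, and hence a posterior (with uncertainty bands) over expected deaths $\mu_t$, which is what gets extrapolated forward and compared to observed deaths.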
