Solved – Bayesian inference and degrees of freedom

bayesian · degrees-of-freedom · hierarchical-bayesian · identifiability

While learning frequentist linear regression, my professors always talked about the number of degrees of freedom, yet I have never seen this expression in a Bayesian book. Is that because Bayesian methods don't need this number to infer quantities such as the variance?

My question is: does the number of degrees of freedom equal the number of parameters in a hierarchical Bayesian model, and if not, is there something equivalent one can calculate? In particular, I'm interested in when a model is overidentified in a hierarchical framework.

For example, if I have 1000 observations and about 10 competing models with about 100 parameters each, and I mix them all in a hierarchical model using, say, trans-dimensional MCMC/Bayes factors, will I have an overidentified model?

My intuition says it's possible that I won't, even though the total number of parameters is greater than the number of observations.

Best Answer

At least from a theoretical point of view, identifiability is not important from a Bayesian perspective. If the data are not informative about some parameters under the model, then the posterior of those parameters will simply be strongly influenced by the prior.
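A minimal sketch of this point, using a hypothetical toy model (not from the question): in y = a + b + noise only the sum a + b is identified, and a conjugate linear-Gaussian update shows that the data leave the posterior of the unidentified direction a − b exactly at its prior:

```python
import numpy as np

# Hypothetical toy unidentifiable model (illustrative, not from the post):
# y_i = a + b + noise, so only the sum a + b is informed by the data.
rng = np.random.default_rng(0)
n, sigma = 1000, 1.0
y = rng.normal(3.0, sigma, size=n)   # data generated with a + b = 3

H = np.ones((n, 2))                  # design matrix: each y_i sees a + b
prior_cov = np.eye(2)                # prior: a, b ~ N(0, 1), independent

# Conjugate Gaussian update: posterior precision = prior precision + H'H / sigma^2.
post_cov = np.linalg.inv(np.linalg.inv(prior_cov) + H.T @ H / sigma**2)
post_mean = post_cov @ (H.T @ y / sigma**2)

d = np.array([1.0, -1.0])            # the unidentified direction a - b
print(d @ post_cov @ d)              # posterior variance of a - b: 2.0
print(d @ prior_cov @ d)             # prior variance of a - b: also 2.0
```

Despite 1000 observations, the posterior variance along a − b equals the prior variance of 2; the data constrain only a + b, whose posterior standard deviation shrinks to about 0.03 here.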

From a practical point of view, if the posterior is broad, then approximate methods such as MCMC will take longer, perhaps much longer, to run.
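To see this concretely, here is a sketch with a hypothetical toy model (not from the question): in y = a + b + noise only a + b is identified, so the posterior is a long, narrow ridge, and a plain random-walk Metropolis chain with steps sized for the narrow direction crawls along the broad, unidentified one:

```python
import numpy as np

# Hypothetical toy model: y_i = a + b + noise with priors a, b ~ N(0, 1).
# Only a + b is identified, so the posterior is a long, narrow ridge.
rng = np.random.default_rng(1)
n, sigma = 1000, 1.0
y = rng.normal(3.0, sigma, size=n)

def log_post(theta):
    a, b = theta
    return -0.5 * (a**2 + b**2) - 0.5 * np.sum((y - a - b) ** 2) / sigma**2

# Random-walk Metropolis with an isotropic step sized for the narrow
# direction (posterior sd of a + b is ~0.03; of a - b it is ~1.4).
steps, step_sd = 5000, 0.03
theta = np.zeros(2)
lp = log_post(theta)
diff_trace = np.empty(steps)
for t in range(steps):
    prop = theta + rng.normal(0.0, step_sd, size=2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    diff_trace[t] = theta[0] - theta[1]

# Lag-1 autocorrelation along the unidentified direction: close to 1,
# meaning the chain barely moves there and needs very long runs.
x = diff_trace - diff_trace.mean()
lag1 = (x[:-1] @ x[1:]) / (x @ x)
print(lag1)
```

The chain mixes quickly along the identified sum but exhibits near-unit autocorrelation along a − b, which is exactly the "takes much longer to run" problem in miniature.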

Another practical problem is that if you have a large parameter space and little data, as it sounds like you do, then the results, if you can manage to compute them at all, are likely to be very sensitive to the prior specification.
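A sketch of that sensitivity, again with a hypothetical toy model (y = a + b + noise, where only a + b is identified): shifting the prior mean of a moves its posterior mean by a comparable amount, while the well-identified sum a + b barely changes:

```python
import numpy as np

# Hypothetical toy model: y_i = a + b + noise; only a + b is identified.
rng = np.random.default_rng(0)
n, sigma = 1000, 1.0
y = rng.normal(3.0, sigma, size=n)
H = np.ones((n, 2))

def posterior_mean(prior_mean):
    """Conjugate Gaussian update with prior N(prior_mean, I)."""
    post_cov = np.linalg.inv(np.eye(2) + H.T @ H / sigma**2)
    return post_cov @ (prior_mean + H.T @ y / sigma**2)

m0 = posterior_mean(np.array([0.0, 0.0]))  # prior: a, b ~ N(0, 1)
m1 = posterior_mean(np.array([5.0, 0.0]))  # prior on a shifted to N(5, 1)

print(m1[0] - m0[0])       # posterior mean of a shifts by about 2.5
print(m0.sum(), m1.sum())  # the sum a + b stays near 3 under both priors
```

With 1000 observations, the identified combination is essentially prior-free, but the individual parameters inherit most of whatever the prior says about them; in a large, weakly identified hierarchical model this happens along many directions at once.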
