# Jeffreys Prior – Why It Is Useful for Bayesian Analysis

Tags: bayesian, prior

I understand that the Jeffreys prior is invariant under re-parameterization. However, what I don't understand is why this property is desired.

Why wouldn't you want the prior to change under a change of variables?

## Best Answer

Let me complete Zen's answer. I don't much like the notion of "representing ignorance". The important thing is not the Jeffreys prior but the Jeffreys posterior. This posterior aims to reflect as well as possible the information about the parameter brought by the data. The invariance property is naturally required, for the two following reasons.

Consider for instance the binomial model with unknown proportion parameter $\theta$ and odds parameter $\psi=\frac{\theta}{1-\theta}$. The Jeffreys posterior on $\theta$ reflects as well as possible the information about $\theta$ brought by the data. Since there is a one-to-one correspondence between $\theta$ and $\psi$, transforming the Jeffreys posterior on $\theta$ into a posterior on $\psi$ (via the usual change-of-variables formula) should yield a distribution that reflects as well as possible the information about $\psi$. That distribution should therefore be the Jeffreys posterior on $\psi$, and this is exactly what the invariance property guarantees.
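The binomial example can be checked numerically. As a minimal sketch (the counts `n` and `x` below are illustrative, not from the original post): the Jeffreys posterior on $\theta$ after observing $x$ successes in $n$ trials is $\mathrm{Beta}(x+\tfrac12,\, n-x+\tfrac12)$, and pushing it through the change of variables $\psi=\theta/(1-\theta)$ gives the same density as deriving the Jeffreys posterior directly in the $\psi$ parameterization, which works out to a beta-prime distribution with the same parameters:

```python
import numpy as np
from scipy.stats import beta, betaprime

# Illustrative data: n trials, x successes (hypothetical numbers)
n, x = 20, 7

# Jeffreys posterior on theta: Beta(x + 1/2, n - x + 1/2)
post_theta = beta(x + 0.5, n - x + 0.5)

def psi_density_via_transform(psi):
    """Density on psi obtained from the posterior on theta by the
    change-of-variables formula: theta = psi/(1+psi),
    |d theta / d psi| = 1/(1+psi)^2."""
    return post_theta.pdf(psi / (1.0 + psi)) / (1.0 + psi) ** 2

# Jeffreys posterior derived directly in the psi parameterization:
# prior sqrt(I(psi)) ∝ psi^(-1/2) (1+psi)^(-1) times the likelihood
# psi^x (1+psi)^(-n) gives a beta-prime(x + 1/2, n - x + 1/2) density.
post_psi = betaprime(x + 0.5, n - x + 0.5)

# The two densities agree on a grid of psi values: invariance in action.
psi_grid = np.linspace(0.05, 5.0, 200)
print(np.allclose(psi_density_via_transform(psi_grid),
                  post_psi.pdf(psi_grid)))
```

The agreement holds for any `n` and `x`; the point is that "transform the posterior" and "redo the Jeffreys construction in the new parameterization" commute, which is precisely the invariance property discussed above.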

A second important point, when drawing conclusions from a statistical analysis, is scientific communication. Imagine you give the Jeffreys posterior on $\theta$ to a scientific colleague who is interested in $\psi$ rather than $\theta$. Thanks to the invariance property, this poses no problem: he/she just has to apply the change-of-variables formula, and the result is still the Jeffreys posterior for $\psi$.