[Math] Bayesian posterior variance

bayesian, probability, statistics

Let $Var[\omega]$ be the variance of a population parameter $\omega$ prior to the collection of a random sample $\mathcal{X}=\left\lbrace X_1,X_2,\dots,X_n\right\rbrace$ from the population. Prove or disprove the claim that the posterior variance is, on average, less than or equal to the prior variance.

This is a homework problem for introduction to Bayesian statistics. I want to say that the claim is false, because I could pick any prior distribution that I want, and if I picked a distribution with a super tiny variance, it is likely that the posterior variance would actually be larger.

I just think the wording of the problem is a bit vague. Any insights would be appreciated.
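The intuition here is worth testing numerically: for a *particular* realized sample, the posterior variance can indeed exceed the prior variance, so the claim can only hold on average over the data. A small sketch with a Beta-Bernoulli conjugate model (the specific prior and sample below are illustrative choices, not from the problem):

```python
def beta_variance(a, b):
    # Variance of a Beta(a, b) distribution: ab / ((a+b)^2 (a+b+1))
    return a * b / ((a + b) ** 2 * (a + b + 1))

# A sharply concentrated prior near omega = 1, as in the question's scenario
a, b = 100.0, 1.0
prior_var = beta_variance(a, b)

# A "surprising" sample: n = 10 Bernoulli trials, all failures (k = 0 successes)
# Conjugacy gives posterior Beta(a + k, b + n - k)
n, k = 10, 0
post_var = beta_variance(a + k, b + n - k)

print(prior_var)  # ~9.6e-05
print(post_var)   # ~8.0e-04, larger than the prior variance
```

So the wording matters: for this one dataset the posterior is *more* uncertain than the prior, yet, as the answer below shows, this cannot happen in expectation over the sample.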

Best Answer

What we want to show is that $E\left[Var(\omega\mid X_{1},\dots,X_{n})\right]\leq Var(\omega)$, where the outer expectation is taken over the sample $\mathcal{X}$ — this is the precise meaning of "on average" in the problem. By the law of total variance, $$Var(\omega)=E\left[Var(\omega\mid X_{1},\dots,X_{n})\right]+Var\left(E[\omega\mid X_{1},\dots,X_{n}]\right)$$ and, since $Var(Y)\geq 0$ for any random variable $Y$, in particular $Var\left(E[\omega\mid X_{1},\dots,X_{n}]\right)\geq 0$, hence $$Var(\omega)\ge E\left[Var(\omega\mid X_{1},\dots,X_{n})\right].$$
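The inequality can be checked exactly in the Beta-Bernoulli model, where the expectation over the sample is a finite sum: under a Beta$(a,b)$ prior, the number of successes $k$ in $n$ trials has a Beta-Binomial marginal distribution. A sketch (the parameter values are arbitrary illustrations):

```python
from math import lgamma, exp, comb

def log_beta(a, b):
    # log of the Beta function B(a, b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_variance(a, b):
    # Variance of a Beta(a, b) distribution
    return a * b / ((a + b) ** 2 * (a + b + 1))

a, b, n = 2.0, 3.0, 10
prior_var = beta_variance(a, b)

# E[Var(omega | X_1, ..., X_n)]: average the posterior variance
# Beta(a + k, b + n - k) over the Beta-Binomial marginal of k
expected_post_var = sum(
    comb(n, k) * exp(log_beta(a + k, b + n - k) - log_beta(a, b))
    * beta_variance(a + k, b + n - k)
    for k in range(n + 1)
)

print(expected_post_var <= prior_var)  # True, as the law of total variance guarantees
```

The gap between the two sides is exactly $Var\left(E[\omega\mid X_{1},\dots,X_{n}]\right)$, the variance of the posterior mean across possible samples.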
