Machine Learning – Computing Bayesian Estimator with Jeffreys Prior for Gamma Distribution

bayesian, decision-theory, estimation, jeffreys-prior, machine-learning

Question:

Let $X_1, \dots, X_n$ be a random sample from $\mathrm{Gamma}(1, \theta)$. The population mean is $\theta$. Assume that the Jeffreys prior is used.

  1. Find the generalized Bayesian estimator of θ under the SEL (squared error loss).

  2. Find the Bayesian estimator (rule) of θ under the loss $L(\theta, a) = (a/\theta - 1)^2$.

Solution:

My attempt:
Since the Jeffreys prior is the square root of the Fisher information of $\theta$:

$p(\theta)=\frac{\sqrt{1}}{\theta}=\frac{1}{\theta}$.

Then using Bayes' rule we have,

$$\begin{align*}p(θ|x)&\propto \dfrac{θ^1}{\Gamma (1)}x^{1-1}e^{-xθ}\cdot\dfrac{\sqrt{1}}{θ} \\ &\propto θ^{1-1} e^{-xθ} \\ &\propto \dfrac{x^1\,θ^{1-1}}{\Gamma(1)}\,e^{-xθ}\end{align*}$$
which shows the posterior (for a single observation $x$) is a Gamma $\mathcal{G}(1,x)$ distribution.

Now how can I proceed further to compute the Bayesian estimator, and generalized Bayesian estimator under the stated loss functions?

Best Answer

In order to find a Bayesian estimator for a loss function, we need to minimize the posterior expected loss $E[\operatorname{loss}(\theta, \hat{\theta})\mid x]$, i.e., solve the optimization problem $\min\limits_{\hat{\theta}}\int\limits_{0}^{\infty}\operatorname{loss}(\theta,\hat{\theta})\,p(\theta|x)\,d\theta$.
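(As an aside, this recipe can be sanity-checked numerically. Below is a minimal Python sketch, not part of the original argument, that minimizes the posterior expected loss by quadrature plus one-dimensional optimization; it assumes the inverse-gamma posterior derived further down, with arbitrary illustrative values $n=5$, $\bar{x}=2$.)

```python
# Minimal sketch: minimize E[loss(theta, theta_hat) | x] over theta_hat.
# Assumes the inverse-gamma posterior derived below; n = 5 and
# xbar = 2.0 are arbitrary illustrative values, not from the problem.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar
from scipy.stats import invgamma

n, xbar = 5, 2.0
posterior = invgamma(a=n, scale=n * xbar)  # p(theta | x)

def expected_loss(theta_hat, loss):
    # Posterior expected loss by quadrature; the lower limit is kept
    # slightly above 0 to avoid dividing by zero inside the loss.
    return quad(lambda t: loss(t, theta_hat) * posterior.pdf(t),
                1e-9, np.inf)[0]

sel = lambda t, a: (t - a) ** 2         # squared error loss, part (1)
scaled = lambda t, a: (a / t - 1) ** 2  # scaled loss, part (2)

for loss in (sel, scaled):
    res = minimize_scalar(lambda a: expected_loss(a, loss),
                          bounds=(1e-6, 50.0), method='bounded')
    print(res.x)  # ~2.5 for SEL, ~1.67 for the scaled loss
```

Both printed minimizers agree with the closed-form estimators derived below.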

(1) For the SEL $\operatorname{loss}(\hat{\theta}, \theta)=(\theta-\hat{\theta})^2$, we have to solve the optimization problem $\min\limits_{\hat{\theta}}\int\limits_{0}^{\infty}(\theta-\hat{\theta})^2\,p(\theta|x)\,d\theta$.

First we need to compute the full posterior, given

the likelihood $L(\theta|x)=p(x|\theta)=\prod\limits_{i=1}^{n}\dfrac{1}{\Gamma(1)\theta}e^{-x_i/\theta}=\dfrac{1}{\theta^n}e^{-\sum\limits_{i=1}^{n}x_i/\theta}=\dfrac{e^{-n\bar{x}/\theta}}{\theta^n}$, where $\bar{x}=\dfrac{\sum\limits_{i=1}^{n}x_i}{n}$

The Fisher information $I(\theta)=-E[l^{\prime\prime}(\theta)]=-E\left[-\dfrac{2\sum\limits_{i=1}^{n}x_i}{\theta^3}+\dfrac{n}{\theta^2}\right]=\dfrac{2n\theta}{\theta^3}-\dfrac{n}{\theta^2}=\dfrac{n}{\theta^2}$, with $E[X_i]=\theta$,

where we have the log-likelihood $l(\theta)=-\dfrac{\sum\limits_{i=1}^{n}x_i}{\theta}-n\ln(\theta)$

Hence, the prior is $p(\theta)=\dfrac{\sqrt{n}}{\theta}$ (the Jeffreys prior: the square root of the Fisher information $I(\theta)$).
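(If helpful, the Fisher information above can be double-checked symbolically. A small Python/sympy sketch of that check, under the same model:)

```python
# Symbolic check that I(theta) = -E[l''(theta)] = n / theta^2.
import sympy as sp

theta, n, xbar = sp.symbols('theta n xbar', positive=True)

# Log-likelihood from above, with sum(x_i) written as n * xbar
l = -n * xbar / theta - n * sp.log(theta)

l2 = sp.diff(l, theta, 2)

# l'' is linear in xbar and E[xbar] = theta, so take the expectation
# by substituting xbar -> theta
fisher = sp.simplify(-l2.subs(xbar, theta))
print(fisher)  # n/theta**2, hence the Jeffreys prior sqrt(n)/theta
```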

$\therefore p(x)$

$=\int\limits_{0}^{\infty}p(x|\theta)p(\theta)d\theta=\sqrt{n}\int\limits_{0}^{\infty}\dfrac{e^{-n\bar{x}/\theta}}{\theta^{n+1}}d\theta$

$=\dfrac{\sqrt{n}}{(n\bar{x})^n}\int\limits_{0}^{\infty}e^{-y}y^{n-1}dy$, with the substitution $y=\dfrac{n\bar{x}}{\theta}$

$\implies p(x)=\dfrac{\Gamma(n)\sqrt{n}}{(n\bar{x})^n}$
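(A quick numerical sanity check of this marginal, with the arbitrary values $n=5$, $\bar{x}=2$:)

```python
# Check that the integral for p(x) matches its closed form.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

n, xbar = 5, 2.0

numeric = np.sqrt(n) * quad(lambda t: np.exp(-n * xbar / t) / t ** (n + 1),
                            1e-12, np.inf)[0]
closed = gamma(n) * np.sqrt(n) / (n * xbar) ** n

print(numeric, closed)  # both ~5.3666e-04
```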

By Bayes' theorem, the posterior is $p(\theta|x)=\dfrac{p(x|\theta)p(\theta)}{p(x)}=\dfrac{(n\bar{x})^n e^{-n\bar{x}/\theta}}{\Gamma(n)\,\theta^{n+1}}$, i.e., an inverse-gamma distribution with shape $n$ and scale $n\bar{x}$.
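(In scipy terms, this is the `invgamma` density with shape $n$ and scale $n\bar{x}$; a small sketch comparing the two at one arbitrary point, $n=5$, $\bar{x}=2$, $\theta=1.7$:)

```python
# Compare the closed-form posterior with scipy's inverse-gamma pdf.
import numpy as np
from scipy.stats import invgamma
from scipy.special import gammaln

n, xbar, theta = 5, 2.0, 1.7

# log p(theta | x) from the closed form above
log_post = (n * np.log(n * xbar) - gammaln(n)
            - (n + 1) * np.log(theta) - n * xbar / theta)

print(np.exp(log_post))                          # closed form, ~0.481
print(invgamma(a=n, scale=n * xbar).pdf(theta))  # same value
```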

At the minimum, we have $\dfrac{\partial}{\partial \hat{\theta}}\int\limits_{0}^{\infty}(\theta-\hat{\theta})^2p(\theta|x)d\theta=\int\limits_{0}^{\infty}-2(\theta-\hat{\theta})p(\theta|x)d\theta=0$

$\implies \hat{\theta}_{SEL}=\int\limits_{0}^{\infty}\theta\, p(\theta|x)d\theta=\int\limits_{0}^{\infty}\dfrac{(n\bar{x}/\theta)^n e^{-n\bar{x}/\theta}}{\Gamma(n)}d\theta=\dfrac{n\bar{x}}{\Gamma(n)}\int\limits_{0}^{\infty}e^{-y}y^{n-2}dy=n\bar{x}\,\dfrac{\Gamma(n-1)}{\Gamma(n)}$

$\implies \hat{\theta}_{SEL}=\dfrac{n\bar{x}}{n-1}$, since $\Gamma(n)=(n-1)\Gamma(n-1)$
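(One more quick check, again with the arbitrary $n=5$, $\bar{x}=2$, for which $\dfrac{n\bar{x}}{n-1}=2.5$:)

```python
# The SEL estimator is the posterior mean; compare with scipy.
from scipy.stats import invgamma

n, xbar = 5, 2.0
print(invgamma(a=n, scale=n * xbar).mean())  # 2.5 (finite for n > 1)
print(n * xbar / (n - 1))                    # 2.5
```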

Please double-check the above calculation, in case I have made a calculation mistake somewhere, and point it out if so. (As a quick sanity check, $\hat{\theta}_{SEL}=\dfrac{n\bar{x}}{n-1}=\dfrac{\sum_{i}x_i}{n-1}$ approaches the MLE $\bar{x}$ as $n$ grows.)

We can follow exactly the same steps to compute the Bayesian estimator for (2), with the different loss function $\operatorname{loss}(\hat{\theta},\theta)=(\hat{\theta}/\theta-1)^2$; a sketch of those steps is given below.
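For completeness, here is a sketch of those steps under the same posterior (worth verifying independently). Setting the derivative of the posterior expected loss to zero,

$$\dfrac{\partial}{\partial \hat{\theta}}\int\limits_{0}^{\infty}\left(\dfrac{\hat{\theta}}{\theta}-1\right)^2 p(\theta|x)\,d\theta=\int\limits_{0}^{\infty}\dfrac{2}{\theta}\left(\dfrac{\hat{\theta}}{\theta}-1\right)p(\theta|x)\,d\theta=0\implies\hat{\theta}=\dfrac{E[\theta^{-1}|x]}{E[\theta^{-2}|x]}.$$

With the same substitution $y=n\bar{x}/\theta$ as before, $E[\theta^{-k}|x]=\dfrac{\Gamma(n+k)}{\Gamma(n)\,(n\bar{x})^k}$, so $E[\theta^{-1}|x]=\dfrac{1}{\bar{x}}$ and $E[\theta^{-2}|x]=\dfrac{n+1}{n\bar{x}^2}$, giving

$$\hat{\theta}=\dfrac{n\bar{x}}{n+1}.$$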