Bayesian – Generalized Bayesian Estimator Rule for Point Estimation

bayesian decision-theory estimation mathematical-statistics point-estimation

Question:

Let $X_1, \dots, X_n$ be a random sample from $\text{Poisson}(\theta)$. The prior for $\theta$ is $G(\alpha, \beta)$.

  1. Find the Bayesian estimator (rule) of $\theta$ under the SEL (squared error loss).

  2. Find the generalized Bayesian estimator (rule) of $\theta$ under the loss $L(\theta, a) = (a-\theta)^2/\theta$.

Solution:

My understanding:

Assume the prior distribution $\theta \sim G(\alpha, \beta)$ (shape $\alpha$, rate $\beta$) and suppose that we observe a sample of $n$ Poisson observations. Then we can derive the posterior distribution via Bayes' theorem as:

$\theta \mid x \sim G(\alpha + n\bar{x}, \beta + n)$
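To convince myself of this conjugacy result, I checked it numerically. This is a minimal sketch; the prior parameters and simulated data below are made up for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical prior parameters and simulated data, just to check the
# Gamma-Poisson conjugacy numerically (beta is a rate parameter here).
alpha, beta = 2.0, 3.0
rng = np.random.default_rng(0)
x = rng.poisson(1.5, size=50)
n, xbar = len(x), x.mean()

# Unnormalized log-posterior on a grid: log prior + log likelihood.
grid = np.linspace(1e-6, 5.0, 2000)
log_post = (stats.gamma.logpdf(grid, a=alpha, scale=1 / beta)
            + stats.poisson.logpmf(x[:, None], grid).sum(axis=0))
post = np.exp(log_post - log_post.max())
post /= post.sum() * (grid[1] - grid[0])  # normalize on the grid

# Closed-form posterior density Gamma(alpha + n*xbar, beta + n).
closed = stats.gamma.pdf(grid, a=alpha + n * xbar, scale=1 / (beta + n))
print(np.abs(post - closed).max())  # ~0 up to grid/rounding error
```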

We have squared error loss, $L(\theta, a) = (\theta - a)^2$. Then the Bayes rule should be the posterior mean $E[\theta \mid x]$, but how do I show this and evaluate it?

I couldn't get any further from here. I tried to make sense of the theory from Wikipedia, but it didn't help. I appreciate your suggestions!

Best Answer

Part 1

You want to minimize the posterior expectation of the loss.

For the squared loss, you want to minimize $E\left[(\hat\theta - \theta)^2 \mid x \right]$, where the expectation is taken over the posterior distribution of $\theta$.

  • This is the second moment of the posterior distribution of $\theta$ about the point $\hat\theta$ (the estimate).
  • It is equal to the variance of the posterior gamma distribution plus the square of the distance between the estimate $\hat\theta$ and the posterior mean: $$E\left[(\hat\theta - \theta)^2 \mid x \right] = \text{Var}(\theta \mid x) + \left(\hat\theta - E\left[\theta \mid x\right]\right)^2$$
  • This is minimized when $\hat\theta = E\left[\theta \mid x\right]$, the posterior mean.
  • The mean of the posterior gamma distribution is the ratio of its shape and rate parameters: $$E[\theta \mid x] = \frac{\alpha_{posterior}}{\beta_{posterior}}$$ See https://en.wikipedia.org/wiki/Gamma_distribution#Properties
  • We can fill in $\alpha_{posterior} = \alpha + n\bar{x}$ and $\beta_{posterior} = \beta + n$.
  • So the estimator that minimises the posterior expectation of the loss is $$\hat{\theta} = \frac{\alpha + n\bar{x}}{\beta + n}$$ (a quick numerical check of this minimizer follows below).
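If it helps to see the minimization concretely, here is a small Monte Carlo sketch. The posterior parameters are made-up values standing in for $\alpha + n\bar{x}$ and $\beta + n$; it minimizes the estimated posterior expected squared loss over the action $a$ and compares the minimizer with the posterior mean:

```python
import numpy as np
from scipy import optimize

# Hypothetical posterior parameters standing in for (alpha + n*xbar, beta + n).
a_post, b_post = 77.0, 53.0
rng = np.random.default_rng(1)
theta = rng.gamma(shape=a_post, scale=1 / b_post, size=200_000)  # posterior draws

def sq_risk(a):
    """Monte Carlo estimate of the posterior expected squared error loss."""
    return np.mean((a - theta) ** 2)

a_star = optimize.minimize_scalar(sq_risk, bounds=(0.01, 5.0), method="bounded").x
print(a_star, a_post / b_post)  # both close to the posterior mean
```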

Part 2

For the generalized loss you want to minimize $E\left[(\hat\theta - \theta)^2 / \theta \mid x \right]$. You can compute this by rearranging the integral for the expectation under a gamma density:

$$\begin{aligned} E\left[ (a-x)^2/x \right] &= \int_0^\infty \frac{(a-x)^2}{x} \frac{\beta^\alpha}{\Gamma(\alpha)}x^{\alpha-1}e^{-\beta x}\, dx \\ &= \int_0^\infty (a-x)^2 \frac{\beta^\alpha}{\Gamma(\alpha)}x^{\alpha-2}e^{-\beta x}\, dx \\ &= \beta \frac{\Gamma(\alpha-1)}{\Gamma(\alpha)} \int_0^\infty (a-x)^2 \frac{\beta^{\alpha-1}}{\Gamma(\alpha-1)}x^{\alpha-2}e^{-\beta x}\, dx \end{aligned}$$

and so you can compute the expectation $E\left[ (a-x)^2/x \right]$ under a gamma distribution as the expectation $E\left[ (a-x)^2 \right]$ under another gamma distribution whose shape parameter $\alpha$ is one less, multiplied by the constant $\beta \frac{\Gamma(\alpha-1)}{\Gamma(\alpha)} = \beta/(\alpha-1)$. Since this constant does not depend on the action $a$, the minimization works exactly as for the squared loss above: the estimate is again the mean of the posterior gamma distribution, but with $\alpha$ reduced by one (this happens to be the mode of the posterior gamma distribution), $$\hat\theta = \frac{\alpha_{posterior} - 1}{\beta_{posterior}} = \frac{\alpha + n\bar{x} - 1}{\beta + n}$$
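The same kind of Monte Carlo check works for the weighted loss (again with made-up posterior parameters): minimizing the estimated value of $E[(a-\theta)^2/\theta \mid x]$ over $a$ should land near $(\alpha_{posterior}-1)/\beta_{posterior}$, the posterior mode.

```python
import numpy as np
from scipy import optimize

# Same hypothetical posterior parameters as in the Part 1 sketch.
a_post, b_post = 77.0, 53.0
rng = np.random.default_rng(2)
theta = rng.gamma(shape=a_post, scale=1 / b_post, size=200_000)  # posterior draws

def weighted_risk(a):
    """Monte Carlo estimate of E[(a - theta)^2 / theta | x]."""
    return np.mean((a - theta) ** 2 / theta)

a_star = optimize.minimize_scalar(weighted_risk, bounds=(0.01, 5.0),
                                  method="bounded").x
print(a_star, (a_post - 1) / b_post)  # both close to the posterior mode
```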
