[Math] Using loss function to find Bayes estimate

bayesian, probability-distributions

I have a two-part question; I believe I have figured out the first part.

The question is:

Let $Y_1, Y_2, \ldots, Y_n$ be a random sample from a gamma pdf with parameters $r$ and $\theta$, where the prior distribution assigned to $\theta$ is the gamma pdf with parameters $s$ and $\mu$. Let $W=Y_1 +Y_2+\cdots+Y_n$. Find the posterior pdf for $\theta$.

Now the second part goes:
Let $X_1, X_2, \ldots, X_5$ be a random sample from a gamma pdf with $r=2$ and $\lambda$, where the prior distribution assigned to $\lambda$ is the gamma pdf with parameters $s=2$ and $\mu=20$. Given what I found in the first part of the question, the posterior distribution for $\lambda$ is a gamma with parameters $(5)(2)+2 = 12$ and $\sum_{i=1}^{5}x_i + 20$.
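
For reference, here is a sketch of the conjugate update behind those parameters, assuming the rate parameterization of the gamma pdf, i.e. $f(y \mid \theta) = \frac{\theta^{r}}{\Gamma(r)} y^{r-1} e^{-\theta y}$ (the parameterization consistent with the numbers above): $$p(\theta \mid y_{1:n}) \;\propto\; \prod_{i=1}^{n}\theta^{r} y_i^{\,r-1} e^{-\theta y_i} \cdot \theta^{s-1} e^{-\mu\theta} \;\propto\; \theta^{nr+s-1} e^{-(W+\mu)\theta},$$ which is the kernel of a gamma pdf with parameters $nr+s$ and $W+\mu$; with $n=5$, $r=2$, $s=2$, and $\mu=20$ this gives the parameters $12$ and $\sum_{i=1}^{5}x_i+20$ stated above.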

Five observations are collected whose sum is 25. [Here is my problem] Using the loss function $L(\hat{\lambda},\lambda)= |\hat{\lambda}-\lambda|$, find the Bayes estimate for $\lambda$. Numerically approximate a solution.

I'm at a standstill, since I haven't made much progress in understanding the loss function on my own. Can anyone offer some expertise or guidance toward a solution?

Best Answer

The Bayes estimator $\lambda_B$ satisfies $\lambda_B = \arg \min_{\hat{\lambda}} \mathbb{E}(L(\hat{\lambda}, \lambda))$; that is, $\lambda_B$ is the value of $\hat{\lambda}$ that minimises the expected posterior loss. So $$\lambda_{B} = \arg \min_{\hat{\lambda}} \int_0^\infty |\hat{\lambda} - \lambda| \, p(\lambda \mid x_{1:5}) \, d\lambda.$$ Therefore $$\lambda_{B} = \arg \min_{\hat{\lambda}} \int_0^\infty |\hat{\lambda} - \lambda| \, \frac{45^{12}}{\Gamma(12)} \lambda^{12 - 1} e^{-45\lambda} \, d\lambda,$$ using your posterior parameters: shape $(5)(2)+2 = 12$ and rate $25 + 20 = 45$.
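
As a numerical sanity check (a sketch in R, assuming the Gamma(shape $=12$, rate $=45$) posterior above; the helper name expected_loss is illustrative), one can minimise this expected loss directly and see where the minimiser lands:

    # Posterior parameters from the conjugate update
    post_shape <- 12
    post_rate  <- 45

    # Posterior expected absolute-error loss for a candidate estimate lambda_hat
    expected_loss <- function(lambda_hat) {
      integrate(function(l) abs(lambda_hat - l) * dgamma(l, post_shape, post_rate),
                lower = 0, upper = Inf)$value
    }

    # Minimise over an interval that comfortably covers the posterior mass
    optimize(expected_loss, interval = c(0, 2))$minimum
    # roughly 0.2593 -- the posterior median, as derived below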

To calculate the Bayes estimate from here we can use the Fundamental Theorem of Calculus. Letting $f(\lambda) = \frac{45^{12}}{\Gamma(12)} \lambda^{12 - 1} e^{-45\lambda}$ denote the posterior density, we need to find the $\hat{\lambda}$ that minimises $g(\hat{\lambda}) = \int_0^\infty |\hat{\lambda} - \lambda| f(\lambda) \, d\lambda$. Breaking up the integral, we want to minimise $$g(\hat{\lambda}) = \int_0^{\hat{\lambda}} (\hat{\lambda} - \lambda) f(\lambda) \, d\lambda + \int_{\hat{\lambda}}^\infty (\lambda - \hat{\lambda}) f(\lambda) \, d\lambda.$$ Equivalently, $$g(\hat{\lambda}) = \hat{\lambda}\int_0^{\hat{\lambda}} f(\lambda) \, d\lambda - \int_0^{\hat{\lambda}} \lambda f(\lambda) \, d\lambda + \int_{\hat{\lambda}}^\infty \lambda f(\lambda) \, d\lambda - \hat{\lambda}\int_{\hat{\lambda}}^\infty f(\lambda) \, d\lambda.$$ Differentiating with respect to $\hat{\lambda}$ via the Fundamental Theorem of Calculus, the terms involving $f(\hat{\lambda})$ cancel and we are left with $$g^\prime(\hat{\lambda}) = \int_0^{\hat{\lambda}} f(\lambda) \, d\lambda - \int_{\hat{\lambda}}^\infty f(\lambda) \, d\lambda = 0$$ at the minimum. Thus the Bayes estimate $\lambda_B$ is the posterior median: it satisfies $$\int_0^{\lambda_B} f(\lambda) \, d\lambda = 0.5.$$

I don't know what techniques are permitted for answering your question, but qgamma(0.5, 12, 45) in R gives $\lambda_B = 0.259297$.
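
If a gamma quantile function is not among the permitted techniques, a quick Monte Carlo sketch (again in R, using the same Gamma(12, 45) posterior) gives essentially the same value:

    set.seed(1)                                  # for reproducibility
    draws <- rgamma(1e6, shape = 12, rate = 45)  # sample from the posterior
    median(draws)                                # approximately 0.2593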
