Derive the posterior mode

bayesian, maxima-minima, statistics

Consider a random variable Y with a Poisson distribution:
$$P(y|\theta) = \frac{\theta^y e^{-\theta}}{y!}, \quad y=0,1,2,\ldots, \quad \theta>0.$$
The mean and variance of Y given $\theta$ are both equal to $\theta$. Assume that $\sum_{i=1}^n y_i > 1$.

If we impose the prior $p(\theta) \propto \frac{1}{\theta}$, then what is the Bayesian posterior mode?

I was able to calculate the likelihood and the posterior, but I'm having trouble calculating the mode, so I'm wondering if I got the right posterior:

$$P(\theta|y) \propto \text{likelihood} \times \text{prior}$$

$$P(\theta|y) \propto (\theta^{\sum_{i=1}^n y_i}e^{-n\theta})(\theta^{-1})$$

$$ P(\theta|y) \propto \theta^{(\sum_{i=1}^n y_i)-1}e^{-n\theta}$$
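
As a quick sanity check on this kernel, I note that $\theta^{(\sum_i y_i)-1}e^{-n\theta}$ is proportional to a Gamma density with shape $\sum_i y_i$ and rate $n$. Here is a minimal numerical sketch (Python, with simulated data used purely for illustration) confirming that the ratio of my unnormalized kernel to that Gamma density is constant in $\theta$:

```python
import numpy as np
from scipy import stats

# Simulated data, purely to illustrate the shape of the kernel.
rng = np.random.default_rng(0)
y = rng.poisson(lam=3.0, size=10)
n, s = len(y), y.sum()              # s = sum of the y_i (here s > 1)

theta = np.linspace(0.5, 8.0, 6)

# Unnormalized posterior kernel: theta^(s - 1) * exp(-n * theta)
kernel = theta**(s - 1) * np.exp(-n * theta)

# Gamma(shape = s, rate = n) density evaluated at the same points.
gamma_pdf = stats.gamma.pdf(theta, a=s, scale=1.0 / n)

# The ratio is constant in theta, so the kernel is proportional to this Gamma density.
print(kernel / gamma_pdf)
```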

Best Answer

The posterior mode is the value of $\theta$ that maximizes the posterior, so this is essentially a calculus problem. You have already correctly derived the posterior kernel:

$$\pi(\theta|\mathbb{y}) \propto \exp \Big( (n \bar{y} - 1) \ln \theta - n \theta \Big).$$

So the log-posterior can be written as:

$$F_\mathbb{y}(\theta) \equiv \ln \pi(\theta|\mathbb{y}) = (n \bar{y} - 1) \ln \theta - n \theta + \text{const}.$$

We can maximise this via ordinary calculus techniques. Differentiating with respect to $\theta$ gives:

$$\frac{d F_\mathbb{y}}{d \theta}(\theta) = \frac{n \bar{y} - 1}{\theta} - n \quad \quad \quad \quad \quad \frac{d^2 F_\mathbb{y}}{d \theta^2}(\theta) = - \frac{n \bar{y} - 1}{\theta^2}.$$
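
If you want to double-check the calculus, here is a small symbolic sketch in SymPy (with `ybar` standing in for $\bar{y}$) that reproduces both derivatives and the critical point:

```python
import sympy as sp

theta, n, ybar = sp.symbols('theta n ybar', positive=True)

# Log-posterior up to an additive constant.
F = (n * ybar - 1) * sp.log(theta) - n * theta

print(sp.diff(F, theta))       # first derivative, matching the expression above
print(sp.diff(F, theta, 2))    # second derivative, matching the expression above

# Setting the first derivative to zero recovers the mode, (n*ybar - 1)/n = ybar - 1/n.
print(sp.solve(sp.Eq(sp.diff(F, theta), 0), theta))
```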

You are told that $\sum_{i=1}^n y_i > 1$, i.e. $n \bar{y} > 1$, so the second derivative of the objective function is negative for all $\theta > 0$. This means that the objective function is strictly concave, so the maximizing value occurs at the unique critical point:

$$0 = \frac{d F_\mathbb{y}}{d \theta}(\hat{\theta}) = \frac{n \bar{y} - 1}{\hat{\theta}} - n \quad \quad \quad \implies \quad \quad \quad \hat{\theta} = \bar{y} - \frac{1}{n}.$$

So in this case we have $\text{mode } \pi(\theta|\mathbb{y}) = \bar{y} - 1/n$ (which is strictly positive since $\bar{y} > 1/n$).
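
As a final numerical check, here is a minimal sketch (again with simulated data, purely for illustration) showing that directly maximizing the log-posterior agrees with the closed-form mode $\bar{y} - 1/n$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Simulated data, purely for illustration.
rng = np.random.default_rng(1)
y = rng.poisson(lam=2.5, size=20)
n, ybar = len(y), y.mean()

# Negative log-posterior (up to an additive constant), to be minimized.
def neg_log_post(t):
    return -((n * ybar - 1) * np.log(t) - n * t)

res = minimize_scalar(neg_log_post, bounds=(1e-6, 20.0), method='bounded')
print(res.x, ybar - 1 / n)   # the two values should agree closely
```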