Posterior distribution of $\theta$ with prior Uniform $(0, B)$ and density $p(x;\theta) = e^{-(x-\theta)}$

Tags: bayesian · integration · probability-distributions · statistics · uniform-distribution

I am trying to solve this problem from a past exam paper:

Let $X_1, \dots, X_n$ be a random sample of size $n$ from the density:

$p(x; \theta) = \exp \{ -(x-\theta) \}, \qquad x > \theta$

In a Bayesian framework, assume that the prior distribution for
$\theta$, $\pi(\theta)$, is a Uniform distribution on the interval
$(0, B)$, where $B$ is a known positive constant. Determine the
posterior distribution of $\theta$, clearly stating the normalising
constant.

I know that posterior $\propto$ prior $\times$ likelihood.

Prior: $$\pi(\theta) = \dfrac{1}{B}, \qquad 0 < \theta < B$$

Likelihood: $$p(\underline{x} ; \theta) = \prod_{i=1}^{n} \exp \{ -(x_i - \theta) \} = \exp \left\{ - \left(\sum_{i=1}^{n} x_i - n\theta\right) \right\} $$

EDIT: Changed equality to proportionality in response to the comment below.

Posterior: $$ \pi(\theta \mid \underline{x}) \propto \dfrac{1}{B} \times \exp \left\{ - \left(\sum_{i=1}^{n} x_i - n\theta\right) \right\} $$

Now I am stuck. I know the posterior distribution must integrate to 1, which is how the normalising constant can be found. However, I am unsure how to determine the limits of integration and how to actually compute the integral (I assume the integration is with respect to $\theta$?).

Best Answer

Minor comment: posterior is proportional to the product of prior and likelihood, not equal.


The integration is indeed with respect to $\theta$. To find the limits, look at the support of the density: $p(x;\theta)$ is non-zero only when $x > \theta$, so the likelihood is non-zero only when $\theta < x_i$ for every $i$, i.e. $\theta < x_{(1)} := \min_i x_i$. The prior further restricts $\theta$ to $(0, B)$, so the posterior is supported on $(0, m)$, where $m = \min\left(B, x_{(1)}\right)$. Hence $$ \pi(\theta \mid \underline{x}) = \frac 1 {\mathcal{Z}} \cdot \dfrac{1}{B} \times \exp \left\{ - \left(\sum_{i=1}^{n} x_i - n\theta\right) \right\}, \qquad 0 < \theta < m, $$ and we need to find the constant $\mathcal{Z}$. Integrating over the support, $$ \mathcal{Z} = \int_0^m \frac 1 B \, e^{-\sum_{i=1}^n x_i} \, e^{n\theta} \, d\theta = \frac{e^{-\sum_{i=1}^n x_i}}{B} \cdot \frac{e^{nm} - 1}{n}. $$ The constant factors $\frac 1 B$ and $e^{-\sum_{i=1}^n x_i}$ cancel when we divide, so the posterior is a truncated exponential: $$ \pi(\theta \mid \underline{x}) = \frac{n \, e^{n\theta}}{e^{nm} - 1}, \qquad 0 < \theta < m, \quad m = \min\left(B, \min_i x_i\right). $$
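As a quick numerical sanity check of the truncated-exponential posterior above, here is a small Python sketch (the sample size, seed, and values of $\theta$ and $B$ are made up for illustration): draw data from the model $X_i = \theta + \mathrm{Exp}(1)$ and verify that the posterior density integrates to 1 over $(0, m)$.

```python
import math
import random

# Illustrative constants (not from the problem statement)
random.seed(0)
theta_true = 1.3
B = 5.0
n = 10

# Sample X_i = theta + Exp(1), i.e. density e^{-(x - theta)} for x > theta
xs = [theta_true + random.expovariate(1.0) for _ in range(n)]

# Upper limit of the posterior's support: m = min(B, min_i x_i)
m = min(B, min(xs))

def posterior(theta):
    """Truncated-exponential posterior n e^{n theta} / (e^{n m} - 1) on (0, m)."""
    if not (0.0 < theta < m):
        return 0.0
    return n * math.exp(n * theta) / math.expm1(n * m)

# Midpoint Riemann sum over (0, m); should come out very close to 1
K = 100_000
h = m / K
total = sum(posterior((k + 0.5) * h) for k in range(K)) * h
print(total)
```

Note that `math.expm1(n * m)` computes $e^{nm} - 1$ accurately, and the check confirms that the $\frac{1}{B}$ and $e^{-\sum x_i}$ factors really do cancel out of the normalised density.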