I have the prior density function:
$$e^{-\theta} \text{ for } \theta > 0$$
and the likelihood function: $e^{\theta -x}$ for $x \geq \theta$
I have gotten the following in my attempt to derive the posterior distribution:
$$L(\theta) = \prod_{i=1}^n e^{\theta - x_i} = e^{n\theta}e^{-\sum_i x_i}\, \mathbb{I}_{\{\theta \le \min_i X_i\}}$$
$$\pi(\theta \mid x) = \frac{e^{-\sum_i x_i}\, e^{\theta(n-1)}\, \mathbb{I}_{\{\theta \le \min_i X_i\}}}{e^{-\sum_i x_i}\int_0^{\min(x)} e^{n\theta}e^{-\theta} \, d\theta}$$
$$= \frac{e^{\theta(n-1)}\,\mathbb{I}_{\{\theta \le \min_i X_i\}}}{\frac{1}{n-1}e^{\min(x)(n-1)} - \frac{1}{n-1}}$$
Is this correct? Does this further simplify, or is the posterior simply non-standard?
Best Answer
First, a note on terminology: $e^{\theta - x}$ is the model (sampling) density; it becomes the likelihood only when viewed as a function of $\theta$ with the data held fixed.
You are almost there, but I suggest approaching the Bayesian calculation differently: do not worry about the posterior denominator up front; work up to proportionality and normalize only at the end.
$$\pi(\theta)=e^{-\theta}, \qquad \theta>0$$
$$p(\mathbf{x}\mid\theta) = e^{-\sum_i x_i}\cdot e^{n \theta} \cdot \mathbb{1}_{[\theta,+\infty)}(x_{(1)})$$
$$\pi(\theta\mid\mathbf{x})=C\times p(\mathbf{x}\mid\theta)\times \pi(\theta)=C \times e^{\theta(n-1)}\cdot\mathbb{1}_{(0,x_{(1)}]}(\theta)$$
where
$$C^{-1}=\int_0^{x_{(1)}}e^{\theta(n-1)}d \theta=\frac{e^{x_{(1)}(n-1)}-1}{n-1}$$
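As a quick sanity check of the closed form for $C^{-1}$, one can compare it against numerical quadrature of the unnormalized posterior. The data below are made up for illustration ($n = 4$ and $x_{(1)} = 2.3$ are arbitrary choices, not from the original post):

```python
import math

# Hypothetical example: n = 4 observations with sample minimum x_(1) = 2.3.
n = 4
x_min = 2.3
a = n - 1  # exponent rate in the unnormalized posterior e^{a*theta}

# Closed form: C^{-1} = (e^{a*x_(1)} - 1) / (n - 1)
c_inv_closed = (math.exp(a * x_min) - 1) / a

# Numerical check: midpoint-rule quadrature of e^{a*theta} over (0, x_(1)]
m = 200_000
h = x_min / m
c_inv_numeric = sum(math.exp(a * (i + 0.5) * h) for i in range(m)) * h

print(c_inv_closed, c_inv_numeric)  # the two values should agree closely
```

The agreement of the two numbers confirms that the normalizing constant only depends on the data through $x_{(1)}$ and $n$.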
Observe that once you condition to form the posterior, $x_1, x_2, \ldots, x_n$ are fixed data, no longer random variables.
A real-world example using this statistical model: with $n = 5$ observations and sample minimum $x_{(1)} = 230$ (so $n - 1 = 4$ and $x_{(1)}(n-1) = 920$), the posterior is
$$\pi(\theta\mid\mathbf{x})=\frac{4 e^{4\theta}}{e^{920}-1}\mathbb{1}_{(0,230]}(\theta)$$
You can easily check that $\pi(\theta\mid\mathbf{x})$ is a proper density function. As for the hypothesis testing, I leave it as an exercise, if that is part of your course...
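One caveat for a numerical check: $e^{920}$ overflows double precision, so this posterior cannot be evaluated directly in floating point. It can, however, be sampled stably by inverting its CDF, since $(e^{4\theta}-1)/(e^{920}-1)$ inverts to $\theta = 230 + \log(u)/4$ up to a truncation at $0$ whose probability ($\approx e^{-920}$) is far below floating-point resolution. A minimal sketch (the seed and sample size are arbitrary choices):

```python
import math
import random

random.seed(0)
n, x_min = 5, 230.0   # values read off the example: n - 1 = 4, x_(1) = 230
a = n - 1

# Inverse-CDF sampling: 1 - random.random() lies in (0, 1], avoiding log(0).
samples = [x_min + math.log(1.0 - random.random()) / a for _ in range(100_000)]

mean = sum(samples) / len(samples)
print(mean)  # should be close to x_(1) - 1/(n-1) = 229.75
```

The Monte Carlo mean sitting near $x_{(1)} - 1/(n-1)$ reflects that the posterior is, to within a negligible truncation error, a reversed exponential concentrated just below the sample minimum.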