[Math] Posterior distribution of exponential prior and uniform likelihood

bayes-theorem, bayesian, probability, probability-distributions, statistics

I am working on Question 2 of this exam from Duke: First Year Exam – 2017

Let $X\mid\theta\sim \mathcal{U}(0,\theta)$ be a uniform likelihood and $\theta\mid\lambda\sim\text{Exp}(\lambda)$ be a prior on the upper limit of the uniform distribution. My question is: what is the posterior distribution of $\theta\mid X$?
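For concreteness, the densities are (using the rate parameterization of the exponential, which I assume the exam intends):
\begin{align}
p(x\mid\theta) &= \frac{1}{\theta}, & 0\le x\le\theta, \\
p(\theta) &= \lambda e^{-\lambda\theta}, & \theta>0.
\end{align}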

I know Bayes' Theorem says,
\begin{align}
p(\theta\mid X)&\propto p(X\mid\theta)p(\theta) \\
&= \left(\frac{1}{\theta}\right)\left(\lambda\cdot\exp\left(-\lambda \theta\right)\right).
\end{align}
So far, so good. But I run into trouble when I try to find the normalizing constant of this posterior distribution:
\begin{align}
C &= \int_0^\infty p\left(X\mid\theta\right)\,p\left(\theta\right) \, d\theta \\
&=\int_0^\infty \left(\frac{1}{\theta}\right)\left(\lambda\cdot\exp\left(-\lambda \theta\right)\right) \, d\theta.
\end{align}
But according to WolframAlpha, this integral does not converge! How can one proceed to characterize this posterior distribution? Judging from the exam question, you would need the posterior mean in order to state the Bayes estimator under squared-error loss, but that expectation requires the normalizing constant, which doesn't seem to exist.
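For reference, the divergence is easy to see directly: near $\theta=0$ the factor $e^{-\lambda\theta}$ is essentially $1$, so the integrand behaves like $\lambda/\theta$, and
\begin{align}
\int_0^\epsilon \frac{\lambda e^{-\lambda\theta}}{\theta}\,d\theta \;\ge\; \lambda e^{-\lambda\epsilon}\int_0^\epsilon \frac{d\theta}{\theta} = \infty.
\end{align}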

Best Answer

Why is your integral going from $0$ to $\infty$?

$\theta$ cannot be less than $X$: a draw from $\mathcal{U}(0,\theta)$ can never exceed $\theta$, so the likelihood $p(X\mid\theta)=\frac{1}{\theta}\,\mathbf{1}\{\theta\ge X\}$ is zero for $\theta<X$. Work with the interval $\theta\in [X,\infty).$
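To sketch how the computation then goes (writing $E_1$ for the exponential integral, $E_1(z)=\int_z^\infty t^{-1}e^{-t}\,dt$), restrict everything to $\theta\ge X$:
\begin{align}
C &= \int_X^\infty \frac{\lambda e^{-\lambda\theta}}{\theta}\,d\theta = \lambda\,E_1(\lambda X), \\
p(\theta\mid X) &= \frac{e^{-\lambda\theta}}{\theta\,E_1(\lambda X)}, \qquad \theta\ge X, \\
\mathbb{E}\left[\theta\mid X\right] &= \frac{1}{C}\int_X^\infty \theta\cdot\frac{\lambda e^{-\lambda\theta}}{\theta}\,d\theta = \frac{e^{-\lambda X}}{\lambda\,E_1(\lambda X)},
\end{align}
and the last expression is the Bayes estimator under squared-error loss asked about in the question.

If it helps, here is a minimal numerical sanity check of these closed forms (a sketch using SciPy's `quad` and `exp1`, with arbitrary illustrative values $\lambda=2$, $X=0.5$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1  # exp1(z) = E_1(z), the exponential integral

lam, x = 2.0, 0.5  # arbitrary illustrative values

# Normalizing constant: integrate the unnormalized posterior over [x, inf).
C_num, _ = quad(lambda t: lam * np.exp(-lam * t) / t, x, np.inf)
C_closed = lam * exp1(lam * x)

# Posterior mean E[theta | X = x].
num, _ = quad(lambda t: t * lam * np.exp(-lam * t) / t, x, np.inf)
mean_num = num / C_num
mean_closed = np.exp(-lam * x) / (lam * exp1(lam * x))

print(C_num, C_closed)        # numerical and closed-form constants agree
print(mean_num, mean_closed)  # numerical and closed-form posterior means agree
```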