Calculate the Bayes estimate for the improper prior $\pi (\theta)=1$ under uniform distribution

probability, statistical-inference, statistics

Let $X_1, X_2, \dots, X_n$ be iid uniform $U(0, \theta)$, $0 < \theta < \infty$.

Assume a quadratic loss function. Calculate the Bayes estimate for the improper prior $\pi(\theta) = 1$, $0 < \theta < \infty$.

Verify whether the Bayes estimate is consistent for $\theta$.

My attempt:

Given the quadratic loss function, the Bayes estimator is the posterior mean. I first try to get the posterior distribution of $\theta$. I use capital $X$ to denote the $n$ observations.

$\pi(\theta \mid X) \propto f(X \mid \theta)\, \pi(\theta) = \theta^{-n}\, 1_{(X_{(n)} < \theta)}.$ What distribution is this? I know if I just focus on $\theta^{-n}$, then this looks like a Beta$(1-n, 1)$ kernel.

The solution is: *(image of the textbook solution, not reproduced here)*

Best Answer

To get the posterior distribution of $\theta$, you use Bayes' Theorem: $$\pi(\theta \mid X) = \frac{f(X \mid \theta) \pi(\theta)}{\int_{0}^{\infty} f(X \mid \theta) \pi(\theta) d\theta}.$$ Note that we need the integral in the denominator to make the posterior a proper distribution.

Since we know that $$f(X \mid \theta) = \theta^{-n} I_{(X_{(n)} < \theta)} \textrm{ and } \pi(\theta) = 1$$ we get $$\pi(\theta \mid X) = \frac{\theta^{-n} I_{(X_{(n)} < \theta)}}{\int_{0}^{\infty} \theta^{-n} I_{(X_{(n)} < \theta)} d\theta} = \frac{\theta^{-n} I_{(X_{(n)} < \theta)}}{\int_{X_{(n)}}^{\infty} \theta^{-n} d\theta}.$$
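(Aside: carrying out the normalization, which is done explicitly below, gives the closed form $\pi(\theta \mid X) = (n-1)\, X_{(n)}^{n-1}\, \theta^{-n}\, I_{(X_{(n)} < \theta)}$, i.e. a Pareto distribution with shape $n-1$ and scale $X_{(n)}$.) As a sanity check, here is a quick numerical sketch, assuming `numpy`/`scipy`, with arbitrary illustrative values for $\theta$ and $n$, comparing the normalizing integral to the closed form $X_{(n)}^{-(n-1)}/(n-1)$ derived below:

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)
theta_true, n = 2.0, 10                     # hypothetical values for illustration
x = rng.uniform(0, theta_true, size=n)
x_max = x.max()                             # X_(n), the sample maximum

kernel = lambda t: t ** (-n)                # unnormalized posterior on (X_(n), inf)

# Normalizing constant: numerical integral vs. closed form X_(n)^{-(n-1)} / (n-1)
norm, _ = quad(kernel, x_max, np.inf)
print(norm, x_max ** (-(n - 1)) / (n - 1))  # the two should agree

# The normalized posterior should integrate to 1
total, _ = quad(lambda t: kernel(t) / norm, x_max, np.inf)
print(total)                                # ~1.0
```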

Therefore $$E(\theta \mid X) = \int_{0}^{\infty}\theta \pi(\theta \mid X) d\theta = \frac{\int_{0}^{\infty}\theta \theta^{-n} I_{(X_{(n)} < \theta)}d\theta}{\int_{X_{(n)}}^{\infty} \theta^{-n} d\theta} = \frac{\int_{X_{(n)}}^{\infty}\theta \theta^{-n}d\theta}{\int_{X_{(n)}}^{\infty} \theta^{-n} d\theta}.$$

This gives you the formula that you put a question mark next to in your question.

Then, assuming $n > 2$ so that both integrals converge, we have $$\int_{X_{(n)}}^{\infty}\theta \theta^{-n}d\theta = \int_{X_{(n)}}^{\infty}\theta^{-n+1}d\theta = \left[\frac{\theta^{-n+2}}{-n+2}\right]_{X_{(n)}}^\infty = \frac{X_{(n)}^{-n+2}}{n-2}$$ and also $$\int_{X_{(n)}}^{\infty} \theta^{-n} d\theta = \left[\frac{\theta^{-n+1}}{-n+1}\right]_{X_{(n)}}^\infty = \frac{X_{(n)}^{-n+1}}{n-1}.$$

So the fraction is $$\frac{n-1}{n-2}X_{(n)},$$ and since $(n-1)/(n-2) \rightarrow 1$ as $n \rightarrow \infty$ and $X_{(n)} \xrightarrow{P} \theta$, the product converges in probability to $\theta$, so the Bayes estimate is consistent.
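To see the consistency numerically, here is a minimal simulation sketch, assuming `numpy`; the true value $\theta = 3$ and the seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 3.0                            # hypothetical true parameter

for n in (10, 100, 1_000, 10_000):
    x = rng.uniform(0, theta, size=n)
    est = (n - 1) / (n - 2) * x.max()  # Bayes estimate (n-1)/(n-2) * X_(n)
    print(n, est)                      # estimates approach theta = 3.0
```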

Proof that $X_{(n)} \xrightarrow{P} \theta$:

To show convergence in probability, we need to show that for any $\epsilon > 0$, $$\lim_{n\rightarrow \infty} P(\mid X_{(n)} - \theta \mid > \epsilon) = 0.$$

The random variable $X_{(n)} = \max(\{X_1, \dots, X_n\})$ is guaranteed to be less than $\theta$, so $\mid X_{(n)} - \theta \mid = \theta - X_{(n)}$. Therefore $$P(\mid X_{(n)} - \theta \mid > \epsilon) = P(\theta - X_{(n)} > \epsilon) = P(\theta - \epsilon > X_{(n)}).$$

But $\theta - \epsilon > X_{(n)}$ if and only if $\theta - \epsilon > X_i$ for every $i$, since $X_{(n)}$ is the maximum. Therefore $$P(\theta - \epsilon > X_{(n)}) = P(\theta - \epsilon > X_1, \theta - \epsilon > X_2, \dots, \theta - \epsilon > X_n).$$ The $X_i$ are independent and identically distributed, so $$P(\theta - \epsilon > X_1, \dots, \theta - \epsilon > X_n) = P(\theta - \epsilon > X_1) \times \cdots \times P(\theta - \epsilon > X_n) = P(\theta - \epsilon > X_i)^n.$$

Since $X_i \sim U(0, \theta)$, as long as $0 < \epsilon < \theta$ we have $$P(\theta - \epsilon > X_i) = \int_{0}^{\theta - \epsilon} \frac{1}{\theta} dx = \frac{\theta - \epsilon}{\theta} = 1 - \frac{\epsilon}{\theta}.$$ Putting this together, if $0 < \epsilon < \theta$ then $$P(\mid X_{(n)} - \theta \mid > \epsilon) = \left(1 - \frac{\epsilon}{\theta}\right)^n \rightarrow 0 \textrm{ as } n \rightarrow \infty,$$ since $0 < 1 - \epsilon/\theta < 1$; and if $\epsilon \geq \theta$ then $P(\mid X_{(n)} - \theta \mid > \epsilon) = 0$ since $0 < X_{(n)} < \theta$.

In either case we have $\lim_{n\rightarrow \infty} P(\mid X_{(n)} - \theta \mid > \epsilon) = 0$.
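As a check on this tail probability, here is a short Monte Carlo sketch, assuming `numpy`, comparing the empirical frequency of $\theta - X_{(n)} > \epsilon$ against $(1 - \epsilon/\theta)^n$; the values of $\theta$, $\epsilon$, $n$, and the replication count are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
theta, eps, n, reps = 1.0, 0.1, 50, 100_000    # hypothetical values

# Draw `reps` samples of size n and record how often theta - X_(n) > eps
x_max = rng.uniform(0, theta, size=(reps, n)).max(axis=1)
empirical = (theta - x_max > eps).mean()
exact = (1 - eps / theta) ** n
print(empirical, exact)                        # both ~ 0.9**50 ≈ 0.00515
```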
