[Math] Uniform density question

statistics

If $X_1, X_2, \dots, X_n$ are i.i.d. $\mathrm{Uniform}(0, \theta)$ with $\theta > 0$, so that each $X_i$ has density $1/\theta$ on $0 < x \le \theta$:

a. Find $t(\theta) = E(X \mid \theta)$ and find the best unbiased estimator.

Attempt: $E(X \mid \theta) = \theta/2$. A sufficient statistic for $\theta$ is $\max(X_1,\dots,X_n)$. Is there a more formal way to prove this?

b. Does the Cramér-Rao bound hold here?

Attempt: The Cramér-Rao bound depends on the likelihood of $x$ given $\theta$, and since the support $x \le \theta$ depends on $\theta$, the bound doesn't apply. Is there a more formal mathematical reason for this?

Best Answer

I will resist the urge to paste a link to lmgtfy.com, but googling "Cramér-Rao uniform distribution" will get you many solutions to this problem, as it is nearly the canonical example of the failure of the Cramér-Rao bound.

For part a)

If you do not know how to formally compute the expectation of a uniform distribution, spend the weekend reviewing basic probability.

The Lehmann–Scheffé theorem says that an unbiased estimator which is a function of a complete sufficient statistic is the best unbiased estimator. I'll let you verify that $\max(X_1,\dots,X_n)$ is complete.
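For the "more formal way" asked about in part a), one standard route (my addition, a sketch rather than the original answer's argument) is the factorization theorem: the joint density splits into a factor depending on the data only through $\max_i x_i$ and a factor free of $\theta$, $$f(x_1,\dots,x_n;\theta)=\prod_{i=1}^n \frac{1}{\theta}\mathbf{1}\{0< x_i\le\theta\}=\underbrace{\frac{1}{\theta^n}\,\mathbf{1}\{\max_i x_i\le\theta\}}_{g(\max_i x_i;\,\theta)}\;\underbrace{\mathbf{1}\{\min_i x_i> 0\}}_{h(x_1,\dots,x_n)},$$ which is exactly the factorization criterion for sufficiency of $\max(X_1,\dots,X_n)$.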

For part b)

The Cramér-Rao lower bound can hold only when you can switch the order of differentiation and integration (among other regularity conditions): $$\frac{\partial}{\partial\theta}\int T(x)f(x;\theta)\,dx = \int T(x)\frac{\partial}{\partial\theta}f(x;\theta)\,dx.$$ Verify that this fails here and you will have shown that the Cramér-Rao bound does not apply.
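To make that verification concrete (the original answer leaves it to the reader, so treat this as a sketch): the support of $f(x;\theta)=1/\theta$ is $(0,\theta]$, so the integral has a $\theta$-dependent upper limit and Leibniz's rule produces a boundary term, $$\frac{\partial}{\partial\theta}\int_0^\theta T(x)\,\frac{1}{\theta}\,dx = \int_0^\theta T(x)\,\frac{\partial}{\partial\theta}\frac{1}{\theta}\,dx + \frac{T(\theta)}{\theta}.$$ The extra term $T(\theta)/\theta$ is generally nonzero, so the two sides above differ and the regularity condition fails.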

Added, as I have heavy-duty procrastination syndrome

An unbiased estimator of $\theta$ is an estimator (a function $T(x_1,\dots,x_n)$) that is unbiased: $E(T(X_1,\dots,X_n)\mid\theta) = \theta$. In the case at hand, let $T$ be a function of the sufficient statistic $\max(x_1,\dots,x_n)$ (it turns out that this will be the best unbiased estimator by the Lehmann–Scheffé theorem, but let us not worry about that now).

Let $Y=\max(X_1,\dots,X_n)$ be our sufficient statistic. From our study of order statistics we know that the density of $Y$ is $f_Y(y)= ny^{n-1}/\theta^n$ for $0<y<\theta$. We can find that $$E(Y\mid\theta)=\int_0^\theta y\frac{ny^{n-1}}{\theta^n}\,dy=\frac{n}{n+1}\theta.$$ So $\frac{n+1}{n}Y=\frac{n+1}{n}\max(X_1,\dots,X_n)$ is an unbiased estimator of $\theta$. Let us compute its variance. Since $E(Y^2)=\int_0^\theta y^2\frac{ny^{n-1}}{\theta^n}\,dy=\frac{n}{n+2}\theta^2$, we get $$\text{var}\left(\frac{n+1}{n}Y\right)=\left(\frac{n+1}{n}\right)^2\left[E(Y^2)-\left(\frac{n}{n+1}\theta\right)^2\right]=\frac{1}{n(n+2)}\theta^2.$$
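A quick Monte Carlo sanity check (my own addition, not part of the original answer; $\theta$, $n$, and the replication count are arbitrary choices) confirming both the unbiasedness of $\frac{n+1}{n}\max_i X_i$ and the variance formula $\theta^2/(n(n+2))$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 10, 200_000  # arbitrary choices for this sketch

# Each row is one sample of size n from Uniform(0, theta).
samples = rng.uniform(0.0, theta, size=(reps, n))
estimates = (n + 1) / n * samples.max(axis=1)

print("mean of estimator:     ", estimates.mean())           # ~ theta = 2.0
print("empirical variance:    ", estimates.var())            # ~ 0.0333
print("theoretical variance:  ", theta**2 / (n * (n + 2)))   # theta^2 / (n(n+2))
print("naive Cramer-Rao bound:", theta**2 / n)               # larger: the "bound" is beaten
```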

But the Fisher information per observation of the uniform distribution, computed naively from the log-density $\log f(x;\theta)=\log(1/\theta)$, is $$E\left[\left(\frac{\partial}{\partial\theta}\log(1/\theta)\right)^2\right]=\frac{1}{\theta^2},$$ so with $n$ observations the Cramér-Rao theorem would say that the variance of any unbiased estimator must be at least $\theta^2/n$. But we have an estimator with variance $\frac{\theta^2}{n(n+2)}<\theta^2/n$. This is because we cannot switch integration and partial differentiation (essentially because $\theta$ appears in the integration bounds, as in the Leibniz-rule computation above).