Solved – Asymptotic distribution of the MLE of a Uniform

extreme value, maximum likelihood, uniform distribution

  • A property of the Maximum Likelihood Estimator is that it asymptotically follows a normal distribution if the solution is unique.
  • In the case of a continuous Uniform distribution, the Maximum Likelihood Estimator for the upper bound is the maximum of the sample $X_i$ (a short derivation is sketched below).

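For reference, a minimal sketch (not from the original notes) of why the sample maximum is the MLE of the upper bound: the likelihood is zero whenever any observation exceeds $\theta$ and is strictly decreasing in $\theta$ otherwise, so it is maximised at the smallest admissible value,

$$
L(\theta) = \prod_{i=1}^{n} \frac{1}{\theta}\,\mathbf{1}\{0 \le X_i \le \theta\}
          = \theta^{-n}\,\mathbf{1}\{\theta \ge X_{(n)}\},
\qquad
\hat\theta = X_{(n)} = \max_i X_i .
$$
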
I have a hard time seeing how the maximum could converge in distribution to a Gaussian.

In the following question it is claimed that the maximum of a sample $X_i$ from $U[0,\theta]$, with $\theta = 1$, follows a Beta distribution:
Question about asymptotic distribution of the maximum

I also tried to figure it out empirically and more or less always arrived at the result in the graph below. Also, from a logical point of view (at least by my logic), the distribution should never be able to converge to a Gaussian: the expected value of $\hat\theta$ is asymptotically equal to $\theta$, and because all possible $X_i$ have to be smaller than $\theta$, there cannot be any values to the right of $E[\hat\theta]$, which makes it impossible to converge to a normal distribution.

Where is my mistake? I haven't found a similar question addressing this contradiction.

[figure: simulated distribution of the sample maximum $\hat\theta$]
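
A minimal simulation sketch of this empirical check (the seed, sample size, and number of replications below are arbitrary choices for illustration): it draws repeated samples from $U[0,1]$, takes the maximum as the MLE, and compares its distribution with the claimed Beta$(n,1)$ law.

```python
import numpy as np
from scipy import stats

# Simulate the MLE (sample maximum) of U[0, theta] with theta = 1
# and compare it with the claimed Beta(n, 1) distribution.
rng = np.random.default_rng(123)   # arbitrary seed
theta, n, reps = 1.0, 50, 100_000  # arbitrary sample size and replication count

samples = rng.uniform(0.0, theta, size=(reps, n))
mle = samples.max(axis=1)          # MLE of theta is the sample maximum X_(n)

# For theta = 1, P(X_(n) <= x) = x**n on [0, 1], i.e. the Beta(n, 1) distribution.
ks = stats.kstest(mle, stats.beta(n, 1).cdf)
print(f"KS distance to Beta({n}, 1): {ks.statistic:.4f} (p = {ks.pvalue:.2f})")

# The simulated distribution is bounded above by theta and strongly left-skewed,
# which is why the histogram does not look Gaussian.
print("sample skewness of the MLE:", stats.skew(mle))
```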

Best Answer

A property of the Maximum Likelihood Estimator is that it asymptotically follows a normal distribution if the solution is unique.

Not necessarily. So far as I am aware, all the theorems establishing the asymptotic normality of the MLE require some "regularity conditions" in addition to uniqueness. Roughly speaking, these regularity conditions require that the MLE is obtained as a stationary point of the likelihood function (not at a boundary point), and that the derivatives of the likelihood function exist at this point up to a sufficiently high order that you can take a reasonable Taylor approximation to it. (The proofs of asymptotic normality then use this Taylor expansion and show that the higher-order terms vanish asymptotically.)
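
As a rough sketch of that argument (standard notation, not spelled out in the original answer): writing $\ell_n$ for the log-likelihood and $\theta_0$ for the true value, the first-order condition $\ell_n'(\hat\theta) = 0$ is expanded around $\theta_0$,

$$
0 = \ell_n'(\hat\theta) \approx \ell_n'(\theta_0) + \ell_n''(\theta_0)\,(\hat\theta - \theta_0)
\quad\Longrightarrow\quad
\sqrt{n}\,(\hat\theta - \theta_0) \approx \frac{n^{-1/2}\,\ell_n'(\theta_0)}{-\,n^{-1}\,\ell_n''(\theta_0)}
\;\xrightarrow{d}\; \mathcal{N}\bigl(0,\, I(\theta_0)^{-1}\bigr),
$$

where the numerator obeys a central limit theorem and the denominator converges to the Fisher information $I(\theta_0)$. The very first step already fails for the uniform model, because $\hat\theta = X_{(n)}$ is not a stationary point of the likelihood.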

The notes you have shown in your question gloss over this requirement, so I imagine that your teacher is interested in giving you the properties for the general case, without dealing with tricky cases where the "regularity conditions" do not hold. However, if you have a look at textbooks that actually prove the asymptotic normality of the MLE, you will see that the proof always hinges on these regularity conditions. (And indeed, good textbooks will usually supply counter-examples that show that asymptotic normality does not hold for some examples that don't obey the regularity conditions; e.g., the MLE of the uniform distribution.)

In the case of the uniform distribution, the MLE occurs at a "boundary point" of the likelihood function, so the "regularity conditions" required for theorems asserting asymptotic normality do not hold. So far as I am aware, the MLE does not converge in distribution to the normal in this case.
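
To sketch what happens instead (a standard calculation, not spelled out above): since $P(X_{(n)} \le x) = (x/\theta)^n$ for $0 \le x \le \theta$, the appropriately rescaled estimation error has an exponential, not a normal, limit:

$$
P\bigl(n(\theta - X_{(n)}) > t\bigr)
= P\!\left(X_{(n)} < \theta - \tfrac{t}{n}\right)
= \left(1 - \frac{t}{n\theta}\right)^{n}
\;\xrightarrow[n\to\infty]{}\; e^{-t/\theta},
\qquad t \ge 0,
$$

so $n(\theta - \hat\theta_n)$ converges in distribution to an exponential distribution with mean $\theta$, which matches the one-sided, non-Gaussian shape seen in the question's simulation.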
