Derive asymptotic distribution of the ML estimator

asymptotics, maximum likelihood, parameter estimation, statistics

Let $X$ be a random variable with probability density function (pdf)
$$f(x)= (\theta +1)x^\theta, \qquad 0 < x < 1,$$

where $\theta >-1$.

The expressions for its mean and variance are

$$E(X)= \frac{\theta + 1}{\theta +2 }$$

and

$$ Var(X) = \frac{\theta +1 }{(\theta +2)^2(\theta+3)}$$
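
As a quick check of the mean formula (using the support $0<x<1$ stated above):

$$E(X)=\int_0^1 x\,(\theta+1)x^\theta\,dx=(\theta+1)\int_0^1 x^{\theta+1}\,dx=\frac{\theta+1}{\theta+2}.$$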

Derive the asymptotic distribution of the ML estimator $\hat{\theta}$ for $\theta$.

Solution:

The log-likelihood is $l(\theta) = n \log (\theta +1)+ \theta \sum \log(x_i)$,
so the first derivative is
$$ l'(\theta) = \frac{n}{\theta +1} + \sum \log(x_i).$$
Setting $l'(\theta) = 0$ and solving for $\theta$ yields
$$ \hat{\theta}_{MLE}= -\frac{n}{\sum \log(x_i)}-1,$$
that is, $\hat{\theta}=\frac{1}{\bar{Y}}-1$ where $Y_i = -\log X_i$.
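
A minimal simulation sketch (not part of the original post) that checks this closed form; it assumes the support $0<x<1$, so the CDF is $F(x)=x^{\theta+1}$ and inverse-transform sampling gives $X=U^{1/(\theta+1)}$. The parameter value and sample size are chosen only for illustration.

```python
import numpy as np

# Sketch: verify the closed-form MLE by simulation.
# With support 0 < x < 1, the CDF is F(x) = x^(theta+1), so
# inverse-transform sampling gives X = U^(1/(theta+1)).
rng = np.random.default_rng(0)
theta_true = 2.0
n = 5_000

u = rng.uniform(size=n)
x = u ** (1.0 / (theta_true + 1.0))

theta_hat = -n / np.sum(np.log(x)) - 1.0  # MLE: -n / sum(log x_i) - 1
print(theta_hat)                          # close to theta_true = 2.0
```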

To derive the asymptotic distribution we rely on:
$$\hat{\theta}_{MLE} \sim AN \Big(\theta, \frac{1}{nI(\theta)}\Big) $$

where $I(\theta)=E\left[-D^2_\theta \log f(X;\theta)\right]=E\left[\frac{1}{(\theta+1)^2}\right]=\frac{1}{(\theta+1)^2}$, so the asymptotic variance is $\frac{1}{nI(\theta)}=\frac{(\theta+1)^2}{n}$.
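
As an illustrative check of that asymptotic variance (again a sketch with assumed parameter values, not from the original post), the spread of the MLE across repeated samples should match $(\theta+1)^2/n$:

```python
import numpy as np

# Sketch: check that n * Var(theta_hat) approaches the asymptotic
# variance 1/I(theta) = (theta+1)^2 across repeated samples.
rng = np.random.default_rng(1)
theta_true, n, reps = 2.0, 2_000, 4_000

u = rng.uniform(size=(reps, n))
x = u ** (1.0 / (theta_true + 1.0))
theta_hat = -n / np.log(x).sum(axis=1) - 1.0

print(n * np.var(theta_hat))     # empirical n * Var(theta_hat)
print((theta_true + 1.0) ** 2)   # theoretical (theta+1)^2 = 9.0
```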

Exact distribution:

$$ Y \sim \operatorname{Exp}(\theta+1), \quad\text{thus}\quad 2(\theta+1)Y \sim \operatorname{Exp}\big(\tfrac12\big) \sim \chi^2_2 $$

I do not understand this. How do we know that $Y$ follows an $\operatorname{Exp}(\theta+1)$ distribution?

What is the reason for bringing in the factor $2(\theta+1)$?

Therefore

$$ 2n(\theta + 1) \bar{Y} = \sum_{i=1}^n \big[2(\theta+1)Y_i\big] \sim \chi^2_{2n},$$
since the sum of $n$ independent $\chi^2_2$ variables is $\chi^2_{2n}$.

The exact distribution can be given in transformed (pivotal) form:

$$ T = \frac{2n(\theta +1)}{\hat{\theta}+1}\sim\chi_{2n}^2 $$
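
A numerical sketch of this claim (assumed parameter values, not from the original post): since $\hat{\theta}+1=1/\bar{Y}$, the pivot equals $2n(\theta+1)\bar{Y}$, and its simulated distribution should match $\chi^2_{2n}$.

```python
import numpy as np
from scipy import stats

# Sketch: the pivot T = 2n(theta+1)/(theta_hat+1) = 2n(theta+1)*Ybar
# should follow a chi-square distribution with 2n degrees of freedom.
rng = np.random.default_rng(2)
theta_true, n, reps = 2.0, 50, 10_000

u = rng.uniform(size=(reps, n))
x = u ** (1.0 / (theta_true + 1.0))
y = -np.log(x)                                  # Y_i = -log X_i
t = 2.0 * n * (theta_true + 1.0) * y.mean(axis=1)

print(t.mean(), 2 * n)                          # chi2_{2n} mean: 2n = 100
print(t.var(), 4 * n)                           # chi2_{2n} variance: 4n = 200
print(stats.kstest(t, "chi2", args=(2 * n,)))   # KS test against chi2(2n)
```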

I do not understand this last part either. A detailed breakdown would be of great help.

Best Answer

The relationship between $X_i$ and $Y_i = -\log X_i$ is understood by transformation:
$$f_{Y_i}(y) = f_{X_i}(e^{-y}) \left|\frac{d}{dy} \left[e^{-y}\right]\right| = (\theta+1) (e^{-y})^\theta e^{-y} = (\theta+1) e^{-(\theta+1)y}, \quad y > 0,$$
thus $$Y_i \sim \operatorname{Exponential}(\theta+1),$$ where $\theta+1$ is a rate parameter.

To create a pivotal quantity, it is easy to see that the distribution of $2(\theta+1) Y$ is exponential with rate $1/2$; that is to say, chi-square with $2$ degrees of freedom.
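
A short numerical illustration of this transformation (a sketch with assumed values, not part of the answer): if $X$ has density $(\theta+1)x^\theta$ on $(0,1)$, then $Y=-\log X$ should behave like an $\operatorname{Exponential}(\theta+1)$ variable with mean $1/(\theta+1)$.

```python
import numpy as np
from scipy import stats

# Sketch: if X has density (theta+1) x^theta on (0,1), then Y = -log X
# should be Exponential with rate theta+1 (mean 1/(theta+1)).
rng = np.random.default_rng(3)
theta, n = 2.0, 100_000

x = rng.uniform(size=n) ** (1.0 / (theta + 1.0))  # inverse-transform sample of X
y = -np.log(x)

print(y.mean(), 1.0 / (theta + 1.0))              # both roughly 1/3
# scipy's expon is parameterised by scale = 1/rate
print(stats.kstest(y, "expon", args=(0, 1.0 / (theta + 1.0))))
```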