Maximum likelihood estimator with indicator

Tags: maximum-likelihood, probability, probability-theory, statistics

Let $X_1,\ldots,X_n$ be a sample from a distribution with density
$$
p_{\alpha,\beta}(x) = \frac{1}{\alpha}e^{(\beta-x)/\alpha}I_{[\beta,+\infty)}(x),
$$

where $\theta = (\alpha,\beta)$ is a two-dimensional parameter. I need to find the maximum likelihood estimate for $\theta$.
The problem is that after finding the likelihood $f_{\theta}(x_1,\ldots,x_n) = \frac{1}{\alpha^n}e^{\sum_{i=1}^{n}(\beta-x_i)/\alpha}\,I\left\{\min(X_1, \ldots, X_n) \geq \beta\right\}$ and taking its logarithm, I'm having a hard time computing derivatives with respect to the parameters, since there is an indicator here.
I think that since $f_{\theta}$ is increasing in $\beta$ as long as $\beta \leq \min(X_1, \ldots, X_n)$, and $f_{\theta} = 0$ for $\beta > \min(X_1, \ldots, X_n)$, the maximum likelihood estimate for $\beta$ equals $\min(X_1, \ldots, X_n)$. Then it is easy to find that the maximum likelihood estimate for $\alpha$ equals $\frac{1}{n}\sum_{i=1}^{n}X_i - \min(X_1, \ldots, X_n)$. Is that right?
There is also the question of how to check that the estimator for $\alpha$ is asymptotically normal. I would be grateful for a hint.

Best Answer

Derivation of the maximum likelihood estimator is discussed here in detail.

Let $X_{(1)}:=\min\{X_1,\ldots,X_n\}$ be the first order statistic.

Note that the likelihood is increasing in $\beta$ as long as $X_{(1)}\ge \beta$, which gives you the MLE $$\hat\beta=X_{(1)}$$

Then, plugging in $\hat\beta$ and differentiating the log-likelihood with respect to $\alpha$ does give you the MLE

$$\hat\alpha=\frac1n\sum_{i=1}^n X_i-X_{(1)}=\overline X_n-X_{(1)}$$
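As a quick numerical sanity check (a sketch in Python with NumPy; the parameter values, seed, and sample size are arbitrary choices of mine, not from the problem), the two formulas recover the true parameters on simulated data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary "true" parameters for the check
alpha, beta = 2.0, 5.0
n = 100_000

# X_i - beta ~ Exp(mean = alpha), so simulate X_i as beta plus exponential noise
x = beta + rng.exponential(scale=alpha, size=n)

beta_hat = x.min()              # MLE of beta: the first order statistic
alpha_hat = x.mean() - x.min()  # MLE of alpha: sample mean minus sample minimum

print(beta_hat, alpha_hat)      # should be close to 5.0 and 2.0
```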

As for a hint on checking asymptotic normality of $\hat\alpha$, first observe that the $X_i$'s are shifted exponentials:

$$X_i-\beta \sim \text{Exp}(\text{mean}=\alpha)$$

So the population mean is

$$E(X_i)=\alpha+\beta$$

Now,

\begin{align} \sqrt n(\hat\alpha-\alpha)&=\sqrt n(\overline X_n-X_{(1)}-\alpha-\beta+\beta) \\&=\sqrt n(\overline X_n-(\alpha+\beta))-\sqrt n(X_{(1)}-\beta) \end{align}

What can you say about the convergence of the second term in particular?

Indeed, a straightforward calculation of the distribution function shows that $\sqrt n(X_{(1)}-\beta)$ converges in distribution to $0$. Hence it also converges in probability to $0$:

$$\sqrt n(X_{(1)}-\beta) \stackrel{P}\longrightarrow 0$$
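Concretely, by independence and the exponential tail $P(X_i>x)=e^{-(x-\beta)/\alpha}$ for $x\ge\beta$, for every $t>0$:

$$P\left(\sqrt n(X_{(1)}-\beta)>t\right)=P\left(X_{(1)}>\beta+\tfrac{t}{\sqrt n}\right)=\left(e^{-t/(\alpha\sqrt n)}\right)^n=e^{-\sqrt n\,t/\alpha}\longrightarrow 0$$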

Applying Slutsky's theorem then gives you the answer.
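Spelling this out: since $\operatorname{Var}(X_i)=\alpha^2$ for the shifted exponential, the CLT gives $\sqrt n(\overline X_n-(\alpha+\beta))\stackrel{d}\longrightarrow N(0,\alpha^2)$, and combining with the second term via Slutsky,

$$\sqrt n(\hat\alpha-\alpha)\stackrel{d}\longrightarrow N(0,\alpha^2)$$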

The theorem you are alluding to regarding asymptotic normality of sample quantiles is true for central order statistics. One version of this is

Suppose $X_1,\ldots,X_n$ are i.i.d. with cdf $F$. Let $\sqrt n\left(\frac{k_n}{n}-p\right)\to 0$ for some $0<p<1$ as $n\to \infty$. If $\xi_p$ is the $p$th population quantile and if $F'(\xi_p)$ exists with $F'(\xi_p)>0$, then $$\sqrt n(X_{(k_n)}-\xi_p) \stackrel{d}\longrightarrow N\left(0,\frac{p(1-p)}{(F'(\xi_p))^2}\right)$$
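For instance, applied to this distribution's median (my illustration, not part of the quoted statement): here $F(x)=1-e^{-(x-\beta)/\alpha}$ for $x\ge\beta$, so $\xi_{1/2}=\beta+\alpha\ln 2$ and $F'(\xi_{1/2})=\frac{1}{2\alpha}$, and the theorem with $k_n=\lceil n/2\rceil$ gives

$$\sqrt n\left(X_{(\lceil n/2\rceil)}-\beta-\alpha\ln 2\right)\stackrel{d}\longrightarrow N\left(0,\frac{1/4}{(1/(2\alpha))^2}\right)=N(0,\alpha^2)$$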

As you can see, the above does not apply to sample extremes like $X_{(1)}$ and $X_{(n)}$. The extremes have separate asymptotics. In this problem, you can check that $n(X_{(1)}-\beta)\sim Z$ where $Z$ is Exponential with mean $\alpha$. So the following gives a non-degenerate limiting distribution of $X_{(1)}$:

$$n(X_{(1)}-\beta)\stackrel{d}\longrightarrow Z$$
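In fact the relation is exact for every $n$, not just in the limit:

$$P\left(n(X_{(1)}-\beta)>t\right)=P\left(X_{(1)}>\beta+\tfrac{t}{n}\right)=\left(e^{-t/(n\alpha)}\right)^n=e^{-t/\alpha},$$

which is the tail of an Exponential with mean $\alpha$.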

Notably, the limit is non-normal and the norming constant is $n$ instead of $\sqrt n$.
