Finding the distribution of the test statistic under the null hypothesis $H_0$

hypothesis testing, log likelihood, maximum likelihood, statistics

Suppose we have a random sample $X_1, \dots, X_n$ from $N(\mu , \sigma^2 )$, with known mean $\mu$. For the hypotheses $H_0: \sigma^2 = \sigma_0^2$ and $H_A : \sigma^2 = \sigma_1^2$, where $\sigma_1^2 > \sigma_0^2$, derive the Likelihood Ratio Test with significance level $0<\alpha <1$. Find the distribution of the test statistic under $H_0$.

$\textbf{My approach so far}$

I obtained the likelihood ratio

$$R = {\bigg(}\sqrt{\frac{\sigma_1^2 }{\sigma_0^2 }}{\bigg)}^n \exp {\bigg\{} -\frac{1}{2} (\sigma_0^{-2} - \sigma_1^{-2}) \sum_{i=1}^n (X_i - \mu)^2 {\bigg\}} $$

so by the Neyman-Pearson lemma, the most powerful test of these simple hypotheses depends on the data only through $\sum_{i=1}^n (x_i - \mu)^2$.

For the next part: since $\sigma_1^2 > \sigma_0^2$, $R$ is a decreasing function of $\sum_{i=1}^n (x_i - \mu)^2$, so we should reject $H_0$ when $\sum_{i=1}^n (x_i - \mu)^2$ is sufficiently large. That is, we reject when $\sum_{i=1}^n (x_i - \mu )^2 > K$ for some constant $K$, where the threshold for rejection depends on the size of the test.
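The monotonicity is easy to check numerically; a minimal sketch (the sample size and the two variances below are arbitrary illustrative values):

```python
import math

def likelihood_ratio(s, n, var0, var1):
    """R = L(sigma_0^2) / L(sigma_1^2) as a function of s = sum (x_i - mu)^2."""
    return (var1 / var0) ** (n / 2) * math.exp(-0.5 * (1 / var0 - 1 / var1) * s)

# With sigma_1^2 > sigma_0^2 the exponent's coefficient is negative,
# so R strictly decreases as s grows: small R <=> large sum of squares.
values = [likelihood_ratio(s, n=10, var0=1.0, var1=4.0) for s in (1.0, 5.0, 20.0)]
assert values[0] > values[1] > values[2]
```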

Here's where I am stuck. To answer the second part, I believe I need to find (please correct me if I am wrong) the constant $K$ such that

$$P(R \leq c \mid H_0 ) = P{\left(\sum_{i=1}^n (X_i - \mu )^2 > K \,\middle|\, H_0 \right)} = \alpha $$

I'm trying to figure out how to rewrite the middle probability in terms of standardized quantities such as $\frac{X_i - \mu }{\sigma_0}$, so that I can obtain the value of $K$.

Am I on the right track and how can I obtain $K$ from the above equation (if it is indeed correct)?

Best Answer

Suppose $L(\sigma^2\mid x_1,\ldots,x_n)$ is the likelihood function given the sample $(x_1,\ldots,x_n)$.

The unrestricted MLE of $\sigma^2$ is $$\hat\sigma^2=\frac1n \sum\limits_{i=1}^n(x_i-\mu)^2$$

The likelihood ratio statistic is defined as

\begin{align} \Lambda(x_1,\ldots,x_n)&=\frac{\sup_{\sigma^2=\sigma_0^2}L(\sigma^2\mid x_1,\ldots,x_n)}{\sup_{\sigma^2\ge\sigma_0^2}L(\sigma^2\mid x_1,\ldots,x_n)} \\&=\frac{L(\sigma_0^2\mid x_1,\ldots,x_n)}{L(\tilde\sigma^2\mid x_1,\ldots,x_n)}\,, \end{align}

where $\tilde\sigma^2$ is the restricted MLE of $\sigma^2$ subject to $\sigma^2\ge\sigma_0^2$.

It can be argued that

$$\tilde\sigma^2=\begin{cases}\sigma_0^2 &,\text{ if }\hat\sigma^2\le \sigma_0^2 \\ \hat\sigma^2&,\text{ if }\hat\sigma^2> \sigma_0^2\end{cases}$$
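The claim follows from the likelihood being unimodal in $\sigma^2$; a short check (writing the log-likelihood in terms of $\hat\sigma^2$ as defined above):

$$\frac{\partial}{\partial \sigma^2}\log L(\sigma^2\mid x_1,\ldots,x_n)=\frac{\partial}{\partial \sigma^2}\left(-\frac{n}{2}\log(2\pi\sigma^2)-\frac{n\hat\sigma^2}{2\sigma^2}\right)=\frac{n}{2\sigma^4}\left(\hat\sigma^2-\sigma^2\right),$$

which is positive for $\sigma^2<\hat\sigma^2$ and negative for $\sigma^2>\hat\sigma^2$. So over $\sigma^2\ge\sigma_0^2$ the likelihood is maximized at $\sigma_0^2$ when $\hat\sigma^2\le\sigma_0^2$, and at $\hat\sigma^2$ otherwise.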

Therefore,

$$\Lambda(x_1,\ldots,x_n)=\begin{cases}1 &,\text{ if }\hat\sigma^2\le \sigma_0^2 \\ \frac{L(\sigma_0^2\mid x_1,\ldots,x_n)}{L(\hat\sigma^2\mid x_1,\ldots,x_n)}&,\text{ if }\hat\sigma^2> \sigma_0^2\end{cases}$$

When $\Lambda=1$, we trivially fail to reject $H_0$. For the other case, we reject $H_0$ for small values of $\Lambda$.

So the critical region is of the form $\Lambda<c$ for some $c$ when $\hat\sigma^2>\sigma_0^2$.

You would find that $\Lambda<c \iff \sum\limits_{i=1}^n (x_i-\mu)^2>k$ where $k$ is such that $$P_{H_0}\left(\sum\limits_{i=1}^n(X_i-\mu)^2>k\right)=\alpha$$
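To make the equivalence explicit (a sketch; $t$ is just shorthand for $\hat\sigma^2/\sigma_0^2$): when $\hat\sigma^2>\sigma_0^2$,

$$\Lambda=\frac{L(\sigma_0^2\mid x_1,\ldots,x_n)}{L(\hat\sigma^2\mid x_1,\ldots,x_n)}=\left(\frac{\hat\sigma^2}{\sigma_0^2}\right)^{n/2}\exp\left\{\frac{n}{2}-\frac{n\hat\sigma^2}{2\sigma_0^2}\right\}=\left(t\,e^{1-t}\right)^{n/2},\qquad t=\frac{\hat\sigma^2}{\sigma_0^2}>1.$$

Since $\frac{d}{dt}\log\left(t\,e^{1-t}\right)=\frac1t-1<0$ for $t>1$, $\Lambda$ is strictly decreasing in $t$, so $\Lambda<c$ is equivalent to $t$, and hence $\sum_{i=1}^n(x_i-\mu)^2=n\hat\sigma^2$, exceeding some threshold $k$.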

Since $\frac{1}{\sigma_0^2}\sum\limits_{i=1}^n(X_i-\mu)^2\sim \chi^2_n$ under $H_0$, we must have $k=\sigma_0^2\cdot\chi^2_{\alpha,n}$ where $\chi^2_{\alpha,n}$ is the $(1-\alpha)$th quantile of a $\chi^2_n$ distribution. So the likelihood ratio test rejects $H_0$ when $\sum\limits_{i=1}^n (X_i-\mu)^2>\sigma_0^2\cdot\chi^2_{\alpha,n}$
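The size of this test can be sanity-checked by simulation; a sketch assuming SciPy is available ($\mu$, $\sigma_0^2$, $n$, and $\alpha$ below are arbitrary illustrative choices):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
mu, var0, n, alpha = 0.0, 1.0, 10, 0.05
reps = 50_000

# Critical value k = sigma_0^2 * chi^2_{alpha, n}, the upper-alpha point of chi^2_n.
k = var0 * chi2.ppf(1 - alpha, df=n)

# Draw samples under H_0 and compute the test statistic sum (X_i - mu)^2.
samples = rng.normal(mu, np.sqrt(var0), size=(reps, n))
stat = ((samples - mu) ** 2).sum(axis=1)

# The empirical rejection rate should be close to the nominal level alpha.
rejection_rate = (stat > k).mean()
```

Since $\frac{1}{\sigma_0^2}\sum_{i=1}^n(X_i-\mu)^2\sim\chi^2_n$ under $H_0$, the empirical rejection rate should hover around $0.05$.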

This is in fact the same test you would get when searching for a most powerful test via the Neyman-Pearson lemma. But if you are asked to derive a likelihood ratio test, you should stick to the likelihood ratio method.
