Solved – Neyman-Pearson lemma

hypothesis-testing, inference, likelihood-ratio, references, self-study

I have read about the Neyman–Pearson lemma in the book *Introduction to the Theory of Statistics* by Mood, Graybill and Boes, but I have not understood it.

Can anyone please explain the lemma to me in plain words? What does it state?

Neyman-Pearson Lemma : Let $X_1,\ldots,X_n$ be a random sample from $f(x;\theta)$, where $\theta$ is one of two known values $\theta_0$ and $\theta_1$, and let $0<\alpha<1$ be fixed.

Let $k^*$ be a positive constant and $C^*$ be a subset of $\mathscr X$ which satisfy: $$\tag 1 P_{\theta_0}[(X_1,\ldots,X_n)\in C^*] = \alpha$$ $$\tag 2 \lambda=\frac{L(\theta_0;x_1,\ldots,x_n)}{L(\theta_1;x_1,\ldots,x_n)} = \frac{L_0}{L_1} \le k^*\quad \text{if } (x_1,\ldots,x_n)\in C^*$$ $$\text{and}\quad \lambda\ge k^*\quad \text{if } (x_1,\ldots,x_n)\in \bar C^*$$
Then the test $\gamma^*$ corresponding to the critical region $C^*$ is a most powerful test of size $\alpha$ of $\mathscr H_0:\theta=\theta_0$ versus $\mathscr H_1:\theta=\theta_1$.

Expressed in words, I have understood that the two criteria specify

(1) P[reject the null hypothesis | the null hypothesis is true] = significance level $\alpha$

(2) reject the null hypothesis when the likelihood ratio $\lambda$ is at most some positive constant $k^*$, i.e. when $(x_1,\ldots,x_n)$ falls in the critical region

Then the test is a most powerful test of a simple null hypothesis against a simple alternative.

  • Why does it hold only for simple hypotheses? Can't it be used for composite hypotheses? Is my explanation in words correct?

Best Answer

I think you understood the lemma well.

Why does it not work for a composite alternative? As you can see, the likelihood ratio requires us to plug in the specific parameter value of the alternative hypothesis. If the alternative is composite, which value would you plug in? (When one and the same critical region happens to be most powerful for every $\theta_1$ in the alternative, as in one-sided tests for families with a monotone likelihood ratio, the test is uniformly most powerful; in general, though, no single region is best against all of them.)
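As a concrete illustration (my own sketch, not from the question), consider the simple-vs-simple normal case $H_0:\theta=0$ versus $H_1:\theta=1$ with known variance $1$. There the condition $L_0/L_1 \le k^*$ simplifies to rejecting when the sample mean $\bar x$ exceeds a cutoff $c$, and $c$ is pinned down by condition (1), $P_{\theta_0}(\bar X \ge c)=\alpha$. All the specific numbers below ($n$, $\alpha$, the trial count) are illustrative choices:

```python
import random
import statistics

# Simple-vs-simple test for the mean of N(theta, 1):
# H0: theta = 0 vs H1: theta = 1.  For this model the likelihood-ratio
# condition L0/L1 <= k* is equivalent to rejecting when xbar >= c.
theta0, theta1, n, alpha = 0.0, 1.0, 25, 0.05

# Condition (1) fixes c: under H0, Xbar ~ N(theta0, 1/n), so
# P(Xbar >= c) = alpha gives c = theta0 + z_{1-alpha} / sqrt(n).
c = theta0 + statistics.NormalDist().inv_cdf(1 - alpha) / n ** 0.5

rng = random.Random(0)

def rejects(theta):
    """Draw one sample of size n from N(theta, 1) and apply the test."""
    xbar = statistics.fmean(rng.gauss(theta, 1.0) for _ in range(n))
    return xbar >= c

trials = 20000
size = sum(rejects(theta0) for _ in range(trials)) / trials   # should be near alpha
power = sum(rejects(theta1) for _ in range(trials)) / trials  # near 1 here
print(size, power)
```

The simulated size matches $\alpha$ by construction, and the Neyman–Pearson lemma says no other size-$\alpha$ test of these two simple hypotheses can have higher power than this one.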