Constructing the Most Powerful Test – Statistics and Hypothesis Testing

hypothesis-testing, statistics

Suppose we have a random sample of size $n = 1$ from the probability density function:
$$f(x \mid \theta) = \begin{cases}
1 + \theta^2(0.5-x), & 0<x<1\\
0, & \text{otherwise}
\end{cases}$$

where $-1 \leq \theta \leq 1$.

Derive the most powerful test for $H_0 : \theta = 0$ against $H_A : \theta = \theta_1$ at significance level $\alpha$.

I found the likelihood ratio to be $\frac{f(X \mid 0)}{f(X \mid \theta_1)} = \frac{1}{1+\theta_1^2(0.5-X)}$, which is small when $1+\theta_1^2(0.5-X)$ is large, i.e., when $0.5-X$ is as large (positive) as possible. Since $\theta_1^2 > 0$ for any $\theta_1 \neq 0$, this happens when $X$ is small, so the rejection region is of the form $X < c$ for some undetermined constant $c$.

I then set
$$\alpha = P(R \mid H_0) = P(X < c) = c,$$
since $X \mid H_0 \sim \mathcal{U}(0,1)$, so $c = \alpha$, meaning that the most powerful test rejects when $X < \alpha$; it doesn't depend on $\theta_1$.
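As a sanity check on the direction of the rejection region, the power of both one-sided candidate tests can be estimated by simulation. This is only a sketch: $\theta_1 = 1$, $\alpha = 0.05$, and the sample count are arbitrary choices, and the sampler is a simple rejection sampler for the density above.

```python
import random

random.seed(1)

# Arbitrary choices for this sketch: any nonzero theta1 in [-1, 1] behaves
# the same way.
ALPHA, THETA1, N = 0.05, 1.0, 200_000

def density(x, theta):
    # f(x | theta) = 1 + theta^2 * (0.5 - x) on (0, 1)
    return 1.0 + theta**2 * (0.5 - x)

def draw(theta, n):
    # Rejection sampling: the density is bounded above by 1 + theta^2 / 2.
    bound = 1.0 + theta**2 / 2.0
    xs = []
    while len(xs) < n:
        x = random.random()
        if random.random() * bound <= density(x, theta):
            xs.append(x)
    return xs

xs = draw(THETA1, N)
power_small_x = sum(x < ALPHA for x in xs) / N      # reject when X < alpha
power_large_x = sum(x > 1 - ALPHA for x in xs) / N  # reject when X > 1 - alpha
print(power_small_x, power_large_x)
```

The exact powers work out to $\alpha \pm \theta_1^2\,\alpha(1-\alpha)/2$, i.e., about $0.0738$ for $X < \alpha$ and $0.0263$ for $X > 1-\alpha$ when $\theta_1 = 1$, so only the region $X < \alpha$ has power above the level.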

Is this correct?

Best Answer

Your observation is correct. Here, I'll help you sort out what's confusing you.

  1. Inspired by the Neyman–Pearson lemma, you inferred that the most powerful test is to reject $H_0$ when $\color{blue}{T_A=X}<c_A$ (indeed, the monotonicity of the likelihood ratio statistic in $X$ allows you this inference). Your calculation based on $\mathbb P (T_A<c_A|H_0)=\alpha$ is correct, and we have $\color{blue}{c_A=\alpha}$ considering that $T_A|H_0 \sim \mathcal U (0,1)$. Let us denote this test by $\big (T_A, R_A=(-\infty,c_A) \big)$.
  2. If one wants to work directly with the likelihood ratio statistic appearing in the Neyman–Pearson lemma, i.e., $\color{blue}{T_{NP}=\frac{f(X \mid 0)}{f(X \mid \theta_1)}=\frac{1}{1+\theta_1^2(0.5-X)}}$, then $H_0$ is rejected if $T_{NP}<c_{NP}$. From $\mathbb P (T_{NP}<c_{NP}|H_0)=\alpha$ and the fact that $T_{NP}$ is strictly increasing in $X$, we obtain $\color{blue}{c_{NP}=\frac{1}{1+\theta_1^2(0.5-\alpha)}}$, which depends on $\theta_1$. Let us denote this test by $\big (T_{NP}, R_{NP}=(-\infty,c_{NP}) \big)$.

Now a source of confusion is that we have two different tests, and the question is which of the two tests $\big (T_A, R_A \big)$ and $\big (T_{NP}, R_{NP} \big)$ is more powerful. The answer is that the two tests are equivalent and have the same power. Indeed,

$$ T_A \in R_A \color{blue}{\Leftrightarrow} T_{NP} \in R_{NP},$$

which means that $H_0$ is rejected/accepted based on the test $\big (T_A, R_A \big)$ if and only if it is rejected/accepted based on the test $\big (T_{NP}, R_{NP} \big).$
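The equivalence rests only on $T_{NP}$ being strictly monotone in $X$ on $(0,1)$: thresholding $X$ and thresholding $T_{NP}$ then single out exactly the same samples. A small numerical sketch of this, with arbitrary choices $\theta_1 = 0.7$ and a few cutoffs:

```python
# Check that thresholding X and thresholding T_NP give identical decisions.
# theta1 = 0.7 and the cutoffs below are arbitrary choices for this sketch.
THETA1 = 0.7

def t_np(x):
    # Likelihood ratio statistic f(x | 0) / f(x | theta1)
    return 1.0 / (1.0 + THETA1**2 * (0.5 - x))

# t_np is strictly increasing in x, so for any cutoff c in (0, 1):
#   x < c  <=>  t_np(x) < t_np(c),   and   x > c  <=>  t_np(x) > t_np(c).
grid = [i / 1000 for i in range(1, 1000)]
for c in (0.05, 0.5, 0.95):
    assert all((x < c) == (t_np(x) < t_np(c)) for x in grid)
    assert all((x > c) == (t_np(x) > t_np(c)) for x in grid)
print("decisions agree for every cutoff tested")
```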


General case:

If the Neyman–Pearson ratio $\frac{f(x \mid \theta_1)}{f(x \mid \theta_0)}$ can be written as $H(T)$, where $T$ is a statistic and $H$ is a strictly monotone function, we can construct a simple test of the form $\big (T, R=(c,\infty) \big)$ when $H$ is increasing and of the form $\big (T, R=(-\infty,c) \big)$ when $H$ is decreasing.

Question: What is the role of $\theta_0$ and $\theta_1$? Answer: The values of $\theta_0$ and $\theta_1$ determine whether the function $H$ is increasing or decreasing, and thus the form of the rejection region $R$ depends on them. For example, if in the OP we instead had $\theta_0\neq 0$ and $\theta_1= 0$, the ratio $\frac{f(x \mid \theta_1)}{f(x \mid \theta_0)}=\frac{1}{1+\theta_0^2(0.5-x)}$ would be increasing in $x$, so the rejection region would become of the form $R=(c,\infty)$, that is, reject $H_0$ when $X>c$.
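The direction of $H$ for any pair $(\theta_0, \theta_1)$ can be checked numerically. A small sketch (the nonzero parameter values are arbitrary choices):

```python
# The ratio below is f(x | theta1) / f(x | theta0), the Neyman-Pearson
# statistic for the density f(x | theta) = 1 + theta^2 * (0.5 - x) on (0, 1).
def ratio(x, theta0, theta1):
    return (1.0 + theta1**2 * (0.5 - x)) / (1.0 + theta0**2 * (0.5 - x))

def direction(theta0, theta1):
    # Classify the monotonicity of the ratio on a grid inside (0, 1).
    xs = [i / 100 for i in range(1, 100)]
    vals = [ratio(x, theta0, theta1) for x in xs]
    if all(a < b for a, b in zip(vals, vals[1:])):
        return "increasing"
    if all(a > b for a, b in zip(vals, vals[1:])):
        return "decreasing"
    return "not monotone"

print(direction(0.0, 1.0))  # theta0 = 0, theta1 != 0  ->  decreasing
print(direction(1.0, 0.0))  # theta0 != 0, theta1 = 0  ->  increasing
```

A decreasing ratio means evidence for $H_A$ sits at small $x$ (reject for $X<c$); an increasing one points the rejection region the other way.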


The approach you used to construct the test is the more popular one, since the resulting test is very simple to work with. However, if the likelihood ratio statistic is not a monotone function of a single statistic, this approach cannot be used; for example, it fails for $X \sim N(\theta,\theta^2)$.
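For $X \sim N(\theta,\theta^2)$ the log of the ratio $f(x \mid \theta_1)/f(x \mid \theta_0)$ is a quadratic in $x$ (the two normals have different variances), so the ratio falls and then rises rather than being monotone. A quick check with the arbitrary choices $\theta_0 = 1$, $\theta_1 = 2$:

```python
import math

# Likelihood ratio f(x | theta1) / f(x | theta0) for X ~ N(theta, theta^2).
# theta0 = 1 and theta1 = 2 are arbitrary choices for this sketch.
def lr(x, th0=1.0, th1=2.0):
    def pdf(x, th):
        return math.exp(-(x - th) ** 2 / (2 * th**2)) / (abs(th) * math.sqrt(2 * math.pi))
    return pdf(x, th1) / pdf(x, th0)

xs = [i / 10 for i in range(-40, 41)]
vals = [lr(x) for x in xs]
increasing = all(a <= b for a, b in zip(vals, vals[1:]))
decreasing = all(a >= b for a, b in zip(vals, vals[1:]))
print("monotone in x:", increasing or decreasing)  # -> monotone in x: False
```

Since the ratio is not monotone in $x$, no one-sided region in $X$ reproduces the Neyman–Pearson test here.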
