Hypothesis Testing – How to Perform the Most Powerful Lower Tail Test for Uniform Distribution

Tags: hypothesis-testing, mathematical-statistics, self-study, statistical-power, uniform-distribution

Problem Statement: Let $Y_1, Y_2,\dots,Y_n$ denote a random sample from a uniform
distribution over the interval $(0,\theta).$ Find a most powerful $\alpha$-level test for testing $H_0:\theta=\theta_0$ against $H_a:\theta=\theta_a,$ where $\theta_a<\theta_0.$

First Note: This is Exercise 10.85a in Mathematical Statistics with Applications, 5th Ed., by Wackerly, Mendenhall, and Scheaffer.

Second Note: This question has been asked several times before, notably here (not the same test direction as mine, which is important), here (also the reversed direction), and here (same direction as mine, but answered using the LRT, which I have not yet reached in the book). So why clutter up Stats.SE with another question? For two reasons:

  1. No solution in any of those links has enough steps worked out for my understanding. In particular, expressions such as $\sqrt[n]{\alpha}$ or $\sqrt[n]{1-\alpha}$ appear without much explanation. I have interacted with a number of the solution authors, but my thick skull simply hasn't been able to grasp what's really going on. I know this is a self-study question, but I have spent literally days on it, and I need step-by-step help, all spelled out.
  2. I find the $\phi$ notation clutters things up considerably, and I would rather have a solution that does not use it.

My Work So Far: I use the notation $Y_{(i)}$ for the $i$th order statistic. The underlying probability density function is
$$f(y|\theta)=
\begin{cases}
\dfrac1\theta,&0<y<\theta\\
0,&\text{elsewhere}.
\end{cases}
$$

Hence, the likelihood function is
$$L(\theta)=
\begin{cases}
\dfrac{1}{\theta^n},&0<Y_{(1)}\le Y_{(n)}<\theta\\
0,&\text{elsewhere}.
\end{cases}$$
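
To spell out where that piecewise form comes from: since the $Y_i$ are independent,
$$L(\theta)=\prod_{i=1}^n f(y_i\mid\theta)=\frac{1}{\theta^n}\quad\text{exactly when } 0<y_i<\theta \text{ for every } i,$$
and requiring $0<y_i<\theta$ for all $i$ is the same as requiring $0<Y_{(1)}\le Y_{(n)}<\theta.$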

Then we have
\begin{align*}
L(\theta_0)&=
\begin{cases}
\dfrac{1}{\theta_0^n},&0<Y_{(1)}\le Y_{(n)}<\theta_0\\
0,&\text{elsewhere},
\end{cases}\\
L(\theta_a)
&=\begin{cases}
\dfrac{1}{\theta_a^n},&0<Y_{(1)}\le Y_{(n)}<\theta_a\\
0,&\text{elsewhere}.
\end{cases}
\end{align*}

We recall the assumption that $\theta_a<\theta_0.$ It follows that if
$L(\theta_a)\not=0,$ then $L(\theta_0)\not=0.$ Hence,
$$\frac{L(\theta_0)}{L(\theta_a)}=
\begin{cases}
\dfrac{\theta_a^n}{\theta_0^n},&0<Y_{(1)}\le Y_{(n)}<\theta_a<\theta_0\\
\text{undefined},&\text{elsewhere.}
\end{cases}
$$

From this, we see that the Neyman-Pearson inequality can only be satisfied if
$Y_{(n)}<\theta_a.$ That is, we would reject $H_0$ in favor of the alternative
hypothesis if $Y_{(n)}<\theta_a,$ and fail to reject $H_0$ if
$\theta_a<Y_{(n)}.$
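
Written out, the Neyman-Pearson comparison I am using is
$$\frac{L(\theta_0)}{L(\theta_a)}=\left(\frac{\theta_a}{\theta_0}\right)^n<k$$
for some constant $k$ chosen to set the level, and this inequality can only be evaluated, let alone satisfied, on the set where $L(\theta_a)\ne 0,$ that is, where $Y_{(n)}<\theta_a.$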

My Questions:

  1. Is my reasoning correct so far? If not, why not?
  2. Is this an $\alpha$-level test? If not, why not and how can I adjust it (please spell this out in detail!) so that it is an $\alpha$-level test? What is the general principle being used to convert it to an $\alpha$-level test?
  3. I note that $Y_{(n)}$ is a biased estimator for $\theta,$ while $(n+1)Y_{(n)}/n$ is unbiased. Does this matter at all in this question? Why or why not?
  4. If $\theta_a<Y_{(n)}<\theta_0,$ doesn't $L(\theta_a)=0?$ And so the Neyman-Pearson fraction would be undefined, correct?
  5. If we were to change the problem so that $\theta_a>\theta_0,$ what would change in the solution?

Best Answer

  1. Your reasoning seems right to me.
  2. Rejecting simply if $Y_{(n)}<\theta_a$ is not, in general, an $\alpha$-level test. For that we would need to look at the sampling distribution of the sample maximum. Under $H_0$ the maximum of $n$ independent uniform random variables on $(0,\theta_0)$ has CDF

$$F_{Y_{(n)}}(y_{(n)})=P(Y_1\le y_{(n)},\dots,Y_n\le y_{(n)})=\Big(\frac{y_{(n)}}{\theta_0}\Big)^n,\qquad 0\le y_{(n)}\le\theta_0.$$

For an $\alpha$-level test we can then reject $H_0$ when $y_{(n)}<\theta_a$ and $\Big(\frac{y_{(n)}}{\theta_0}\Big)^n\le \alpha$. The second condition is equivalent to $y_{(n)}\le\theta_0\sqrt[n]{\alpha}$, which is where the $\sqrt[n]{\alpha}$ expressions in your references come from (a simulation sketch checking this rule follows the list).

  3. The unbiased estimator is not needed to construct a test, but it is useful to have.

  4. I believe so, but then there is nothing left to test, since $\theta_a<y_{(n)}$ is irrefutable evidence that $H_a$ is false.

  5. If we change the problem so that $\theta_a>\theta_0$, then we would reject $H_0$ if $y_{(n)}>\theta_0$, since this is irrefutable evidence that $\theta>\theta_0$. If $\theta_a>\theta_0>y_{(n)}$, I'm not sure there is an obvious way to construct an $\alpha$-level test; we might be forced to use the larger value as the null value and the smaller value as the alternative. I haven't yet read your references, so perhaps they have a different answer for this.
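
If it helps, here is a quick simulation sketch for point 2 (the specific values of $n,$ $\theta_0,$ $\theta_a,$ and $\alpha$ are arbitrary choices of mine for illustration; with them, $\theta_0\sqrt[n]{\alpha}<\theta_a,$ so the size should come out close to $\alpha$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values only (my own choices, not from the problem statement).
n, theta0, theta_a, alpha = 5, 2.0, 1.5, 0.05
reps = 200_000

def reject(y_max):
    # Rule from point 2: reject H0 when y_(n) < theta_a and (y_(n)/theta0)^n <= alpha;
    # the second condition is the same as y_(n) <= theta0 * alpha**(1/n).
    return (y_max < theta_a) & ((y_max / theta0) ** n <= alpha)

# Size under H0 (theta = theta0): should be close to alpha.
y_max_h0 = rng.uniform(0.0, theta0, size=(reps, n)).max(axis=1)
print("estimated size under H0:", reject(y_max_h0).mean())

# Power under Ha (theta = theta_a < theta0).
y_max_ha = rng.uniform(0.0, theta_a, size=(reps, n)).max(axis=1)
print("estimated power under Ha:", reject(y_max_ha).mean())
```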

Let me know if I have made any mistakes.
