Solved – UMP test for $U(0,\theta)$ (simple vs. simple hypothesis)

hypothesis-testing, neyman-pearson-lemma, self-study

Let $X_1,\dots,X_n$ be iid $U(0,\theta)$. Find the UMP test of $H_0: \theta = \theta_0$ versus $H_1: \theta = \theta_1$, where $\theta_1 < \theta_0$. Obtain the power of the test.

My attempt:

We know that $X_{(n)}$ is sufficient for $\theta$ and its density is
$$f(x;\theta)=\frac{nx^{n-1}}{\theta^n}I_{(0, \theta)}(x)$$

So, by the Neyman-Pearson lemma, we must have a critical region of the form

$$\left\{ x : \frac{I_{(0,\theta_0)}(x)}{I_{(0,\theta_1)}(x)} \cdot \frac{\theta_1^n}{\theta_0^n} \leqslant c \right\}$$

for some $0 < c < 1$.

But I can't write it in a better form. What should I do now?

Thanks in advance!

Best Answer

The likelihood-ratio (LR) test is not terribly useful in this situation. Your test can be simplified from the critical region you specified by considering the possible regions in which the sample maximum can fall. From the ordering in your critical region, it is clear that the p-value function for your test is:

$$p(\boldsymbol{x}) = \begin{cases} \text{undefined} & & & \text{for } \theta_0 < x_{(n)}, \\ 1 & & & \text{for } \theta_1 < x_{(n)} \leqslant \theta_0, \\ (\theta_1 / \theta_0)^n & & & \text{for } 0 \leqslant x_{(n)} \leqslant \theta_1. \\ \end{cases}$$

(In the case where $\theta_0 < x_{(n)}$ both hypotheses are falsified by the data, and your LR statistic is undefined, leading to an undefined p-value.)

We can see that, for any significance level $\alpha < (\theta_1 / \theta_0)^n$, the likelihood-ratio test accepts the null hypothesis for all possible observed outcomes (and is trivially UMP). For any significance level $\alpha \geqslant (\theta_1 / \theta_0)^n$, the test rejects the null if and only if $x_{(n)} \leqslant \theta_1$ (and it is again trivially UMP).
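As a sanity check on these claims, here is a minimal Monte Carlo sketch (with illustrative parameter values $n = 5$, $\theta_0 = 2$, $\theta_1 = 1.5$, which are my own choices, not from the question) estimating the size and power of the test that rejects if and only if $x_{(n)} \leqslant \theta_1$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta0, theta1 = 5, 2.0, 1.5   # illustrative values (assumed, not from the question)
reps = 200_000

# Size: probability of rejection under H0 (theta = theta0)
max0 = rng.uniform(0, theta0, size=(reps, n)).max(axis=1)
size = np.mean(max0 <= theta1)    # should be close to (theta1/theta0)^n

# Power: probability of rejection under H1 (theta = theta1)
max1 = rng.uniform(0, theta1, size=(reps, n)).max(axis=1)
power = np.mean(max1 <= theta1)   # exactly 1: the maximum cannot exceed theta1

print(size, (theta1 / theta0)**n, power)
```

The simulated size should be near $(\theta_1/\theta_0)^n = 0.75^5 \approx 0.237$, and the power is exactly $1$, since under $H_1$ the sample maximum can never exceed $\theta_1$.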

The problem with the LR test in this situation is that the LR is either zero or one, and does not have any gradations inside the range $0 \leqslant x_{(n)} \leqslant \theta_1$. This leads to a test with a binary p-value.


A better test to apply here (which does not satisfy the conditions of the Neyman-Pearson lemma, but is also UMP) is to impose an additional evidentiary ordering within the range $0 \leqslant x_{(n)} \leqslant \theta_1$, so that smaller values of $x_{(n)}$ are considered to be greater evidence for the alternative hypothesis. If we add this additional ordering we obtain the smoother p-value function:

$$p(\boldsymbol{x}) = \begin{cases} \text{undefined} & & & \text{for } \theta_0 < x_{(n)}, \\ 1 & & & \text{for } \theta_1 < x_{(n)} \leqslant \theta_0, \\ (x_{(n)} / \theta_0)^n & & & \text{for } 0 \leqslant x_{(n)} \leqslant \theta_1. \\ \end{cases}$$

This latter test has the benefit of avoiding a binary p-value while maintaining the UMP property (again trivially). Intuitively, it encodes the idea that a smaller observed maximum is stronger evidence for a smaller upper bound in the sampling distribution.
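To connect this back to the power requested in the question: for any level $\alpha \leqslant (\theta_1/\theta_0)^n$, the smoother test rejects when $(x_{(n)}/\theta_0)^n \leqslant \alpha$, i.e. when $x_{(n)} \leqslant \theta_0 \alpha^{1/n}$, and its power at $\theta_1$ is $P(X_{(n)} \leqslant \theta_0 \alpha^{1/n} \mid \theta_1) = \alpha (\theta_0/\theta_1)^n$. A simulation sketch of this calculation (again with my own illustrative parameter values):

```python
import numpy as np

rng = np.random.default_rng(1)
n, theta0, theta1 = 5, 2.0, 1.5    # illustrative values (assumed, not from the question)
alpha = 0.05                       # any level at or below (theta1/theta0)^n ~ 0.237
cutoff = theta0 * alpha**(1 / n)   # reject when x_(n) <= cutoff

reps = 200_000
max0 = rng.uniform(0, theta0, size=(reps, n)).max(axis=1)
max1 = rng.uniform(0, theta1, size=(reps, n)).max(axis=1)

size = np.mean(max0 <= cutoff)     # simulated size, close to alpha
power = np.mean(max1 <= cutoff)    # simulated power, close to alpha * (theta0/theta1)^n

print(size, power, alpha * (theta0 / theta1)**n)
```

With these values the test has exact size $\alpha = 0.05$ and power $\alpha(\theta_0/\theta_1)^n \approx 0.211$, consistent with the smoother p-value function above.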