[Math] If $X_i\sim U(0,\theta)$, then there exists a UMP test for $H_0:\theta=\theta_0$ vs $H_1:\theta>\theta_0$

hypothesis-testing, probability-distributions, statistical-inference, statistics, uniform-distribution

Let $X_{1},\dots,X_{n} \sim \mathrm{Unif}(0,\theta)$ be independent. Show that there exists a UMP size-$\alpha$ test for testing $H_{0}:\theta=\theta_{0}$ vs $H_{1}:\theta>\theta_{0}$.

My attempt:

I attempted to show that the family has a monotone (non-decreasing) likelihood ratio by computing $$\frac{f(X_{1},\dots, X_{n}\mid\theta_{1})}{f(X_{1},\dots, X_{n}\mid\theta_{2})}$$

This computation gave me: $$\frac{f(X_{1},\dots, X_{n}\mid\theta_{1})}{f(X_{1},\dots ,X_{n}\mid\theta_{2})}=\frac{\theta_{2}^{n}I(X_{(n)}<\theta_{1})}{\theta_{1}^{n}I(X_{(n)}<\theta_{2})}$$ This already confuses me because I am not familiar with what happens when dividing indicator functions. Moreover, this does not seem to be a non-decreasing function of the test statistic $t(x)=X_{(n)}$, which, according to my notes, means we cannot apply the Karlin–Rubin theorem to find a UMP test. Since I am stuck here, I can neither prove that a UMP size-$\alpha$ test exists nor find its form.

Question: What is going wrong in my approach above and how can I solve this question?

Thanks!

Best Answer

Just to elaborate on the division of indicator functions:

To test $H_0:\theta=\theta_0$ against the simple alternative $H_1:\theta=\theta_1$ (with $\theta_1>\theta_0$) via the Neyman–Pearson lemma, note that the likelihood ratio $\lambda$ is of the form

\begin{align}
\lambda(x_1,\ldots,x_n)
&=\frac{f_{H_1}(x_1,\ldots,x_n)}{f_{H_0}(x_1,\ldots,x_n)}
\\&=\left(\frac{\theta_0}{\theta_1}\right)^n\frac{\mathbf1_{\{x_{(n)}<\theta_1\}}}{\mathbf1_{\{x_{(n)}<\theta_0\}}}
\\&=\begin{cases}\left(\dfrac{\theta_0}{\theta_1}\right)^n, &\text{ if }0<x_{(n)}<\theta_0\\ \,\,\,\infty, &\text{ if }\theta_0< x_{(n)}<\theta_1\end{cases}
\end{align}

From this case analysis it follows that $\lambda$ is a monotone non-decreasing function of $x_{(n)}$: it is constant at $(\theta_0/\theta_1)^n$ for $x_{(n)}<\theta_0$ and jumps to $\infty$ beyond $\theta_0$.

And by the N-P lemma we know that an MP test of size $\alpha$ is given by $$\varphi(x_1,\ldots,x_n)=\mathbf1_{\{\lambda(x_1,\ldots,x_n)>k\}},$$ where $k$ is chosen so that $$E_{H_0}\varphi(X_1,\ldots,X_n)=\alpha.$$
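To make the choice of $k$ concrete (a sketch of the step left to the reader): since $\lambda$ is non-decreasing in $x_{(n)}$, rejecting for large $\lambda$ is the same as rejecting when $x_{(n)}$ exceeds some cutoff $c$. Using $P_{\theta_0}(X_{(n)}\le c)=(c/\theta_0)^n$ for $0\le c\le\theta_0$, the size condition pins down $c$:

```latex
E_{H_0}\varphi(X_1,\ldots,X_n)
  = P_{\theta_0}\!\left(X_{(n)} > c\right)
  = 1-\left(\frac{c}{\theta_0}\right)^{\!n} = \alpha
\quad\Longrightarrow\quad
c = \theta_0\,(1-\alpha)^{1/n}.
```

Note that this test does not involve the particular $\theta_1>\theta_0$, so the same test is MP against every alternative in $H_1$, which is exactly what makes it UMP.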

Now you can proceed with your proof.
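As a quick sanity check (not part of the proof), a short Monte Carlo simulation can confirm that the test "reject $H_0$ when $X_{(n)} > \theta_0(1-\alpha)^{1/n}$" has empirical size close to $\alpha$ under $H_0$ and a much higher rejection rate under an alternative. The values of `n`, `theta0`, and `alpha` below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10          # sample size
theta0 = 1.0    # null value of theta
alpha = 0.05    # desired size of the test
reps = 100_000  # Monte Carlo replications

# Critical value: reject H0 when X_(n) > c, where
# P_theta0(X_(n) > c) = 1 - (c/theta0)**n = alpha.
c = theta0 * (1 - alpha) ** (1 / n)

def rejection_rate(theta):
    """Fraction of Unif(0, theta) samples whose maximum exceeds c."""
    x_max = rng.uniform(0, theta, size=(reps, n)).max(axis=1)
    return (x_max > c).mean()

size = rejection_rate(theta0)          # should be close to alpha
power = rejection_rate(1.5 * theta0)   # should be much larger than alpha

print(f"empirical size  = {size:.4f} (target {alpha})")
print(f"empirical power = {power:.4f}")
```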