Statistics – Uniformly Most Powerful Test for a Uniform Sample

hypothesis testing, probability distributions, statistics

Let $X_{1}, \dots, X_{n}$ be a sample from the uniform distribution $U(0,\theta)$, $\theta > 0$. Show that the test:

$\phi_{1}(x_{1},\dots,x_{n})=\begin{cases} 1 &\text{if } \max(x_{1},\dots,x_{n}) > \theta_{0} \ \text{ or } \ \max(x_{1},\dots,x_{n}) \leq \alpha^{1/n}\theta_{0},\\
0 & \text{otherwise} \end{cases}$

is the UMP (uniformly most powerful) test of size $\alpha$ for testing $H_{0}:\theta = \theta_{0}$ against $H_{1}:\theta \neq \theta_{0}$.
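As a numerical sanity check (not part of the formal argument), the test $\phi_1$ can be implemented directly and its size estimated by Monte Carlo; the values $\theta_0 = 2$, $n = 5$, $\alpha = 0.05$ below are illustrative choices, not taken from the problem:

```python
import random

def phi1(xs, theta0, alpha):
    """The two-sided test from the problem: reject (return 1) when the
    sample maximum exceeds theta0 or is at most alpha**(1/n) * theta0."""
    n = len(xs)
    m = max(xs)
    return 1 if (m > theta0 or m <= alpha ** (1 / n) * theta0) else 0

def empirical_size(theta0=2.0, n=5, alpha=0.05, reps=200_000, seed=0):
    """Monte Carlo estimate of E_{theta0}[phi1], which should be close to alpha:
    under theta0, P(max > theta0) = 0 and P(max <= alpha**(1/n)*theta0) = alpha."""
    rng = random.Random(seed)
    rejections = sum(
        phi1([rng.uniform(0, theta0) for _ in range(n)], theta0, alpha)
        for _ in range(reps)
    )
    return rejections / reps
```

Running `empirical_size()` should return a value near $\alpha = 0.05$.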


I know that, for the statistic $T$ given by $T(x_{1},\dots,x_{n})=\max(x_{1},\dots,x_{n})$, the family $U(0,\theta)$ has the MLR (monotone likelihood ratio) property in $T$. By the Karlin–Rubin theorem, this yields a UMP test for $H_{0}: \theta \leq \theta_{0}$ against $H_{1}: \theta > \theta_{0}$ of the form $\phi(x)=1$ if $T(x) > t_{0}$ and $\phi(x)=0$ if $T(x) < t_{0}$, with $t_{0}$ chosen so that $E_{\theta_{0}}[\phi(X)]=\alpha$.
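For the uniform family, that one-sided condition $E_{\theta_{0}}[\phi(X)]=\alpha$ can be solved in closed form: $P_{\theta_0}(\max > t_0) = 1 - (t_0/\theta_0)^n = \alpha$ gives $t_0 = (1-\alpha)^{1/n}\theta_0$. A minimal sketch (illustrative parameter values, not from the problem):

```python
import random

def one_sided_test(xs, theta0, alpha):
    """One-sided Karlin-Rubin test of H0: theta <= theta0 vs H1: theta > theta0.
    Reject (return 1) when the sample maximum exceeds the critical value
    t0 = (1 - alpha)**(1/n) * theta0, chosen so that
    P_{theta0}(max > t0) = 1 - (t0/theta0)**n = alpha."""
    n = len(xs)
    t0 = (1 - alpha) ** (1 / n) * theta0
    return 1 if max(xs) > t0 else 0

def one_sided_size(theta0=2.0, n=5, alpha=0.05, reps=100_000, seed=0):
    """Monte Carlo estimate of E_{theta0}[phi]; should be close to alpha."""
    rng = random.Random(seed)
    hits = sum(
        one_sided_test([rng.uniform(0, theta0) for _ in range(n)], theta0, alpha)
        for _ in range(reps)
    )
    return hits / reps
```

`one_sided_size()` should come out near $\alpha = 0.05$.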

However, the Karlin–Rubin theorem does not directly give a UMP test of the form specified in the problem. What would be a way to approach this? I'm completely lost.

Thanks for the help!

Best Answer

  • Given $\theta$, the probability that $\max(X_{1},\dots,X_{n}) \le m$ is $\left(\frac{m}{\theta}\right)^n$ when $0 \le m \le \theta$, so the density of the maximum is $n\frac{m^{n-1}}{\theta^n} I_{[0 \le m \le \theta]}$
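That CDF formula $\left(\frac{m}{\theta}\right)^n$ is easy to confirm by simulation; the parameter values below ($\theta = 3$, $n = 4$) are arbitrary illustrative choices:

```python
import random

def max_cdf(m, theta, n):
    """Exact CDF of the maximum of n iid U(0, theta) draws, for 0 <= m <= theta."""
    return (m / theta) ** n

def empirical_cdf(m, theta=3.0, n=4, reps=100_000, seed=1):
    """Monte Carlo estimate of P(max <= m) under U(0, theta)."""
    rng = random.Random(seed)
    hits = sum(
        max(rng.uniform(0, theta) for _ in range(n)) <= m
        for _ in range(reps)
    )
    return hits / reps
```

For example, `empirical_cdf(1.5)` should be close to `max_cdf(1.5, 3.0, 4)` $= 0.5^4 = 0.0625$.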

  • So the likelihood function for $\theta$ given $\max(x_{1},\dots,x_{n})$ is proportional to $L(\theta) = \frac{1}{\theta^n} I_{[ \theta\ge \max(x_{1},\dots,x_{n})] }$, which depends on the observed maximum only through the indicator function

  • $L(\theta)=0$ when $\theta\lt \max(x_{1},\dots,x_{n})$, while $L(\theta)$ is a decreasing function of $\theta$ when $\max(x_{1},\dots,x_{n}) \le \theta\lt \infty$. So, in the spirit of the Karlin–Rubin theorem, you reject $H_0$ either when $\max(x_{1},\dots,x_{n}) \gt \theta_0$ or when $\max(x_{1},\dots,x_{n}) \le m_0$, where $m_0$ is chosen so that $\Pr(\max(X_{1},\dots,X_{n}) \le m_0 \mid \theta_0) = \alpha$, i.e. $\left(\frac{m_0}{\theta_0}\right)^n = \alpha$, giving $m_0 = \alpha^{1/n}\theta_0$

  • This makes the rejection regions $\theta_0 \lt \max(x_{1},\dots,x_{n})$ or $\theta_0 \ge \alpha^{-1/n}\max(x_{1},\dots,x_{n})$. If you prefer, you can express these as $\max(x_{1},\dots,x_{n}) \gt \theta_0$ or $\max(x_{1},\dots,x_{n}) \le \alpha^{1/n} \theta_0$, which is exactly the test $\phi_1$ in the problem
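As a final check (a sketch, with illustrative values $\theta_0 = 2$, $n = 5$, $\alpha = 0.05$), the power of this two-sided test can be estimated by Monte Carlo for values of $\theta$ away from $\theta_0$; it should exceed $\alpha$ on both sides:

```python
import random

def phi1_reject(m, n, theta0, alpha):
    """Rejection rule of the two-sided test in terms of the observed maximum m."""
    return m > theta0 or m <= alpha ** (1 / n) * theta0

def empirical_power(theta, theta0=2.0, n=5, alpha=0.05, reps=100_000, seed=2):
    """Monte Carlo rejection probability when the data come from U(0, theta)."""
    rng = random.Random(seed)
    rejections = sum(
        phi1_reject(max(rng.uniform(0, theta) for _ in range(n)), n, theta0, alpha)
        for _ in range(reps)
    )
    return rejections / reps
```

For instance, `empirical_power(1.5)` and `empirical_power(3.0)` should both clearly exceed $\alpha = 0.05$ (for $\theta > \theta_0$ the region $\max > \theta_0$ fires; for $\theta < \theta_0$ the region $\max \le \alpha^{1/n}\theta_0$ becomes much more likely).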