[Math] Proof of Karlin-Rubin’s theorem

pr.probability, real-analysis, st.statistics

I asked this question on Math Stack Exchange, but since I did not receive a satisfactory answer there, maybe you could help me.

The Karlin–Rubin theorem states conditions under which we can find a uniformly most powerful test (UMPT) for a statistical hypothesis:

Suppose we have a family of density or mass functions $\{f(\vec{x}|\theta):\,\theta\in\Theta\}$ and we want to test $$\begin{cases} H_0:\,\theta\leq\theta_0 \\ H_A:\,\theta>\theta_0.\end{cases}$$ If the likelihood ratio is monotone in a statistic $T(\vec{x})$ (that is, for every fixed $\theta_1<\theta_2$ in $\Theta$, the ratio $\frac{f(\vec{x}|\theta_2)}{f(\vec{x}|\theta_1)}$ is a nondecreasing function of $T(\vec{x})$ on the set $\{\vec{x}:\,f(\vec{x}|\theta_2)>0\text{ or }f(\vec{x}|\theta_1)>0\}$, interpreting $c/0=\infty$ for $c>0$), then the test with critical region $\text{CR}=\{\vec{x}:\,T(\vec{x})\geq k\}$, where $k$ is chosen so that $\alpha=P(\text{CR}|\theta=\theta_0)$, is the UMPT of size $\alpha$.
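For concreteness, a standard example of a family with monotone likelihood ratio (not part of the original question) is the normal location model: if $X_1,\dots,X_n$ are i.i.d. $N(\theta,1)$, then for $\theta_1<\theta_2$ $$\frac{f(\vec{x}|\theta_2)}{f(\vec{x}|\theta_1)}=\exp\Big\{(\theta_2-\theta_1)\sum_{i=1}^n x_i-\frac{n(\theta_2^2-\theta_1^2)}{2}\Big\},$$ which is strictly increasing in $T(\vec{x})=\sum_{i=1}^n x_i$. In such cases the step quoted below from the standard proofs is immediate; the question is about what happens without strictness.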

In all the proofs I have read (for instance, on page 22 here, or in "Statistical Inference" by Casella and Berger, 2nd edition, page 391), it is (more or less) said: "we can find $k_1$ such that, if $T(\vec{x})\geq k$, then $\frac{f(\vec{x}|\theta_2)}{f(\vec{x}|\theta_1)}\geq k_1$, and if $T(\vec{x})<k$, then $\frac{f(\vec{x}|\theta_2)}{f(\vec{x}|\theta_1)}< k_1$". I would understand that statement if the likelihood ratio were strictly increasing, but what about the case in which it is constant over some range of values of $T$?

For example, if $X\sim U(0,\theta)$, the likelihood ratio is monotone in $T(\vec{x})=\max_{1\leq i\leq n}x_i$ (where $n$ is the sample size), but not strictly increasing.
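Indeed (a quick computation, not in the original post): writing $t=T(\vec{x})=\max_{1\leq i\leq n}x_i$ and assuming all $x_i\geq 0$, the joint density is $f(\vec{x}|\theta)=\theta^{-n}\mathbf{1}\{t\leq\theta\}$, so for $\theta_1<\theta_2$ $$\frac{f(\vec{x}|\theta_2)}{f(\vec{x}|\theta_1)}=\begin{cases}(\theta_1/\theta_2)^n&\text{ if }0\leq t\leq\theta_1,\\ \infty&\text{ if }\theta_1<t\leq\theta_2,\end{cases}$$ a nondecreasing step function of $t$ that is constant on each of the two pieces.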

My questions are:

  1. Is the quoted assertion true for every density or mass function whose likelihood ratio is (not necessarily strictly) monotone in $T$?

  2. And what happens in the case of the uniform distribution?

Best Answer

Even the following more general assertion is true:

Suppose that there is a statistic $T$ such that for all $\theta_0$ and $\theta_1$ with $\theta_0<\theta_1$ there is a nondecreasing function $g_{\theta_0,\theta_1}$ such that for all $x$ we have $$r_{\theta_0,\theta_1}(x):=\frac{f_{\theta_1}(x)}{f_{\theta_0}(x)}=g_{\theta_0,\theta_1}(T(x)).$$ Suppose that a test $\phi$ is such that for some real $k$ and all $x$ $$\phi(x)=\begin{cases} 1&\text{ if }T(x)>k, \\ 0&\text{ if }T(x)<k \end{cases}$$ (the test $\phi$ may be randomized, taking any value $\phi(x)$ in the interval $[0,1]$ if $x$ is such that $T(x)=k$). Then the test $\phi$ is uniformly most powerful of level $\alpha:=E_{\theta_0}\phi(X)$ for the hypotheses as in the OP.
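As an illustration of why the randomization at $T(x)=k$ can matter (a standard discrete example, not from the original answer): let $X\sim\text{Bin}(10,\theta)$ with $T(x)=x$, and test $H_0:\,\theta\leq 1/2$ against $H_A:\,\theta>1/2$. Under $\theta_0=1/2$, $P(X>8)=11/1024\approx 0.0107$ and $P(X\geq 8)=56/1024\approx 0.0547$, so no nonrandomized test of the form "reject if $x>k$" has size exactly $0.05$; taking $k=8$ and $$\phi(x)=\begin{cases}1&\text{ if }x>8,\\ \gamma&\text{ if }x=8,\\ 0&\text{ if }x<8,\end{cases}\qquad \gamma=\frac{0.05\cdot 1024-11}{45}\approx 0.893,$$ gives $E_{1/2}\phi(X)=0.05$ exactly.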

The assertion follows because, for $c:=g_{\theta_0,\theta_1}(k)$, we have the implications $$r_{\theta_0,\theta_1}(x)>c\iff g_{\theta_0,\theta_1}(T(x))>c\implies T(x)>k\implies\phi(x)=1\tag{1}$$ (so that $H_0$ is rejected by the test $\phi$) and $$r_{\theta_0,\theta_1}(x)<c\iff g_{\theta_0,\theta_1}(T(x))<c\implies T(x)<k\implies\phi(x)=0\tag{2}$$ (so that $H_0$ is not rejected by the test $\phi$), which means that $\phi$ is a Neyman–Pearson test for any $\theta_0$ and $\theta_1$ such that $\theta_0<\theta_1$.

That $g_{\theta_0,\theta_1}$ (that is, the likelihood ratio as a function of $T$) is not strictly increasing causes no problems whatsoever, because in (1) and (2) we only need the left-to-right implications.
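In particular, this answers question 2 (an explicit check, not spelled out above): for the uniform family, with $t=\max_{1\leq i\leq n}x_i$ and $\theta_0<\theta_1$, one may take $$g_{\theta_0,\theta_1}(t)=\begin{cases}(\theta_0/\theta_1)^n&\text{ if }t\leq\theta_0,\\ \infty&\text{ if }t>\theta_0,\end{cases}$$ which is nondecreasing (for $t>\theta_1$ both densities vanish, so the value of $g_{\theta_0,\theta_1}$ there is immaterial). The constancy of $g_{\theta_0,\theta_1}$ on $[0,\theta_0]$ causes no trouble: with $c=g_{\theta_0,\theta_1}(k)$, the implications (1) and (2) hold as stated, so the test rejecting when $t>k$ (equivalently, when $t\geq k$, since $P(t=k)=0$ for every $\theta$) is uniformly most powerful of its size.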
