Solved – LRT for one-sided Bernoulli parameter

Tags: binomial-distribution, hypothesis-testing, likelihood-ratio, self-study

Suppose $X_1,X_2,\ldots,X_n$ are i.i.d. $\mathrm{Bernoulli}(\theta)$. We are interested in testing the hypotheses $$H_0:\theta\leq\theta_0 \quad\text{vs.}\quad H_1:\theta>\theta_0.$$ Show that the likelihood ratio test takes the form $$\text{Reject } H_0 \ \text{ if } \ \sum_{i=1}^nX_i>b$$ for some positive constant $b$.

We know that for a likelihood ratio test, if $T(X)$ is a sufficient statistic, then the LRT based on the full sample is equivalent to the LRT based on $T$, so it suffices to describe the rejection region in terms of $T$. Here $T(X)=\sum_{i=1}^nX_i$ is sufficient and follows a $\mathrm{Binomial}(n,\theta)$ distribution.

I observe that my LRT statistic has the following form:

$$\lambda(X)=\dfrac{\sup_{\theta\leq \theta_0}g(T(X)\mid\theta)}{\sup_{\theta\in[0,1]}g(T(X)\mid\theta)},$$ where $g(\cdot\mid\theta)$ is the pmf of $T(X)$: $g(y\mid\theta)=\binom{n}{y}\theta^y(1-\theta)^{n-y}$.

Also I observe, after some calculations, that $$\lambda(X)=1 \quad\text{if } T(X)\leq n\theta_0,$$ and $$\lambda(X)=\left(\dfrac{n\theta_0}{T(X)}\right)^{T(X)}\left(\dfrac{n-n\theta_0}{n-T(X)}\right)^{n-T(X)} \quad\text{if } T(X)>n\theta_0.$$
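For concreteness, here is a minimal numerical sketch of $\lambda$ as a function of $T(X)$ (the values $n=20$ and $\theta_0=0.3$ are illustrative choices, not part of the problem). It computes $\lambda$ from the restricted and unrestricted maximized log-likelihoods and confirms the two cases above.

```python
import numpy as np
from scipy.special import xlogy  # xlogy(x, y) = x*log(y), with the convention 0*log(0) = 0

def log_lik(theta, t, n):
    """Binomial log-likelihood in theta at T(X) = t (the binomial coefficient cancels in the ratio)."""
    return xlogy(t, theta) + xlogy(n - t, 1.0 - theta)

def lrt_lambda(t, n, theta0):
    """LRT statistic: sup of the likelihood over H0 (theta <= theta0) divided by the unrestricted sup."""
    t = np.asarray(t, dtype=float)
    theta_hat = t / n                           # unrestricted MLE
    theta_hat0 = np.minimum(theta_hat, theta0)  # MLE restricted to theta <= theta0
    return np.exp(log_lik(theta_hat0, t, n) - log_lik(theta_hat, t, n))

n, theta0 = 20, 0.3  # illustrative values only
t = np.arange(n + 1)
lam = lrt_lambda(t, n, theta0)

assert np.allclose(lam[t <= n * theta0], 1.0)     # lambda = 1 up to n*theta0
assert np.all(np.diff(lam[t >= n * theta0]) < 0)  # strictly decreasing past n*theta0
print(np.round(lam, 4))
```

Numerically, then, $\lambda$ equals $1$ up to $n\theta_0$ and is strictly decreasing afterwards.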

Still, I do not know how to conclude once I write $\lambda(X)<c$ for some $c\in[0,1]$. Clearly $T(X)>n\theta_0$ is the case to be considered, but the expression for $\lambda(X)$ does not obviously reduce to a condition of the form $T(X)>b$.

Best Answer

You can take the derivative of $\log\lambda$ with respect to $t=T(X)$ to get the rejection region. For $t>n\theta_0$, $$\frac{d}{dt}\log\lambda(t)=\log\frac{n\theta_0}{t}-\log\frac{n(1-\theta_0)}{n-t}=\log\frac{\theta_0(n-t)}{t(1-\theta_0)}<0,$$ so $\lambda$ is strictly decreasing in $T(X)$ on $(n\theta_0,n]$. Together with $\lambda=1$ for $T(X)\leq n\theta_0$, this means $\{\lambda(X)<c\}=\{T(X)>b\}$ for some constant $b\geq n\theta_0$. This is problem 8.3 in (1).
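For completeness, here is a small symbolic sketch of that derivative (a sympy check; the illustrative values $n=20$, $\theta_0=3/10$ and the variable names are my own choices).

```python
import sympy as sp

t, n, theta0 = sp.symbols("t n theta0", positive=True)

# log lambda(t) on the region t > n*theta0, taken from the expression derived in the question
log_lam = t * sp.log(n * theta0 / t) + (n - t) * sp.log(n * (1 - theta0) / (n - t))

# d/dt log lambda(t); combining the logs by hand gives log( theta0*(n - t) / (t*(1 - theta0)) )
dlog = sp.simplify(sp.diff(log_lam, t))
print(dlog)

# Sanity check at n = 20, theta0 = 3/10: the derivative is negative for every integer t
# strictly between n*theta0 and n, so lambda is strictly decreasing on that range.
vals = [float(dlog.subs({n: 20, theta0: sp.Rational(3, 10), t: tt})) for tt in range(7, 20)]
assert all(v < 0 for v in vals)
```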

(1) Casella, G., and Berger, R. L. (2002). Statistical inference. Duxbury Press.