Solved – Determining critical value of likelihood ratio test for two Poisson distributions

Tags: inference, likelihood-ratio, maximum-likelihood, r

Let $X_1,X_2$ be two independent Poisson random variables with $X_1 \sim \text{Pois}(\lambda_1)$ and $X_2 \sim \text{Pois}(\lambda_2)$. Find the likelihood ratio test for $H_0:\, \lambda_1 = \lambda_2$ vs $H_a:\, \lambda_1 \neq \lambda_2$ at the 0.05 significance level.

This is what I did:
$L(\lambda_1,\lambda_2)=\dfrac{\lambda_1^{X_1} e^{-\lambda_1}\,\lambda_2^{X_2} e^{-\lambda_2}}{X_1!\,X_2!}$

After taking partial derivatives with respect to $\lambda_1$ and $\lambda_2$ respectively, I get the maximum likelihood estimates $\hat\lambda_1=X_1$ and $\hat\lambda_2=X_2$.
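
For completeness, here is a brief sketch of that step (my working, not in the original post, done via the log-likelihood, which has the same maximizers):

$\ell(\lambda_1,\lambda_2)=X_1\log\lambda_1-\lambda_1+X_2\log\lambda_2-\lambda_2-\log(X_1!\,X_2!),\qquad \dfrac{\partial \ell}{\partial \lambda_i}=\dfrac{X_i}{\lambda_i}-1=0\ \Rightarrow\ \hat\lambda_i=X_i.$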

Then $\sup_{\theta\in \Theta}L(\lambda_1,\lambda_2)=\dfrac{X_1^{X_1} X_2^{X_2}\, e^{-(X_1+X_2)}}{X_1!\,X_2!}$.

Under $H_0$, when $\lambda_1=\lambda_2=\lambda$, I get $\hat\lambda=\dfrac{X_1+X_2}{2}$.
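
Again for completeness, a sketch of the restricted maximization (my working, not spelled out in the original post):

$\ell(\lambda)=(X_1+X_2)\log\lambda-2\lambda-\log(X_1!\,X_2!),\qquad \ell'(\lambda)=\dfrac{X_1+X_2}{\lambda}-2=0\ \Rightarrow\ \hat\lambda=\dfrac{X_1+X_2}{2}.$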

Then $\sup_{\theta\in \omega}L(\lambda_1,\lambda_2)=\dfrac{\left(\frac{X_1+X_2}{2}\right)^{X_1+X_2} e^{-(X_1+X_2)}}{X_1!\,X_2!}$.

Therefore the likelihood ratio is $\Lambda=\dfrac{\sup_{\theta\in \Theta}L}{\sup_{\theta\in \omega}L}=\dfrac{X_1^{X_1}X_2^{X_2}\, 2^{X_1+X_2}}{(X_1+X_2)^{X_1+X_2}}$.

Therefore the decision rule is: reject $H_0$ if $\Lambda > k$, where $k>1$ is chosen such that
$\sup_{\theta\in \omega} \Pr(\Lambda>k \mid \lambda_1=\lambda_2) \le 0.05$.
So I wrote the following R code to determine $k$:

poisson <- function(nsim, lam) {
    # Simulate nsim values of the LRT statistic Lambda under H0: lambda1 = lambda2 = lam
    delta <- numeric(nsim)
    for (i in 1:nsim) {
        x1 <- rpois(1, lam)
        x2 <- rpois(1, lam)
        d <- ((x1^x1) * (x2^x2) * 2^(x1 + x2)) / ((x1 + x2)^(x1 + x2))
        delta[i] <- d
    }
    delta
}

p1 <- poisson(10000, 10)   # simulate Lambda under H0 with lambda = 10
quantile(p1, 0.95)         # empirical 95th percentile = critical value k

I get

         95% 
    7.333328

So my $k = 7.33$ (I have taken $\lambda = 10$).
Is my $k$ correct?
Also, to determine $k$, does the value of $\lambda$ matter?
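
(A quick empirical check of this point, not part of the original question: since the poisson() simulator above is already written, one can simply re-run it at several rates and compare the 0.95 quantiles.)

    # Editorial sketch: does the simulated critical value depend on lambda?
    set.seed(1)
    sapply(c(2, 5, 10, 20, 50), function(lam) quantile(poisson(10000, lam), 0.95))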

Then, in order to come up with a power function,
power = Pr(reject $H_0$ when $H_0$ is false) = Pr($\Lambda > 7.33$ when $\lambda_1 \neq \lambda_2$),
I chose arbitrary $\lambda_1$ and $\lambda_2$ such that $\lambda_1 \neq \lambda_2$ and wrote the following R code:

powerCalc <- function(lambda1, lambda2, critvalue) {
    # Estimate power = Pr(Lambda > critvalue) for each pair (lambda1[i], lambda2[i])
    power <- numeric(length(lambda1))
    for (i in 1:length(lambda1)) {
        delta <- numeric(10000)
        for (j in 1:10000) {
            x1 <- rpois(1, lambda1[i])
            x2 <- rpois(1, lambda2[i])
            delta[j] <- ((x1^x1) * (x2^x2) * 2^(x1 + x2)) / ((x1 + x2)^(x1 + x2))
        }
        power[i] <- mean(delta > critvalue)   # proportion of simulations rejecting H0
    }
    power
}
lambda1 <- c(10, 15, 18, 4, 9)
lambda2 <- c(12, 13, 8, 5, 11)
f <- powerCalc(lambda1, lambda2, 7.3)

But the powers I get are:

    > f
    [1] 0.0729 0.0641 0.5150 0.0617 0.0755

Why do I get so low power values? Is my power function wrong?

Best Answer

The power really is that low when you're dealing with such small counts. If you multiplied your rates by ten, you'd have larger counts, and therefore more weight of evidence with which to reject the null hypothesis.

Examples using an exact binomial test (edit to note: my r1 and r2 are your $\lambda_1$ and $\lambda_2$):

     r1 r2        pwr
[1,] 10 12 0.04413046
[2,] 15 13 0.04211265
[3,] 18  8 0.42347262
[4,]  4  5 0.02698215
[5,]  9 11 0.04530927

      r1  r2       pwr
[1,] 100 120 0.2483590
[2,] 150 130 0.2054563
[3,] 180  80 0.9990446
[4,]  40  50 0.1562124
[5,]  90 110 0.2690279
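
For readers who want to reproduce these figures approximately, here is a minimal simulation sketch (mine, not the answerer's code; binomPower is a name I've made up). It relies on the standard conditional argument: given $X_1+X_2=n$, $X_1 \sim \text{Bin}(n,\, r_1/(r_1+r_2))$, which is $\text{Bin}(n, 1/2)$ under $H_0$, so binom.test(x1, x1 + x2, p = 0.5) gives an exact test.

    # Editorial sketch: approximate the power of the exact conditional binomial
    # test by simulation.
    binomPower <- function(r1, r2, nsim = 10000, alpha = 0.05) {
        mean(replicate(nsim, {
            x1 <- rpois(1, r1)
            x2 <- rpois(1, r2)
            # with a zero total count there is nothing to test; count as no rejection
            if (x1 + x2 == 0) FALSE
            else binom.test(x1, x1 + x2, p = 0.5)$p.value < alpha
        }))
    }
    binomPower(10, 12)    # should land near the 0.044 in the first table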

You can reproduce these power values with your own simulation, but a tweak is needed to your code because it breaks for "large" r1 and r2 owing to the way that d is calculated: terms like x1^x1 overflow to Inf in double precision, so the ratio becomes NaN. If you work with logs instead then you can reproduce these results. Specifically, change the d calculation to

    d <- x1 * slog(x1) + x2 * slog(x2) + (x1 + x2) * log(2) - (x1 + x2) * slog(x1 + x2)

where slog <- function(x) log(x + 0.0001). Note that d is now $\log\Lambda$, so it should be compared with log(critvalue) rather than critvalue. Then observe the (simulated) power with your original rates, e.g. ...

[1] 0.0712 0.0634 0.5129 0.0825 0.0728
[1] 0.0694 0.0650 0.4989 0.0572 0.0705
[1] 0.0619 0.0687 0.4907 0.0572 0.0641

and the (simulated) power for those $\lambda$ values times ten:

[1] 0.2590 0.2100 0.9999 0.1770 0.2868
[1] 0.2613 0.2160 1.0000 0.1754 0.2806
[1] 0.2615 0.2065 1.0000 0.1741 0.2758
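
Putting the tweak together, a minimal sketch of the modified power simulation (an assumed reconstruction, not the answer's actual code; powerCalcLog is my own name, and it keeps the same critical value $k \approx 7.3$, compared on the log scale):

    # Editorial sketch: the power simulation on the log scale, to avoid overflow.
    slog <- function(x) log(x + 0.0001)    # guarded log, as suggested above

    powerCalcLog <- function(lambda1, lambda2, critvalue, nsim = 10000) {
        sapply(seq_along(lambda1), function(i) {
            x1 <- rpois(nsim, lambda1[i])
            x2 <- rpois(nsim, lambda2[i])
            logd <- x1 * slog(x1) + x2 * slog(x2) +
                (x1 + x2) * log(2) - (x1 + x2) * slog(x1 + x2)
            mean(logd > log(critvalue))    # compare log(Lambda) with log(k)
        })
    }

    powerCalcLog(c(10, 15, 18, 4, 9), c(12, 13, 8, 5, 11), 7.3)            # original rates
    powerCalcLog(c(100, 150, 180, 40, 90), c(120, 130, 80, 50, 110), 7.3)  # rates times ten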