You can do this using simulation.
Write a function that does your test and accepts the lambdas and sample size(s) as arguments (you have a good start above).
Now for a given set of lambdas and sample size(s) run the function a bunch of times (the replicate function in R is great for that). Then the power is just the proportion of times that you reject the null hypothesis, you can use the mean function to compute the proportion and prop.test to give a confidence interval on the power.
Here is some example code:
tmpfunc1 <- function(l1, l2 = l1, n1 = 10, n2 = n1) {
  # simulate the two Poisson samples
  x1 <- rpois(n1, l1)
  x2 <- rpois(n2, l2)
  # MLEs under the alternative (separate means) and the null (common mean)
  m1 <- mean(x1)
  m2 <- mean(x2)
  m  <- mean(c(x1, x2))
  # log likelihood ratio
  ll <- sum(dpois(x1, m1, log = TRUE)) + sum(dpois(x2, m2, log = TRUE)) -
        sum(dpois(x1, m,  log = TRUE)) - sum(dpois(x2, m,  log = TRUE))
  # p-value from the asymptotic chi-squared distribution with 1 df
  pchisq(2 * ll, 1, lower.tail = FALSE)
}
# verify under null n=10
out1 <- replicate(10000, tmpfunc1(3))
mean(out1 <= 0.05)
hist(out1)
prop.test(sum(out1 <= 0.05), 10000)$conf.int
# power for l1=3, l2=3.5, n1=n2=10
out2 <- replicate(10000, tmpfunc1(3,3.5))
mean(out2 <= 0.05)
hist(out2)
# power for l1=3, l2=3.5, n1=n2=50
out3 <- replicate(10000, tmpfunc1(3,3.5,n1=50))
mean(out3 <= 0.05)
hist(out3)
My results (yours will differ with a different seed, but should be similar) showed a type I error rate (alpha) of 0.0496 (95% CI 0.0455-0.0541), which is close to the nominal 0.05; more precision can be obtained by increasing the 10000 in the replicate calls. The powers I computed were 9.86% and 28.6%. The histograms are not strictly necessary, but I like seeing the patterns.
Best Answer
As mentioned by @Nick, this is a consequence of Wilks' theorem. But note that the test statistic is only asymptotically $\chi^2$-distributed, not exactly $\chi^2$-distributed.
I am very impressed by this theorem because it holds in a very wide context. Consider a statistical model with likelihood $l(\theta \mid y)$, where $y$ is the vector of $n$ independent replicated observations from a distribution with parameter $\theta$ belonging to a submanifold $B_1$ of $\mathbb{R}^d$ with dimension $\dim(B_1)=s$. Let $B_0 \subset B_1$ be a submanifold with dimension $\dim(B_0)=m$. Imagine you are interested in testing $H_0\colon\{\theta \in B_0\}$.
The likelihood ratio is $$lr(y) = \frac{\sup_{\theta \in B_1}l(\theta \mid y)}{\sup_{\theta \in B_0}l(\theta \mid y)}. $$ Define the deviance $d(y)=2 \log \big(lr(y)\big)$. Then Wilks' theorem says that, under usual regularity assumptions, $d(y)$ is asymptotically $\chi^2$-distributed with $s-m$ degrees of freedom when $H_0$ holds true.
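As a concrete illustration (my framing, not part of the original answer), the two-sample Poisson comparison simulated above fits this setup directly; the identification of $s$ and $m$ below is mine:

```latex
% Alternative: (lambda_1, lambda_2) free, so B_1 = (0,\infty)^2 and s = 2.
% Null: lambda_1 = lambda_2, the diagonal of B_1, so m = 1.
\[
d(y) \;=\; 2 \log \frac{\sup_{\lambda_1, \lambda_2} \, l(\lambda_1, \lambda_2 \mid y)}
                       {\sup_{\lambda} \, l(\lambda, \lambda \mid y)}
\;\xrightarrow{d}\; \chi^2_{s-m} \;=\; \chi^2_1 ,
\]
```

which is why the simulation code compares $2\,\mathtt{ll}$ to a $\chi^2$ distribution with 1 degree of freedom.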
It is proven in Wilks' original paper mentioned by @Nick. I find that paper hard to read. Wilks later published a book, perhaps with an easier presentation of his theorem. A short heuristic proof is given in Williams' excellent book.