Solved – Maximum likelihood estimate from two random samples from Poisson distributions with means $\lambda\alpha$ and $\lambda\alpha^2$

maximum likelihood, poisson distribution, self-study

I would like to work through what seems to be a simple maximum likelihood question, which I believe can be solved by differentiating the likelihood function.

$X_1,X_2,X_3,\ldots,X_{n_1}$ are independent Poisson random variables, each with mean $\lambda\alpha$ and $Y_1,Y_2,Y_3,\ldots,Y_{n_2}$ are independent Poisson random variables, each with mean $\lambda\alpha^2$. $\lambda$ is known. How can I find the maximum likelihood estimator of $\alpha$?

$$f(x) = \frac{e^{-\lambda\alpha}(\lambda\alpha)^x}{x!}$$
$$f(y) = \frac{e^{-\lambda\alpha^2}(\lambda\alpha^2)^y}{y!}$$

I find the likelihood function has this form $$\frac{e^{-n_1\lambda\alpha}e^{-n_2\lambda\alpha^2}(\lambda\alpha)^{\sum x}(\lambda\alpha^2)^{\sum y}}{\prod x! \prod y! }$$

and the log likelihood

$$-n_1\lambda\alpha - n_2\lambda\alpha^2 + \sum x \log(\lambda\alpha) + \sum y \log(\lambda\alpha^2) - \log\prod x! - \log\prod y!$$
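As a numerical sanity check, here is a minimal R sketch of this log-likelihood built from dpois(), assuming vectors x and y hold the observed counts; evaluating it at trial values of $\alpha$ should reproduce the expression above (dpois with log=TRUE includes the factorial terms):

# Log-likelihood of alpha for observed counts x and y, via dpois().
loglik <- function(alpha, x, y, lambda) {
  sum(dpois(x, lambda * alpha, log = TRUE)) +    # X-sample contribution
    sum(dpois(y, lambda * alpha^2, log = TRUE))  # Y-sample contribution
}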

After differentiation I get

$$-n_1\lambda - 2n_2\lambda\alpha + \frac{\sum x}{\alpha} + \frac{2\sum y}{\alpha}$$

Setting this to zero and multiplying through by $\alpha$ yields an equation in $\alpha$ that is usually quite simple to solve, but in this case I get

$$2\sum{y} + \sum{x} = n_1\lambda\alpha + 2n_2\lambda\alpha^2.$$

This doesn't seem right to me!

Best Answer

It is implied the $X_i$ are independent of the $Y_j.$ Therefore the usual maximum likelihood equations apply to the $X_i$ and the $Y_j$ separately, with solutions

$$\begin{cases} \hat\lambda \hat\alpha\, n_1 = \sum_{i=1}^{n_1}X_i = x \\ \hat\lambda \hat\alpha^2 n_2 = \sum_{j=1}^{n_2}Y_j = y \end{cases}$$

Dividing the second equation by the first eliminates $\hat\lambda$, yielding

$$\hat\alpha = \frac{y/n_2}{x/n_1}\tag{*}$$

provided $x \ne 0;$ that is, assuming at least one $X$ event was observed. Note that $\lambda$ needn't be known and that the equation for $\hat\alpha$ really reduces to a linear one, not a quadratic one.
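If $\lambda$ were instead treated as fixed, as in the question, the score equation is the quadratic $2n_2\lambda\alpha^2 + n_1\lambda\alpha = \sum x + 2\sum y$; because the leading coefficient is positive and the constant term is negative, it has exactly one positive root, available from the quadratic formula. A minimal R sketch (the function name here is illustrative):

# Closed-form positive root of the score equation when lambda is known:
# 2*n2*lambda*alpha^2 + n1*lambda*alpha - (sum(x) + 2*sum(y)) = 0.
alpha.mle.fixed.lambda <- function(x, y, n1, n2, lambda) {
  a <- 2 * n2 * lambda              # coefficient of alpha^2
  b <- n1 * lambda                  # coefficient of alpha
  cc <- -(sum(x) + 2 * sum(y))      # constant term (negative when counts exist)
  (-b + sqrt(b^2 - 4 * a * cc)) / (2 * a)  # the unique positive root
}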


Simulation bears out the correctness of this solution. Since MLE is an asymptotic procedure, we don't want to test the results for small $n_1,n_2.$ This example of applying $(*)$ to 100,000 independent datasets uses $n_1=24, n_2=9$ with $\alpha=\pi$ (plotted as a gray vertical line) and $\lambda=10.$ The average estimate is plotted as a red vertical line: that the two vertical lines are nearly coincident indicates any bias is low.

[Figure: histogram of the simulated estimates $\hat\alpha$, with the true $\alpha$ (gray) and the mean estimate (red) marked as vertical lines]

This is the R code used to produce the figure. NB: in this simulation no individual estimate $\hat\alpha$ was undefined, but when the expectation of $x$ (namely, $\lambda\alpha n_1$) is small, $x$ can be zero in some simulated datasets, making $\hat\alpha$ infinite; such cases are filtered out below.

n <- c(24, 9)   # sample sizes n1 and n2
n.sim <- 1e5    # number of simulated datasets
lambda <- 10
alpha <- pi
set.seed(17)    # for reproducibility

# Each column holds one dataset: n1 Poisson(lambda*alpha) counts followed
# by n2 Poisson(lambda*alpha^2) counts.
xy <- matrix(rpois(sum(n)*n.sim, rep(c(lambda*alpha, lambda*alpha^2), n)), ncol=n.sim)
x <- colSums(xy[1:n[1], ])     # total X-count in each dataset
y <- colSums(xy[-(1:n[1]), ])  # total Y-count in each dataset

alpha.hat <- y/n[2] / (x/n[1])                   # the estimator (*)
alpha.hat <- alpha.hat[!is.infinite(alpha.hat)]  # drop undefined estimates (x == 0)
hist(alpha.hat, xlab=expression(hat(alpha)), ylab="", cex.lab=1.5,
     main="Histogram of Simulated Estimates")
abline(v=alpha, col="Gray", lwd=2)           # true alpha
abline(v=mean(alpha.hat), col="Red", lwd=2)  # average of the estimates
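
As a quick numeric complement to the figure, the bias and spread of the estimator can be summarized directly from the same simulation output:

mean(alpha.hat) - alpha   # Monte Carlo estimate of the bias (near zero)
sd(alpha.hat)             # Monte Carlo spread of the estimator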