[Math] Maximum likelihood estimator of the difference between two normal means and minimising its variance

maximum-likelihood, normal-distribution, probability-theory, variance

A random sample of size $n_1$ is to be drawn from a normal population with mean $\mu_1$ and variance $\sigma^2_1$.

A second random sample of size $n_2$ is to be drawn from a normal
population with mean $\mu_2$ and variance $\sigma^2_2$. The two samples are independent.

What is the maximum likelihood estimator of $\alpha = \mu_1 - \mu_2$?

Assuming that the total sample size $n = n_1 + n_2$ is fixed, how should the $n$ observations be divided between the two populations in order to minimise the variance of $\hat\alpha$?

I know how to find the MLEs of $\mu_1$ and $\mu_2$, but I don't know how to use these to find the MLE of $\alpha$ for the first part of this question. I don't even know where to start on minimising the variance.

Best Answer

To be strict, reparametrise so that the parameters are $\alpha, \sigma_1, \mu_2, \sigma_2$, with $\mu_1 = \alpha + \mu_2$. Up to terms not involving the means, the negative log-likelihood is:

$\sum_1^{n_1}{\frac{(X_i-\mu_2-\alpha)^2}{2\sigma_1^2}}+\sum_1^{n_2}{\frac{(Y_i-\mu_2)^2}{2\sigma_2^2}}$

Taking first-order conditions with respect to $\alpha$ and $\mu_2$ gives:

$-\sum_1^{n_1}{\frac{X_i-\mu_2-\alpha}{\sigma_1^2}}=0 \rightarrow \sum_1^{n_1}({X_i-\mu_2-\alpha})=0$

$-\sum_1^{n_1}{\frac{X_i-\mu_2-\alpha}{\sigma_1^2}}-\sum_1^{n_2}{\frac{Y_i-\mu_2}{\sigma_2^2}}=0 \rightarrow \sum_1^{n_2}({Y_i-\mu_2})=0$

So $\hat\mu_2=\bar{Y},\hat\alpha=\bar{X}-\bar{Y} $
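As a quick numerical sanity check of this result, here is a short simulation (with hypothetical parameter values chosen just for illustration) showing that $\hat\alpha = \bar{X} - \bar{Y}$ is centred on the true $\alpha$ and has variance $\sigma_1^2/n_1 + \sigma_2^2/n_2$:

```python
import numpy as np

# Hypothetical parameter values for illustration.
mu1, mu2 = 5.0, 3.0          # true means, so alpha = mu1 - mu2 = 2.0
sigma1, sigma2 = 2.0, 1.0    # true standard deviations
n1, n2 = 400, 400            # sample sizes

rng = np.random.default_rng(0)
reps = 2000
alpha_hats = np.empty(reps)
for r in range(reps):
    X = rng.normal(mu1, sigma1, n1)
    Y = rng.normal(mu2, sigma2, n2)
    alpha_hats[r] = X.mean() - Y.mean()  # MLE: alpha_hat = Xbar - Ybar

print(alpha_hats.mean())  # close to alpha = 2.0
print(alpha_hats.var())   # close to sigma1^2/n1 + sigma2^2/n2 = 0.0125
```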

By independence of the two samples, $\operatorname{var}(\hat\alpha)=\operatorname{var}(\bar{X}-\bar{Y})=\operatorname{var}(\bar{X})+\operatorname{var}(\bar{Y})=\frac{\sigma_1^2}{n_1}+\frac{\sigma_2^2}{n_2}$. If $\sigma_1, \sigma_2$ are known, this is easy to minimise: with $n_2 = n - n_1$, setting the derivative of $\frac{\sigma_1^2}{n_1}+\frac{\sigma_2^2}{n-n_1}$ with respect to $n_1$ to zero gives $\frac{\sigma_1^2}{n_1^2}=\frac{\sigma_2^2}{n_2^2}$, i.e. $n_1/n_2 = \sigma_1/\sigma_2$, so $n_1 = \frac{n\sigma_1}{\sigma_1+\sigma_2}$. Otherwise use the t-distribution in my comments.
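A brute-force check of the allocation that minimises $\sigma_1^2/n_1 + \sigma_2^2/(n-n_1)$, again with hypothetical values of $\sigma_1, \sigma_2, n$ chosen so the optimum lands on an integer:

```python
# Hypothetical values: sigma1/sigma2 = 2, so the optimal split is
# n1 = n * sigma1 / (sigma1 + sigma2) = 90 * 2/3 = 60.
sigma1, sigma2 = 2.0, 1.0
n = 90

def var_alpha_hat(n1):
    """Variance of alpha_hat = Xbar - Ybar for a given split n1, n2 = n - n1."""
    n2 = n - n1
    return sigma1**2 / n1 + sigma2**2 / n2

# Search all feasible integer splits for the minimiser.
best_n1 = min(range(1, n), key=var_alpha_hat)
print(best_n1)  # 60, matching n1/n2 = sigma1/sigma2
```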