Solved – Maximum Likelihood Estimators from two normal populations

Tags: maximum-likelihood, normal-distribution, self-study, variance

I've been struggling with this challenge in my notes for a while:

Suppose I have a sample of size n taken from each of:
$X_1 \sim N(a+b,\sigma^2)$ and $X_2 \sim N(a-b,\sigma^2)$ where $a, b > 0$.

The sample observations are denoted $x_{ij}$, $i = 1, 2$ and $j = 1, \ldots, n$.

What are the maximum likelihood estimators of $a$, $b$ and $\sigma^2$?

(Edit: Having now read the forum rules, I'm sorry for asking this question so broadly; I should have pointed out where my difficulty lies and asked for more targeted advice.)

My issue here is where to start: I had the idea to set $a+b = \mu_1$ and $a-b = \mu_2$, but I don't see how to then use $\hat{\mu}_1$ and $\hat{\mu}_2$ to find $\hat{a}$ and $\hat{b}$.
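(Editor's note: the reparameterization does invert cleanly. As a sketch of the missing step, ignoring the constraints $a, b > 0$:

```latex
\mu_1 = a + b, \quad \mu_2 = a - b
\quad\Longrightarrow\quad
a = \frac{\mu_1 + \mu_2}{2}, \quad b = \frac{\mu_1 - \mu_2}{2},
```

so by the invariance property of maximum likelihood, $\hat{a} = \frac{\hat{\mu}_1 + \hat{\mu}_2}{2}$ and $\hat{b} = \frac{\hat{\mu}_1 - \hat{\mu}_2}{2}$, where $\hat{\mu}_1 = \bar{x}_1$ and $\hat{\mu}_2 = \bar{x}_2$ are the usual normal-mean MLEs.)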

Best Answer

Since this could possibly be a homework problem, I'm not going to write a full solution.

  1. Step 1: Write down the joint density of the samples of $X_1$ and $X_2$. Viewed as a function of $a$, $b$, and $\sigma^2$, this is the likelihood function; take its logarithm to obtain the log-likelihood. The MLE is the set of values of $a$, $b$, and $\sigma^2$ that maximize the likelihood, and hence the log-likelihood.
  2. Step 2: Differentiate the log-likelihood with respect to $a$ and $b$ and set the derivatives to zero. We are looking for the values of $a$ and $b$ that maximize the likelihood of the observed samples, so we use the first-order conditions that hold at a maximum. This gives $\hat{a}=\dfrac{\sum_i x_{1,i}+\sum_i x_{2,i}}{2n}$ and $\hat{b}=\dfrac{\sum_i x_{1,i}-\sum_i x_{2,i}}{2n}$.
  3. Step 3: Differentiate the log-likelihood with respect to $\sigma^2$. This gives $\hat{\sigma}^2=\dfrac{\sum_i (x_{1,i}-\bar{x}_1)^2+\sum_i (x_{2,i}-\bar{x}_2)^2}{2n}$, where $\bar{x}_j=\dfrac{\sum_i x_{j,i}}{n}$.
  4. PS: As pointed out in the comments, we need to incorporate the constraints $a, b \gt 0$ into the likelihood maximization. So instead of maximizing the log-likelihood $\mathcal{LL}$ alone, we maximize $\mathcal{LL}+\dfrac{\lambda}{\sigma^2} a +\dfrac{\nu}{\sigma^2} b$, where $\lambda$ and $\nu$ are unknown multipliers. (I have scaled the $\lambda$ and $\nu$ terms by $\sigma^2$ to simplify the solution.) The new solutions are $\hat{a}=\dfrac{\sum_i x_{1,i}+\sum_i x_{2,i}-\lambda}{2n}$, $\hat{b}=\dfrac{\sum_i x_{1,i}-\sum_i x_{2,i}-\nu}{2n}$ and $\hat{\sigma}^2=\dfrac{\sum_i (x_{1,i}-(\hat{a}+\hat{b}))^2+\sum_i (x_{2,i}-(\hat{a}-\hat{b}))^2}{2n}$. If the unconstrained $\hat{a}$ or $\hat{b}$ is negative, we set it to zero by giving a non-zero value to $\lambda$ or $\nu$.
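The unconstrained estimators from Steps 2 and 3 can be checked numerically. Below is a minimal sketch in Python/NumPy on simulated data; the true parameter values and sample size are arbitrary choices for the simulation, and the check confirms that the closed-form estimates beat small perturbations in log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)

# true parameters (arbitrary values chosen for the simulation)
a, b, sigma2 = 2.0, 0.5, 1.5
n = 100_000

# one sample of size n from each population
x1 = rng.normal(a + b, np.sqrt(sigma2), n)
x2 = rng.normal(a - b, np.sqrt(sigma2), n)

# closed-form MLEs from Steps 2 and 3
a_hat = (x1.sum() + x2.sum()) / (2 * n)
b_hat = (x1.sum() - x2.sum()) / (2 * n)
s2_hat = (((x1 - x1.mean()) ** 2).sum()
          + ((x2 - x2.mean()) ** 2).sum()) / (2 * n)

def loglik(a_, b_, s2_):
    """Log-likelihood of the 2n observations, up to an additive constant."""
    rss = ((x1 - (a_ + b_)) ** 2).sum() + ((x2 - (a_ - b_)) ** 2).sum()
    return -n * np.log(s2_) - rss / (2 * s2_)

# the closed-form estimates should dominate nearby parameter values
ll_hat = loglik(a_hat, b_hat, s2_hat)
for eps in (0.01, -0.01):
    assert ll_hat >= loglik(a_hat + eps, b_hat, s2_hat)
    assert ll_hat >= loglik(a_hat, b_hat + eps, s2_hat)
    assert ll_hat >= loglik(a_hat, b_hat, s2_hat + eps)
```

With this sample size the estimates land close to the true values, and no perturbation of any single parameter improves the log-likelihood, consistent with the first-order conditions above.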