You could do a likelihood ratio test. Calculate the MLE for each data set separately:
$$ L_1 \equiv \max_{\mu_{1}, \sigma_{1}} L_{1}(\mu_{1}, \sigma_{1}) $$
$$ L_2 \equiv \max_{\mu_{2}, \sigma_{2}} L_{2}(\mu_{2}, \sigma_{2}) $$
where $L_1$ is the log-likelihood function for the first data set and $L_2$ is the log-likelihood function for the second. Then, if the two data sets are independent, the maximized log-likelihood for the full data set (i.e. the two data sets together) is $L_1 + L_2$. This is the maximized log-likelihood when the two data sets are not restricted to having the same mean and variance.
Now, to get the maximized log-likelihood under the constraint that the two populations have the same mean and the same variance, you calculate
$$ L_{0} = \max_{\mu, \sigma} L(\mu, \sigma) $$
where $L$ is the log-likelihood function for the full data set. Then, under the null hypothesis you specified in your question,
$$ \lambda = 2 \bigg( (L_1 + L_2) - L_0 \bigg) $$
has an approximate (i.e. asymptotic) $\chi^2$ distribution on 2 degrees of freedom: the unrestricted model has four free parameters $(\mu_1, \sigma_1, \mu_2, \sigma_2)$, the restricted model has two $(\mu, \sigma)$, and the degrees of freedom equal the difference. The asymptotic result requires that the null hypothesis not lie on the boundary of the parameter space (e.g. $\sigma_1 = \sigma_2 = 0$), which is clearly not a concern when you observe non-zero variance in your data. You can use that null distribution for significance testing.
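As a sketch, the procedure above can be coded directly (the data and function names below are illustrative, not from the question). For a normal sample of size $n$, the maximized log-likelihood has the closed form $-\tfrac{n}{2}\left(\log(2\pi\hat\sigma^2)+1\right)$, and on 2 degrees of freedom the $\chi^2$ survival function is simply $e^{-\lambda/2}$, so no statistics library is needed:

```python
import math

def max_loglik(xs):
    """Maximized normal log-likelihood: plug the MLEs
    (sample mean, variance with divisor n) into the log-likelihood."""
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    return -0.5 * n * (math.log(2 * math.pi * var) + 1)

def lrt_same_normal(xs, ys):
    """lambda = 2*((L1 + L2) - L0) and its chi^2 (2 df) p-value."""
    lam = 2 * ((max_loglik(xs) + max_loglik(ys)) - max_loglik(xs + ys))
    p_value = math.exp(-lam / 2)  # chi^2 survival function at 2 df
    return lam, p_value

# Made-up samples for illustration
x = [4.9, 5.1, 5.0, 5.3, 4.8, 5.2]
y = [6.1, 5.9, 6.3, 6.0, 5.8, 6.2]
lam, p = lrt_same_normal(x, y)
print(f"lambda = {lam:.3f}, p = {p:.2e}")
```

Note that $\lambda$ is always non-negative, since pooling the data can only lower the achievable maximum relative to fitting each set separately.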
Note: for normally distributed data, the MLE of the mean is the sample mean:
$$ \hat{\mu} = \frac{1}{n} \sum_{i=1}^{n} X_i $$
and the MLE of the variance is the sample variance with divisor $n$ (not the unbiased $n-1$):
$$ \hat{\sigma}^{2} = \frac{1}{n} \sum_{i=1}^{n} (X_i-\hat{\mu})^2$$
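As a quick numeric check on the divisor (with made-up data): Python's `statistics.pvariance` uses divisor $n$ and so matches the MLE above, while `statistics.variance` uses the unbiased $n-1$.

```python
import statistics

data = [2.0, 3.5, 4.0, 5.5, 5.0]  # hypothetical sample

n = len(data)
mu_hat = sum(data) / n                                  # MLE of the mean
sigma2_hat = sum((x - mu_hat) ** 2 for x in data) / n   # MLE of the variance

print(mu_hat, sigma2_hat)
print(statistics.pvariance(data))  # divisor n: matches the MLE
print(statistics.variance(data))   # divisor n-1: unbiased estimator
```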
Let $x_1,\dots,x_n\sim N(\mu_1,\sigma^2)$ and $y_1,\dots,y_m\sim N(\mu_2,\sigma^2)$, and consider testing $H_0:\sigma^2=\sigma_0^2$ against $H_a:\sigma^2=\sigma_a^2$ with $\sigma_a>\sigma_0$. With the means profiled out at their MLEs $\bar{x}$ and $\bar{y}$, the maximized likelihood under $H_0$ is
$$L_0=\left(\frac{1}{\sqrt{2\pi\sigma_0^2}}\right)^{n+m}\exp\left( \frac{-1}{2\sigma_0^2}\left( \sum_{i=1}^{n}(x_i-\bar{x})^2 + \sum_{i=1}^{m}(y_i-\bar{y})^2 \right) \right)=\left(\frac{1}{\sqrt{2\pi\sigma_0^2}}\right)^{n+m}\exp\left( \frac{-1}{2\sigma_0^2}\left( (n-1)S_1^2 + (m-1)S_2^2 \right) \right).$$
Under $H_a$ we get:
$$L_a=\left(\frac{1}{\sqrt{2\pi\sigma_a^2}}\right)^{n+m}\exp\left( \frac{-1}{2\sigma_a^2}\left( (n-1)S_1^2 + (m-1)S_2^2 \right) \right).$$
Inside the latter exponent we have the same combination of $S_1^2$ and $S_2^2$, but multiplied by $\frac{1}{2\sigma_a^2}$, which can be rewritten as $\frac{\sigma_0^2}{\sigma_a^2}\frac{1}{2\sigma_0^2}$. The likelihood ratio is then
$$\lambda=\frac{L_0}{L_a}=\left( \frac{\sigma_a}{\sigma_0} \right)^{n+m}\exp\left( -\frac{1}{2\sigma_0^2}\big( (n-1)S_1^2 + (m-1)S_2^2\big) + \frac{\sigma_0^2}{\sigma_a^2}\frac{1}{2\sigma_0^2}\big( (n-1)S_1^2 + (m-1)S_2^2\big) \right)=\left( \frac{\sigma_a}{\sigma_0} \right)^{n+m}\exp\left(-\frac{1}{2} \left( 1- \frac{\sigma_0^2}{\sigma_a^2}\right)\frac{1}{\sigma_0^2}\big( (n-1)S_1^2 + (m-1)S_2^2\big) \right).$$
Given that $\sigma_a>\sigma_0$, the leading factor is larger than 1 and $\left( 1- \frac{\sigma_0^2}{\sigma_a^2}\right)$ is positive, for all $n$ and $m$. Absorbing the constant factor $-\frac{1}{2}\left( 1- \frac{\sigma_0^2}{\sigma_a^2}\right)$, $\lambda$ is a decreasing function of $\frac{1}{\sigma_0^2}\left( (n-1)S_1^2 + (m-1)S_2^2\right)$, so rejecting for small $\lambda$ is equivalent to rejecting for large values of that quantity. Under $H_0$ it is a $\chi^2$ variable with $n+m-2$ degrees of freedom: $(n-1)S_1^2/\sigma_0^2\sim\chi^2_{n-1}$ and $(m-1)S_2^2/\sigma_0^2\sim\chi^2_{m-1}$ are independent, and independent $\chi^2$ variables add their degrees of freedom.
Ultimately, our rejection region takes the form
$$\left\{ \frac{1}{\sigma_0^2}\left( (n-1)S_1^2 + (m-1)S_2^2\right) > \chi^2_{\alpha,~n+m-2} \right\}$$
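A sketch of this variance test in Python (the data and $\sigma_0^2$ below are made up). The standard library has no $\chi^2$ quantile function, so the critical value $\chi^2_{\alpha,\,n+m-2}$ is approximated here by simulation; if SciPy is available, `scipy.stats.chi2.ppf(1 - alpha, df)` would give it directly.

```python
import random

def chi2_critical(df, alpha, reps=200_000, seed=0):
    """Approximate the upper-alpha quantile of chi^2(df) by simulating
    sums of df squared standard normals."""
    rng = random.Random(seed)
    draws = sorted(sum(rng.gauss(0, 1) ** 2 for _ in range(df))
                   for _ in range(reps))
    return draws[int((1 - alpha) * reps)]

def sample_var(v):
    """Unbiased sample variance S^2 (divisor n-1)."""
    mean = sum(v) / len(v)
    return sum((u - mean) ** 2 for u in v) / (len(v) - 1)

def variance_lrt(x, y, sigma0_sq, alpha=0.05):
    """Reject H0: sigma^2 = sigma0^2 when
    ((n-1)S1^2 + (m-1)S2^2) / sigma0^2 exceeds chi^2_{alpha, n+m-2}."""
    n, m = len(x), len(y)
    stat = ((n - 1) * sample_var(x) + (m - 1) * sample_var(y)) / sigma0_sq
    crit = chi2_critical(n + m - 2, alpha)
    return stat, crit, stat > crit

# Hypothetical samples
x = [5.2, 4.7, 6.1, 5.0, 4.4, 5.8]
y = [3.9, 5.5, 4.8, 6.2, 5.1]
stat, crit, reject = variance_lrt(x, y, sigma0_sq=0.25)
print(f"stat = {stat:.2f}, critical value ~ {crit:.2f}, reject = {reject}")
```

Monte Carlo approximation of the quantile is a pragmatic stand-in, not part of the test itself; any source of $\chi^2$ quantiles (tables, SciPy, R's `qchisq`) serves the same role.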
Best Answer
A likelihood ratio statistic is equal to: $$ \mathcal{L}(\hat{\mu}_{M}) - \mathcal{L}(\hat{\mu}_{constr}) $$ where $\mathcal{L}(\cdot)$ is the normal log-likelihood function, $\hat{\mu}_{M}$ is the maximum-likelihood estimator, and $$ \hat{\mu}_{constr} = {\arg\max}_{\{\mu \in \mathbb{R}^2: \ \mu_1\mu_2 = 0 \}} \mathcal{L}(\mu) $$ You can easily solve for the constrained estimator by computing