Apart from the fact that it should be $m-1$ instead of $n-1$ in the right-hand denominator, your estimator for $\sigma^2$ looks fine. You can do slightly better on the variance of $\hat\mu$ (though the question didn't ask to optimize it): Consider a general convex combination
$$
\alpha\frac{X_1+\dotso+X_n}n+(1-\alpha)\frac{Y_1+\dotso+Y_m}{2m}
$$
of the individual estimators for $\mu$. The variance of this combined estimator is
$$
n\left(\frac\alpha n\right)^2\sigma^2+m\left(\frac{1-\alpha}{2m}\right)^2\sigma^2=\left(\frac{\alpha^2}n+\frac{(1-\alpha)^2}{4m}\right)\sigma^2\;,
$$
and minimizing this by setting the derivative with respect to $\alpha$ to zero leads to $\alpha=n/(n+4m)$, and hence to the variance $\sigma^2/(n+4m)$. For $n=m$ this is $\frac15\sigma^2/n=0.2\,\sigma^2/n$, compared to $\frac5{16}\sigma^2/n\approx0.3\,\sigma^2/n$ for your estimator. Moreover, for $n$ fixed and $m\to\infty$ (or vice versa), the variance of this estimator tends to zero, whereas the variance of your estimator tends to a non-zero value.
You could optimize the variance of your unbiased variance estimator in a similar way, though the calculation would be a bit more involved.
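As a sanity check, here is a quick Monte Carlo sketch of the combined estimator's variance. It assumes, consistent with the variance computation above, that the $X_i$ have mean $\mu$ and variance $\sigma^2$ and the $Y_i$ have mean $2\mu$ and variance $\sigma^2$, all independent; the normal distribution is only for illustration.

```python
import random
import statistics

# Monte Carlo check of the combined estimator's variance.
# Assumed model (consistent with the variance formula above):
# X_i ~ N(mu, sigma^2), Y_i ~ N(2*mu, sigma^2), all independent.
random.seed(0)

mu, sigma = 3.0, 1.0
n, m = 5, 5
alpha = n / (n + 4 * m)      # optimal weight derived above
reps = 200_000

estimates = []
for _ in range(reps):
    xbar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
    ybar = sum(random.gauss(2 * mu, sigma) for _ in range(m)) / m
    estimates.append(alpha * xbar + (1 - alpha) * ybar / 2)

emp_var = statistics.variance(estimates)
theory = sigma**2 / (n + 4 * m)   # sigma^2/(n+4m) = 0.04 here
print(emp_var, theory)
```

With $n=m=5$ the empirical variance should land near $\sigma^2/25=0.04$, matching the $0.2\,\sigma^2/n$ figure above.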
This assumes that the $x_i$ are fixed; the model is quite different if they are not. That assumption is reasonable here, given the stated expected value and variance of the $Y_i$. For a linear relationship one usually takes the model $$Y_i=\beta_0+\beta_1 x_i+\epsilon_i,$$ where $\beta_0,\beta_1 \in \mathbb{R}$, the $x_i$ are fixed, and $\mathbb{E}(\epsilon_i)=0, \text{Var}(\epsilon_i)=\sigma^2$ for a constant $\sigma^{2}$. This matches your context, so I will assume it. Writing the random variables $Y_i$ in lower case as $y_i$, for consistency with typical regression notation, we have $$\text{Cov}(\hat{\beta}_0,\hat{\beta}_1)=\text{Cov}(\bar{y}-\hat{\beta}_1\bar{x},\hat{\beta}_1) \\=\text{Cov}(\bar{y},\hat{\beta}_1)-\text{Cov}(\hat{\beta}_1\bar{x},\hat{\beta}_1)\\=\text{Cov}(\bar{y},\hat{\beta}_1)-\bar{x}\,\text{Cov}(\hat{\beta}_1,\hat{\beta}_1)\\=\text{Cov}(\bar{y},\hat{\beta}_1)-\bar{x}\,\text{Var}(\hat{\beta}_1),$$
where the third equality comes from the fact that the $x_i$ (and hence $\bar{x}$) are fixed, and the fourth from the definitions of variance and covariance. From here, substitute the given expression for $\hat{\beta}_1$ (this expression, and the one given for $\hat{\beta}_0$, can be derived either by maximum likelihood or by least squares; the two methods give the same estimators here), and use what you know about covariances and variances to finish the calculation (some of the information you need comes from the proposition, i.e. "the following" in your box). Important hint: write $\hat{\beta}_1$ as a linear combination of the $y_i$; this is the key to completing the computation. Note that the covariance function together with the vector space of (square-integrable, mean-zero) random variables forms an inner product space, so $\text{Cov}(\cdot,\cdot)$ is linear in the first argument (and in the second, by symmetry), and thus
$$\text{Cov}\left(\sum_{i=1}^n c_i Z_i,X\right)=\sum_{i=1}^n c_i \text{Cov}(Z_i,X)$$
(this will be very useful as well) for constants $c_1,c_2,\ldots,c_n$ and any random variables $Z_1,Z_2,\ldots,Z_n,X$. Here are two more useful identities/derivations: $$\sum_{i} x_i(x_i - \bar{x})=\sum_i x_i(x_i - \bar{x}) - \bar{x} \sum_i(x_i - \bar{x})\\=\sum_i(x_i - \bar{x})(x_i-\bar{x})=\sum_i (x_i - \bar{x})^2= S_{xx},$$
and similarly, $$\sum_i x_i(y_i - \bar{y})=\sum_i x_i(y_i - \bar{y}) - \bar{x} \sum_i(y_i - \bar{y})\\=\sum_{i}(x_i - \bar{x})(y_i-\bar{y})= S_{xy},$$
since we know that $\sum_i(x_i-\bar{x})=\sum_i(y_i-\bar{y})=0$.
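These identities, and the linear-combination hint, are easy to check numerically. A minimal sketch follows; the data values are arbitrary illustrative numbers, and the weights $c_i=(x_i-\bar{x})/S_{xx}$ are the standard choice that expresses $\hat{\beta}_1=S_{xy}/S_{xx}$ as $\sum_i c_i y_i$.

```python
import random

# Numerical check of the identities above, plus the hinted
# linear-combination form of beta1_hat. Data are illustrative only.
random.seed(1)
n = 6
x = [random.uniform(0, 10) for _ in range(n)]
y = [1.5 + 0.8 * xi + random.gauss(0, 1) for xi in x]  # any y works here

xbar = sum(x) / n
ybar = sum(y) / n
S_xx = sum((xi - xbar) ** 2 for xi in x)
S_xy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))

# sum_i x_i (x_i - xbar) = S_xx   and   sum_i x_i (y_i - ybar) = S_xy
lhs_xx = sum(xi * (xi - xbar) for xi in x)
lhs_xy = sum(xi * (yi - ybar) for xi, yi in zip(x, y))

# beta1_hat = S_xy / S_xx written as a linear combination sum_i c_i y_i,
# with weights c_i = (x_i - xbar) / S_xx (the key hint above).
c = [(xi - xbar) / S_xx for xi in x]
beta1_hat = sum(ci * yi for ci, yi in zip(c, y))

print(lhs_xx, S_xx, lhs_xy, S_xy, beta1_hat, S_xy / S_xx)
```

Note also that the weights satisfy $\sum_i c_i = 0$, since $\sum_i (x_i-\bar{x})=0$; that fact is what makes the covariance computation collapse nicely.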
Additional Comment, after some thought, following an exchange of Comments with @MichaelHardy:
His answer closely parallels the usual demonstration that $E(S^2) = \sigma^2$ and is easy to follow. However, the proof below, in abbreviated notation that I hope is not too cryptic, may be more direct.
$$(n-1)S_{xy} = \sum(X_i-\bar X)(Y_i - \bar Y) = \sum X_i Y_i -n\bar X \bar Y = \sum X_i Y_i - \frac{1}{n}\sum X_i \sum Y_i.$$
Hence,
$$(n-1)E(S_{xy}) = E\left(\sum X_i Y_i\right) - \frac{1}{n}E\left(\sum X_i \sum Y_i\right)\\ = n\mu_{xy} - \frac{1}{n}[n\mu_{xy} + n(n-1)\mu_x \mu_y]\\ = (n-1)[\mu_{xy}-\mu_x\mu_y] = (n-1)\sigma_{xy},$$
so the expectation of the sample covariance $S_{xy}$ is the population covariance $\sigma_{xy} = \operatorname{Cov}(X,Y),$ as claimed.
Note that $\operatorname{E}\left(\sum X_i \sum Y_i\right)$ is a sum of $n^2$ terms: the $n$ terms with matching indices each contribute $\operatorname{E}(X_iY_i) = \mu_{xy}$, and the $n(n-1)$ terms with $i \neq j$ each contribute $\operatorname{E}(X_iY_j) = \mu_x\mu_y$, by independence of the observation pairs.
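The unbiasedness claim can also be checked by simulation. The sketch below uses an illustrative bivariate model (not from the original question): $X \sim N(0,1)$ and $Y = 0.5X + \epsilon$ with $\epsilon \sim N(0,1)$, so that $\sigma_{xy} = 0.5$.

```python
import random

# Monte Carlo check that E(S_xy) = Cov(X, Y) with the (n-1) denominator.
# Illustrative model: X ~ N(0,1), Y = 0.5*X + eps, eps ~ N(0,1),
# so Cov(X, Y) = 0.5.
random.seed(2)
n = 5
reps = 50_000
total = 0.0
for _ in range(reps):
    x = [random.gauss(0, 1) for _ in range(n)]
    y = [0.5 * xi + random.gauss(0, 1) for xi in x]
    xbar = sum(x) / n
    ybar = sum(y) / n
    s_xy = sum((xi - xbar) * (yi - ybar)
               for xi, yi in zip(x, y)) / (n - 1)
    total += s_xy

mean_s_xy = total / reps
print(mean_s_xy)  # should be close to 0.5
```

Even with $n$ as small as $5$, the average of $S_{xy}$ over many samples sits near the population covariance, which is exactly what the expectation calculation above predicts.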