If one is familiar with the concepts of sufficiency and completeness, then this problem is not too difficult. Note that $f(x; \theta)$ is the density of a $\Gamma(2, \theta)$ random variable. The gamma distribution belongs to the exponential family, for which the theory of sufficiency and completeness gives strong results on the construction of uniformly minimum variance unbiased estimators.
The joint density of a random sample of size $n$ from this distribution is
$$
g(x_1,\ldots,x_n; \theta) = \theta^{2n} \exp\Big(-\theta \sum_{i=1}^n x_i + \sum_{i=1}^n \log x_i\Big)
$$
which again has exponential-family form.
From this we can conclude that $S_n = \sum_{i=1}^n X_i$ is a complete, sufficient statistic for $\theta$. Operationally, this means that if we can find some function $h(S_n)$ that is unbiased for $\theta$, then we know immediately via the Lehmann–Scheffé theorem that $h(S_n)$ is the unique uniformly minimum variance unbiased (UMVU) estimator.
Now, $S_n$ has distribution $\Gamma(2n, \theta)$ by standard properties of the gamma distribution. (This can be easily checked via the moment-generating function.)
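This distributional fact can also be checked by simulation. The sketch below (illustrative, not part of the original answer) draws i.i.d. $\Gamma(2, \theta)$ samples with rate $\theta$ and confirms that $S_n$ has the first two moments of a $\Gamma(2n, \theta)$ variable, namely mean $2n/\theta$ and variance $2n/\theta^2$; the values of `n`, `theta`, and `reps` are arbitrary choices.

```python
# Numerical check: S_n = sum of n i.i.d. Gamma(2, theta) draws should have
# the moments of Gamma(2n, theta) (rate parametrization).
import numpy as np

rng = np.random.default_rng(0)
n, theta, reps = 5, 1.5, 200_000

# numpy's gamma uses shape/scale, so scale = 1/theta for a rate-theta gamma.
samples = rng.gamma(shape=2, scale=1/theta, size=(reps, n))
S = samples.sum(axis=1)  # one realization of S_n per replication

# Gamma(2n, theta) has mean 2n/theta and variance 2n/theta^2.
print(S.mean(), 2*n/theta)    # empirical vs. exact mean
print(S.var(), 2*n/theta**2)  # empirical vs. exact variance
```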
Furthermore, straightforward calculus shows that
$$
\mathbb{E} S_n^{-1} = \int_0^\infty s^{-1} \frac{\theta^{2n} s^{2n - 1}e^{-\theta s}}{\Gamma(2n)} \,\mathrm{d}s = \frac{\theta}{2n - 1} \>.
$$
Hence, $h(S_n) = \frac{2n-1}{S_n}$ is unbiased for $\theta$ and must, therefore, be the UMVU estimator.
Addendum: Using the fact that $\newcommand{\e}{\mathbb{E}}\e S_n^{-2} = \frac{\theta^2}{(2n-1)(2n-2)}$, we conclude that $\mathbb{V}ar(h(S_n)) = \frac{\theta^2}{2(n-1)}$. On the other hand, the Fisher information $I(\theta)$ from a sample of size one is readily computed to be $-\e \frac{\partial^2 \log f}{\partial \theta^2} = 2 \theta^{-2}$, and so the Cramér–Rao lower bound for a sample of size $n$ is
$$
\mathrm{CRLB}(\theta) = \frac{1}{n I(\theta)} = \frac{\theta^2}{2n} \> .
$$
Hence, $h(S_n)$ does not achieve the bound, though it comes close, and indeed, achieves it asymptotically.
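The gap between the variance $\theta^2/(2(n-1))$ and the bound $\theta^2/(2n)$ can be seen numerically as well. The sketch below (illustrative parameter choices, not from the original answer) estimates the variance of $h(S_n)$ by simulation and compares it with both exact expressions.

```python
# The empirical variance of h(S_n) = (2n-1)/S_n should be near
# theta^2/(2(n-1)), which is strictly above the CRLB theta^2/(2n).
import numpy as np

rng = np.random.default_rng(2)
n, theta, reps = 6, 1.0, 500_000

S = rng.gamma(shape=2, scale=1/theta, size=(reps, n)).sum(axis=1)
h = (2*n - 1) / S
var_h = h.var()

print(var_h, theta**2 / (2*(n - 1)))  # empirical vs. exact variance
print(theta**2 / (2*n))               # the CRLB, strictly smaller
```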
However, if we reparametrize the density by taking $\beta = \theta^{-1}$ so that
$$
f(x;\beta) = \beta^{-2} x e^{-x/\beta},\quad x > 0,
$$
then the UMVU estimator for $\beta$ can be shown to be $\tilde{h}(S_n) = \frac{S_n}{2 n}$. (Just check that it's unbiased!) The variance of this estimator is $\mathbb{V}ar(\tilde{h}(S_n)) = \frac{\beta^2}{2n}$ and this coincides with the CRLB for $\beta$.
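Both claims in this paragraph can be checked by simulation. The sketch below (illustrative values of `n` and `beta`) verifies that $\tilde{h}(S_n) = S_n/(2n)$ is unbiased for $\beta$ and that its empirical variance sits at the CRLB $\beta^2/(2n)$.

```python
# In the beta = 1/theta parametrization, S_n/(2n) is unbiased for beta
# and its variance equals the CRLB beta^2/(2n).
import numpy as np

rng = np.random.default_rng(3)
n, beta, reps = 5, 0.5, 500_000

# Gamma with rate 1/beta is shape=2, scale=beta in numpy's convention.
S = rng.gamma(shape=2, scale=beta, size=(reps, n)).sum(axis=1)
h_tilde = S / (2*n)

print(h_tilde.mean(), beta)            # empirical mean vs. beta
print(h_tilde.var(), beta**2 / (2*n))  # empirical variance vs. CRLB
```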
The point of the addendum is that whether the CRLB can be achieved depends on the particular parametrization used: even when two parametrizations are related by a one-to-one correspondence, an unbiased estimator under one parametrization may achieve the Cramér–Rao lower bound while the corresponding estimator under the other does not.
Apart from the fact that it should be $m-1$ instead of $n-1$ in the right-hand denominator, your estimator for $\sigma^2$ looks fine. You can do slightly better on the variance of $\hat\mu$ (though the question didn't ask to optimize it): Consider a general convex combination
$$
\alpha\frac{X_1+\dotso+X_n}n+(1-\alpha)\frac{Y_1+\dotso+Y_m}{2m}
$$
of the individual estimators for $\mu$. The variance of this combined estimator is
$$
n\left(\frac\alpha n\right)^2\sigma^2+m\left(\frac{1-\alpha}{2m}\right)^2\sigma^2=\left(\frac{\alpha^2}n+\frac{(1-\alpha)^2}{4m}\right)\sigma^2\;,
$$
and minimizing this by setting the derivative with respect to $\alpha$ to zero leads to $\alpha=n/(n+4m)$, yielding the variance $\sigma^2/(n+4m)$. For $n=m$ the variance is $\frac15\sigma^2/n=0.2\sigma^2/n$, compared to $\frac5{16}\sigma^2/n\approx0.3\sigma^2/n$ for your estimator, and for $n$ fixed and $m\to\infty$ or vice versa, the variance of this estimator tends to zero whereas the variance of your estimator tends to a non-zero value.
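The minimization above can be checked numerically. The sketch below (illustrative values of `n`, `m`, and `sigma2`) evaluates the variance formula on a fine grid of $\alpha$ values and confirms that the minimizer and minimal variance match $n/(n+4m)$ and $\sigma^2/(n+4m)$.

```python
# Numerical check of the algebra: the variance
# (alpha^2/n + (1-alpha)^2/(4m)) sigma^2 is minimized at
# alpha = n/(n + 4m), where it equals sigma^2/(n + 4m).
import numpy as np

n, m, sigma2 = 7, 3, 2.0

def var_comb(alpha):
    return (alpha**2 / n + (1 - alpha)**2 / (4*m)) * sigma2

alphas = np.linspace(0, 1, 100_001)
best = alphas[np.argmin(var_comb(alphas))]

print(best, n / (n + 4*m))                 # grid minimizer vs. closed form
print(var_comb(best), sigma2 / (n + 4*m))  # minimal variance vs. closed form
```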
You could optimize the variance of your unbiased variance estimator in a similar way, though the calculation would be a bit more involved.
$1/(\bar{X}+1)$ is not an unbiased estimator of $p$ for the geometric$(p)$ distribution (number of failures before the first success), but $1/\left(\frac{n}{n-1}\bar{X}+1\right)$ is, for $n \ge 2$. Note that $Y:=n\bar{X}=\sum_{i=1}^n X_i$ has a negative-binomial$(p,n)$ distribution (number of failures before the $n$-th success). Then
$$
E\left[\frac{1}{\frac{n}{n-1}\bar{X}+1}\right] = E\left[\frac{n-1}{Y+n-1}\right] = \sum_{y=0}^{\infty} \frac{n-1}{y+n-1} \binom{y+n-1}{n-1} p^n(1-p)^y = p\sum_{y=0}^{\infty} \binom{y+n-2}{n-2}p^{n-1}(1-p)^y = p,
$$
as desired, since the final sum runs over the mass function of a negative-binomial$(p,n-1)$ distribution and hence equals $1$.
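The series identity above can be verified numerically by truncating the sum; the terms decay geometrically, so a few thousand suffice. A sketch (the helper name and parameter choices are illustrative):

```python
# Truncated evaluation of E[(n-1)/(Y+n-1)] with Y ~ NegBin(n, p),
# which the identity above says equals p exactly.
import math

def expected_estimate(n, p, terms=5000):
    total = 0.0
    for y in range(terms):
        # Negative-binomial(p, n) mass at y failures before the n-th success.
        pmf = math.comb(y + n - 1, n - 1) * p**n * (1 - p)**y
        total += (n - 1) / (y + n - 1) * pmf
    return total

for p in (0.2, 0.5, 0.8):
    print(p, expected_estimate(4, p))  # the two values should agree closely
```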