**Checking the MLE:** From your specification of the problem, your log-likelihood function is:
$$\begin{equation} \begin{aligned}
\ell_{\boldsymbol{x},\boldsymbol{y}}(\theta, \lambda)
&= \sum_{i=1}^m \ln p (x_i | \lambda) + \sum_{i=1}^n \ln p (y_i | \theta, \lambda) \\[8pt]
&= \sum_{i=1}^m (\ln \lambda - \lambda x_i) + \sum_{i=1}^n (\ln \theta + \ln \lambda - \theta \lambda y_i) \\[8pt]
&= m ( \ln \lambda - \lambda \bar{x} ) + n ( \ln \theta + \ln \lambda - \theta \lambda \bar{y}).
\end{aligned} \end{equation}$$
This gives the score functions:
$$\begin{equation} \begin{aligned}
\frac{\partial \ell_{\boldsymbol{x},\boldsymbol{y}}}{\partial \theta}(\theta, \lambda)
&= n \Big( \frac{1}{\theta} - \lambda \bar{y} \Big), \\[8pt]
\frac{\partial \ell_{\boldsymbol{x},\boldsymbol{y}}}{\partial \lambda}(\theta, \lambda)
&= m \Big( \frac{1}{\lambda} - \bar{x} \Big) + n \Big( \frac{1}{\lambda} - \theta \bar{y} \Big).
\end{aligned} \end{equation}$$
Setting both partial derivatives to zero and solving the resulting score equations yields the MLEs:
$$\hat{\theta}_{m,n} = \frac{\bar{x}}{\bar{y}} \quad \quad \quad \hat{\lambda}_{m,n} = \frac{1}{\bar{x}}.$$
(Note that in the event $\bar{y} = 0$, which has probability zero for continuous data, the first score equation is strictly positive for all $\theta$, so the MLE of $\theta$ does not exist.) This confirms your calculation of the MLEs.
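As a quick sanity check, the closed-form MLEs are easy to verify by simulation. A sketch in Python/NumPy (the parameter values and sample sizes here are arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, lam = 2.0, 0.5   # arbitrary "true" parameter values for the check
m, n = 200, 300
x = rng.exponential(scale=1 / lam, size=m)            # X_i ~ Exp(rate = lambda)
y = rng.exponential(scale=1 / (theta * lam), size=n)  # Y_i ~ Exp(rate = theta * lambda)

lam_hat = 1 / x.mean()            # MLE of lambda
theta_hat = x.mean() / y.mean()   # MLE of theta
print(lam_hat, theta_hat)         # both should land near the true values
```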
**Adjusting the MLE to remove bias:** Treating the MLE as a random variable we have:
$$\hat{\theta}_{m,n} = \frac{n}{m} \cdot \frac{\dot{X}}{\dot{Y}},$$
where $\dot{X} \equiv m \bar{X} \sim \text{Gamma} (m, \lambda)$ and $\dot{Y} \equiv n \bar{Y} \sim \text{Gamma} (n, \theta \lambda)$ are independent random variables (using the rate parameterisation of the gamma distribution). From this equation, the MLE is a scaled beta-prime random variable:
$$\hat{\theta}_{m,n} \sim \theta \cdot \frac{n}{m} \cdot \text{Beta-Prime}(m, n).$$
This estimator has expected value $\mathbb{E} (\hat{\theta}_{m,n}) = \frac{n}{n-1} \cdot \theta$, which means that it has positive bias. We can correct this bias by using the bias-adjusted MLE:
$$\tilde{\theta}_{m,n} = \frac{n-1}{n} \cdot \frac{\bar{X}}{\bar{Y}} \sim \theta \cdot \frac{n-1}{m} \cdot \text{Beta-Prime}(m, n).$$
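A Monte Carlo check of the bias and its removal (a sketch; $m = n = 5$ is chosen deliberately small so the multiplicative bias $n/(n-1) = 1.25$ is clearly visible):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, lam, m, n = 2.0, 0.5, 5, 5   # small n makes the bias visible
reps = 200_000
x = rng.exponential(1 / lam, size=(reps, m))
y = rng.exponential(1 / (theta * lam), size=(reps, n))

theta_hat = x.mean(axis=1) / y.mean(axis=1)   # raw MLE, one value per replication
theta_tilde = (n - 1) / n * theta_hat         # bias-adjusted MLE

print(theta_hat.mean())    # ~ n/(n-1) * theta = 2.5
print(theta_tilde.mean())  # ~ theta = 2.0
```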
**Standard error of the adjusted MLE:** The adjusted MLE is unbiased, with variance:
$$\begin{equation} \begin{aligned}
\mathbb{V}(\tilde{\theta}_{m,n})
&= \int \limits_0^\infty \Big( \theta \cdot \frac{n-1}{m} \cdot r - \theta \Big)^2 \text{Beta-Prime} ( r | m, n) dr \\[8pt]
&= \theta^2 \cdot \frac{\Gamma(m+n)}{\Gamma(m) \Gamma(n)} \int \limits_0^\infty \Big( 1 - \frac{n-1}{m} \cdot r \Big)^2 r^{m-1} ( 1 + r )^{-m-n} dr \\[8pt]
&= \theta^2 \cdot \frac{n+m-1}{m(n-2)}.
\end{aligned} \end{equation}$$
The true standard error is $\theta \sqrt{(n+m-1)/(m(n-2))}$; since $\theta$ is unknown, we estimate it by plugging in the adjusted MLE:
$$\text{se}(\tilde{\theta}_{m,n}) = \tilde{\theta}_{m,n} \cdot \sqrt{\frac{n+m-1}{m(n-2)}}.$$
Letting $\phi \equiv m/n$ and taking the limit as $n \rightarrow \infty$ (with $\phi$ held fixed) we obtain the asymptotic approximation:
$$\text{se}(\tilde{\theta}_{m,n}) \approx \frac{\tilde{\theta}_{m,n}}{\sqrt{n-2}} \cdot \sqrt{\frac{1+\phi}{\phi}}.$$
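Both expressions are easy to check by simulation. A sketch (note that for the check we use the true $\theta$ in the formulas, whereas in practice one plugs in $\tilde{\theta}_{m,n}$):

```python
import numpy as np

rng = np.random.default_rng(2)
theta, lam, m, n = 2.0, 0.5, 20, 25
reps = 100_000
x = rng.exponential(1 / lam, size=(reps, m))
y = rng.exponential(1 / (theta * lam), size=(reps, n))
theta_tilde = (n - 1) / n * x.mean(axis=1) / y.mean(axis=1)

exact_sd = theta * np.sqrt((n + m - 1) / (m * (n - 2)))     # exact formula
phi = m / n
approx_sd = theta / np.sqrt(n - 2) * np.sqrt((1 + phi) / phi)  # asymptotic formula
print(theta_tilde.std(), exact_sd, approx_sd)  # all three should be close
```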
This gives you both exact and approximate expressions for the standard error. I hope that is helpful. (Please make sure to review my algebra to make sure I haven't made a mistake!)
Best Answer
Some simplification is possible. If $X_m$ is known, consider the transformation $Y = X/X_m$: then $Y$ is Pareto with shape $\alpha$ and location $1$, with density $f_Y(y) = \alpha y^{-(\alpha+1)} \mathbb 1 (y \ge 1)$. The transformed sample is simply $$\boldsymbol Y = (y_1, y_2, \ldots, y_n) = \boldsymbol X/X_m = (x_1/X_m, x_2/X_m, \ldots, x_n/X_m).$$ So for the purposes of estimation and efficiency we can work with $\boldsymbol Y$, or equivalently assume $X_m = 1$, because this transformation is one-to-one.
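The scale invariance is easy to see numerically. A sketch using the standard Pareto shape MLE $\hat\alpha = n / \sum_i \ln(x_i/X_m)$ (the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, x_m, n = 3.0, 5.0, 100_000   # arbitrary illustrative values
# Draw Pareto(alpha, x_m) via the inverse CDF: X = x_m * U^(-1/alpha)
x = x_m * rng.uniform(size=n) ** (-1 / alpha)
y = x / x_m   # transformed sample ~ Pareto(alpha, 1)

# The shape MLE gives the same value whether computed from x (with known
# scale x_m) or from the rescaled sample y (with scale 1):
alpha_hat_x = n / np.log(x / x_m).sum()
alpha_hat_y = n / np.log(y).sum()
print(alpha_hat_x, alpha_hat_y)   # identical, and near alpha = 3
```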
That said, the efficiency of an estimator $w(\theta)$ of $\theta$ is defined as $$\mathcal E(w(\theta)) = \frac{1/\mathcal I(\theta)}{\operatorname{Var}[w(\theta)]},$$ where $\mathcal I(\theta)$ is the Fisher information. So you need to compute the variance of each estimator.
I leave it as an exercise to show that for your distribution, the Fisher information is: $$\mathcal I(\alpha) = \frac{n}{\alpha^2}. \tag{1}$$ The exact variance of the MLE is: $$\operatorname{Var}[\hat \alpha] = \frac{(n \alpha)^2}{(n-1)^2 (n-2)}, \quad n > 2, \alpha > 1. \tag{2}$$ The asymptotic variance of the method of moments estimator via the CLT and the delta method is: $$\operatorname{Var}[\tilde \alpha] = \frac{\alpha (\alpha-1)^2 ((n-1)\alpha - 2n)}{n^2 (\alpha-2)^2}. \tag{3}$$
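Putting $(1)$–$(3)$ together, the efficiencies are simple ratios. A sketch of the comparison (the helper names here are mine, not standard):

```python
def eff_mle(n: int) -> float:
    """Efficiency of the MLE: CRLB (alpha^2 / n) over the exact variance (2).
    The alpha^2 factors cancel, so this depends only on n."""
    return (n - 1) ** 2 * (n - 2) / n ** 3

def eff_mom(n: int, alpha: float) -> float:
    """Efficiency of the method-of-moments estimator, using the asymptotic
    variance (3); requires alpha > 2 for the variance to exist."""
    crlb = alpha ** 2 / n
    var = alpha * (alpha - 1) ** 2 * ((n - 1) * alpha - 2 * n) / (n ** 2 * (alpha - 2) ** 2)
    return crlb / var

print(eff_mle(50))             # ~ 0.922
print(eff_mom(50, alpha=3.0))  # ~ 0.798: less efficient than the MLE here
```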