This may be considered... cheating, but the OLS estimator is a MoM estimator. Consider a standard linear regression specification with normal errors (and $K$ stochastic regressors, so magnitudes are conditional on the regressor matrix), and a sample of size $n$. Denote by $s^2$ the OLS estimator of the variance $\sigma^2$ of the error term. It is unbiased, so
$$ MSE(s^2) = \operatorname {Var}(s^2) = \frac {2\sigma^4}{n-K} $$
Consider now the MLE of $\sigma^2$. It is
$$\hat \sigma^2_{ML} = \frac {n-K}{n}s^2$$
It is biased. Its MSE is
$$MSE (\hat \sigma^2_{ML}) = \operatorname {Var}(\hat \sigma^2_{ML}) + \Big[E(\hat \sigma^2_{ML})-\sigma^2\Big]^2$$
Expressing the MLE in terms of the OLS estimator and using the expression for the variance of $s^2$, we obtain
$$MSE (\hat \sigma^2_{ML}) = \left(\frac {n-K}{n}\right)^2\frac {2\sigma^4}{n-K} + \left(\frac {K}{n}\right)^2\sigma^4$$
$$\Rightarrow MSE (\hat \sigma^2_{ML}) = \frac {2(n-K)+K^2}{n^2}\sigma^4$$
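(As a sanity check, here is a small simulation sketch. The values of $n$, $K$, $\sigma$ and the design matrix are arbitrary, and normal errors are assumed throughout; it just compares Monte Carlo MSEs with the two closed-form expressions.)

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, sigma = 30, 5, 2.0                 # illustrative values
X = rng.normal(size=(n, K))              # fixed design, conditioned on throughout
M = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)   # residual-maker matrix

reps = 200_000
U = sigma * rng.normal(size=(reps, n))   # normal errors (beta drops out of the residuals)
R = U @ M                                # OLS residuals for each replication
rss = np.einsum('ij,ij->i', R, R)        # residual sums of squares

s2 = rss / (n - K)                       # unbiased OLS estimator of sigma^2
ml = rss / n                             # ML estimator of sigma^2

print("MSE(s^2): sim %.4f  theory %.4f" %
      (np.mean((s2 - sigma**2)**2), 2 * sigma**4 / (n - K)))
print("MSE(ML) : sim %.4f  theory %.4f" %
      (np.mean((ml - sigma**2)**2), (2 * (n - K) + K**2) * sigma**4 / n**2))
```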
We want the conditions (if they exist) under which
$$MSE (\hat \sigma^2_{ML}) > MSE (s^2) \Rightarrow \frac {2(n-K)+K^2}{n^2} > \frac {2}{n-K}$$
$$\Rightarrow 2(n-K)^2+K^2(n-K)> 2n^2$$
$$ 2n^2 -4nK + 2K^2 +nK^2 - K^3 > 2n^2 $$
Cancelling the $2n^2$ terms and dividing through by $K>0$ we obtain
$$ -4n + 2K +nK - K^2 > 0 \Rightarrow K^2 - (n+2)K + 4n < 0 $$
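(If you want to double-check that simplification symbolically, here is a quick sketch; sympy is used only to expand and compare the two sides.)

```python
import sympy as sp

n, K = sp.symbols('n K', positive=True)
# Rearranged inequality from above: 2(n-K)^2 + K^2(n-K) - 2n^2 > 0
expr = sp.expand(2*(n - K)**2 + K**2*(n - K) - 2*n**2)
# It should equal -K * (K^2 - (n+2)K + 4n), i.e. the condition K^2 - (n+2)K + 4n < 0
assert sp.simplify(expr + K*(K**2 - (n + 2)*K + 4*n)) == 0
print(expr)   # -K**3 + K**2*n + 2*K**2 - 4*K*n  (up to term ordering)
```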
Is it feasible for this quadratic in $K$ to take negative values? We need its discriminant to be positive. We have
$$\Delta_K = (n+2)^2 -16n = n^2 + 4n + 4 - 16n = n^2 -12n + 4$$
which is another quadratic, in $n$ this time. This discriminant is
$$\Delta_n = 12^2 - 4\cdot 4 = 128$$
so
$$n_1,n_2 = \frac {12\pm \sqrt{128}}{2} = 6 \pm 4\sqrt2 \approx \{0.34,\; 11.66\}$$
Since $n$ is an integer, this means that for $1\le n\le 11$ we have $\Delta_K <0$, so the quadratic in $K$ takes only positive values and the required inequality cannot hold. So: we need a sample size of at least $12$.
Given this, the roots of the $K$-quadratic are
$$K_1, K_2 = \frac {(n+2)\pm \sqrt{n^2 -12n + 4}}{2} = \frac n2 +1 \pm \sqrt{\left(\frac n2\right)^2 +1 -3n}$$
Overall: for sample size $n\ge 12$ and number of regressors $K$ strictly between the two roots, $K_1 < K < K_2$,
we have
$$MSE (\hat \sigma^2_{ML}) > MSE (s^2)$$
For example, if $n=50$ the roots are approximately $4.18$ and $47.82$, so the number of regressors must satisfy $5\le K\le 47$ for the inequality to hold. It is interesting that for small numbers of regressors the MLE is better in the MSE sense.
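(A brute-force check of this, using nothing but the quadratic condition and the root formula derived above; the sketch is purely illustrative.)

```python
import numpy as np

def s2_beats_mle(n, K):
    # MSE(sigma^2_ML) > MSE(s^2)  <=>  K^2 - (n+2)K + 4n < 0
    return K**2 - (n + 2) * K + 4 * n < 0

n = 50
ks = [K for K in range(1, n) if s2_beats_mle(n, K)]
print(ks[0], ks[-1])                      # 5 47

K1 = n/2 + 1 - np.sqrt((n/2 + 1)**2 - 4*n)
K2 = n/2 + 1 + np.sqrt((n/2 + 1)**2 - 4*n)
print(round(K1, 2), round(K2, 2))         # approximately 4.18 and 47.82

print(s2_beats_mle(12, 7))                # True: n = 12 already admits a case (K = 7)
```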
ADDENDUM
The equation for the roots of the $K$-quadratic can be written
$$K_1, K_2 = \left(\frac n2 +1\right) \pm \sqrt{\left(\frac n2 +1\right)^2 -4n}$$
which shows that the lower root satisfies $4 < K_1 \le 6$: it equals $6$ at $n=12$ and decreases towards $4$ as $n$ grows, without ever reaching it. Since at $K=4$ the quadratic equals $16 - 4(n+2) + 4n = 8 > 0$ for every $n$, the MLE will be MSE-efficient whenever the number of regressors is at most $4$, for any (finite) sample size (and also for $K=5$ as long as $n\le 15$).
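(A quick numeric look at the lower root over a few sample sizes, again just an illustrative sketch using the root formula above.)

```python
import numpy as np

# Lower root K1 of the K-quadratic for various sample sizes
for n in (12, 15, 20, 50, 200, 5000):
    K1 = n/2 + 1 - np.sqrt((n/2 + 1)**2 - 4*n)
    print(n, round(K1, 3))    # decreases from 6 towards 4, never reaching 4

# At K = 4 the quadratic equals 16 - 4(n+2) + 4n = 8 > 0 for every n,
# so with at most 4 regressors the MLE always has the smaller MSE.
```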
A general answer is that an estimator based on the method of moments is not invariant under a bijective change of parameterisation, while a maximum likelihood estimator is invariant. Therefore, they almost never coincide. (Almost never across all possible transforms.)
Furthermore, as stated in the question, there are many MoM estimators. An infinity of them, actually. But they are all based on the empirical distribution, $\hat{F}$, which may be seen as a non-parametric MLE of $F$, although this does not relate to the question.
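To see both points on a toy example (the distribution and the numbers below are purely illustrative): for a zero-mean normal, the moment estimator of $\sigma$ built from $E|X|=\sigma\sqrt{2/\pi}$ and the one built from $E[X^2]=\sigma^2$ generally disagree, while the MLE of $\sigma$ is, by invariance, exactly the square root of the MLE of $\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(scale=2.0, size=200)      # N(0, sigma=2); the mean is known to be 0

# Two different method-of-moments estimators of sigma (different moments matched):
sigma_mom_abs = np.mean(np.abs(x)) * np.sqrt(np.pi / 2)   # matches E|X| = sigma*sqrt(2/pi)
sigma_mom_sq  = np.sqrt(np.mean(x**2))                    # matches E[X^2] = sigma^2
print(sigma_mom_abs, sigma_mom_sq)       # two different estimates in general

# The MLE is invariant under reparameterisation:
sigma2_mle = np.mean(x**2)               # MLE of sigma^2 (mean known)
sigma_mle  = np.sqrt(sigma2_mle)         # exactly the MLE of sigma
```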
Actually, a more appropriate way to frame the question would be to ask when a moment estimator is sufficient, but this forces the distribution of the data to be from an exponential family, by the Pitman-Koopman lemma, a case when the answer is already known.
Note: In the Laplace distribution, when the mean is known, the problem is equivalent to observing the absolute values, which are then exponential variates and part of an exponential family.
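(A quick numeric illustration of that note; the location and scale below are arbitrary. The absolute deviations of Laplace draws behave like exponential variates.)

```python
import numpy as np

rng = np.random.default_rng(2)
mu, b = 3.0, 1.5                              # arbitrary location and scale
x = rng.laplace(loc=mu, scale=b, size=200_000)
y = np.abs(x - mu)                            # should be Exponential with scale b

print(y.mean(), "vs", b)                      # exponential mean equals b
for t in (1.0, 2.0, 4.0):
    print((y > t).mean(), "vs", np.exp(-t / b))   # exponential survival P(Y > t)
```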
Best Answer
Unbiasedness isn't necessarily especially important on its own.
Aside from a very limited set of circumstances, most useful estimators are biased, however they're obtained.
If two estimators have the same variance, one can readily mount an argument for preferring an unbiased one to a biased one, but that's an unusual situation to be in (that is, you may reasonably prefer unbiasedness, ceteris paribus -- but those pesky ceteris are almost never paribus).
More typically, if you want unbiasedness you'll be adding some variance to get it, and then the question would be why would you do that?
Bias is how far my estimator will be too high on average (with negative bias indicating too low).
When I'm considering a small sample estimator, I don't really care about that. I'm usually more interested in how far wrong my estimator will be in this instance - my typical distance from right... something like a root-mean-square error or a mean absolute error would make more sense.
So if you like low variance and low bias, asking for say a minimum mean square error estimator would make sense; these are very rarely unbiased.
Bias and unbiasedness are useful notions to be aware of, but unbiasedness is not an especially useful property to seek unless you're only comparing estimators with the same variance.
ML estimators tend to be low-variance; they're usually not minimum MSE, but they often have lower MSE than modifying them to be unbiased (when you can do it at all) would give you.
As an example, consider estimating the variance when sampling from a normal distribution. Writing $S^2=\sum_i (x_i-\bar x)^2$ for the sum of squared deviations, the three estimators are $\hat{\sigma}^2_\text{MMSE} = \frac{S^2}{n+1}$, $\hat{\sigma}^2_\text{MLE} = \frac{S^2}{n}$, $\hat{\sigma}^2_\text{Unb} = \frac{S^2}{n-1}$ (indeed the MMSE estimator of the variance always has a larger denominator than $n-1$).
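A quick simulation of that example (normal sampling as stated; the particular $n$ and $\sigma$ below are arbitrary) shows the MSE ordering, with the unbiased estimator coming last:

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma, reps = 10, 1.0, 200_000
x = rng.normal(scale=sigma, size=(reps, n))
S2 = ((x - x.mean(axis=1, keepdims=True))**2).sum(axis=1)  # sum of squared deviations

for name, denom in (("MMSE", n + 1), ("MLE", n), ("Unbiased", n - 1)):
    est = S2 / denom
    bias = est.mean() - sigma**2
    mse = ((est - sigma**2)**2).mean()
    print(f"{name:9s} bias {bias:+.4f}  MSE {mse:.4f}")
# MSE increases from MMSE to MLE to Unbiased, even though only Unbiased has zero bias
```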