Mathematical Statistics – Difference Between Estimating Equations and Method of Moments Estimators

estimation, generalized-estimating-equations, mathematical-statistics, maximum-likelihood, method-of-moments

From my understanding, both are estimators that are based on first providing an unbiased statistic $T(X)$ and obtaining the root of the equation:

$$c(X) \left( T(X) - E(T(X)) \right) = 0$$

Secondly, both are in some sense "nonparametric" in that, regardless of what the actual probability model for $X$ may be, if you think of $T(\cdot)$ as a meaningful summary of the data, then you will be consistently estimating that "thing" regardless of whether that thing has any probabilistic connection with the actual probability model for the data (e.g., using the sample mean to estimate the mean of Weibull-distributed failure times without censoring).
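To make the estimating-equation idea concrete, here is a rough Python sketch (the Weibull shape and scale below are arbitrary choices of mine for illustration): with $T(X) = X$ and $c(X) = 1$, the root of $\sum_i (X_i - \theta) = 0$ is just the sample mean, and it consistently targets the Weibull mean whether or not I ever write down the Weibull model.

```python
import numpy as np
from math import gamma
from scipy.optimize import brentq

rng = np.random.default_rng(3)

# Weibull(shape=k, scale=lam) failure times, no censoring; the parameter
# values and sample size are arbitrary choices for illustration.
k, lam = 1.5, 2.0
x = lam * rng.weibull(k, size=50_000)

# Estimating equation with T(X) = X and c(X) = 1:  sum_i (x_i - theta) = 0.
# Its root is simply the sample mean, whatever the true model for X is.
theta_hat = brentq(lambda theta: np.sum(x - theta), x.min(), x.max())

print(theta_hat)                 # root of the estimating equation
print(x.mean())                  # identical to the sample mean
print(lam * gamma(1 + 1 / k))    # the Weibull mean it consistently targets
```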

However, "method of moments" seems to insinuate that the $T(X)$ of interest must be a moment of a readily assumed probability model, which one then estimates with an estimating equation rather than maximum likelihood (even though the two may agree, as is the case for the mean of normally distributed random variables). Calling something a "moment" has, to me, the connotation of insinuating a probability model. Supposing, for instance, that we have lognormally distributed data, is the method of moments estimator of the third central moment based on the third sample central moment, i.e. $$\hat{\mu}_3 = \frac{1}{n}\sum_{i=1}^n \left( X_i - \bar{X} \right)^3$$

Or does one estimate the first and second moments, transform them into estimates of the probability-model parameters $\mu$ and $\sigma$ (whose estimates I will denote with hat notation), and then use these estimates as plug-ins in the derived skewness of lognormal data, i.e.

$$ \hat{\mu}_3 = \left( \exp \left( \hat{\sigma}^2 \right) + 2\right) \sqrt{\exp \left( \hat{\sigma}^2 \right) - 1}$$
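For concreteness, here is a rough Python sketch of the two routes (the true parameters and sample size are arbitrary assumptions on my part). Since route (ii) as written produces the scale-free skewness, the route-(i) estimate is standardized before comparing the two.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.5, 0.4                       # illustrative true parameters
x = rng.lognormal(mean=mu, sigma=sigma, size=100_000)

# Route (i): the third sample central moment, justified directly by the LLN.
mu3_hat = np.mean((x - x.mean()) ** 3)

# Route (ii): match the first two raw moments to the lognormal model,
# back out (mu, sigma^2), then plug into the derived skewness formula.
m1, m2 = x.mean(), np.mean(x ** 2)
sigma2_hat = np.log(m2 / m1 ** 2)          # since m2 / m1^2 = exp(sigma^2)
mu_hat = np.log(m1) - sigma2_hat / 2       # since m1 = exp(mu + sigma^2 / 2)
skew_plugin = (np.exp(sigma2_hat) + 2) * np.sqrt(np.exp(sigma2_hat) - 1)

# Route (ii) yields the scale-free skewness mu_3 / sigma^3, so standardize
# the route-(i) estimate before comparing.
skew_sample = mu3_hat / np.var(x) ** 1.5

print(mu3_hat, skew_sample, skew_plugin)
```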

Best Answer

The most common justification of the method of moments is simply the law of large numbers, which would seem to make your suggestion of estimating $\mu_3$ by $\hat{\mu}_3$ "method of moments" (and I'd be inclined to call it MoM in any case).

However, a number of books and documents, such as this for example (and to some extent the Wikipedia page on the method of moments), imply that you take the lowest $k$ moments* and estimate the required quantities for the given probability model from those, as you imply by estimating $\mu_3$ from the first two moments.

*(where you need to estimate $k$ parameters to obtain the required quantity)

--

Ultimately, I guess it comes down to "who defines what counts as method of moments?"

Do we look to Pearson? Do we look to the most common conventions? Do we accept any convenient definition? Each of those choices has problems and benefits.


The interesting bit, to me, is whether one can always, or almost always, reparameterize a parametric family so that an estimating-equation (EE) problem can be cast as matching the moments of some (possibly bizarre) distribution function.

Clearly there are large classes of distributions for which the method of moments would be useless.

For an obvious example, the mean of the Cauchy distribution is undefined.
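A quick simulation sketch (seed and sample sizes are arbitrary choices): running sample means of standard Cauchy draws never settle down, no matter how large $n$ gets.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample means of standard Cauchy draws do not stabilize as n grows,
# because the first moment does not exist.
for n in (10**3, 10**5, 10**7):
    print(n, rng.standard_cauchy(n).mean())
```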

Even when moments exist and are finite, there can be many situations where the system of equations $f(\boldsymbol{\theta},\mathbf{y})=0$ has no solutions (think of a curve that never crosses the x-axis) or multiple solutions (one that crosses the axis repeatedly -- though multiple solutions aren't necessarily an insurmountable problem if you have a way to choose between them).

Of course, we also commonly see situations where a solution exists but doesn't lie in the parameter space (there may even be cases where there's never a solution in the parameter space, but I don't know of any -- it would be an interesting question to discover if some such cases exist).
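Here is a rough sketch of the "solution outside the parameter space" situation, using Binomial($N$, $p$) data with illustrative parameter values: matching the first two moments gives $\hat{p} = 1 - s^2/\bar{x}$ and $\hat{N} = \bar{x}/\hat{p}$, which leave the parameter space whenever $s^2 \ge \bar{x}$.

```python
import numpy as np

rng = np.random.default_rng(2)

# Binomial(N, p) data with illustrative values N = 10, p = 0.1.  Matching
# mean = N*p and variance = N*p*(1 - p) gives
#   p_hat = 1 - s^2 / xbar,   N_hat = xbar / p_hat,
# which fall outside the parameter space whenever s^2 >= xbar.
x = rng.binomial(n=10, p=0.1, size=(10_000, 20))   # 10,000 samples of size 20
xbar = x.mean(axis=1)
s2 = x.var(axis=1, ddof=1)
frac_outside = np.mean(s2 >= xbar)                 # fraction with p_hat <= 0
print(frac_outside)
```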

I imagine there can be more complicated situations still, though I don't have any in mind at the moment.