Estimation Methods – How to Prove Unbiasedness, Consistency, and Normality of an Estimator Without Closed Form

asymptotics, bias, estimation

My estimator looks like this:

$$
\hat\theta(X) = \arg\max_{\theta} \frac1N \sum_{n=1}^N f(x_n|\theta)
$$

Here, $f(x_n|\theta)$ is an arbitrary function: it is not a log-density, so the sum is not a log-likelihood.

I don't think there's a closed-form expression for $\hat\theta(X)$, so I'm finding it by numerical optimization (gradient descent, Newton's method). But that gives me no way to compute the expectation $\mathbb{E}\,\hat\theta(X)$, so I can't even prove (or disprove) unbiasedness.
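Concretely, here is the kind of thing I'm doing, as a minimal sketch; the criterion `f` below is just a made-up smooth example standing in for my actual function, and BFGS is a quasi-Newton stand-in for the gradient-descent/Newton iteration:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical smooth criterion f(x | theta); a stand-in for the arbitrary,
# non-log-likelihood function in the question.
def f(x, theta):
    return -(x - theta) ** 2 / (1.0 + (x - theta) ** 2)

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=500)  # simulated data

# theta_hat = argmax_theta (1/N) sum_n f(x_n | theta):
# maximize by minimizing the negated sample criterion (quasi-Newton).
res = minimize(lambda t: -np.mean(f(x, t[0])), x0=[np.median(x)], method="BFGS")
theta_hat = res.x[0]
print(theta_hat)  # no closed form, only this numerical value
```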

How does one theoretically (without simulations) prove that such an estimator is (un)biased, (in)consistent, (non-)normal and so on?

Best Answer

Your estimator is what's known as an M-estimator of $\rho$-type, where in this case $\rho = -f$.
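For differentiable $f$, the $\rho$-type problem is equivalent to a $\psi$-type one: $\hat\theta$ solves the estimating equation obtained by setting the gradient of the sample criterion to zero,

$$
\frac{1}{N}\sum_{n=1}^N \psi(x_n, \hat\theta) = 0,
\qquad
\psi(x,\theta) = \frac{\partial}{\partial\theta} f(x\,|\,\theta),
$$

and it is this $\psi$ form that the asymptotic theory works with.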

If your function $f$ is differentiable in $\theta$, then it is known that under some (fairly strong) regularity conditions the M-estimator is consistent for $\theta_0 = \arg\max_{\theta} \mathbb{E}\, f(X \mid \theta)$, the maximizer of the population criterion, and is in fact asymptotically normal around it. Exact finite-sample unbiasedness is a different matter: it generally fails for estimators of this kind and must be checked case by case; what the theory delivers is consistency and asymptotic normality. See Chapter 7 of Boos & Stefanski's Essential Statistical Inference (2013) for a detailed treatment of M-estimation.
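In sketch form, the limiting result is the usual sandwich formula:

$$
\sqrt{N}\,\bigl(\hat\theta - \theta_0\bigr)
\;\xrightarrow{d}\;
\mathcal{N}\!\bigl(0,\; A^{-1} B A^{-\top}\bigr),
\qquad
A = -\,\mathbb{E}\!\left[\frac{\partial \psi(X,\theta_0)}{\partial \theta^\top}\right],
\quad
B = \mathbb{E}\!\left[\psi(X,\theta_0)\,\psi(X,\theta_0)^\top\right].
$$

Everything in that display can be estimated from the sample by replacing expectations with sample means and $\theta_0$ with $\hat\theta$. Here is a minimal self-contained sketch for scalar $\theta$, reusing the same hypothetical criterion as in the question's sketch above and approximating $\psi$ by central finite differences:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical smooth criterion, as in the question's sketch.
def f(x, theta):
    return -(x - theta) ** 2 / (1.0 + (x - theta) ** 2)

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=500)

res = minimize(lambda t: -np.mean(f(x, t[0])), x0=[np.median(x)], method="BFGS")
theta_hat = res.x[0]

def psi(x, theta, h=1e-5):
    # psi(x, theta) = d f(x|theta) / d theta, via central differences
    return (f(x, theta + h) - f(x, theta - h)) / (2.0 * h)

def dpsi(x, theta, h=1e-4):
    # d psi / d theta, i.e. the second derivative of f in theta
    return (psi(x, theta + h) - psi(x, theta - h)) / (2.0 * h)

A = -np.mean(dpsi(x, theta_hat))      # empirical version of A (scalar case)
B = np.mean(psi(x, theta_hat) ** 2)   # empirical version of B
se = np.sqrt(B / (A ** 2 * len(x)))   # sqrt of A^{-1} B A^{-1} / N
print(f"theta_hat = {theta_hat:.3f}, sandwich SE = {se:.3f}")
```

A Wald interval $\hat\theta \pm 1.96 \cdot \mathrm{SE}$ then gives the kind of theoretically justified inference the M-estimation chapter covers, with no simulation needed.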
