Mathematical Statistics – Exploring the Concept of Efficiency in Estimation and Statistical Analysis

efficiency, estimation, mathematical-statistics

I have some trouble understanding the concept of efficiency as it relates to an estimator. My sources (Mukhopadhyay, 2000; Casella and Berger, 2002) do not treat this topic as I expected, since they analyse only the concept of asymptotic efficiency.

I do not understand whether there exists a concept of efficiency that is valid also (or only) in finite samples, or whether efficiency is not a concept per se and is used only to compare estimators (speaking of one estimator being more efficient than another, and so on).

What I know is that an estimator is efficient if it attains the Cramér–Rao lower bound. But this is a characterization, not a definition. Moreover, the Cramér–Rao inequality refers only to a subset of estimators, namely those that are unbiased for a certain $\tau(\theta)$. Is the concept of efficiency meaningful only in the case of unbiased estimators?
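To fix notation, the characterization I have in mind is the following: if $T$ is unbiased for $\tau(\theta)$, based on a sample of size $n$ with Fisher information $I_n(\theta)$, then

$$\operatorname{Var}_\theta(T) \;\ge\; \frac{[\tau'(\theta)]^2}{I_n(\theta)},$$

and $T$ is called efficient when equality holds, so that one could define its efficiency as the ratio

$$e(T) \;=\; \frac{[\tau'(\theta)]^2 / I_n(\theta)}{\operatorname{Var}_\theta(T)} \;\in\; (0, 1].$$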

If someone could provide some sources, or a brief excursus on the concept of efficiency and efficient estimators, I would be grateful.

Best Answer

Efficiency is a "per se" concept in the sense that it measures how variable (and how biased) the estimator is relative to the "true" parameter. There is an actual numeric value for the efficiency of a given estimator at a given sample size under a given loss function. This number depends on the estimator AND the sample size AND the loss function.
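As a concrete illustration (my own sketch, not part of the answer above), here is a small Monte Carlo that puts a number on the efficiency of one particular estimator, the sample median of $n = 9$ observations from $N(\theta, 1)$, by comparing its variance to the Cramér–Rao bound $1/n$:

```python
import random
import statistics

random.seed(0)

n = 9            # sample size
reps = 200_000   # Monte Carlo replications
theta = 0.0      # true mean; the median is unbiased for it by symmetry

# Simulate the sampling distribution of the median of n draws from N(theta, 1).
medians = [
    statistics.median(random.gauss(theta, 1.0) for _ in range(n))
    for _ in range(reps)
]

var_median = statistics.variance(medians)
crlb = 1.0 / n                   # Cramér–Rao bound for estimating the mean of N(theta, 1)
efficiency = crlb / var_median   # a number in (0, 1]; roughly 0.67 for this estimator

print(f"Var(median) ~ {var_median:.4f}, CRLB = {crlb:.4f}, efficiency ~ {efficiency:.2f}")
```

The sample mean, by contrast, has variance exactly $1/n$, hence efficiency 1 at every sample size under squared-error loss.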

Asymptotic efficiency looks at how efficient the estimator becomes as the sample size increases. More important is how rapidly the estimator approaches efficiency, but this can be more difficult to determine.

Relative efficiency looks at how efficient the estimator is relative to an alternative estimator (typically at a GIVEN sample size).
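For instance (a hypothetical sketch of my own): for a sample from $\mathrm{Uniform}(0, \theta)$, both $2\bar{X}$ and $\frac{n+1}{n}X_{(n)}$ are unbiased for $\theta$, and their relative efficiency at a given $n$ is $\operatorname{Var}(2\bar{X}) / \operatorname{Var}(\frac{n+1}{n}X_{(n)}) = (n+2)/3$, which a quick simulation confirms:

```python
import random
import statistics

random.seed(1)

n = 10           # sample size
reps = 200_000   # Monte Carlo replications
theta = 1.0      # true parameter

mom, mle_adj = [], []
for _ in range(reps):
    x = [random.uniform(0.0, theta) for _ in range(n)]
    mom.append(2 * statistics.mean(x))        # method-of-moments estimator, unbiased
    mle_adj.append((n + 1) / n * max(x))      # bias-corrected sample maximum, unbiased

# Relative efficiency of the adjusted maximum with respect to 2 * mean:
re = statistics.variance(mom) / statistics.variance(mle_adj)
print(f"relative efficiency ~ {re:.2f}  (theory: (n+2)/3 = {(n + 2) / 3:.2f})")
```

Note that this relative efficiency grows with $n$, so a statement like "estimator A is more efficient than estimator B" is really a statement at a given sample size (or in the limit).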

Efficiency requires the specification of some loss function. Originally this was variance, since only unbiased estimators were considered. These days it is most often MSE (mean squared error, which accounts for both bias and variability), though other loss functions can be used. The classical Cramér–Rao bound was for unbiased estimators only, but it has been extended to many of these other loss functions (most especially MSE loss).
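To illustrate why the choice of loss function matters (a sketch under the assumption of a normal sample, not a claim from the answer itself): for $X_1, \dots, X_n \sim N(\mu, \sigma^2)$, the variance estimators $S/c$ with $S = \sum_i (X_i - \bar{X})^2$ have closed-form MSE, and the unbiased choice $c = n - 1$ does not minimize it:

```python
# MSE of the variance estimator S/c for a normal sample, where
# S = sum((x_i - xbar)^2) ~ sigma^2 * chi-square(n - 1), so that
#   E[S/c]   = (n-1)/c * sigma^2      (biased when c != n-1)
#   Var[S/c] = 2(n-1)/c^2 * sigma^4
def mse(c, n, sigma2=1.0):
    bias = ((n - 1) / c - 1) * sigma2
    var = 2 * (n - 1) / c**2 * sigma2**2
    return var + bias**2

n = 10
for c, label in [(n - 1, "unbiased (c = n-1)"),
                 (n,     "MLE      (c = n)  "),
                 (n + 1, "min-MSE  (c = n+1)")]:
    print(f"{label}: MSE = {mse(c, n):.4f}")
```

Under squared-error loss the divisor $n + 1$ beats the unbiased estimator at every $n$, which is exactly the kind of trade-off the MSE criterion captures and a variance-only criterion misses.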

Important adjunct concepts are admissibility and domination of estimators.

The Wikipedia entry has many links.
