Solved – When do we use local alternatives?

asymptotics, hypothesis testing, local-statistics, mathematical-statistics

I am trying to get a better grasp of the local asymptotic framework, i.e., of analysis along parameter sequences of the form:

$$\theta_n = \theta_0 + \frac{h}{\sqrt{n}} $$

where $n$ denotes the sample size, $h$ is a fixed local parameter, and each $\theta_n$ indexes a probability distribution on a common sample space and sigma-algebra. I would like to know the reasons for and against adopting a local asymptotic framework, compared with the standard approach of fixing the parameter value and letting $n$ go to infinity.

I came across two reasons for adopting a local asymptotic framework:

First, the local asymptotic framework exposes the shortcomings of certain "weird" estimators, such as the Hodges estimator, that appear to dominate the maximum likelihood estimator under fixed (pointwise) asymptotics: along local sequences $\theta_n = \theta_0 + h/\sqrt{n}$ their rescaled risk blows up. For a detailed example, see the first set of David Pollard's "Asymptopia" handouts, pp. 12-13.
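A minimal simulation makes this concrete (my own sketch, not from Pollard's handout; it assumes the simplest normal-location model $\epsilon_i \sim \mathcal{N}(\theta, 1)$, with the Hodges estimator thresholding the sample mean at $n^{-1/4}$). Along $\theta_n = h/\sqrt{n}$ the normalized risk of the Hodges estimator approaches $h^2$, while that of the MLE stays at $1$:

```python
# Sketch: Hodges' estimator along a local sequence theta_n = h / sqrt(n).
# Hodges' estimator returns 0 whenever |xbar| < n^(-1/4), otherwise xbar.
# Pointwise it is superefficient at theta = 0, but its normalized risk
# n * E[(estimate - theta_n)^2] tends to h^2 along local sequences,
# which is arbitrarily worse than the MLE's limit of 1 when |h| is large.
import numpy as np

rng = np.random.default_rng(1)
h, reps = 3.0, 200_000

for n in [100, 1_000, 10_000]:
    theta_n = h / np.sqrt(n)
    # Simulate the sample mean directly: xbar ~ N(theta_n, 1/n).
    xbar = theta_n + rng.standard_normal(reps) / np.sqrt(n)
    hodges = np.where(np.abs(xbar) > n ** -0.25, xbar, 0.0)
    risk_mle = n * np.mean((xbar - theta_n) ** 2)       # -> 1
    risk_hodges = n * np.mean((hodges - theta_n) ** 2)  # -> h^2 = 9
    print(f"n={n:6d}  scaled MSE  MLE: {risk_mle:.2f}  Hodges: {risk_hodges:.2f}")
```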

Second, heuristics suggest that local asymptotics may give a more faithful approximation to the statistical decision problems we face in practice. Consider a toy example: we observe realizations $\epsilon_i \sim \mathcal{N}(\theta_n,1)$ and test the null hypothesis $H_0: \theta = 0$ against $H_1: \theta = \bar{\theta}$, where $\bar{\theta} := n^{-1}\sum_{i=1}^n \epsilon_i$.

First, look at the properties of the test when $\theta_n$ is fixed at the value $1$. The variance of the estimator is $\frac{1}{n}$, so under the null hypothesis the distribution of the statistic collapses to a point mass at $0$. Since $\bar{\theta}$ approaches the true value of one, we reject $H_0$ with probability approaching one as $n \to \infty$ (the test is consistent).

Now let $\theta_n = n^{-0.5}$. For each value of $n$, the probability under the null of observing $\bar{\theta} \geq \theta_n$ is the same (about $16\%$ in a one-sided test, since $\theta_n$ lies exactly one standard deviation of $\bar{\theta}$ above zero). See the figure, where the two cases are plotted for $n \in \{1,10\}$.

The second example seems more appropriate when we view asymptotics as an approximation to the situation we face with a given sample. In practice we typically compare two hypotheses where neither is obviously true or false; the local framework preserves this property asymptotically.

[Figure: sampling distributions of $\bar{\theta}$ in the two cases, plotted for $n \in \{1,10\}$.]
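As a rough check, here is a minimal Monte Carlo sketch of the two regimes (my own illustration, not from the original post). It assumes a standard one-sided $5\%$-level test that rejects when $\sqrt{n}\,\bar{\theta}$ exceeds the normal quantile $z_{0.95}$; under the fixed alternative the rejection probability tends to one, while under the local alternative it is the same for every $n$ (about $26\%$ for this test; the $16\%$ figure above refers instead to $P(\bar{\theta} \geq \theta_n)$ under the null):

```python
# Sketch: rejection probabilities under fixed vs. local alternatives for a
# one-sided 5%-level test that rejects H0: theta = 0 when
# sqrt(n) * xbar > z_{0.95}. Under the fixed alternative theta = 1 the
# rejection probability tends to 1; under theta_n = 1/sqrt(n) it is
# constant in n (about 26% for this test).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
crit = norm.ppf(0.95)  # one-sided 5% critical value
reps = 100_000

for n in [1, 10, 100, 1_000]:
    se = 1.0 / np.sqrt(n)  # standard deviation of the sample mean
    # Simulate the sample mean directly: xbar ~ N(theta, 1/n).
    xbar_fixed = 1.0 + se * rng.standard_normal(reps)  # fixed theta = 1
    xbar_local = se + se * rng.standard_normal(reps)   # local theta_n = 1/sqrt(n)
    print(f"n={n:5d}  "
          f"reject | fixed: {np.mean(np.sqrt(n) * xbar_fixed > crit):.3f}  "
          f"reject | local: {np.mean(np.sqrt(n) * xbar_local > crit):.3f}")
```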

Since fixed asymptotics are still widely used, there must be reasons against employing local asymptotics. What are these reasons? And what further reasons are there for using local asymptotics?

Best Answer

The LAN use mentioned by @nth is, in my experience, a little less common in practice and mainly of theoretical interest. More common uses I know of are:

  1. Relative efficiency of tests: since most reasonable tests are consistent, their power against any fixed alternative tends to one, so the asymptotic power functions are trivial and cannot be used to compare tests directly (and finite-sample power functions are usually hard to compute). We therefore "normalize" the tests using local alternatives to obtain non-degenerate power functions in the limit, which can then be used to compare sequences of tests (with many caveats); see the first sketch after this list. You can read about this use in van der Vaart, the same text cited by @nth.

  2. Approximating the finite-sample power against an alternative, or carrying out approximate sample-size calculations; see the second sketch below. This is very useful, analogous to using an asymptotic distribution to approximate a finite-sample confidence interval. You can find an example in Thomas Ferguson, A Course in Large-Sample Theory, Chapter 10.
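To make item 1 concrete, here is a small sketch (my own, not taken from van der Vaart): for a one-sided level-$\alpha$ test based on a $\sqrt{n}$-consistent, asymptotically normal estimator with asymptotic variance $\sigma^2$, the power against $\theta_n = h/\sqrt{n}$ converges to $1 - \Phi(z_\alpha - h/\sigma)$. Comparing the sample mean ($\sigma^2 = 1$) with the sample median ($\sigma^2 = \pi/2$ for $\mathcal{N}(\theta,1)$ data, a standard result) recovers the Pitman relative efficiency $2/\pi \approx 0.64$:

```python
# Sketch: limiting power under local alternatives, used to rank two tests.
# For a one-sided level-alpha test based on a sqrt(n)-consistent,
# asymptotically normal estimator with asymptotic variance sigma^2, the
# power against theta_n = h/sqrt(n) converges to 1 - Phi(z_alpha - h/sigma).
# Example: sample mean (sigma^2 = 1) vs. sample median (sigma^2 = pi/2)
# for N(theta, 1) data; the curves are non-degenerate in the limit, and
# matching them gives the Pitman relative efficiency 2/pi ~ 0.64.
import numpy as np
from scipy.stats import norm

alpha = 0.05
z_alpha = norm.ppf(1 - alpha)

for h in np.linspace(0.0, 4.0, 9):
    power_mean = 1 - norm.cdf(z_alpha - h)                         # sigma = 1
    power_median = 1 - norm.cdf(z_alpha - h / np.sqrt(np.pi / 2))  # sigma = sqrt(pi/2)
    print(f"h={h:4.1f}  mean test: {power_mean:.3f}  median test: {power_median:.3f}")
```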
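And for item 2, a sketch of the standard sample-size calculation that the local approximation justifies (my own illustration with a hypothetical helper `required_n`; Ferguson's Chapter 10 example differs). Treating the alternative $\delta$ as local, the power of the one-sided $z$-test is approximately $1 - \Phi(z_\alpha - \sqrt{n}\,\delta/\sigma)$; setting this equal to the target power $1 - \beta$ and solving gives $n \approx \sigma^2 (z_\alpha + z_\beta)^2 / \delta^2$:

```python
# Sketch: approximate sample-size calculation from the local power formula.
# For a one-sided level-alpha z-test of H0: theta = 0, the power at a small
# alternative delta is approximately 1 - Phi(z_alpha - sqrt(n) * delta / sigma).
# Setting this equal to the target power 1 - beta and solving for n gives
# n ~ sigma^2 * (z_alpha + z_beta)^2 / delta^2.
from scipy.stats import norm

def required_n(delta, sigma=1.0, alpha=0.05, beta=0.20):
    """Approximate n for power 1 - beta against the alternative theta = delta."""
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(1 - beta)
    return (sigma * (z_a + z_b) / delta) ** 2

print(required_n(0.2))  # about 155 observations for 80% power at delta = 0.2
```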
