Asymptotic variance of an estimator

convergence-divergence, parameter estimation, probability theory, statistics, weak-convergence

Suppose we have an estimator (i.e. a sequence of estimators) $T_n$ which is asymptotically normal, in the sense that $\sqrt{n}(T_n - \theta)$ converges in distribution to $\mathcal{N}(0, \sigma^2)$. The variance $\sigma^2$ is usually called the asymptotic variance of the estimator, but can we write that $\lim_{n\to\infty}\mathrm{Var}[\sqrt{n}\,T_n]=\sigma^2$? If not, what additional conditions on the sequence $T_n$ would we need in order to do so? Are consistency of $T_n$ and uniform integrability of $T_n^2$ sufficient conditions?

Best Answer

I don't think you can get away with anything less than the uniform integrability of $(\sqrt{n} (T_n - \theta))^2$ together with the weak convergence of $\sqrt{n}(T_n - \theta)$ to $\mathcal{N}(0, \sigma^2)$. Note that if $T_n = n^{-1}\sum_{i=1}^n \xi_i$ for some iid $\xi_i$ with $E \xi_1 = 0$ (so $\theta = 0$) and $E \xi_1^2 < \infty$, then $(\sqrt{n} T_n)^2$ is uniformly integrable (why?). There should also be a one-line way of doing this, by appeal to some convergence theorem, or else using a trick like Skorokhod's representation theorem. I would be curious to know a shorter way; below is the "direct" analysis approach.
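Before the proof, here is a minimal Monte Carlo sketch of the claim being targeted (assuming NumPy is available; the Exponential(1) example, sample sizes, and replication count are my own illustrative choices, not part of the question). It estimates $\mathrm{Var}[\sqrt{n}(T_n - \theta)]$ for the sample mean, where $\theta = 1$ and $\sigma^2 = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 1.0                 # mean of Exponential(1); its variance sigma^2 is also 1
reps = 20_000               # Monte Carlo replications for each n

for n in (10, 100, 1000):
    # T_n = sample mean of n iid Exponential(1) draws, one mean per replication
    T_n = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
    Y_n = np.sqrt(n) * (T_n - theta)
    # empirical Var[sqrt(n)(T_n - theta)], which should be close to sigma^2 = 1
    print(n, Y_n.var())
```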

Let $Y_n = \sqrt{n}(T_n - \theta)$ and let $Y$ be $\mathcal{N}(0, \sigma^2)$. The weak convergence of $Y_n$ to $Y$ means that, for any bounded continuous function $f$ (I write $f \in C_b$), $E[f(Y_n)] \rightarrow E[f(Y)].$ Unfortunately, the function $f(y) = y^2$ is not bounded on $\mathbb{R}$. We will have to approximate $f(y)$ by a sequence $\{f_M\} \subset C_b$ and take limits; this is where uniform integrability of $Y_n^2$ will come in.

For $0 < M < \infty$, define $f_M(y) = y^2 \wedge M$, and note that $f_M \in C_b$. We wish to show that $E[f(Y_n)] \rightarrow E[f(Y)]$, where $f(y) = y^2$. We have, for any $M$, \begin{align*} |E[f(Y_n)] - E[f(Y)]| &\leq |E[f(Y_n)] - E[f_M(Y_n)]| \tag{1}\\ &+ |E[f_M(Y_n)] - E[f_M(Y)]| \tag{2}\\ &+ |E[f_M(Y)] - E[f(Y)]|. \tag{3} \end{align*} We will use uniform integrability to pick an $M$ which bounds the first and the last term uniformly in $n$. Then, for fixed $M$, we can pick $n$ large enough to make the middle term as small as desired using the weak convergence of $Y_n$ to $Y$.
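As a quick sanity check on the truncation idea (a sketch, assuming NumPy; taking $\sigma^2 = 1$ and this particular grid of $M$ values is my choice), one can verify numerically that $E[f_M(Y)] = E[Y^2 \wedge M]$ approaches $E[Y^2] = \sigma^2$ as $M$ grows, which is exactly why term $(3)$ can be made small:

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.normal(size=1_000_000)        # Y ~ N(0, 1), so E[Y^2] = sigma^2 = 1

for M in (1.0, 4.0, 9.0, 25.0):
    # E[f_M(Y)] = E[min(Y^2, M)] should approach E[Y^2] = 1 as M grows
    print(M, np.minimum(Y**2, M).mean())
```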

Let $\varepsilon > 0$. Since $\{Y_n^2\}_{n\geq 1}$ is uniformly integrable, so is $\{Y_n^2\}_{n \geq 1} \cup \{Y^2\}$. By uniform integrability, there is $M \in (0, \infty)$ such that $$\sup_{n \geq 1} E[1\{Y_n^2 \geq M\} Y_n^2] < \varepsilon/8, \quad E[1\{Y^2 \geq M\} Y^2] < \varepsilon/8.\tag{4}$$ Fix such an $M$ once and for all. By the weak convergence of $Y_n$ to $Y$, for the fixed function $f_M \in C_b$ and $\varepsilon > 0$, there is $N<\infty$ depending only on $f_M$ and $\varepsilon$ such that, for all $n \geq N$, $$|E[f_M(Y_n)] - E[f_M(Y)]| \leq \varepsilon/2.\tag{5}$$ With these estimates in place, everything below holds for the given $\varepsilon > 0$ and any $n \geq N$.
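To make the choice of $M$ in $(4)$ concrete for the second bound, i.e. for the limit variable $Y$ only (the uniform bound over $n$ is exactly what the uniform integrability assumption supplies), here is a sketch assuming SciPy is available; $\sigma^2 = 1$ and $\varepsilon = 0.1$ are illustrative values of mine. For $Y \sim \mathcal{N}(0,1)$, integration by parts gives $E[Y^2\,1\{Y^2 \geq M\}] = 2\big(\sqrt{M}\,\varphi(\sqrt{M}) + 1 - \Phi(\sqrt{M})\big)$, so one can simply increase $M$ until this drops below $\varepsilon/8$:

```python
import numpy as np
from scipy.stats import norm

eps = 0.1   # illustrative epsilon

def tail(M):
    # E[Y^2 1{Y^2 >= M}] for Y ~ N(0, 1), via integration by parts:
    # 2 * (sqrt(M) * phi(sqrt(M)) + 1 - Phi(sqrt(M)))
    a = np.sqrt(M)
    return 2.0 * (a * norm.pdf(a) + 1.0 - norm.cdf(a))

M = 1.0
while tail(M) >= eps / 8:   # increase M until the tail contribution is below eps/8
    M += 1.0
print(M, tail(M))
```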

To use $(4)$ in $(1)$, note that \begin{align} &\quad|E[f(Y_n)] - E[f_M(Y_n)]| \\ &= E[(f(Y_n) - f_M(Y_n))1\{Y_n^2 \geq M\}] + E[(f(Y_n) - f_M(Y_n))1\{Y_n^2 < M\}] ,\tag{6} \end{align} where the absolute value can be dropped because $f \geq f_M$ pointwise. The first term in $(6)$ is restricted to the event $\{Y_n^2 \geq M\}$, and each of $f(Y_n)$ and $f_M(Y_n)$ contributes little to the expectation: for any $n \geq 1$, $$E[f(Y_n) 1\{Y_n^2 \geq M\}] = E[Y_n^2 1\{Y_n^2 \geq M\}] < \varepsilon/8,\tag{7}$$ and the pointwise inequality $(Y_n^2 \wedge M) 1\{Y_n^2 \geq M\} \leq Y_n^2 1\{Y_n^2 \geq M\}$ gives $$E[f_M(Y_n) 1\{Y_n^2 \geq M\}] \leq E[Y_n^2 1\{Y_n^2 \geq M\}] < \varepsilon/8.\tag{8}$$ The second term in $(6)$ requires the cancellation of $f(Y_n)$ and $f_M(Y_n)$. However, it occurs on the event $\{Y_n^2 < M\}$, where we have the pointwise equality $(Y_n^2 \wedge M) 1\{Y_n^2 < M\} = Y_n^2 1\{Y_n^2 < M\}$, so the second term in $(6)$ is in fact zero. Applying the triangle inequality to the first term of $(6)$ and using $(7)$ and $(8)$, we find $|E[f(Y_n)] - E[f_M(Y_n)]| < \varepsilon/4$.

The same argument as was applied to use $(4)$ in $(1)$ can be recycled to use $(4)$ in $(3)$, giving the estimate $|E[f_M(Y)] - E[f(Y)]| < \varepsilon/4$. Finally, we can use $(5)$ directly in $(2)$ to deduce that, for all $n \geq N$, $$|E[f(Y_n)] - E[f(Y)]| < \frac{\varepsilon}{4} + \frac{\varepsilon}{4} + \frac{\varepsilon}{2} = \varepsilon.$$ Hence $E[Y_n^2] \rightarrow E[Y^2] = \sigma^2$. To pass from second moments to variances, note that uniform integrability of $Y_n^2$ implies uniform integrability of $Y_n$, so the weak convergence also gives $E[Y_n] \rightarrow E[Y] = 0$; therefore $$\mathrm{Var}[\sqrt{n}\,T_n] = \mathrm{Var}(Y_n) = E[Y_n^2] - (E[Y_n])^2 \rightarrow \sigma^2,$$ which is what we wanted.

Such a result must be true, and probably under milder conditions, because one can even numerically estimate the asymptotic variance in (well-converged) Markov chains. A general statement can probably be found somewhere in Meyn & Tweedie's book on stochastic stability.
