Prove that $M_{n,k} = \frac{1}{n} \sum_{i = 1}^{n} (X_{i} - \bar{X_{n}})^{k} \overset{p}\longrightarrow \mu_{k}$

convergence-divergence, probability-theory, statistics

I want to prove that $M_{n,k} = \frac{1}{n} \sum_{i = 1}^{n} (X_{i} - \bar{X_{n}})^{k} \overset{p}\longrightarrow \mu_{k}$,

where $X_{1}, X_{2}, \ldots, X_{n}$ are i.i.d. with expected value $E(X_{i}) = \mu$.

Also let the $k$-th central moment be

$$\mu_k = E\left[(X - \mu)^{k}\right]$$

A natural estimator is the $k$-th sample central moment, given by

$$M_{n,k} = \frac{1}{n} \sum_{i = 1}^{n} (X_{i} - \bar{X_{n}})^{k}$$

I know that for $k = 2$, by the law of large numbers and the fact that $g(x) = x^{2}$ is a continuous function,

$$M_{n,2} = \frac{1}{n} \sum_{i = 1}^{n}(X_{i} - \bar{X_{n}})^{2} = \frac{1}{n} \sum_{i = 1}^{n} X_{i}^{2} - \bar{X_{n}}^{2} \overset{p}\longrightarrow \mu_{2}$$
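Spelled out, the two ingredients here are the law of large numbers applied to the $X_i$ and to the $X_i^2$ (which requires $E(X_1^2) < \infty$), together with the continuous mapping theorem:

$$\frac{1}{n} \sum_{i = 1}^{n} X_{i}^{2} \overset{p}\longrightarrow E(X_{1}^{2}), \qquad \bar{X_{n}} \overset{p}\longrightarrow \mu \ \Rightarrow\ \bar{X_{n}}^{2} \overset{p}\longrightarrow \mu^{2},$$

so the difference converges in probability to $E(X_{1}^{2}) - \mu^{2} = \mu_{2}$.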

But how do I prove this for arbitrary $k$, since

$$M_{n,k} = \frac{1}{n} \sum_{i = 1}^{n}(X_{i} - \bar{X_{n}})^{k} \ne \frac{1}{n} \sum_{i = 1}^{n} X_{i}^{k} - \bar{X_{n}}^{k}$$

Best Answer

Without loss of generality, we can take $E(X_i)=\mu=0$ (why?).
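To see why: with $Y_i = X_i - \mu$ we have $\bar{Y_n} = \bar{X_n} - \mu$, so

$$\frac{1}{n} \sum_{i = 1}^{n}(X_{i} - \bar{X_{n}})^{k} = \frac{1}{n} \sum_{i = 1}^{n}(Y_{i} - \bar{Y_{n}})^{k} \qquad \text{and} \qquad \mu_k = E(Y_1^{k}),$$

so neither $M_{n,k}$ nor $\mu_k$ changes when each $X_i$ is replaced by its centered version $Y_i$.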

Simply expand the sum using the binomial theorem:

$$\frac1n\sum_{i=1}^n (X_{i} - \overline X_n)^k =\frac1n\sum_{i=1}^n X_i^k-\binom{k}{1}\overline X_n\cdot\frac1n\sum_{i=1}^n X_i^{k-1}+\binom{k}{2}\overline X_n^2\cdot \frac1n\sum_{i=1}^n X_i^{k-2}+\cdots+(-1)^{k}\overline X_n^{k} $$

By the law of large numbers, $\overline X_n\stackrel{P}\to 0$ and, more generally, $\frac1n\sum\limits_{i=1}^n X_i^j\stackrel{P}\to E(X_1^j)$ for $j=1,\ldots,k$ (assuming $E|X_1|^k<\infty$, so that all of these moments exist).

Now use the following theorem:

If $(U_n)$ and $(V_n)$ are two sequences of random variables such that $U_n\stackrel{P}\to u$ and $V_n\stackrel{P}\to v$, then $U_n+V_n\stackrel{P}\to u+v$ and $U_nV_n\stackrel{P}\to uv$.
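For instance, a generic term with $j \ge 1$ in the expansion above satisfies (applying the product part of the theorem repeatedly)

$$\binom{k}{j}\,(-\overline X_n)^{j} \cdot \frac1n\sum_{i=1}^n X_i^{k-j} \ \stackrel{P}\longrightarrow\ \binom{k}{j}\cdot 0 \cdot E(X_1^{k-j}) = 0.$$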

So the first term in $\frac1n\sum\limits_{i=1}^n (X_{i} - \overline X_n)^k$ converges in probability to $E(X_1^k) = \mu_k$ (recall that we took $\mu = 0$), and the remaining terms converge in probability to $0$.
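As a purely illustrative sanity check (not part of the proof), here is a short simulation sketch assuming $X_i \sim \mathrm{Exp}(1)$, for which $\mu_3 = 2$:

```python
import numpy as np

# Illustrative check: Exp(1) has mean 1 and third central moment mu_3 = 2,
# so M_{n,3} should get close to 2 as n grows.
rng = np.random.default_rng(0)
k = 3
for n in (10**2, 10**4, 10**6):
    x = rng.exponential(scale=1.0, size=n)
    m_nk = np.mean((x - x.mean()) ** k)  # M_{n,k} = (1/n) * sum_i (X_i - Xbar_n)^k
    print(f"n = {n:>7}:  M_(n,{k}) = {m_nk:.4f}   (mu_{k} = 2)")
```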
