$\newcommand{\E}{\mathbb{E}}$$\newcommand{\P}{\mathbb{P}}$The Central Limit Theorem (CLT) states that for $X_1,X_2,\dots$ independent and identically distributed (iid) with $\E[X_i]=0$ and $\operatorname{Var}(X_i)=\sigma^2<\infty$,
the suitably scaled sum converges in distribution to a normal as $n\to\infty$:
$$
\frac{1}{\sqrt{n}}\sum_{i=1}^n X_i \overset{d}{\to} N\left(0, \sigma^2\right).
$$
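As a quick numerical sanity check of this statement (the choice of Uniform$(-1,1)$ increments, with variance $1/3$, is my own illustration), one can simulate many scaled sums and verify that their empirical variance matches $\sigma^2$:

```python
import numpy as np

# Sanity check of the iid CLT: for centered Uniform(-1, 1) draws
# (variance sigma^2 = 1/3), the scaled sums (1/sqrt(n)) * sum_i X_i
# should be approximately N(0, 1/3) for large n.
rng = np.random.default_rng(0)
n, reps = 1000, 5000
x = rng.uniform(-1.0, 1.0, size=(reps, n))
z = x.sum(axis=1) / np.sqrt(n)   # one scaled sum per replication
print(z.mean(), z.var())         # mean near 0, variance near 1/3
```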
Assume instead that $X_1,X_2,\dots$ form a finite-state Markov chain with stationary distribution $\P_\infty$, under which each $X_i$ has expectation 0 and finite variance.
Is there a simple extension of CLT for this case?
The papers I've found on CLT for Markov Chains generally treat much more general cases. I would be very grateful for a pointer to the relevant general result and an explanation of how it applies.
Best Answer
Alex R.'s answer is almost sufficient, but I'll add a few more details. In *On the Markov Chain Central Limit Theorem* by Galin L. Jones, Theorem 9 states:

For finite state spaces, all irreducible and aperiodic Markov chains are uniformly ergodic. The proof requires considerable background in Markov chain theory; a good reference is Theorem 18 (bottom of page 32) here.
Hence, the Markov chain CLT holds for any function $f$ with a finite second moment under $\pi$. The form the CLT takes is as follows.
Let $\bar{f}_n$ be the time-averaged estimator of $E_{\pi}[f]$. Then, as Alex R. points out, as $n \to \infty$, $$\bar{f}_n = \frac{1}{n} \sum_{i=1}^n f(X_i) \overset{\text{a.s.}}{\to} E_\pi[f].$$
The Markov chain CLT is $$\sqrt{n} (\bar{f}_n - E_\pi[f]) \overset{d}{\to} N(0, \sigma^2), $$
where $$\sigma^2 = \underbrace{\operatorname{Var}_\pi(f(X_1))}_\text{iid variance term} + \underbrace{2 \sum_{k=1}^\infty \operatorname{Cov}_\pi(f(X_1), f(X_{1+k}))}_\text{correction due to Markov dependence}. $$
A derivation of the $\sigma^2$ term can be found on pages 8 and 9 of Charles Geyer's MCMC notes here.
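The formula for $\sigma^2$ can be checked numerically on a chain simple enough to admit a closed form. The two-state example below is my own illustration (not from Jones or Geyer): a chain on $\{-1,+1\}$ that flips sign with probability $p$ has uniform stationary distribution, and with $f$ the identity, $\operatorname{Cov}_\pi(X_1, X_{1+k}) = (1-2p)^k$, so the series above sums to $\sigma^2 = 1 + 2\sum_{k\ge 1}(1-2p)^k = (1-p)/p$.

```python
import numpy as np

# Two-state chain on {-1, +1}: at each step, flip sign with probability p.
# With f the identity, Var_pi(f) = 1 and Cov_pi(f(X_1), f(X_{1+k})) = (1-2p)^k,
# so the Markov chain CLT variance is sigma^2 = 1 + 2*(1-2p)/(2p) = (1-p)/p.
rng = np.random.default_rng(0)
p = 0.3
n, reps = 2000, 2000

flips = rng.random((reps, n)) < p              # flip indicators per step
signs = np.where(flips, -1, 1)
start = rng.choice([-1, 1], size=(reps, 1))    # start in stationarity
states = start * np.cumprod(signs, axis=1)     # chain trajectories

f_bar = states.mean(axis=1)                    # time-averaged estimators
sigma2_hat = n * f_bar.var()                   # empirical Var(sqrt(n) * f_bar)
sigma2_theory = (1 - p) / p                    # = 7/3 for p = 0.3
print(sigma2_hat, sigma2_theory)
```

Note that for $p < 1/2$ the correction term is positive, so naive iid-based standard errors for $\bar{f}_n$ would understate the true uncertainty; this is exactly why the autocovariance sum matters in MCMC output analysis.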