Proofs and Understanding of Central Limit Theorem – How to Learn

asymptotics, central-limit-theorem, mathematical-statistics, probability

I know there are different versions of the central limit theorem, and consequently there are different proofs of it. The one I am most familiar with is in the context of a sequence of independent, identically distributed random variables, and the proof is based on an integral transform (e.g., the characteristic function or moment generating function), followed by a first-order approximation to obtain a function to which the inverse transform can be applied.
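For concreteness, the argument I have in mind goes roughly like this (sketched under the simplifying assumption that the $X_i$ have mean $0$ and variance $1$). Writing $S_n = \sum_{i=1}^n X_i$ and $\varphi$ for the common characteristic function,

$$
\varphi_{S_n/\sqrt{n}}(t) \;=\; \left[\varphi\!\left(\tfrac{t}{\sqrt{n}}\right)\right]^n
\;=\; \left[1 - \frac{t^2}{2n} + o\!\left(\tfrac{1}{n}\right)\right]^n
\;\longrightarrow\; e^{-t^2/2},
$$

which is the characteristic function of $N(0,1)$; Lévy's continuity theorem then upgrades this pointwise convergence of characteristic functions to convergence in distribution. The delicate step is controlling the $o(1/n)$ remainder in the expansion, which is where an informal presentation can gloss over details.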

I am interested to know whether there are any flaws in this approach. I have been told informally that it is not completely rigorous, but why?

Best Answer

As I recall, in this version the random variables are independent with finite variances, but the variances need not all be the same. The CLT then holds under a somewhat technical condition called the Lindeberg condition, and the traditional proofs use transform methods.
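For reference, the Lindeberg condition for independent $X_i$ with means $\mu_i$, variances $\sigma_i^2$, and $s_n^2 = \sum_{i=1}^n \sigma_i^2$ requires that, for every $\varepsilon > 0$,

$$
\lim_{n\to\infty} \frac{1}{s_n^2} \sum_{i=1}^{n}
\mathbb{E}\!\left[(X_i - \mu_i)^2 \,\mathbf{1}\{|X_i - \mu_i| > \varepsilon s_n\}\right] = 0,
$$

i.e., no single summand's tail contributes a non-negligible share of the total variance. Under this condition, $\frac{1}{s_n}\sum_{i=1}^n (X_i - \mu_i)$ converges in distribution to $N(0,1)$.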
But the proof we learned was probabilistic. It involved splitting the sum into two pieces: one piece converged to $N(0,1)$ in distribution, and the other converged to $0$ in probability. This technique was used because it was much easier to show that the first piece satisfied the CLT; showing that the second piece was negligible was the harder part. The following link is to an interesting paper by Larry Goldstein that gives a probabilistic proof of the Lindeberg-Feller theorem along very similar lines (possibly the same proof). It may also be of interest to the OP because it includes some history of the CLT. http://bcf.usc.edu/~larry/papers/pdf/lin.pdf
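To spell out why this splitting strategy suffices (my own gloss, not a claim about the linked paper's exact argument): if the standardized sum can be written as $T_n = A_n + B_n$ with

$$
A_n \xrightarrow{\;d\;} N(0,1)
\qquad\text{and}\qquad
B_n \xrightarrow{\;p\;} 0,
$$

then Slutsky's theorem gives $T_n = A_n + B_n \xrightarrow{d} N(0,1)$. So it is enough to prove the CLT for the "nice" piece and to show that the remainder is asymptotically negligible.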