Solved – Convergence to normal distribution

central limit theorem, convergence, normal distribution, poisson distribution, self-study

Let $X_{1},X_{2},\ldots$ be independent random variables such that
$X_{k}$ is Po($k$)-distributed for $k=1,2,\ldots$ Show that

$$Z_{n}=\frac{1}{n}\left(\sum_{k=1}^{n}X_{k}-\frac{n^{2}}{2}\right)$$

converges in distribution to a N($\frac{1}{2}$, $\frac{1}{2}$) distribution as $n \to \infty$.

I know that $X_{k} \overset{\mathrm{d}}{=} \sum_{j=1}^{k}Y_{j,k}$ for every $k\geq1$, where the $Y_{j,k}$ are independent Po(1)-distributed random variables.

My two immediate thoughts for solving this problem are the central limit theorem or convergence of characteristic functions. However, I'm not sure whether the central limit theorem can be used, since the random variables are not i.i.d., and all the examples I've seen in the past have involved convergence to a normal distribution with mean 0.

Best Answer

You are quite close to solving the problem:

Using the representation $X_k=\sum_{j=1}^k Y_{kj}$ where $Y_{kj}\stackrel{\text{iid}}{\sim}\mathcal{P}(1)$, you have $$\sum_{k=1}^n X_k= \sum_{k=1}^n\sum_{j=1}^k Y_{kj} = \sum_{u=1}^{n(n+1)/2} \xi_{u}$$where $\xi_u\stackrel{\text{iid}}{\sim}\mathcal{P}(1)$ $(u=1,\ldots,n(n+1)/2)$. Therefore, if you normalise the above sum, you get $$\eqalign{\dfrac{\sum_{k=1}^n X_k-\mathbb{E}[\sum_{k=1}^n X_k]}{\text{var}(\sum_{k=1}^n X_k)^{1/2}} &=\dfrac{\sum_{u=1}^{n(n+1)/2} \xi_{u}-\mathbb{E}[\sum_{u=1}^{n(n+1)/2} \xi_{u}]}{\text{var}(\sum_{u=1}^{n(n+1)/2} \xi_{u})^{1/2}}\\ &=\dfrac{\sum_{u=1}^{n(n+1)/2} \xi_{u}-\frac{n(n+1)}{2} }{(n(n+1)/2)^{1/2}}\\ &=\sqrt{2}\,\dfrac{\sum_{k=1}^n X_k-\frac{n^2}{2}-\frac{n}{2}}{n(1+n^{-1})^{1/2}}\\ &=\sqrt{2}\,\dfrac{\frac{1}{n}\left[\sum_{k=1}^n X_k-\frac{n^2}{2}\right]-\frac{1}{2}}{(1+n^{-1})^{1/2}}\\ &=\sqrt{2}\,\dfrac{Z_n-\frac{1}{2}}{(1+n^{-1})^{1/2}}\\ }$$ which should help you conclude, along with a CLT applied to the i.i.d. sum above.
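As a sanity check on the algebra above, one can verify numerically that the first and last expressions in the chain of equalities agree exactly (a minimal sketch in Python; the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

# draw X_1, ..., X_n with X_k ~ Po(k)
x = rng.poisson(np.arange(1, n + 1))
s = x.sum()

# left-hand side: standardised sum of N = n(n+1)/2 iid Po(1) terms
N = n * (n + 1) / 2
lhs = (s - N) / np.sqrt(N)

# right-hand side: sqrt(2) * (Z_n - 1/2) / (1 + 1/n)^{1/2}
z_n = (s - n**2 / 2) / n
rhs = np.sqrt(2) * (z_n - 0.5) / np.sqrt(1 + 1 / n)

print(np.isclose(lhs, rhs))  # True
```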

The theoretical references for a central limit theorem for independent but not identically distributed random variables are Liapounov's and Lindeberg's versions of the CLT. The former requires moments of order $2+\epsilon$ with $\epsilon>0$, and the latter requires vanishing tail second moments (the Lindeberg condition). Both conditions hold for Poisson variates.
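As a sketch of why the Liapounov condition holds here (using the $\xi_u$ representation above, writing $N=n(n+1)/2$ and taking $\epsilon=1$): since $\mathbb{E}[|\xi_1-1|^3]=C<\infty$ for a Po(1) variate,

$$\frac{\sum_{u=1}^{N}\mathbb{E}\left[|\xi_u-1|^3\right]}{\left(\operatorname{var}\sum_{u=1}^N \xi_u\right)^{3/2}} = \frac{N\,C}{N^{3/2}} = \frac{C}{\sqrt{N}} \longrightarrow 0 \quad\text{as } n\to\infty.$$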

That the result holds (and hence that there is no mistake in the formulation) can be checked by a quick R experiment, as illustrated by the following figure, which compares a histogram of $10^3$ simulated $Z_n$'s with the $\text{N}(1/2,1/2)$ density:

[Figure: histogram of $10^3$ simulated $Z_n$ values overlaid with the N(1/2, 1/2) density]
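The experiment is easy to reproduce; here is a minimal sketch in Python rather than R (the choice of $n$ is mine, and the shortcut of drawing $\sum_{k=1}^n X_k$ as a single Poisson variate uses the additivity of independent Poissons):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200        # a moderately large n
reps = 1_000   # 10^3 replicates, as in the figure

# sum_{k=1}^n Po(k) has the same law as one Po(n(n+1)/2) draw
s = rng.poisson(n * (n + 1) / 2, size=reps)
z = (s - n**2 / 2) / n

# the sample mean and variance should both be close to 1/2
print(z.mean(), z.var())
```

A histogram of `z` (e.g. via `matplotlib.pyplot.hist` with `density=True`) can then be overlaid with the N(1/2, 1/2) density to match the figure.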
