“Central limit theorem” for the inverse of a sum of random variables

central-limit-theorem, probability-distributions, probability-theory, probability-limit-theorems

The classical central limit theorem states that, given i.i.d. random variables $X_n$ for $n=1,\dots,N$, the sum $Z_N = \sum_{n=1}^N X_n$, after suitable centering and scaling, converges to a normal distribution as $N \to \infty$.

Is there any similar result for the random variable $Y_N = 1/Z_N$? Is it sufficient to compute the PDF of $Y_N$ by the change-of-variables formula, $p_{Y_N}(y) = y^{-2}\, p_{Z_N}(1/y)$? If so, the resulting PDF does not seem to have a well-defined expected value or variance. Would it help, perhaps, to consider the random variable $N/Z_N$ instead of $Y_N$?

Best Answer

Hint: Delta Method

By the central limit theorem we know that (with convergence in distribution) $$\sqrt{n}(\bar{X}-\mu_{x})\rightarrow N(0,\sigma^2).$$

Expanding $h$ in a Taylor series about $\mu_{x}$ gives $$\sqrt{n}\left(h(\bar{X})-h(\mu_{x})\right)=\sqrt{n} (\bar{X}-\mu_{x})h^{\prime}(\mu_{x}) +\sqrt{n} (\bar{X}-\mu_{x})^2\frac{h^{\prime \prime}(\mu_{x})}{2!}+\cdots$$

Keeping only the first-order term (valid when $h^{\prime}(\mu_{x})\neq 0$), we get the approximation

$$\sqrt{n}\left(h(\bar{X})-h(\mu_{x})\right)\cong \color{red}{\sqrt{n} (\bar{X}-\mu_{x})}h^{\prime}(\mu_{x})\rightarrow N\left(0,\sigma^2 \left(h^{\prime}(\mu_{x})\right)^2\right) $$

Now just take $h(t)=\frac{1}{t}$, so $h^{\prime}(t)=-\frac{1}{t^{2}}$, and (provided $\mu_{x}\neq 0$) $$\sqrt{n}\left(\frac{1}{\bar{X}}-\frac{1}{\mu_{x}}\right)\rightarrow N\left(0,\frac{\sigma^{2}}{\mu_{x}^{4}}\right).$$
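As a quick sanity check (not part of the original answer), here is a Monte Carlo sketch of the delta method for $h(t)=1/t$. The choice of an Exponential distribution with mean $\mu_x = 2$ (so $\sigma^2 = 4$) is an illustrative assumption; the predicted limiting standard deviation of $\sqrt{n}\,(1/\bar{X} - 1/\mu_x)$ is $\sqrt{\sigma^2/\mu_x^4} = 0.5$.

```python
import numpy as np

# Monte Carlo check of the delta method for h(t) = 1/t.
# Illustrative setup: X ~ Exponential with mean mu = 2, so sigma^2 = 4.
rng = np.random.default_rng(0)
n = 5_000        # sample size per replication
reps = 2_000     # number of independent replications
mu = 2.0         # E[X]
sigma2 = 4.0     # Var(X) for Exponential(scale=2)

# Sample means over each replication, then the delta-method statistic
xbar = rng.exponential(scale=mu, size=(reps, n)).mean(axis=1)
z = np.sqrt(n) * (1.0 / xbar - 1.0 / mu)   # sqrt(n) * (h(Xbar) - h(mu))

# Delta method predicts Var = sigma^2 * h'(mu)^2 = sigma2 / mu^4
predicted_sd = np.sqrt(sigma2 / mu**4)
print(f"empirical sd = {z.std():.3f}, predicted sd = {predicted_sd:.3f}")
```

The empirical standard deviation of `z` should come out close to the predicted 0.5, and a histogram of `z` looks approximately normal, as the approximation above promises.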