Let $X \sim N(0,1)$, and let $Y_n$ be a random variable, independent of $X$, such that $P(Y_n = n) = \frac{1}{n}$ and $P(Y_n = 0) = 1 - \frac{1}{n}$. What does $Z_n = Y_n + X$ converge to in probability?
-
I can see that $Y_n$ converges in probability to $0$, which makes $Z_n$ converge to $N(0,1)$ in distribution by Slutsky's theorem, but I don't know whether we can find the probability limit of $Z_n$.
-
I also computed $E[Z_n] = 1$ and $\operatorname{Var}(Z_n) = n$ (?). However, if my reasoning above is correct, the limiting distribution is $N(0,1)$, which has mean $0$ and variance $1$. What explains this discrepancy?
Best Answer
You've made some good observations.
But in fact it is not hard to see that $Z_n$ converges to $X$ in probability, by a direct computation: since $Z_n - X = Y_n$, for any $\varepsilon > 0$ (and $n > \varepsilon$), $$\mathbb P\left(|Z_n - X|>\varepsilon\right) = \mathbb P\left(|Y_n|>\varepsilon\right) = \frac1n \to 0.$$
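If it helps to see this concretely, here is a minimal Monte Carlo sketch (not part of the original argument) that estimates $\mathbb P(|Z_n - X|>\varepsilon)$, assuming $Y_n$ is independent of $X \sim N(0,1)$; the function name `prob_far` and the parameter choices are mine:

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_far(n, eps=0.5, trials=200_000):
    """Estimate P(|Z_n - X| > eps) where Z_n = Y_n + X."""
    x = rng.standard_normal(trials)            # X ~ N(0, 1)
    # Y_n = n with probability 1/n, else 0
    y = np.where(rng.random(trials) < 1.0 / n, float(n), 0.0)
    z = x + y
    return np.mean(np.abs(z - x) > eps)

for n in [10, 100, 1000]:
    print(n, prob_far(n))   # estimates shrink like 1/n
```

The estimates track $1/n$, matching the exact computation above.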
In our case, $(Z_n)$ converges to $X$ in probability but, as you observed, not in expectation: $E[Z_n] = 1$ for every $n$, while $E[X] = 0$. Hence we deduce that $(Z_n)$ is not uniformly integrable (I invite you to check against the definition that $(Z_n)$ is indeed not u.i.).
More intuitively, uniformly integrable families of random variables are those whose "mass" is, uniformly in $n$, concentrated on a bounded set; this makes them better behaved than merely integrable random variables, as your exercise illustrates well.
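To see the failure of uniform integrability numerically, one can estimate the tail expectation $E\left[|Z_n|\,\mathbf 1_{\{|Z_n|>K\}}\right]$: for any cutoff $K$, taking $n$ much larger than $K$ leaves roughly unit mass beyond $K$ (the atom at $n$ contributes about $n \cdot \frac1n = 1$), so the supremum over $n$ does not vanish as $K \to \infty$. A sketch, again under the independence assumption and with names of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)

def tail_mass(n, K, trials=200_000):
    """Estimate E[|Z_n| 1{|Z_n| > K}] for Z_n = Y_n + X."""
    x = rng.standard_normal(trials)
    y = np.where(rng.random(trials) < 1.0 / n, float(n), 0.0)
    z = x + y
    return np.mean(np.abs(z) * (np.abs(z) > K))

# For each cutoff K, pick n >> K: the tail expectation stays near 1
# instead of tending to 0, so (Z_n) is not uniformly integrable.
for K in [5, 20, 100]:
    print(K, tail_mass(10 * K, K))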