Convergence In Distribution vs Probability Explanation

probability, statistics

I'm having trouble following my lecture notes:

A sequence of random variables $X_1, \dots, X_n, \dots$ with corresponding c.d.f.'s $F_1, \dots, F_n, \dots$ converges in distribution to a random variable $X$ with c.d.f. $F$ if:
$$
\lim _{n \rightarrow \infty} F_{n}(x)=F(x)
$$

for all $x$ where $F$ is continuous.

To see that convergence in distribution does not, in general, imply convergence in probability, let $X_1, X_2, \dots$ be a sequence of i.i.d. random variables, all with c.d.f. $F = \Phi$, the standard normal c.d.f. Also let $X$ be a random variable that is independent of these and has the same standard normal distribution. Then
$$
X_{n} \stackrel{d}{\longrightarrow} X
$$

but for all $\epsilon > 0$,
$$
\operatorname{Pr}\left(\left|X_{n}-X\right|<\epsilon\right)=2 \Phi\left(\frac{\epsilon}{\sqrt{2}}\right)-1,
$$

and this does not converge to 1.

I'm very rusty on probability theory and rather confused by this. Where does the expression $2 \Phi\left(\frac{\epsilon}{\sqrt{2}}\right)-1$ come from? And why is he testing for convergence to 1? In previous notes, convergence in probability was defined with a limit of 0:
$$
\lim _{n \rightarrow \infty} \operatorname{Pr}\left(\left|X_{n}-X\right|>\epsilon\right)=0.
$$

Best Answer

  1. Since $X_n$ and $X$ are independent standard normals, $Z_n \equiv X_n - X \sim N(0,2)$ (the variances of independent normals add). Standardizing, $Z_n/\sqrt{2} \sim N(0,1)$, so $P(Z_n < \epsilon) = \Phi(\epsilon/\sqrt{2})$. Hence, using the symmetry of $Z_n$ about $0$, \begin{align} P(|X_n - X| < \epsilon) &= P(|Z_n| < \epsilon) = P(-\epsilon < Z_n < \epsilon)\\ &= P(Z_n < \epsilon) - P(Z_n \le -\epsilon)\\ &= P(Z_n < \epsilon) - (1 - P(Z_n < \epsilon))\\ &= 2\Phi(\epsilon/\sqrt{2}) - 1. \end{align} A numerical check of this formula appears after this answer.

  2. And why is he testing for convergence to 1? Because $P(|X_n - X| < \epsilon) = 1 - P(|X_n - X| \ge \epsilon)$, the two conditions are equivalent: $$ P(|X_n - X| < \epsilon) \rightarrow 1 \iff P(|X_n - X| \ge \epsilon) \rightarrow 0. $$

Formally,

$$ 1 = P(|X_n - X| < \epsilon) + P(|X_n - X| = \epsilon) + P(|X_n - X| > \epsilon), $$ and you can drop the middle term because $X_n - X$ is a continuous random variable, so $P(|X_n - X| = \epsilon) = 0$ (see the second sketch below).
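As a sanity check on point 1, here is a minimal Monte Carlo sketch (mine, not part of the original answer) comparing the empirical value of $P(|X_n - X| < \epsilon)$ with $2\Phi(\epsilon/\sqrt{2}) - 1$; the value of $\epsilon$ and the sample size are arbitrary illustrative choices.

```python
# Monte Carlo sanity check for P(|X_n - X| < eps) = 2*Phi(eps/sqrt(2)) - 1.
# X_n and X are simulated as independent standard normals; eps and the
# sample size are arbitrary illustrative choices.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
eps = 0.5
n_samples = 1_000_000

x_n = rng.standard_normal(n_samples)  # plays the role of X_n
x = rng.standard_normal(n_samples)    # independent X, also N(0, 1)

empirical = np.mean(np.abs(x_n - x) < eps)
theoretical = 2 * norm.cdf(eps / np.sqrt(2)) - 1

print(f"empirical:   {empirical:.4f}")    # ~0.276
print(f"theoretical: {theoretical:.4f}")  # ~0.276, far from 1
```

Note that the theoretical value does not depend on $n$ at all, so it cannot approach 1; that is exactly why $X_n$ does not converge to $X$ in probability.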
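And a similar sketch (again mine, not the answerer's) illustrating the three-way decomposition in point 2: for the continuous variable $Z_n = X_n - X$, the equality term is a probability-zero event.

```python
# Illustrating 1 = P(|Z| < eps) + P(|Z| == eps) + P(|Z| > eps) for the
# continuous r.v. Z = X_n - X ~ N(0, 2); the equality term vanishes.
import numpy as np

rng = np.random.default_rng(1)
eps = 0.5
z = rng.standard_normal(1_000_000) - rng.standard_normal(1_000_000)

p_less = np.mean(np.abs(z) < eps)
p_equal = np.mean(np.abs(z) == eps)  # hits eps exactly: probability zero
p_greater = np.mean(np.abs(z) > eps)

print(p_less + p_equal + p_greater)  # 1.0: the three events partition the space
print(p_equal)                       # 0.0 for a continuous random variable
```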
