$\lim$ vs $\liminf$ and $\limsup$ in the proof of convergence in probability implies convergence in distribution

Tags: convergence-divergence, limits, limsup-and-liminf, probability-theory

I am studying the various types of convergence for random variables, in particular how convergence in probability implies convergence in distribution.

Let $(X_n)_n$ be a sequence of random variables and let us assume that $X_n$ converges in probability to $X$. After some computations, we can write:
$$F_X(t-\epsilon) - P(|X_n - X|> \epsilon) \le F_{X_n}(t) \le F_X(t+\epsilon) + P(|X_n - X|> \epsilon)$$
where $F_X(t)$ is the CDF of $X$ and $\epsilon > 0$.

At this point I don't understand why several authors (see, e.g., this example of a proof) use $\limsup\limits_{n \to \infty}$ and $\liminf\limits_{n \to \infty}$ to show that $F_{X_n}(t)$ is asymptotically bounded by $F_X(t-\epsilon)$ and $F_X(t+\epsilon)$ as $n \to \infty$; in particular, they write:
$$F_X(t-\epsilon) \le \liminf_{n \to \infty} F_{X_n}(t) \le \limsup_{n \to \infty} F_{X_n}(t) \le F_X(t+\epsilon)$$
and then they conclude the proof by resorting to the continuity of $F_X(t)$.

My question is, why is it not sufficient to just compute the limits of the elements of the inequality above obtaining $F_X(t-\epsilon) \le \lim \limits_{n \to \infty} F_{X_n}(t)\le F_X(t+\epsilon)$ ? In particular we can define:
$$a_n := F_X(t-\epsilon) - P(|X_n - X|> \epsilon)$$
$$b_n := F_X(t+\epsilon) + P(|X_n - X|> \epsilon)$$ for all $n$. Then, knowing that $a_n \le F_{X_n}(t) \le b_n$, from one of the comparison theorems for sequences and the convergence in probability of $X_n$, we obtain:
$$\lim \limits_{n \to \infty} a_n = F_X(t-\epsilon) \le \lim \limits_{n \to \infty} F_{X_n}(t)\le \lim \limits_{n \to \infty} b_n = F_X(t+\epsilon)$$

Am I missing anything?

Thanks a lot.

Best Answer

This is because $\lim_{n}F_{X_{n}}(t)$ does not have to exist for all $t$. The comparison (squeeze) theorem only pins down the limit of the middle sequence when the two bounding sequences converge to the *same* value; here $a_n \to F_X(t-\epsilon)$ and $b_n \to F_X(t+\epsilon)$, which in general differ, so the inequality $a_n \le F_{X_n}(t) \le b_n$ does not even guarantee that $\lim_n F_{X_n}(t)$ exists. What it does guarantee is that $\liminf_n F_{X_n}(t)$ and $\limsup_n F_{X_n}(t)$, which always exist for a bounded sequence, lie between those two bounds.

Consider, for example, a random variable $X_{n}$ which takes the value $\frac{1}{n}(-1)^{n}$ almost surely. Clearly, this converges in probability to the random variable $X$ which is zero almost surely: for any $\epsilon>0$ and all $n\geq1/\epsilon$, $$ \mathbb{P}(\left|X_{n}-X\right|>\epsilon)=\mathbb{P}(\left|X_{n}\right|>\epsilon)=0. $$

Meanwhile, $F_{X_{n}}(0)$ is zero when $n$ is even and one when $n$ is odd. Therefore, despite being bounded between zero and one, $F_{X_{n}}(0)$ has no limit as $n\rightarrow\infty$. (Note that $t=0$ is a discontinuity point of $F_X$ here, which is exactly why convergence in distribution only demands convergence at continuity points.)
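The oscillation is easy to see numerically. A minimal sketch (the helper name `F_Xn` is mine): since $X_n = \frac{1}{n}(-1)^n$ is a point mass, its CDF is a step function, $F_{X_n}(t) = 1$ if $t \geq \frac{1}{n}(-1)^n$ and $0$ otherwise.

```python
def F_Xn(t, n):
    """CDF of the point mass at (1/n)(-1)^n:
    F(t) = 1 if t >= (1/n)(-1)^n, else 0."""
    return 1.0 if t >= ((-1) ** n) / n else 0.0

# Evaluate the CDF at t = 0 for the first few n:
values = [F_Xn(0.0, n) for n in range(1, 9)]
print(values)  # [1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]

# Over any tail of the sequence the values keep hitting both 0 and 1,
# so liminf_n F_{X_n}(0) = 0 and limsup_n F_{X_n}(0) = 1: no limit.
tail = [F_Xn(0.0, n) for n in range(1000, 2000)]
print(min(tail), max(tail))  # 0.0 1.0
```

The sequence never settles, which is precisely why the proof can only speak of $\liminf$ and $\limsup$ until continuity of $F_X$ at $t$ forces the two to coincide.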