$L^2$ convergence over $[0,T] \times \Omega$ for all $T>0$ implies convergence a.e. for a subsequence over $[0,\infty) \times \Omega$

Tags: lebesgue-integral, lp-spaces, measure-theory, probability-theory, real-analysis

While reading Karatzas and Shreve's Brownian Motion and Stochastic Calculus, I have a question on the extraction of a convergent subsequence from $L^2$ convergence.

My question is regarding the final sentence below.

Let $M$ be a continuous square-integrable martingale and $\langle M \rangle$ its quadratic variation process. For each $T>0$ and each measurable adapted process $X$, define
$$[X]_T^2 = E\int_0^T X_t^2 \, d\langle M \rangle_t.$$

Let $\mathcal{L}$ denote the set of equivalence classes of all measurable $\mathscr{F}_t$-adapted processes $X$, for which $[X]_T<\infty$ for all $T>0.$ We define a metric on $\mathcal{L}$ by $[X-Y]$, where
$$[X] = \sum_{n=1}^\infty 2^{-n} (1 \wedge [X]_n).$$

Now, in the proof of Proposition 2.6, it states that: If $X \in \mathcal{L}$ is bounded, then Lemma 2.4 guarantees the existence of a bounded sequence $\{X^{(m)}\}$ of simple processes satisfying
$$\sup_{T>0} \lim_{m\to \infty} E \int_0^T |X_t^{(m)} - X_t|^2 \, dt = 0.$$

It says that
from these we extract a subsequence $\{X^{(m_k)}\},$ such that the set
$$\{(t,\omega)\in [0,\infty)\times \Omega; \lim_{k \to \infty} X_t^{(m_k)}(\omega) \neq X_t (\omega)\}$$ has product measure zero.

I know that $L^2$ convergence implies convergence a.e. for a subsequence. But here, we have the $L^2$ space on the product space $[0,\infty) \times \Omega$, and we are given that for each $T>0$, $X^{(m)}$ converges in $L^2([0,T] \times \Omega)$ to $X$. So how do we get a convergent subsequence that works for all $(t,\omega) \in [0,\infty) \times \Omega$?

Best Answer

To spell it out: we know that $X^{(m)}$ converges to $X$ in $L^2([0,T]\times \Omega)$ for every $T>0$. So let $\{X^{(m,1)}\}_m$ be a subsequence of $\{X^{(m)}\}_m$ that converges to $X$ a.e. on $[0,1]\times \Omega$, and recursively let $\{X^{(m,n+1)}\}_m$ be a subsequence of $\{X^{(m,n)}\}_m$ that converges to $X$ a.e. on $[0,n+1]\times \Omega$; this extraction is possible because $X^{(m,n)}$ converges to $X$ in $L^2([0,n+1]\times \Omega)$.

Now observe that the diagonal sequence $X^{(m,m)}$ is a subsequence of $X^{(m)}$ which converges to $X$ a.e. on $[0,\infty)\times \Omega$. Indeed, for every $n$ the tail $(X^{(m,m)})_{m\geq n}$ is a subsequence of $(X^{(m,n)})_{m\in \mathbb{N}}$, and hence $X^{(m,m)}$ converges to $X$ a.e. on $[0,n]\times \Omega$ for every $n$.
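The passage from "a.e. on each $[0,n]\times \Omega$" to "a.e. on $[0,\infty)\times \Omega$" is just countable subadditivity. Writing $\mu$ for the product of Lebesgue measure and $P$, and $N_n \subseteq [0,n]\times \Omega$ for the null set where the diagonal sequence fails to converge to $X$ on $[0,n]\times \Omega$:
$$\{(t,\omega) : \lim_{m\to\infty} X_t^{(m,m)}(\omega) \neq X_t(\omega)\} \subseteq \bigcup_{n=1}^\infty N_n, \qquad \mu\Big(\bigcup_{n=1}^\infty N_n\Big) \le \sum_{n=1}^\infty \mu(N_n) = 0,$$
since every $(t,\omega)$ with $t<\infty$ lies in $[0,n]\times \Omega$ for some $n$.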

This proves the desired claim.

If you don't know why this is called a diagonal argument, try writing up the $X^{(m,n)}$ in a big square and note that $X^{(m,m)}$ is exactly the sequence on the diagonal.
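Arranging the subsequences in an array, with row $n$ being the sequence $(X^{(m,n)})_{m\in\mathbb{N}}$ extracted at stage $n$, the picture is:
$$\begin{array}{cccc}
X^{(1,1)} & X^{(2,1)} & X^{(3,1)} & \cdots \\
X^{(1,2)} & X^{(2,2)} & X^{(3,2)} & \cdots \\
X^{(1,3)} & X^{(2,3)} & X^{(3,3)} & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{array}$$
Each row is a subsequence of the row above it and converges a.e. on $[0,n]\times\Omega$; the chosen sequence $X^{(1,1)}, X^{(2,2)}, X^{(3,3)}, \dots$ runs down the diagonal, and from position $n$ onward it stays inside row $n$.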
