Convergence in $L^1$ implies there exists a subsequence that converges almost everywhere.

analysis, convergence-divergence, measurable-functions, measure-theory, real-analysis

In Folland, Theorem 2.32 states

If $f_n \to f$ in $L^1$ then there is a subsequence $\{f_{n_j}\}$ such that $f_{n_j} \to f$ almost everywhere.

The proof given just says to "combine propositions 2.29 and 2.30" which say

(2.29) If $f_n \to f$ in $L^1$ then $f_n \to f$ in measure.

and

(2.30) Suppose $\{f_n\}$ is Cauchy in measure. Then there is a measurable $f$ such that $f_n \to f$ in measure, and there is a subsequence $\{f_{n_j}\}$ that converges to $f$ almost everywhere. Moreover, if $f_n \to g$ in measure, then $f = g$ almost everywhere.

I'm unsure how this is implied. I think part of this is because I don't fully understand 2.30. I know that if $f_n \to f$ in $L^1$, then by 2.29 we have $f_n \to f$ in measure. 2.30, however, deals with a sequence that is Cauchy in measure. But in general, convergence in measure does not imply Cauchy in measure, correct?

Best Answer

Convergence in measure implies Cauchy in measure.

Let our measure be $\mu$. If $f_n \to f$ in measure, then given $\epsilon > 0$, we have $\mu(\{x : |f_n(x) - f(x)| > \epsilon/2\})\to 0$ as $n \to \infty$. But since $|f_n(x) - f_m(x)| \leq |f_n(x) - f(x)| + |f(x) - f_m(x)|$, we have $$ \{x: |f_n(x) - f_m(x)| > \epsilon\} \subseteq \{x: |f_n(x) - f(x)| > \epsilon/2\} \cup \{x : |f(x) - f_m(x)| > \epsilon/2\}.$$ Monotonicity and subadditivity of measure then give that $\mu(\{x: |f_n(x) - f_m(x)| > \epsilon\})$ is arbitrarily small as $n,m \to \infty$. Concretely: given $\delta > 0$, there exists $N$ such that if $n \geq N$, then $\mu(\{x : |f_n(x) - f(x)|> \epsilon/2\}) < \delta/2$. For this $N$, if $n,m \geq N$, then $$\mu(\{x: |f_n(x) - f_m(x)| > \epsilon\}) \leq \mu(\{x: |f_n(x) - f(x)| > \epsilon/2\}) + \mu(\{x : |f(x) - f_m(x)| > \epsilon/2\}) < \delta.$$
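To see how this closes the gap in Theorem 2.32, here is a sketch of the standard way the two propositions combine (my own spelling-out, not Folland's wording). Suppose $f_n \to f$ in $L^1$. By 2.29, $f_n \to f$ in measure, and by the argument above $\{f_n\}$ is Cauchy in measure. Applying 2.30, there is a measurable $g$ such that $f_n \to g$ in measure and a subsequence $f_{n_j} \to g$ almost everywhere. Since we also have $f_n \to f$ in measure, the uniqueness part of 2.30 gives $f = g$ almost everywhere, hence
$$ f_{n_j} \to g = f \quad \text{almost everywhere}, $$
which is exactly the conclusion of 2.32.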