I'm reading a book about measure theory and probability (the first chapter of Durrett's *Probability* book), and it keeps switching between the terms "a.e." and "a.s." in different contexts. I'm becoming confused about their meanings. What's the difference between "almost everywhere" and "almost surely"?
[Math] Almost everywhere vs. almost surely
almost-everywhere, measure-theory, probability
Related Solutions
This is one of those things that is best understood through an example. A very nice example for this issue is the "wandering block". Informally, the wandering block is the sequence of indicator functions of $[0,1],[0,1/2],[1/2,1],[0,1/4],[1/4,1/2],[1/2,3/4],[3/4,1]$, etc. More explicitly, it is the sequence $g_n(x)$ obtained by enumerating the "triangular array" $f_{j,k}(x)=\chi_{[j2^{-k},(j+1)2^{-k}]}(x)$, where $k=0,1,\dots$ and $j=0,1,\dots,2^{k}-1$.
The sequence $g_n$ converges in measure to the zero function. You can see this as follows. Given $n$, write $g_n=f_{j,k}$, then $m(\{ x : |g_n(x)| \geq \varepsilon \})=2^{-k}$ for any given $\varepsilon \in (0,1)$. Since $k \to \infty$ as $n \to \infty$, this measure goes to $0$ as $n \to \infty$.
On the other hand, the sequence $g_n(x)$ does not converge at any individual point, because any given point lies in infinitely many of these intervals and lies outside infinitely many of them. Thus the sequence $g_n(x)$ contains infinitely many $1$s and infinitely many $0$s, so it cannot converge.
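A small numerical sketch of the wandering block may help; the function names here are my own, and the enumeration of the triangular array is the natural level-by-level one (level $k$ occupies indices $n = 2^k-1, \dots, 2^{k+1}-2$):

```python
from fractions import Fraction

def jk_from_n(n):
    # Invert the enumeration: level k holds the indices
    # n = 2^k - 1, ..., 2^(k+1) - 2, with j = n - (2^k - 1).
    k = 0
    while n >= 2 ** (k + 1) - 1:
        k += 1
    return n - (2 ** k - 1), k

def g(n, x):
    # g_n = f_{j,k}: indicator of the closed interval [j*2^-k, (j+1)*2^-k]
    j, k = jk_from_n(n)
    return 1 if Fraction(j, 2 ** k) <= x <= Fraction(j + 1, 2 ** k) else 0

def support_measure(n):
    # m({x : |g_n(x)| >= eps}) = 2^-k for any eps in (0,1)
    _, k = jk_from_n(n)
    return Fraction(1, 2 ** k)

# Convergence in measure: the support shrinks as n grows.
print(support_measure(0), support_measure(100))  # 1 and 1/64 (n=100 is on level k=6)

# No pointwise convergence: at x = 1/3 the sequence keeps hitting
# both 0 and 1 (exactly one 1 per level, since 1/3 is not dyadic).
vals = [g(n, Fraction(1, 3)) for n in range(2 ** 5 - 1)]
print(sum(vals))  # 5: one 1 on each of the complete levels k = 0..4
```

Running this shows the two phenomena side by side: the measure of the support goes to zero, while the value at the fixed point $x=1/3$ never settles down.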
On an infinite measure space, there is an example for the other direction: $f_n(x)=\chi_{[n,n+1]}(x)$ on the line converges pointwise to $0$ but does not converge in measure, since $m(\{ x : |f_n(x)| \geq \varepsilon \})=1$ for $\varepsilon \in (0,1)$. A corollary of Egorov's theorem says that this is impossible on a finite measure space.
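A quick sanity check of this "escaping block" example (again, the function name is mine, using the closed interval $[n,n+1]$ as above):

```python
def f(n, x):
    # The escaping block: indicator of [n, n+1] on the real line
    return 1 if n <= x <= n + 1 else 0

# Pointwise convergence to 0: for any fixed x, f(n, x) = 0 once n > x.
x0 = 2.5
print([f(n, x0) for n in range(6)])  # [0, 0, 1, 0, 0, 0]

# But not in measure: {x : |f_n(x)| >= eps} = [n, n+1] always has
# Lebesgue measure 1, for any eps in (0, 1) -- the block never shrinks,
# it just slides off to infinity.
```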
On a related note, the wandering block example also gives a nice, explicit illustration of the theorem "if $f_n$ converges in measure then a subsequence converges almost everywhere". Here, for any fixed $j$, the sequence $h_k=f_{j,k}$ (defined for all $k$ large enough that $j \leq 2^k-1$) converges almost everywhere.
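For the simplest case $j=0$, this subsequence can be checked directly (a sketch with my own function name):

```python
from fractions import Fraction

def f_jk(j, k, x):
    # f_{j,k}: indicator of [j*2^-k, (j+1)*2^-k]
    return 1 if Fraction(j, 2 ** k) <= x <= Fraction(j + 1, 2 ** k) else 0

# Fix j = 0: the intervals [0, 2^-k] shrink down to the single point {0},
# so f_{0,k}(x) -> 0 for every x > 0; convergence fails only at x = 0,
# a set of measure zero -- i.e. the subsequence converges a.e.
print([f_jk(0, k, Fraction(1, 3)) for k in range(6)])  # [1, 1, 0, 0, 0, 0]
print([f_jk(0, k, 0) for k in range(6)])               # [1, 1, 1, 1, 1, 1]
```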
This is not exactly an answer to the question as formulated, but it may clarify the precise formulation of the distinction. Convergence a.e. can be written using countable intersections and unions. First define
$$A_{n,m} = \{ x : |f_n(x) - f(x)| > 1/m \}.$$
Now, for the convergence to fail at $x$, there must be some $m$ such that $x \in A_{n,m}$ for infinitely many $n$. In general, the set $B_m$ defined by "$x \in B_m$ if and only if there are infinitely many $n$ such that $x \in A_{n,m}$" is given by
$$B_m = \bigcap_{k=1}^\infty \bigcup_{n=k}^\infty A_{n,m}.$$
Since convergence fails at $x$ exactly when $x \in B_m$ for some $m$, the bad set where convergence fails is
$$C=\bigcup_{m=1}^\infty \bigcap_{k=1}^\infty \bigcup_{n=k}^\infty A_{n,m}.$$
Convergence a.e. says that this set has measure zero. It is equivalent to say that
$$\mu \left ( \bigcap_{k=1}^\infty \bigcup_{n=k}^\infty A_{n,m} \right )$$
is zero for every $m$.
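This last equivalence deserves a line of justification; a short sketch using monotonicity and countable subadditivity:

```latex
B_m \subseteq C \ \Rightarrow\ \mu(B_m) \le \mu(C),
\qquad
\mu(C) = \mu\!\left(\bigcup_{m=1}^\infty B_m\right) \le \sum_{m=1}^\infty \mu(B_m),
```

so $\mu(C) = 0$ if and only if $\mu(B_m) = 0$ for every $m$.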
Best Answer
In a probability space (equipped with a probability measure $P$), we say that an event $A$ occurs almost surely if $P(A)=1$. On the other hand, on a measure space equipped with a measure $\mu$, we say that a property $\mathcal{P}$ holds almost everywhere if the set where $\mathcal{P}$ fails has measure zero.

Note that "a.s." is equivalent to "a.e." on probability spaces, since if $A$ occurs almost surely, then the probability that $A$ does not occur is $P(A^c)=1-P(A)=0$. However, on a general measure space $X$ we cannot say that a property holds almost everywhere if it holds on a set of measure $\mu(X)$ (which would correspond to an event having probability $1$), since in many cases this measure is infinite. This is why, for general measure spaces, the definition of "almost everywhere" is formulated in terms of the complement: the exceptional set must have measure zero.
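Two standard examples (my own, not from the answer above) may make the distinction concrete:

```latex
% a.e. (Lebesgue measure \lambda on \mathbb{R}):
\mathbf{1}_{\mathbb{Q}} = 0 \ \text{a.e.,} \quad \text{since } \lambda(\mathbb{Q}) = 0.
% a.s. (probability space carrying X \sim N(0,1)):
X \neq 0 \ \text{a.s.,} \quad \text{since } P(X = 0) = 0.
```

Both statements have exactly the same shape, namely "the bad set is null"; the only difference is whether the ambient space carries a probability measure or a general (possibly infinite) measure.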