Proof of Blumenthal’s 0-1 law for Brownian Motion

Tags: brownian-motion, probability, probability-theory, stochastic-calculus, stochastic-processes

I am currently reading the book "Brownian Motion, Martingales, and Stochastic Calculus" by Jean-François Le Gall and am stuck on the proof of Blumenthal's 0-1 law for Brownian motion.

The setup is the following: assume we have a one-dimensional Brownian motion $(B_t)_{t \geq 0}$ on some probability space $(\Omega, \mathcal{F}, P)$. For $t \geq 0$ let $\mathcal{F}_t = \sigma(B_s : 0 \leq s \leq t)$ and define $\mathcal{F}_{0+} = \cap_{\epsilon > 0} \mathcal{F}_\epsilon$. Then the following theorem holds.

Theorem: The sigma-algebra $\mathcal{F}_{0+}$ is trivial in the sense that for all $A \in \mathcal{F}_{0+}$, the probability of $A$ is either $0$ or $1$.

I will outline the proof that can be found in the book with the part I do not understand:

Proof outline: Using $\cap$-stable generators, it is enough to prove that for all $n \in \mathbb{N}$ and all $0 < t_1 < \ldots < t_n$ the sigma-algebras $\mathcal{F}_{0+}$ and $\sigma(B_{t_1},\ldots,B_{t_n})$ are independent. Now let $g: \mathbb{R}^n \rightarrow \mathbb{R}$ be bounded and continuous and let $A \in \mathcal{F}_{0+}$. By continuity and dominated convergence we can write, for $0 < \epsilon < t_1$,

$$
\begin{align}
E[1_A g(B_{t_1},\ldots,B_{t_n})] = \lim_{\epsilon \rightarrow 0} E[1_A g(B_{t_1} - B_\epsilon,\ldots,B_{t_n} - B_\epsilon)].
\end{align}
$$

Now, by the simple Markov property of Brownian motion, $(B_{t+\epsilon}-B_\epsilon)_{t \geq 0}$ is independent of $\mathcal{F}_\epsilon$, and since $A \in \mathcal{F}_{0+} \subseteq \mathcal{F}_\epsilon$ we can rewrite the above as

$$
\begin{align}
\lim_{\epsilon \rightarrow 0} E[1_A g(B_{t_1} - B_\epsilon,\ldots,B_{t_n} - B_\epsilon)] &= P(A) \lim_{\epsilon \rightarrow 0} E[g(B_{t_1} - B_\epsilon,\ldots,B_{t_n} - B_\epsilon)] \\
&= P(A) E[g(B_{t_1},\ldots,B_{t_n})].
\end{align}
$$
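The factorization step above can be sanity-checked numerically (this is only an illustration, not part of the proof; the event $A$, the time points, and the choice $g = \tanh$ are my own hypothetical choices). A simulation constructs the increment $B_{t_1} - B_\epsilon$ independently of $B_\epsilon$, which is exactly what the simple Markov property asserts, so $E[1_A\, g(B_{t_1}-B_\epsilon)] \approx P(A)\, E[g(B_{t_1}-B_\epsilon)]$ for any event $A$ determined by $B_\epsilon$, up to Monte Carlo error:

```python
import math
import random

random.seed(42)

N = 100_000          # number of simulated paths
eps, t1 = 0.1, 1.0   # hypothetical time points, 0 < eps < t1

b_eps, incr = [], []
for _ in range(N):
    # B_eps ~ N(0, eps); the increment B_{t1} - B_eps ~ N(0, t1 - eps)
    # is drawn independently of B_eps (simple Markov property).
    b_eps.append(random.gauss(0.0, math.sqrt(eps)))
    incr.append(random.gauss(0.0, math.sqrt(t1 - eps)))

# A = {B_eps > 0} is an event in F_eps; g = tanh is bounded continuous.
p_a = sum(1.0 for x in b_eps if x > 0) / N
lhs = sum((1.0 if x > 0 else 0.0) * math.tanh(y)
          for x, y in zip(b_eps, incr)) / N
rhs = p_a * (sum(math.tanh(y) for y in incr) / N)

# E[1_A g(B_{t1} - B_eps)] should match P(A) E[g(B_{t1} - B_eps)]:
print(f"E[1_A g] = {lhs:.4f},  P(A) E[g] = {rhs:.4f}")
```

With $10^5$ samples the two quantities agree to within a few thousandths, reflecting the independence of the increment from $\mathcal{F}_\epsilon$.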

This should then be enough to conclude independence, but I do not see why it suffices. If $g$ were allowed to be an arbitrary bounded measurable function, it would be clear. But how does independence of $\mathcal{F}_{0+}$ and $\sigma(B_{t_1},\ldots,B_{t_n})$ follow when $g$ is only bounded and continuous?

Thanks a lot in advance!

Best Answer

Approach I (via characteristic functions): The identity $$E(1_A g(B_{t_1},\ldots,B_{t_n})) = P(A) E(g(B_{t_1},\ldots,B_{t_n})) \tag{1}$$ extends easily to complex-valued bounded continuous functions (just write $g = \operatorname{Re} g + i \operatorname{Im} g$ and apply $(1)$ separately to the real and imaginary parts). Choosing $$g(x_1,\ldots,x_n) := \exp \left( i \sum_{j=1}^n \xi_j x_j \right)$$ for some fixed $\xi=(\xi_1,\ldots,\xi_n) \in \mathbb{R}^n$, we find that $$E \left( 1_A \exp \left[ i \sum_{j=1}^n \xi_j B_{t_j} \right] \right) = P(A) E \exp \left( i \sum_{j=1}^n \xi_j B_{t_j} \right)$$ for all $A \in \mathcal{F}_{0+}$ and $\xi \in \mathbb{R}^n$. Since characteristic functions determine distributions, this implies (a version of Kac's theorem) that $\mathcal{F}_{0+}$ and $\sigma(B_{t_1},\ldots,B_{t_n})$ are independent.
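For completeness, the lemma being invoked can be stated as follows (my paraphrase of a standard fact, not a quote from the book):

```latex
\textbf{Lemma.} Let $X$ be an $\mathbb{R}^n$-valued random variable and let
$A$ be an event with $P(A) > 0$. If
\[
  E\!\left[ 1_A \, e^{i \langle \xi, X \rangle} \right]
    = P(A) \, E\!\left[ e^{i \langle \xi, X \rangle} \right]
  \qquad \text{for all } \xi \in \mathbb{R}^n,
\]
then the characteristic function of $X$ under $P(\cdot \mid A)$ equals its
characteristic function under $P$. Since characteristic functions determine
distributions, $P(A \cap \{X \in F\}) = P(A)\, P(X \in F)$ for every Borel
set $F \subseteq \mathbb{R}^n$, i.e. $A$ is independent of $\sigma(X)$.
(If $P(A) = 0$, independence is automatic.)
```

Applying the lemma to every $A \in \mathcal{F}_{0+}$ gives the claimed independence of the two sigma-algebras.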

Approach II (via monotone class argument): For any open set $U \subseteq \mathbb{R}^n$ there exists a sequence of bounded continuous functions $(g_k)_{k \in \mathbb{N}}$ such that $g_k \uparrow 1_U$. Using $(1)$ and applying the monotone convergence theorem, it follows that \begin{align*} E(1_A 1_U(B_{t_1},\ldots,B_{t_n})) &= \sup_{k \geq 1} E(1_A g_k(B_{t_1},\ldots,B_{t_n})) \\ &= P(A) \sup_{k \geq 1} E(g_k(B_{t_1},\ldots,B_{t_n})) \\ &= P(A) E(1_U(B_{t_1},\ldots,B_{t_n})) \end{align*} for $A \in \mathcal{F}_{0+}$. Since the open sets form a $\cap$-stable generator of the Borel $\sigma$-algebra $\mathcal{B}(\mathbb{R}^n)$, an application of the monotone class theorem yields $$E(1_A 1_F (B_{t_1},\ldots,B_{t_n})) = P(A) E(1_F(B_{t_1},\ldots,B_{t_n})), \qquad A \in \mathcal{F}_{0+},$$ for any $F \in \mathcal{B}(\mathbb{R}^n)$. Hence, $\mathcal{F}_{0+}$ and $\sigma(B_{t_1},\ldots,B_{t_n})$ are independent.
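The approximating sequence in Approach II can be made concrete: for an open set $U$, the functions $g_k(x) := \min\{1, \, k \cdot d(x, U^c)\}$ are bounded, continuous, and increase pointwise to $1_U$ as $k \to \infty$. A small sketch for the hypothetical choice $U = (0,1) \subseteq \mathbb{R}$:

```python
def dist_to_complement(x):
    """Distance from x to the complement of U = (0, 1)."""
    if x <= 0.0 or x >= 1.0:
        return 0.0           # x lies in U^c, distance 0
    return min(x, 1.0 - x)   # distance to the nearest boundary point

def g(k, x):
    """g_k(x) = min(1, k * d(x, U^c)): bounded, continuous, g_k -> 1_U."""
    return min(1.0, k * dist_to_complement(x))

# g_k vanishes on U^c and increases to 1 at every point of U:
for x in (-0.2, 0.0, 0.25, 0.5, 1.0):
    print(x, [round(g(k, x), 3) for k in (1, 2, 8, 64)])
```

Each $g_k$ is $k$-Lipschitz, $0 \leq g_k \leq 1$, $g_k = 0$ on $U^c$, and for $x \in U$ one has $g_k(x) = 1$ once $k \geq 1/d(x, U^c)$, which is the monotone convergence used above.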