Probability – Alternate Proof of Lévy’s Characterisation of Brownian Motion

brownian-motion, martingales, pr.probability, stochastic-processes

Lévy’s characterisation theorem for Brownian motion states that a continuous local martingale $X$ with $X_0 = 0$ is a Brownian motion if and only if its quadratic variation satisfies $\langle X, X \rangle_t = t$.

The usual proofs of this fact use the characteristic function of the normal distribution. I am seeking an alternate proof in order to improve my intuition about the theorem.

Question: Is there a proof of this that does not go through the characteristic function?

Best Answer

Fix $A>0$; we will prove that $X_t\stackrel{\mathcal D}= B_t$ on $[0,A]$. Let $(T^{(m)})$ be an a.s. increasing sequence of stopping times such that $T^{(m)}\to\infty$ almost surely and, for each $m$, $X_{t\wedge T^{(m)}}$ is a martingale. Fix $\epsilon>0$. Then $Y_n=X_{\epsilon n\wedge T^{(m)}}$, $n=0,1,\dots,$ is a discrete-time martingale, and since $X^2_{t\wedge T^{(m)}}-\langle X,X\rangle_{t\wedge T^{(m)}}$ is also a martingale and $\langle X,X\rangle_t=t$, $$\mathbb{E}((Y_{n+1}-Y_n)^2|\mathcal{F}_{\epsilon n})=\mathbb{E}(\epsilon\wedge(T^{(m)}-\epsilon n)_+|\mathcal{F}_{\epsilon n})=:\epsilon_n.$$
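As a numerical sanity check on this discretization (a hypothetical sketch in Python/NumPy, all names mine; I take $X$ to be a standard Brownian motion, the canonical continuous martingale with $\langle X,X\rangle_t=t$, and ignore the stopping time, i.e. the regime where $\epsilon_n=\epsilon$), the increments $Y_{n+1}-Y_n$ should have second moment close to $\epsilon$ for every $n$:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.01        # the time step epsilon
N = 100           # number of steps, covering [0, A] with A = N * eps = 1
paths = 200_000   # Monte Carlo sample paths

# Y_n = X_{eps * n} for X a standard Brownian motion: before the
# stopping time kicks in, the increments are i.i.d. N(0, eps).
increments = rng.normal(0.0, np.sqrt(eps), size=(paths, N))

# Empirical E[(Y_{n+1} - Y_n)^2] should be close to eps for every n,
# matching eps_n = eps on the event {T^(m) > eps * (n+1)}.
second_moments = (increments ** 2).mean(axis=0)
print(second_moments.min(), second_moments.max())
```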

Put $t_n:=\sum_{i=0}^{n-1}\epsilon_i$. Note that $0\leq\epsilon_n\leq \epsilon$ almost surely, hence $\epsilon n-t_n$ is non-negative and non-decreasing in $n$. We infer that for $N=[A\epsilon^{-1}]$, by Markov's inequality, $$\mathbb{P}(\max_{n\leq N}(\epsilon n-t_n)\geq \delta)=\mathbb{P}(\epsilon N-t_N\geq \delta)\leq \delta^{-1}\mathbb{E}(\epsilon N-t_N)=\delta^{-1}\mathbb{E}((N\epsilon-T^{(m)})_+)\leq\delta^{-1}\mathbb{E}(((A+1)-T^{(m)})_+).$$ By monotone convergence, the right-hand side tends to zero as $m\to\infty$; note that it also does not depend on $\epsilon$. Therefore, given $\delta>0$, we can choose $m$ such that for all $\epsilon$, $$ \mathbb{P}(\max_{n\leq N}(\epsilon n-t_n)\geq \delta)\leq \delta. $$

By the Skorokhod embedding theorem [A. V. Skorohod, Studies in the Theory of Random Processes, Addison-Wesley, Reading, Mass., 1965], there exist a Brownian motion $\{B_t\}_{t\geq 0}$, which we can take independent of $\{X_t\}_{t\geq 0}$, and a sequence of stopping times $0=\tau_0<\tau_1<\tau_2<\dots$ such that $\{B_{\tau_n}\}_{n\geq 0}$ has the same distribution as $\{Y_n\}_{n\geq 0}$. Since both $B_t$ and $X_t$ are continuous, it is enough to show that $\max_{n\leq A\epsilon^{-1}}|\tau_n - n\epsilon|\to 0$ in probability as $\epsilon\to 0$ and then $m\to\infty$. In view of the above discussion, it is enough to show that for any fixed $m$, $\max_{n\leq A\epsilon^{-1}}|\tau_n - t_n|\to 0$ in probability as $\epsilon\to 0$.
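To make the embedding concrete, here is a hypothetical Python/NumPy sketch (all names mine) of the simplest instance of Skorokhod's construction: a centered two-point law with values $-a$ and $b$ is embedded by running Brownian motion until it first exits $(-a,b)$. Then $B_\tau$ has the target law and $\mathbb{E}\tau = ab$, the variance of that law. The Brownian path is discretized, so the figures are only approximate:

```python
import numpy as np

def embed_two_point(a, b, dt=1e-3, paths=4000, seed=1):
    """Embed the centered two-point law P(-a) = b/(a+b), P(b) = a/(a+b)
    into Brownian motion: stop at the first exit time tau of (-a, b).
    Then B_tau has that law and E[tau] = a*b, its variance."""
    rng = np.random.default_rng(seed)
    x = np.zeros(paths)                  # current Brownian positions
    t = np.zeros(paths)                  # elapsed time per path
    active = np.ones(paths, dtype=bool)  # paths still inside (-a, b)
    while active.any():
        n = int(active.sum())
        x[active] += rng.normal(0.0, np.sqrt(dt), size=n)
        t[active] += dt
        active &= (x > -a) & (x < b)
    hits = np.where(x <= -a, -a, b)      # clamp the slight overshoot
    return hits, t

hits, taus = embed_two_point(1.0, 2.0)
# hits.mean() ~ 0, P(hit = -1) ~ 2/3, taus.mean() ~ a*b = 2
print(hits.mean(), (hits == -1.0).mean(), taus.mean())
```

The general theorem iterates exactly this idea: each conditional increment law is realized by a suitable stopping rule for the post-$\tau_n$ Brownian motion.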

Recall that the construction in the Skorokhod embedding theorem runs iteratively: conditionally on $\mathcal{F}'_n=\sigma(\mathcal{F}_{\epsilon n},\tau_n, \{B_{t}\}_{0\leq t\leq\tau_n})$, we construct $\tau_{n+1}-\tau_{n}$ as a stopping time for the Brownian motion $B_{\tau_n+t}-B_{\tau_n}$ such that $B_{\tau_{n+1}}-B_{\tau_n}$ has the same distribution as the conditional distribution of $Y_{n+1}-Y_{n}$. This implies:

$$\mathbb{E}(\tau_{n+1}-\tau_n|\mathcal{F}'_n)=\mathbb{E}(B_{\tau_{n+1}}^2-B_{\tau_n}^2|\mathcal{F}'_n)=\mathbb{E}(Y_{n+1}^2-Y_{n}^2|\mathcal{F}'_n)=\epsilon_n,$$ where the first equality holds because $B_t^2-t$ is a martingale. That is, $\tau_n-t_n$ is in fact a martingale. In particular, by Doob's maximal inequality, $$\mathbb{P}(\max_{n\leq N}|\tau_n-t_n|>\delta)\leq\delta^{-2}\,\mathbb{E}\big[(\tau_N-t_N)^2\big],$$ which we apply with $N=[A\epsilon^{-1}]$. Moreover, the Skorokhod embedding theorem guarantees that $\mathbb{E}(\tau_{n+1}-\tau_n)^2\leq C\,\mathbb{E}(Y_{n+1}-Y_{n})^4$ for a universal constant $C$; thus, by orthogonality of the martingale increments, $$ \mathbb{E}\big[(\tau_{N}-t_N)^2\big]\leq \sum_{n=0}^{N-1}\mathbb{E}(\tau_{n+1}-\tau_n)^2\leq C\sum_{n=0}^{N-1}\mathbb{E}(Y_{n+1}-Y_{n})^4. $$
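The maximal inequality used here can be illustrated numerically (a hypothetical Python/NumPy check, names mine; a simple $\pm 1$ random walk stands in for the martingale $\tau_n-t_n$): the probability that the running maximum of $|M_n|$ exceeds $\delta$ is dominated by $\delta^{-2}\mathbb{E}[M_N^2]$.

```python
import numpy as np

rng = np.random.default_rng(3)
N, paths, delta = 200, 50_000, 20.0

# A simple +-1 random walk: the prototypical square-integrable
# martingale, standing in for tau_n - t_n from the proof.
steps = rng.choice([-1.0, 1.0], size=(paths, N))
M = np.cumsum(steps, axis=1)

# Doob's L^2 maximal inequality: P(max_n |M_n| > delta) <= E[M_N^2] / delta^2.
lhs = (np.abs(M).max(axis=1) > delta).mean()
rhs = (M[:, -1] ** 2).mean() / delta**2   # E[M_N^2] = N, so rhs ~ 0.5
print(lhs, rhs)
```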

It remains to show that the right-hand side tends to zero as $\epsilon\to 0$. This is well known for bounded continuous local martingales; see Karatzas–Shreve, Lemma 5.10. We can reduce to this case by localization: since $X$ has continuous paths, it is a.s. bounded on any finite interval, so $\hat{T}^{(m)}:=\min\{t:|X_t|\geq m\}\to \infty$ almost surely, and we can replace $T^{(m)}$ by $\hat{T}^{(m)}\wedge T^{(m)}$.
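This final step can also be checked numerically (hypothetical Python/NumPy, names mine, again taking $X$ to be a Brownian motion): then $\mathbb{E}(Y_{n+1}-Y_n)^4 = 3\epsilon^2$ for each of the $N\approx A\epsilon^{-1}$ increments, so the sum is about $3A\epsilon$ and vanishes as $\epsilon\to 0$.

```python
import numpy as np

rng = np.random.default_rng(2)
A, paths = 1.0, 10_000
totals = []
for eps in (0.1, 0.01, 0.001):
    N = int(A / eps)
    # Increments of Y_n = B_{eps*n} for a standard Brownian motion B.
    incs = rng.normal(0.0, np.sqrt(eps), size=(paths, N))
    # sum_n E[(Y_{n+1} - Y_n)^4]: each term is 3*eps^2, so the sum
    # is about 3*A*eps, which vanishes as eps -> 0.
    totals.append((incs ** 4).mean(axis=0).sum())
print(totals)   # roughly [0.3, 0.03, 0.003]
```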