Some doubts in Brownian motion quadratic variation proof

brownian-motion, limits, probability-theory, quadratic-variation, stochastic-processes

I quote Schilling, Partzsch (2012)

Theorem Let $(B_t)_{t\ge0}$ be a one-dimensional Brownian motion and $(\Pi_n)_{n\ge 1}$ be any sequence of finite partitions of $[0,t]$ satisfying $\lim\limits_{n\to\infty}|\Pi_n|=0$. Then the mean-square limit exists:
$$\text{var}_2(B;t)=L^2(\mathbb{P})-\lim\limits_{n\to\infty}S_2^{\Pi_n}(B;t)=t\tag{1}$$
where $S_2^{\Pi}(B;t)=\sum_{t_{j-1}, t_j\in\Pi}|B(t_j)-B(t_{j-1})|^2$ and $\text{var}_2$ is the quadratic variation of a Brownian motion.

In the proof of the above theorem, it is first given that $\Pi=\{t_0=0<t_1<\ldots<t_n\le t\}$ is some partition of $[0,t]$. Then, at a certain point, it is shown that:

$$\begin{align}\mathbb{E}\bigg[(S_2^{\Pi}(B;t)-t)^2\bigg]&=\sum_{j=1}^{n}\mathbb{E}\bigg[\left(B(t_j-t_{j-1})^2-(t_j-t_{j-1})\right)^2\bigg]\\&\color{red}{=}\sum_{j=1}^{n}(t_j-t_{j-1})^2\mathbb{E}\bigg[(B(1)^2-1)^2\bigg]\\&\color{red}{\le}2|\Pi|\sum_{j=1}^{n}(t_{j}-t_{j-1})=2|\Pi|t\underbrace{\rightarrow}_{\color{red}{|\Pi|\to 0}}0\end{align}$$


I cannot really understand the three parts in $\color{red}{\text{ red }}$ above.

  1. Why $\sum_{j=1}^{n}\mathbb{E}\bigg[\left(B(t_j-t_{j-1})^2-(t_j-t_{j-1})\right)^2\bigg]\color{red}{=}\sum_{j=1}^{n}(t_j-t_{j-1})^2\mathbb{E}\bigg[(B(1)^2-1)^2\bigg]$?;
  2. Why $\sum_{j=1}^{n}(t_j-t_{j-1})^2\mathbb{E}\bigg[(B(1)^2-1)^2\bigg]\color{red}{\le}2|\Pi|\sum_{j=1}^{n}(t_{j}-t_{j-1})$?;
  3. What does it mean to "take the limit as $|\Pi|\to0$"? Isn't $\Pi$ just a partition of $[0,t]$? What does it mean to "make it go to $0$"? Does it mean that the partition mesh becomes smaller and smaller?

Best Answer

$|\Pi|$, sometimes called the norm (or mesh) of the partition $\Pi$, is the largest length of any subinterval in $\Pi$. That is, if $\Pi = \{t_{0},t_{1},\dots,t_{N}\}$ with $a = t_{0} < t_{1} < \dots < t_{N} = b$, then \begin{equation*} |\Pi| = \max \{t_{i + 1} - t_{i} \, \mid \, i \in \{0,\dots,N-1\}\} \end{equation*}
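For concreteness, with an (arbitrarily chosen) partition of $[0,1]$:

```latex
\Pi = \{0,\; 0.3,\; 0.5,\; 1\}
\quad\Longrightarrow\quad
|\Pi| = \max\{0.3 - 0,\; 0.5 - 0.3,\; 1 - 0.5\} = 0.5.
```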

The assumption states that $\lim_{n \to \infty} |\Pi_{n}| = 0$ and the key point in the proof is that the error we are interested in is on the order of $|\Pi|$. Hence if we are in the regime where $|\Pi| \to 0$ (i.e. smaller and smaller $|\Pi|$), then the error vanishes. Of course, this is exactly what we are assuming about $\{\Pi_{n}\}_{n \in \mathbb{N}}$ as $n \to \infty$.
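This vanishing error can be checked numerically. The sketch below (illustrative, using uniform partitions of $[0,t]$ with mesh $|\Pi| = t/n$) estimates $\mathbb{E}\big[(S_2^{\Pi}(B;t)-t)^2\big]$ by Monte Carlo and shows it shrinking with the mesh:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_square_error(n_steps, t=1.0, n_paths=2000):
    """Monte Carlo estimate of E[(S_2^Pi(B; t) - t)^2] for a uniform
    partition of [0, t] into n_steps subintervals (|Pi| = t / n_steps)."""
    dt = t / n_steps
    # independent increments B(t_j) - B(t_{j-1}) ~ N(0, dt)
    incr = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    s2 = (incr ** 2).sum(axis=1)       # S_2^Pi(B; t) for each sample path
    return np.mean((s2 - t) ** 2)      # empirical E[(S_2^Pi - t)^2]

# The proof's bound gives E[(S_2^Pi - t)^2] <= 2 |Pi| t = 2 t^2 / n_steps,
# so the estimate should decay roughly like 1 / n_steps.
for n in (10, 100, 1000):
    print(n, mean_square_error(n))
```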

By scaling, $\mathbb{E}[(B(t_{j} - t_{j- 1})^{2} - (t_{j} - t_{j-1}))^{2}] = (t_{j} - t_{j-1})^{2} \mathbb{E}[(B(1)^{2} - 1)^{2}]$. (This uses the scaling property of Brownian motion: $B(t)$ has the same distribution as $\sqrt{t} B(1)$.)
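Written out with $s = t_j - t_{j-1}$, the scaling step is:

```latex
\begin{align*}
B(s) &\overset{d}{=} \sqrt{s}\, B(1)
  && \text{(scaling property)}\\
\bigl(B(s)^2 - s\bigr)^2 &\overset{d}{=} \bigl(s\,B(1)^2 - s\bigr)^2
  = s^2 \bigl(B(1)^2 - 1\bigr)^2\\
\Longrightarrow\quad
\mathbb{E}\bigl[(B(s)^2 - s)^2\bigr] &= s^2\, \mathbb{E}\bigl[(B(1)^2 - 1)^2\bigr].
\end{align*}
```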

Finally, $(t_{j} - t_{j-1})^{2} \leq |\Pi| (t_{j} - t_{j-1})$ by definition of $|\Pi|$. Moreover, $\mathbb{E}[(B(1)^{2} - 1)^{2}] = 2$ exactly: expand it into $\mathbb{E}[B(1)^{4} - 2 B(1)^{2} + 1]$ and use the standard normal fourth moment $\mathbb{E}[B(1)^{4}] = 3$, giving $3 - 2 + 1 = 2$. The exact value $2$ isn't important; it's some constant that can't compete with $|\Pi|$ in the limit $n \to \infty$.
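A quick Monte Carlo sanity check of this constant (illustrative, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(42)
b1 = rng.standard_normal(1_000_000)    # samples of B(1) ~ N(0, 1)
const = np.mean((b1 ** 2 - 1.0) ** 2)  # estimates E[(B(1)^2 - 1)^2]
print(const)                           # should be close to the exact value 2
```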
