Quadratic variation and predictable quadratic variation for martingales

Tags: martingales, pr.probability, stochastic-calculus, stochastic-processes

Let $(M_{t})_{0\le t\le 1}$ be a continuous martingale with respect to the filtration $(\mathcal{F}_{t})_{0\le t\le 1}$. Assume that $E M_1^2<\infty$.

Fix $N$ and consider the discrete-time version of this martingale, i.e., the process $(M_{n/N})_{0\le n\le N}$. The quadratic variation of this discrete martingale is
$$
[M^N]=\sum_{k=0}^{N-1} (M_{\frac{k+1}{N}}-M_{\frac{k}{N}})^2
$$
and its predictable quadratic variation (i.e., the unique increasing predictable process starting at zero such that $(M^N)^2 - \langle M^N\rangle$ is a martingale) is given by
$$
\langle M^N\rangle=\sum_{k=0}^{N-1} E\bigl((M_{\frac{k+1}{N}}-M_{\frac{k}{N}})^2|\mathcal{F}_{\frac{k}{N}}\bigr).
$$

As $N\to\infty$ we have, in probability,
$$[M^N]\to [M]_1.$$
My question is, is it also true that
$$\langle M^N\rangle\to \langle M\rangle_1?$$

For example, for a Brownian motion $W$ we have $[W]_t=\langle W\rangle_t=t$ (because $W_t^2-t$ is a martingale). We also have

$$
\lim_{N\to\infty}\sum_{k=0}^{N-1}\bigl(W_{\frac{(k+1)t}{N}}-W_{\frac{kt}{N}}\bigr)^2=t
$$
and
$$
\lim_{N\to\infty}\sum_{k=0}^{N-1} E\Bigl(\bigl(W_{\frac{(k+1)t}{N}}-W_{\frac{kt}{N}}\bigr)^2\,\Big|\,\mathcal{F}_{\frac{kt}{N}}\Bigr)=\lim_{N\to\infty}\sum_{k=0}^{N-1} \Bigl(\frac{(k+1)t}{N}-\frac{kt}{N}\Bigr)=t.
$$
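This Brownian computation can be checked numerically. Below is a minimal sketch (the horizon $t$, step count $N$, and random seed are my choices, not from the question): the realized quadratic variation is a random sum close to $t$, while the predictable quadratic variation equals $\sum_k t/N = t$ exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
t, N = 0.7, 200_000
dt = t / N

# Brownian increments over the partition {k t / N : 0 <= k <= N} of [0, t]
dW = rng.normal(0.0, np.sqrt(dt), size=N)

# Realized quadratic variation [W^N]_t: the sum of squared increments
realized_qv = float(np.sum(dW**2))

# Predictable quadratic variation <W^N>_t: each conditional expectation
# E[(W_{t_{k+1}} - W_{t_k})^2 | F_{t_k}] equals t/N, so the sum is exactly t
predictable_qv = N * dt

print(realized_qv, predictable_qv)  # both approximately 0.7
```

With $N$ this large, the fluctuation of the realized sum around $t$ has standard deviation $t\sqrt{2/N}\approx 0.002$, so the two quantities agree to a few decimal places.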

So the question is: does this hold for every continuous martingale, or only for Brownian motion?

Best Answer

It is true that $\langle M^N\rangle_1 \to \langle M\rangle_1$ in $L^1$ when $M$ is a continuous square-integrable martingale. In fact, the result holds even when $M$ is merely càdlàg, as long as $\langle M\rangle$ is continuous.

Indeed, $M^2$ is then a submartingale of class D and so, since

$$E[(M_{t_{i+1}} - M_{t_{i}})^2 |\mathcal{F}_{t_i}]=E[M_{t_{i+1}}^2 - M_{t_{i}}^2 |\mathcal{F}_{t_i}],$$

the result follows from the analogous result for the predictable part of the Doob–Meyer decomposition of a submartingale of class D, which was proved in

Doléans, Catherine. "Existence du processus croissant naturel associé à un potentiel de la classe (D)." Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 9, no. 4 (1968): 309-314.

and which constitutes Theorem 31.2, chapter 6 of

Rogers, L. C. G., and David Williams. Diffusions, Markov Processes and Martingales. Volume 2: Itô Calculus. Cambridge University Press, 2000.
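For completeness, the displayed identity is just the expansion of the square together with the martingale property $E[M_{t_{i+1}}\,|\,\mathcal{F}_{t_i}]=M_{t_i}$:

$$
E[(M_{t_{i+1}} - M_{t_{i}})^2 |\mathcal{F}_{t_i}]
= E[M_{t_{i+1}}^2|\mathcal{F}_{t_i}] - 2M_{t_i}E[M_{t_{i+1}}|\mathcal{F}_{t_i}] + M_{t_i}^2
= E[M_{t_{i+1}}^2 - M_{t_{i}}^2 |\mathcal{F}_{t_i}].
$$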

When $M$ is càdlàg and $\langle M\rangle$ is not continuous, $\langle M^N\rangle_1$ still converges to $\langle M\rangle_1$, but only in the $\sigma\left(L^{1}, L^{\infty}\right)$ (weak) topology; see for example

Rao, K. Murali. "On decomposition theorems of Meyer." Mathematica Scandinavica 24, no. 1 (1969): 66-78.

In this case, it follows from Mazur's lemma that one can obtain (strong) convergence in $L^1$ by replacing $(\langle M^N\rangle_1)_N$ with forward convex combinations thereof. In fact, taking forward convex combinations, one can obtain convergence in $L^1$ simultaneously at all times $t\in [0,1]$, as I showed in

Siorpaes, Pietro. "On a dyadic approximation of predictable processes of finite variation." Electronic Communications in Probability 19 (2014): 1-12.
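As a numerical sketch of the convergence in the continuous case (my illustration, not part of the answer above): take $M_t = W_t^2 - t$, a continuous square-integrable martingale with $\langle M\rangle_1 = 4\int_0^1 W_s^2\,ds$. Expanding $M_{(k+1)/N}-M_{k/N} = 2W_{k/N}\,\Delta W + (\Delta W)^2 - h$ with $h=1/N$ gives $E[(M_{(k+1)/N}-M_{k/N})^2\,|\,\mathcal{F}_{k/N}] = 4W_{k/N}^2 h + 2h^2$, so $\langle M^N\rangle_1$ can be evaluated along a simulated path and compared with a Riemann sum for the limit:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
h = 1.0 / N

# Simulate W on the grid k/N, k = 0..N
dW = rng.normal(0.0, np.sqrt(h), size=N)
W = np.concatenate(([0.0], np.cumsum(dW)))

# Predictable quadratic variation of the discretized M_t = W_t^2 - t:
# E[(M_{(k+1)/N} - M_{k/N})^2 | F_{k/N}] = 4 W_{k/N}^2 h + 2 h^2
predictable_qv = float(np.sum(4.0 * W[:-1] ** 2 * h + 2.0 * h**2))

# Riemann-sum approximation of the limit <M>_1 = 4 * int_0^1 W_s^2 ds
limit_qv = float(4.0 * h * np.sum(W[:-1] ** 2))

print(predictable_qv, limit_qv)  # differ only by 2 h^2 N = 2/N
```

Note that on the same grid the two sums differ by exactly $2h^2 N = 2/N$, which makes the closeness deterministic here; the probabilistic content of the theorem is that $\langle M^N\rangle_1$ also approaches the pathwise limit $\langle M\rangle_1$ as $N\to\infty$.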
