Show that if $M_t$ is a martingale then $M_t^2 - \langle M \rangle_t$ is a martingale


Here $\langle M \rangle_t$ denotes the quadratic variation, defined over partitions $0 = t_0 < t_1 < \dots < t_n = t$ (with mesh tending to zero) by
$$\langle M \rangle_t := \lim_{n \to \infty} \sum_{i=0}^{n-1}(M_{t_{i+1}} - M_{t_i})^2=\int_0^t b^2 (\omega,s)\,ds$$
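As a quick numerical illustration of this definition (not part of the proof): for standard Brownian motion we have $b \equiv 1$, so the sum of squared increments over a fine partition of $[0,1]$ should be close to $\int_0^1 1\,ds = 1$. A minimal sketch, assuming NumPy:

```python
import numpy as np

# Sanity check: for standard Brownian motion B, the quadratic variation
# <B>_t over a fine partition should be close to t (here t = 1).
rng = np.random.default_rng(0)

t, n = 1.0, 100_000                      # horizon and number of sub-intervals
dB = rng.normal(0.0, np.sqrt(t / n), n)  # increments B_{t_{i+1}} - B_{t_i}
qv = np.sum(dB**2)                       # sum of squared increments

print(qv)  # should be close to t = 1
```

The sum has mean $t$ and variance $2t^2/n$, so with $n = 10^5$ the computed value is within a few hundredths of $1$ with overwhelming probability.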

My attempt is to say that if $M_t$ is a martingale then (by martingale representation) it can be written in the form
$$M_t = \int_0^t b(\omega,s)\,dB_s$$
I want to assume that $M_0=0$, so that $\mathbb{E}[M_t]=M_0=0$. I wish to show that $\mathbb{E}[M_t^2 - \langle M \rangle_t] = 0$. So I start the calculation:
$$\mathbb{E}\left[ M_t^2 - \langle M \rangle _t \right]=\mathbb{E}\left[ \left( \int_0^t b(\omega,s)\,dB_s \right)^2 - \int_0^t b^2(\omega,s)\,dB_s \right]$$
$$=\mathbb{E}\left[ \left( \int_0^t b(\omega,s)\,dB_s \right)^2 \right] - \mathbb{E}\left[\int_0^t b^2(\omega,s)\,dB_s \right]$$
My reasoning here is that the second term is a stochastic integral, hence a martingale, and since $b$ is bounded its expectation is always zero. This leaves
$$\mathbb{E}\left[ \left( \int_0^t b(\omega,s)\,dB_s \right)^2 \right]$$
and now, using Itô's isometry,
$$=\mathbb{E}\left[ \int_0^t b^2(\omega,s)\,ds \right]$$

Now I'm not quite sure that I went about this the right way: this last integral is not zero, and I haven't actually used the martingale property of $M_t$ anywhere. Am I going about this incorrectly?
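The isometry step itself can be checked numerically. A hedged Monte Carlo sketch, with the hypothetical deterministic choice $b(s) = s$, for which $\mathbb{E}\big[(\int_0^1 s\,dB_s)^2\big] = \int_0^1 s^2\,ds = 1/3$:

```python
import numpy as np

# Monte Carlo check of Ito's isometry for the (made-up) integrand b(s) = s:
# E[(int_0^1 s dB_s)^2] should equal int_0^1 s^2 ds = 1/3.
rng = np.random.default_rng(1)

n_steps, n_paths = 200, 20_000
dt = 1.0 / n_steps
s = np.arange(n_steps) * dt                            # left endpoints s_k
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))  # Brownian increments
ito_integral = (s * dB).sum(axis=1)                    # sum s_k (B_{s_{k+1}} - B_{s_k})

lhs = np.mean(ito_integral**2)  # Monte Carlo estimate of E[(int s dB)^2]
rhs = np.sum(s**2) * dt         # Riemann sum for int_0^1 s^2 ds, approx 1/3
print(lhs, rhs)
```

The discrete integral has variance exactly $\sum_k s_k^2\,\Delta t$, so the two estimates agree up to Monte Carlo noise; but as the question notes, this only computes an expectation, not the martingale property.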

Update

I tried again with a different technique. I feel we can say the last line below has zero expectation on account of independent increments, or, failing that, by bounding the sum by something that tends to zero as $n \to \infty$?

$$\mathbb{E}\left[ M_t^2 - \langle M \rangle _t \right]=\mathbb{E}\left[ (M_t)^2 - \lim_{n \to \infty} \sum_{i=0}^{n-1} (M_{t_{i+1}} - M_{t_{i}} )^2 \right]$$
$$=\mathbb{E}\left[ \left(\lim_{n \to \infty}\sum_{i=0}^{n-1}\left(M_{t_{i+1}} - M_{t_i}\right) \right)^2 - \lim_{n \to \infty} \sum_{i=0}^{n-1} (M_{t_{i+1}} - M_{t_{i}} )^2 \right]$$
$$= \mathbb{E}\left[ \lim_{n \to \infty}\sum_{i=0}^{n-1} \left(M_{t_{i+1}} - M_{t_i} \right)^2 + \lim_{n \to \infty}2\sum_{0\le i<j\le n-1} \left(M_{t_{i+1}} - M_{t_i} \right)\left(M_{t_{j+1}} - M_{t_j} \right) - \lim_{n \to \infty} \sum_{i=0}^{n-1} (M_{t_{i+1}} - M_{t_{i}} )^2 \right]$$
$$= \mathbb{E}\left[ \lim_{n \to \infty}2\sum_{0\le i<j\le n-1} \left(M_{t_{i+1}} - M_{t_i} \right)\left(M_{t_{j+1}} - M_{t_j} \right) \right]$$
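A hedged numerical illustration of this last line, taking $M = B$ standard Brownian motion (so $M_0 = 0$ and the increments really are independent): the cross-term sum equals $M_t^2 - \sum_i (\Delta M_i)^2$ exactly, and its sample mean should be near zero.

```python
import numpy as np

# Illustration with M = B (standard Brownian motion): the cross terms
# 2 * sum_{i<j} dM_i dM_j equal M_t^2 - sum_i dM_i^2 exactly, and for
# independent increments their expectation is zero.
rng = np.random.default_rng(2)

n_steps, n_paths = 50, 50_000
dt = 1.0 / n_steps
dM = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))  # increments of B
M_t = dM.sum(axis=1)                                   # M_1, since M_0 = 0
cross = M_t**2 - (dM**2).sum(axis=1)                   # = 2 * sum_{i<j} dM_i dM_j

print(np.mean(cross))  # should be near 0
```

For a general martingale the increments are not independent, which is why the accepted answer below replaces this argument with conditional expectations.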

Thanks

Best Answer

I think you are trying to prove Meyer's theorem with the integrability assumptions missing. (Rogers & Williams give a proof in Vol. 2, "Diffusions, Markov Processes and Martingales: Itô Calculus".) If so, operationally/symbolically/heuristically, your second approach is correct. Assume $M_0=0$ a.s. Then for any partition $0=t_0^{(n)}<\dots< t_{k-1}^{(n)}< t_k^{(n)}<\dots<t_n^{(n)}=t$ of $[0,t]$ into $n$ sub-intervals,

\begin{align} M_t^2 - \langle M \rangle_t &= M_t^2-\lim_{n\to\infty}\sum_{k=1}^{n}\left(M_{t_k^{(n)}} - M_{t_{k-1}^{(n)}}\right)^2\\ &=M_t^2-\lim_{n\to\infty}\left[\sum_{k=1}^{n}M_{t_{k} ^{(n)}}^2 - 2\sum_{k=1}^{n}M_{t_{k} ^{(n)}}M_{t_{k-1}^{(n)}} + \sum_{k=1}^{n}M_{t_{k-1}^{(n)}}^2\right]\\ &=M_t^2-\lim_{n\to\infty}\left[M^2_t+\sum_{k=1}^{n-1}M_{t_{k}^{(n)}}^2 - 2\sum_{k=1}^{n}M_{t_{k}^{(n)}}M_{t_{k-1}^{(n)}} + \sum_{k=1}^{n}M_{t_{k-1}^{(n)}}^2\right]\\ &=-\lim_{n\to\infty}\left[\sum_{k=1}^{n-1}M_{t_{k}^{(n)}}^2 - 2\sum_{k=1}^{n}M_{t_{k}^{(n)}}M_{t_{k-1}^{(n)}} + \sum_{k=1}^{n}M_{t_{k-1}^{(n)}}^2\right]\\ &=-\lim_{n\to\infty}\left[\sum_{k=2}^{n}M_{t_{k-1}^{(n)}}^2 - 2\sum_{k=1}^{n}M_{t_{k}^{(n)}}M_{t_{k-1}^{(n)}} + \sum_{k=1}^{n}M_{t_{k-1}^{(n)}}^2\right]\\ &=-\lim_{n\to\infty}\left[\sum_{k=1}^{n}M_{t_{k-1}^{(n)}}^2 - M_0^2 - 2\sum_{k=1}^{n}M_{t_{k}^{(n)}}M_{t_{k-1}^{(n)}} + \sum_{k=1}^{n}M_{t_{k-1}^{(n)}}^2\right]\\ &=-\lim_{n\to\infty}\left[2\sum_{k=1}^{n}M_{t_{k-1}^{(n)}}^2 - 2\sum_{k=1}^{n}M_{t_{k}^{(n)}}M_{t_{k-1}^{(n)}} - M_0^2\right]\\ &=2\lim_{n\to\infty}\left[\sum_{k=1}^{n}M_{t_{k-1}^{(n)}}\left(M_{t_{k}^{(n)}} - M_{t_{k-1}^{(n)}}\right)\right] + M_0^2.\\ \end{align} Assuming $M_0=0$ a.s. kills the last term; meanwhile the first term tends to an Itô integral,

$$\lim_{n\to\infty}\sum_{k=1}^{n}X_{t_{k-1}^{(n)}}\left(M_{t_{k}^{(n)}} - M_{t_{k-1}^{(n)}}\right)=\int_0^t X_u\, dM_u, $$ which is a martingale (here $X=M$, but the argument works for any adapted, suitably integrable integrand $X$). Heuristically, compute the conditional expectations for $s\le t$, say $s=t_{m}^{(n)}$:

\begin{align} \mathbb{E}_s\left[\sum_{k=1}^{n}X_{t_{k-1}^{(n)}}\left(M_{t_{k}^{(n)}} - M_{t_{k-1}^{(n)}}\right)\right] &= \mathbb{E}_s\left[\sum_{k=1}^{m}X_{t_{k-1}^{(n)}}\left(M_{t_{k}^{(n)}} - M_{t_{k-1}^{(n)}}\right)\right] +\\ &\qquad\mathbb{E}_s\left[\sum_{k=m+1}^{n}X_{t_{k-1}^{(n)}}\left(M_{t_{k}^{(n)}} - M_{t_{k-1}^{(n)}}\right)\right] \\ &= \sum_{k=1}^{m}X_{t_{k-1}^{(n)}}\left(M_{t_{k}^{(n)}} - M_{t_{k-1}^{(n)}}\right)+\\ &\qquad\mathbb{E}_s\left[\sum_{k=m+1}^{n}X_{t_{k-1}^{(n)}}\left(M_{t_{k}^{(n)}} - M_{t_{k-1}^{(n)}}\right)\right]. \\\end{align}

That $M$ is a martingale shows the last term is $0$: \begin{align} \mathbb{E}_s\left[\sum_{k=m+1}^{n}X_{t_{k-1}^{(n)}}\left(M_{t_{k}^{(n)}} - M_{t_{k-1}^{(n)}}\right)\right] &= \sum_{k=m+1}^{n}\mathbb{E}_s\left[\mathbb{E}_{t_{k-1}^{(n)}}\left[X_{t_{k-1}^{(n)}}\left(M_{t_{k}^{(n)}} - M_{t_{k-1}^{(n)}}\right)\right]\right]\\ &= \sum_{k=m+1}^{n}\mathbb{E}_s\left[X_{t_{k-1}^{(n)}} \mathbb{E}_{t_{k-1}^{(n)}}\left[\left(M_{t_{k}^{(n)}} - M_{t_{k-1}^{(n)}}\right)\right]\right]\\ &= \sum_{k=m+1}^{n}\mathbb{E}_s\left[X_{t_{k-1}^{(n)}} \cdot 0\right] \\ &=0. \end{align}
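Note that, before any limits are taken, the long display above is an exact algebraic identity for any discrete path: $M_t^2 - \sum_k (\Delta M_k)^2 = 2\sum_k M_{t_{k-1}}\Delta M_k + M_0^2$. A quick check on an arbitrary made-up sequence:

```python
import numpy as np

# Pathwise algebraic identity behind the derivation above:
# M_t^2 - sum (dM_k)^2 = 2 * sum M_{t_{k-1}} dM_k + M_0^2,
# valid for ANY sequence of numbers, no probability needed.
rng = np.random.default_rng(3)

M = rng.normal(size=11)   # arbitrary values M_{t_0}, ..., M_{t_10}
dM = np.diff(M)           # increments M_{t_k} - M_{t_{k-1}}

lhs = M[-1]**2 - np.sum(dM**2)
rhs = 2.0 * np.sum(M[:-1] * dM) + M[0]**2
print(abs(lhs - rhs))  # zero up to floating-point error
```

So all the probabilistic content of the theorem sits in the limit and in the martingale property of the discrete stochastic integral.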

Operationally:
$$\mathbb E_s\left[\int_0^t X_u\,dM_u\right] = \mathbb E_s\left[\int_0^s X_u\,dM_u\right]+\mathbb E_s\left[\int_s^t X_u\,dM_u\right] = \int_0^s X_u\,dM_u + \int_s^t \mathbb E_s[ X_u\,dM_u] = \int_0^s X_u\,dM_u + \int_s^t \mathbb E_s\left[ \mathbb E_u[ X_u\,dM_u]\right] = \int_0^s X_u\,dM_u + \int_s^t \mathbb E_s\Big[X_u \underbrace{\mathbb E_u[ dM_u]}_{=0}\Big] = \int_0^s X_u\,dM_u.$$
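The zero-mean part of this argument can be illustrated numerically. A hedged sketch with the hypothetical choice $X = M = B$: each term $X_{t_{k-1}}\,\Delta M_k$ has zero mean because the increment is independent of the past, so the discrete stochastic integral should average to zero.

```python
import numpy as np

# Monte Carlo illustration with X = M = B: the discrete stochastic
# integral sum B_{t_{k-1}} (B_{t_k} - B_{t_{k-1}}) has zero mean,
# since each increment is independent of the left endpoint.
rng = np.random.default_rng(4)

n_steps, n_paths = 50, 50_000
dt = 1.0 / n_steps
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
B = np.cumsum(dB, axis=1)
B_left = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])  # B at left endpoints

integral = np.sum(B_left * dB, axis=1)  # discrete version of int_0^1 B dB
print(np.mean(integral))  # should be near 0
```

This only checks the unconditional expectation; the conditional-expectation computation above is what upgrades "mean zero" to the full martingale property.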