Two-valued stopping times give a martingale

convergence-divergence, martingales, stopping-times

Let $(X_n)_n$ be an integrable process adapted to a filtration $(F_n)_n$, and let $T:\Omega\to\mathbb{N}$ denote a stopping time with respect to $(F_n)_n$.

How do we prove that $(X_n)_n$ is a martingale $\iff$ $\mathbb{E}[X_T]=\mathbb{E}[X_0]$ for every stopping time $T$ that takes at most two values?

$\implies$ I think we should use optional stopping here, but I'm not sure how.
I know that $\mathbb{E}[X_n]=\mathbb{E}[X_0]$ for all $n$ and I think we want to say that $\mathbb{E}[X_T]=\mathbb{E}[\lim X_{T\land n}]=\lim\mathbb{E}[X_{T\land n}]=\lim \mathbb{E}[X_0]=\mathbb{E}[X_0]$.
But when are we allowed to make the step $\mathbb{E}[\lim X_{T\land n}]=\lim\mathbb{E}[ X_{T\land n}]$?

$\impliedby$ We want to show that $\mathbb{E}[X_n|F_{n-1}]=X_{n-1}$. Where do we use stopping times in all of this?

Best Answer

$(\implies)$ You have the right idea to use optional stopping, but you don't need to take a limit, since $T$ is a bounded stopping time (it takes at most two values). Fix a stopping time $T$ taking values in $\{a,b\} \subseteq \mathbb{N}$ and set $K = \max\{a,b\} + 1$, so that $T < K$. The stopped process $Y_t = X_{t \wedge K}$ is a uniformly integrable martingale, because $|Y_t| \le \max_{k \le K}|X_k|$ for all $t$ and this bound is integrable (a maximum of finitely many integrable random variables). So by optional stopping, $$\mathbb{E}[X_0] = \mathbb{E}[Y_0] = \mathbb{E}[Y_T] = \mathbb{E}[X_{T \wedge K}] = \mathbb{E}[X_T],$$ since $T \wedge K = T$.
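If it helps to see the identity numerically, here is a minimal Monte Carlo sketch (not part of the proof). It assumes a simple symmetric random walk as the martingale and the arbitrary choices $a=3$, $b=7$, with $T=a$ on the $F_a$-measurable event $\{X_a > 0\}$ and $T=b$ otherwise; optional stopping predicts $\mathbb{E}[X_T]=\mathbb{E}[X_0]=0$.

```python
import numpy as np

# Illustration only: X is a simple symmetric random walk started at 0 (a martingale),
# and T is the two-valued stopping time T = a on {X_a > 0}, T = b otherwise.
# Since {T = a} = {X_a > 0} is F_a-measurable, T is indeed a stopping time.
# The constants a = 3, b = 7 and the event {X_a > 0} are arbitrary choices.

rng = np.random.default_rng(0)
a, b = 3, 7
n_paths = 200_000

steps = rng.choice([-1, 1], size=(n_paths, b))                                # i.i.d. ±1 increments
X = np.concatenate([np.zeros((n_paths, 1)), steps.cumsum(axis=1)], axis=1)   # columns X_0, ..., X_b

T = np.where(X[:, a] > 0, a, b)              # two-valued stopping time, path by path
X_T = X[np.arange(n_paths), T]               # sample the walk at time T

print("E[X_T] ~", X_T.mean())                # should be close to E[X_0] = 0
```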

$(\impliedby)$ By the definition of conditional expectation, we need to show that $\mathbb{E}[X_n 1_A] = \mathbb{E}[X_{n-1} 1_A]$ for every $A \in F_{n-1}$. Without loss of generality assume $X_0 = 0$ (otherwise consider the process $X_t - X_0$, which satisfies the same hypotheses). Fix $A \in F_{n-1}$ and define $T(\omega) = n-1$ if $\omega \in A$ and $T(\omega) = n$ otherwise. This $T$ is a stopping time taking at most two values, since $\{T = n-1\} = A \in F_{n-1}$ and $\{T = n\} = A^c \in F_{n-1} \subseteq F_n$. Applying the assumption with this choice of $T$, $$0 = \mathbb{E}[X_0] = \mathbb{E}[X_T] = \mathbb{E}[X_{n} 1_{A^c}] + \mathbb{E}[X_{n-1}1_A] = \mathbb{E}[X_n] - \mathbb{E}[X_n 1_A] + \mathbb{E}[X_{n-1}1_A].$$ Applying the assumption once more with the constant stopping time $S = n$ gives $0 = \mathbb{E}[X_0] = \mathbb{E}[X_S] = \mathbb{E}[X_n]$, so the display above reduces to $$0 = - \mathbb{E}[X_n 1_A] + \mathbb{E}[X_{n-1}1_A] \implies \mathbb{E}[X_n 1_A] = \mathbb{E}[X_{n-1}1_A],$$ which is what we set out to prove.
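As a hedged numeric illustration of this construction (again not part of the proof), the sketch below uses a symmetric random walk with the arbitrary choices $n=5$ and $A=\{X_{n-1}>0\}\in F_{n-1}$, builds the two-valued $T$ above, and checks that $\mathbb{E}[X_T]\approx 0$ while $\mathbb{E}[X_n 1_A]\approx\mathbb{E}[X_{n-1}1_A]$.

```python
import numpy as np

# Illustration only: for a symmetric random walk, pick A = {X_{n-1} > 0} in F_{n-1},
# set T = n-1 on A and T = n on A^c, and verify numerically that
# E[X_T] ~ 0 and E[X_n 1_A] ~ E[X_{n-1} 1_A], as the proof asserts.
# The walk, n = 5, and the event A are arbitrary choices made for the example.

rng = np.random.default_rng(1)
n = 5
n_paths = 200_000

steps = rng.choice([-1, 1], size=(n_paths, n))                                # i.i.d. ±1 increments
X = np.concatenate([np.zeros((n_paths, 1)), steps.cumsum(axis=1)], axis=1)   # columns X_0, ..., X_n

A = X[:, n - 1] > 0                            # an event in F_{n-1}
T = np.where(A, n - 1, n)                      # T = n-1 on A, T = n on A^c
X_T = X[np.arange(n_paths), T]

print("E[X_T]         ~", X_T.mean())                  # close to E[X_0] = 0
print("E[X_n 1_A]     ~", (X[:, n] * A).mean())        # the next two should agree
print("E[X_{n-1} 1_A] ~", (X[:, n - 1] * A).mean())
```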