Càdlàg Feller process is quasi-left-continuous

Tags: convergence-divergence, markov-process, martingales, probability-theory, stopping-times

I've been working through Chung's "Lectures from Markov Processes to Brownian Motion", and I got stuck at Exercise 1 of Section 2.4. The objective of the problem is to give a short proof of the quasi-left-continuity of a Feller process $(X_t)_{t \geq 0}$ (Markov property, càdlàg paths, Feller property): if $(T_n)_n$ is a sequence of stopping times with $T_n \nearrow T$, then $$\lim_{n \to \infty} X_{T_n} = X_T \quad \text{a.s. on the set } \{T < +\infty \}.$$

There is a hint for the exercise: for every $\alpha > 0$ and every positive, bounded, continuous $f$, the process $(e^{-\alpha t}U^\alpha f(X_t))_{t \geq 0}$ is a right-continuous supermartingale (with $U^\alpha f(x) \doteq \int_0^\infty e^{- \alpha s} P_sf(x)\,ds$, where $(P_t)_t$ denotes the semigroup of $(X_t)_{t \geq 0}$). Using this, the hint states that $$\lim_{n \to \infty} U^\alpha f(X_{T_n}) = \mathbb{E}\left( U^\alpha f(X_T) \,\Big|\, \bigvee_{m=1}^\infty \mathcal{F}_{T_m} \right).$$ I wonder why this last limit holds.

Thank you in advance!

Best Answer

I don't see how to give a short proof of the stated identity (perhaps I'm missing something). I will use the following general statement about conditional expectations, which is a consequence of Lévy's upward convergence theorem.

Lemma Let $(Y_n)_n$ be a sequence of random variables converging almost surely to some random variable $Y$ and satisfying $|Y_n| \leq K$ for some constant $K$ (not depending on $n$). Let $(\mathcal{G}_n)_{n \in \mathbb{N}}$ be a filtration and set $\mathcal{G}_{\infty} := \bigvee_{n=1}^{\infty} \mathcal{G}_n$. Then $$\mathbb{E}(Y \mid \mathcal{G}_{\infty}) = \lim_{n \to \infty}\mathbb{E}(Y_n \mid \mathcal{G}_n) \quad \text{a.s.}$$
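For completeness, here is a sketch of why the lemma holds (a standard argument, not taken from Chung). Fix $m$ and set $Z_m := \sup_{k \geq m} |Y_k - Y| \leq 2K$. For $n \geq m$,

$$\left|\mathbb{E}(Y_n \mid \mathcal{G}_n) - \mathbb{E}(Y \mid \mathcal{G}_n)\right| \leq \mathbb{E}(|Y_n - Y| \mid \mathcal{G}_n) \leq \mathbb{E}(Z_m \mid \mathcal{G}_n).$$

By Lévy's upward theorem, $\mathbb{E}(Z_m \mid \mathcal{G}_n) \to \mathbb{E}(Z_m \mid \mathcal{G}_\infty)$ and $\mathbb{E}(Y \mid \mathcal{G}_n) \to \mathbb{E}(Y \mid \mathcal{G}_\infty)$ a.s. as $n \to \infty$, hence

$$\limsup_{n \to \infty} \left|\mathbb{E}(Y_n \mid \mathcal{G}_n) - \mathbb{E}(Y \mid \mathcal{G}_\infty)\right| \leq \mathbb{E}(Z_m \mid \mathcal{G}_\infty) \quad \text{a.s.}$$

Since $Z_m \downarrow 0$ a.s. and $Z_m \leq 2K$, conditional dominated convergence gives $\mathbb{E}(Z_m \mid \mathcal{G}_\infty) \downarrow 0$ a.s. as $m \to \infty$.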

Fix $t>0$. By the continuity of $U_{\alpha}f$ and the càdlàg property of the sample paths, we have $$U_{\alpha} f(X_{(T+t)-}) = \lim_{n \to \infty} U_{\alpha} f(X_{T_n+t})$$ for any sequence of stopping times $T_n$ with $T_n \uparrow T$ and $T_n < T$ for all $n$. (On the event where $T_n = T$ eventually we trivially have $X_{T_n} \to X_T$, so we may restrict to this case.) Applying the above lemma with $$Y := U_{\alpha} f(X_{(T+t)-}) \qquad Y_n := U_{\alpha} f(X_{T_n+t}) \qquad \mathcal{G}_n := \mathcal{F}_{T_n}$$ (the boundedness assumption holds since $\|U_{\alpha}f\|_{\infty} \leq \|f\|_{\infty}/\alpha$), we find that

\begin{align*} \mathbb{E}(U_{\alpha} f(X_{(T+t)-}) \mid \bigvee_{n=1}^{\infty} \mathcal{F}_{T_n}) &= \lim_{n \to \infty} \mathbb{E}(U_{\alpha} f(X_{T_n+t}) \mid \mathcal{F}_{T_n}). \end{align*}

It now follows from the strong Markov property that

\begin{align*} \mathbb{E}(U_{\alpha} f(X_{(T+t)-}) \mid \bigvee_{n=1}^{\infty} \mathcal{F}_{T_n}) &= \lim_{n \to \infty} P_t(U_{\alpha}f)(X_{T_n}). \tag{1} \end{align*}
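Spelled out, the strong Markov property applies because $U_{\alpha}f$ is bounded and measurable and each $T_n$ is finite on $\{T < \infty\}$:

$$\mathbb{E}\big(U_{\alpha} f(X_{T_n+t}) \mid \mathcal{F}_{T_n}\big) = \mathbb{E}^{X_{T_n}}\big(U_{\alpha}f(X_t)\big) = P_t(U_{\alpha}f)(X_{T_n}).$$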

If we let $t \downarrow 0$, then the right-continuity of the sample paths gives $X_{(T+t)-} \to X_T$, and so, by the boundedness and continuity of $U_{\alpha}f$ and dominated convergence for conditional expectations, the left-hand side of $(1)$ converges to

$$\mathbb{E}(U_{\alpha}f(X_T) \mid \bigvee_{n=1}^{\infty} \mathcal{F}_{T_n}).$$

It remains to show that the right-hand side of $(1)$ converges to $\lim_{n \to \infty} U_{\alpha} f(X_{T_n})$ as $t \to 0$. To this end, we note that, by the Feller property, $$\|P_t (U_{\alpha}f)- (U_{\alpha} f)\|_{\infty} \xrightarrow[]{t \to 0} 0,$$

and so

\begin{align*} \limsup_{t \to 0} \left| \lim_{n \to \infty} P_t(U_{\alpha} f)(X_{T_n}) - \lim_{n \to \infty} U_{\alpha} f(X_{T_n}) \right| &= \limsup_{t \to 0} \lim_{n \to \infty} |P_t (U_{\alpha} f)(X_{T_n})-(U_{\alpha}f)(X_{T_n})| \\ &\leq \limsup_{t \to 0} \|P_t(U_\alpha f) - U_{\alpha} f\|_{\infty} =0. \end{align*}
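As an aside, the uniform convergence used above can also be checked directly from the definition of $U_{\alpha}$ (a standard resolvent computation, needing only contractivity of the semigroup):

\begin{align*} P_t(U_{\alpha}f)(x) = \int_0^\infty e^{-\alpha s} P_{t+s}f(x)\,ds = e^{\alpha t}\int_t^\infty e^{-\alpha u} P_u f(x)\,du, \end{align*}

so that

$$\|P_t(U_{\alpha}f) - U_{\alpha}f\|_{\infty} \leq \left(\frac{e^{\alpha t}-1}{\alpha} + t\right)\|f\|_{\infty} \xrightarrow[]{t \to 0} 0.$$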
