[Math] Why are these functions of finite variation?

bounded-variation, stochastic-processes

When working with Itô calculus, things simplify a lot if a term is continuous and of finite variation, since such terms have zero quadratic variation.
I know that every increasing function has finite variation, but I have trouble arguing why the following processes should be of finite variation. Suppose we have a predictable process $X_t$; why are the following two processes of finite variation?

  1. $\int_0^t X_s ds$
  2. $e^{\int_0^tX_sds}$

If $X_s$ were nonnegative, everything would be clear, but this need not be the case. So why are these processes of finite variation?

Best Answer

Note that the processes $$X(t,\omega)^+ := \max\{0,X(t,\omega)\} \qquad \qquad X(t,\omega)^- := \max\{0,-X(t,\omega)\}$$ are predictable and

$$\int_0^t X(r) \, dr = \int_0^t X_r^+ \, dr - \int_0^t X_r^- \, dr$$

Both integrals on the right-hand side are increasing in $t$ (their integrands are nonnegative), and a difference of two increasing functions is of bounded variation; thus $t \mapsto \int_0^t X(r) \, dr$ is of bounded variation. A small numerical sketch of this decomposition follows.
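To illustrate (this is my own sketch, not part of the answer): take the hypothetical sign-changing integrand $X_s = \sin(3s)$ and compare, on a fine grid, the total variation of $A(t) = \int_0^t X_s \, ds$ with $\int_0^T |X_s| \, ds$, i.e. the sum of the two increasing pieces $\int_0^T X_s^+ \, ds + \int_0^T X_s^- \, ds$. The two numbers should agree up to discretization error.

```python
import numpy as np

# Hypothetical example: X_s = sin(3s) changes sign on [0, T].
T = 2.0
s = np.linspace(0.0, T, 20001)            # fine time grid
X = np.sin(3.0 * s)                       # sample path of the integrand

# Trapezoidal increments of A(t) = \int_0^t X_s ds, then the path itself.
dA = 0.5 * (X[1:] + X[:-1]) * np.diff(s)
A = np.concatenate(([0.0], np.cumsum(dA)))

# Total variation of A over the grid vs. \int_0^T |X_s| ds.
tv_A = np.sum(np.abs(np.diff(A)))
int_abs_X = np.sum(0.5 * (np.abs(X[1:]) + np.abs(X[:-1])) * np.diff(s))

print(f"total variation of A on [0, T]: {tv_A:.6f}")
print(f"integral of |X_s| over [0, T]:  {int_abs_X:.6f}")
```

Both printouts should be close to $\int_0^2 |\sin(3s)| \, ds \approx 1.32$; in particular the variation stays finite even though $X$ changes sign.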

Suppose that $X$ has càdlàg sample paths. Fix $\omega \in \Omega$ and $T>0$, and set $$c := \sup\left\{\int_0^s \left|X(r,\omega)\right| \, dr ; s \in [0,T]\right\},$$ which is finite because càdlàg paths are bounded on the compact interval $[0,T]$.

Then, by the mean value theorem,

$$\begin{align*} \left|\exp \left(\int_0^t X_r \, dr \right) - \exp \left( \int_0^s X_r \, dr \right) \right| = \exp(\xi) \cdot \left| \int_0^t X_r \, dr - \int_0^s X_r \, dr \right| \end{align*}$$

for any $s,t \in [0,T]$ and some intermediate value $\xi=\xi(\omega)$ lying between $\int_0^s X_r \, dr$ and $\int_0^t X_r \, dr$; since $\left|\int_0^u X_r \, dr\right| \leq \int_0^u |X_r| \, dr \leq c$ for all $u \in [0,T]$, we have $\xi \in [-c,c]$. Thus

$$ \left|\exp \left(\int_0^t X_r \, dr \right) - \exp \left( \int_0^s X_r \, dr \right) \right| \leq e^c \cdot \left| \int_0^t X_r \, dr - \int_0^s X_r \, dr \right|$$

Consequently, the total variation of $[0,T] \ni t \mapsto \exp\left(\int_0^t X_r \, dr\right)$ is at most $e^c$ times the total variation of $t \mapsto \int_0^t X(r) \, dr$, and the latter is of bounded variation by the first part. This proves the claim; the bound is checked numerically below.
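Continuing the same hypothetical example ($X_s = \sin(3s)$, my own sketch, not part of the answer), one can check the Lipschitz bound on a grid: the variation of $\exp(A(t))$ should not exceed $e^c$ times the variation of $A(t)$.

```python
import numpy as np

# Same hypothetical path as before: X_s = sin(3s) on [0, T].
T = 2.0
s = np.linspace(0.0, T, 20001)
X = np.sin(3.0 * s)
dA = 0.5 * (X[1:] + X[:-1]) * np.diff(s)
A = np.concatenate(([0.0], np.cumsum(dA)))

# c = sup_{s <= T} \int_0^s |X_r| dr  (attained at s = T since |X| >= 0).
c = np.sum(0.5 * (np.abs(X[1:]) + np.abs(X[:-1])) * np.diff(s))

tv_A = np.sum(np.abs(np.diff(A)))             # variation of the integral path
tv_expA = np.sum(np.abs(np.diff(np.exp(A))))  # variation of its exponential

print(f"variation of exp(A):  {tv_expA:.6f}")
print(f"e^c * variation of A: {np.exp(c) * tv_A:.6f}  (upper bound)")
```

The first number should not exceed the second, matching the mean value theorem estimate $e^c \cdot |\Delta A|$ applied increment by increment.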
