Probability measure $P_X$ on the space of paths of a Lévy process $(X_t)_{t \ge 0}$ determined by $P_{X_1}$

levy-processes, stochastic-processes

Let $X=(X_t)_{t \ge 0}$ be a Lévy process with real-valued $X_t: \Omega \to \mathbb{R}$. I have heard the following slogan about Lévy processes:

"the probability measure on the space of paths
$t \mapsto X_t(\omega)$ (for fixed $\omega \in \Omega$)
is completely determined by the probability measure
$P_{X_1}$ of the single random variable $X_1$."

Why?
Recall that the probability measure on the space of paths
is defined as the pushforward measure $X(P)$ of the measure $P$ on
$\Omega$ under the measurable map

$$X: \Omega \to \mathbb{R}^{\mathbb{R}_{\ge 0}},
\qquad \omega \mapsto (X_t(\omega))_{t \ge 0}.$$

It is well known that $X(P)$ is uniquely determined
by the finite-dimensional marginal distributions

$$\pi_{t_1,\dots, t_n}(P_X) = P_{(X_{t_1},\dots, X_{t_n})}$$

for all $0 \le t_1 < \dots < t_n$, where
$\pi_{t_1,\dots, t_n}: \mathbb{R}^{\mathbb{R}_{\ge 0}}
\to \mathbb{R}^n$
is the natural projection map.

Therefore the question is: why do the marginal distributions $P_{(X_{t_1},\dots, X_{t_n})}$ depend only on $P_{X_1}$, for all $t_1,\dots, t_n$?

Best Answer

For convenience, instead of distribution functions, we use characteristic functions (c.f.) to explain the above fact.

Let $\psi_{s,t}(u)=\mathsf{E}[\exp(\mathrm{i}u(X_t-X_s))]$, $0\le s<t$, be the c.f. of the increment $X_t-X_s$. Then \begin{gather*} \psi_{r,t}(u)=\psi_{r,s}(u)\,\psi_{s,t}(u),\quad 0\le r<s<t, \quad\text{(independent increments)}, \tag{1}\\ \psi_{s,s+t}(u)=\psi_{0,t}(u),\quad s,t>0,\quad\text{(stationary increments)},\tag{2} \end{gather*} and $\psi_{0,t}(u)$ is continuous in $t$ by the stochastic continuity of $X$. From (2) and (1), \begin{align*} \psi_{0,s}(u)\,\psi_{0,t}(u)&=\psi_{0,s}(u)\,\psi_{s,s+t}(u)\\ &=\psi_{0,s+t}(u), \qquad s,t >0. \tag{3} \end{align*}
Furthermore, the functional equation (3) together with the continuity of $t\mapsto\psi_{0,t}(u)$ gives \begin{equation*} \psi_{0,t}(u)=(\psi_{0,1}(u))^t=(\mathsf{E}[\exp(\mathrm{i}uX_1)])^t. \tag{4} \end{equation*}
(cf. Ken-Iti Sato, Lévy Processes and Infinitely Divisible Distributions, Cambridge University Press (1999), p. 35, Th. 7.10.)
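As a concrete sanity check (not part of the original argument), one can verify identity (4) numerically for a specific Lévy process. For standard Brownian motion the increment c.f. is $\psi_{0,t}(u)=e^{-tu^2/2}$, which is indeed the $t$-th power of $\psi_{0,1}(u)$:

```python
import cmath

def psi(t, u):
    """C.f. of X_t for standard Brownian motion: E[exp(iuX_t)] = exp(-t*u^2/2)."""
    return cmath.exp(-t * u**2 / 2)

# Check psi_{0,t}(u) == (psi_{0,1}(u))**t for several (t, u) pairs.
for t in (0.5, 1.7, 3.0):
    for u in (-1.2, 0.4, 2.0):
        assert abs(psi(t, u) - psi(1, u) ** t) < 1e-12
```

Here $\psi_{0,1}(u)$ is a positive real number, so the power $(\psi_{0,1}(u))^t$ is unambiguous; for a general Lévy process one writes $\psi_{0,t}(u)=e^{t\,\eta(u)}$ with a continuous choice of the logarithm (the Lévy exponent).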

Now let $\phi_{t_1,\cdots,t_n}(u_1,\cdots,u_n)$ be the c.f. of $(X_{t_1},\cdots,X_{t_n})$. Then, for $0=t_0<t_1<\cdots<t_n$, \begin{align*} &\phi_{t_1,\cdots,t_n}(u_1,\cdots,u_n) =\mathsf{E}\Big[\exp\Big(\mathrm{i}\sum_{j=1}^{n}u_jX_{t_j}\Big)\Big]\\ &\quad = \mathsf{E}\Big[\exp\Big(\mathrm{i}\sum_{j=1}^{n}u_j\sum_{k=1}^j (X_{t_k}-X_{t_{k-1}})\Big)\Big] \\ &\quad = \mathsf{E}\Big[\exp\Big(\mathrm{i}\sum_{k=1}^{n}\Big(\sum_{j=k}^{n}u_j\Big) (X_{t_k}-X_{t_{k-1}})\Big)\Big]\\ &\quad = \prod_{k=1}^n\psi_{t_{k-1},t_k}\Big(\sum_{j=k}^nu_j\Big)\\ &\quad = \prod_{k=1}^n\Big(\psi_{0,1}\Big(\sum_{j=k}^nu_j\Big)\Big)^{(t_k-t_{k-1})}. \end{align*} This shows that for all $t_1,\cdots, t_n$, the c.f. $\phi_{t_1,\cdots,t_n}(u_1,\cdots,u_n)$, and hence the marginal distribution $P_{(X_{t_1},\cdots, X_{t_n})}$, depends only on $P_{X_1}$.
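The final factorization can also be checked numerically, again for standard Brownian motion (a sketch going beyond the original answer). For a zero-mean Gaussian vector $(X_{t_1},\dots,X_{t_n})$ with $\operatorname{Cov}(X_{t_j},X_{t_k})=\min(t_j,t_k)$, the joint c.f. is $\exp(-\tfrac12\sum_{j,k}u_ju_k\min(t_j,t_k))$, and it should agree with the product over increments:

```python
import math

def joint_cf(ts, us):
    """Joint c.f. of (X_{t_1},...,X_{t_n}) for standard Brownian motion:
    exp(-0.5 * sum_{j,k} u_j * u_k * min(t_j, t_k))."""
    n = len(ts)
    q = sum(us[j] * us[k] * min(ts[j], ts[k])
            for j in range(n) for k in range(n))
    return math.exp(-0.5 * q)

def product_formula(ts, us):
    """Right-hand side of the factorization:
    prod_k (psi_{0,1}(sum_{j>=k} u_j))**(t_k - t_{k-1}), with t_0 = 0
    and psi_{0,1}(u) = exp(-u**2/2) for Brownian motion."""
    out, t_prev = 1.0, 0.0
    for k in range(len(ts)):
        tail = sum(us[k:])                      # sum_{j >= k} u_j
        out *= math.exp(-tail**2 / 2) ** (ts[k] - t_prev)
        t_prev = ts[k]
    return out

ts = [0.3, 1.1, 2.5]
us = [0.7, -1.4, 0.5]
assert abs(joint_cf(ts, us) - product_formula(ts, us)) < 1e-12
```

Both sides reduce to $\exp(-\tfrac12\sum_k (\sum_{j\ge k}u_j)^2\,(t_k-t_{k-1}))$, which is exactly the telescoping-increment computation in the answer specialized to Brownian motion.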