Compute the characteristic function of the Wiener process (Brownian motion).

brownian-motion, probability-distributions, probability-theory, stochastic-analysis, stochastic-processes

A Wiener process is a stochastic process $(W_{t})_{t\in\mathbb{R}_{\geq 0}}$ on some probability space $(\Omega,\mathcal{F},\mathbb{P})$ satisfying the following properties:

$(1)$ $W_{0}\equiv 0$;

$(2)$ For $t>s\geq 0$, $W_{t}-W_{s}$ is independent of $\sigma(W_{r}, r\leq s)$;

$(3)$ $W_{t}-W_{s}\sim\mathcal{N}(0,t-s)$;

$(4)$ All paths are continuous, i.e. $W_{t}$ is a continuous function in $t$.

I am now trying to write out an explicit formula for the characteristic function of the Wiener process.

My idea was to write out the joint distribution and then get some density function if possible, but I got stuck.

Below is my attempt:

Since $W_{t_{i}}-W_{t_{i-1}}$ is Gaussian with $\mu=0$ and $\sigma^{2}=(t_{i}-t_{i-1})$ and since the increments $$W_{t_{1}}-W_{t_{0}}, W_{t_{2}}-W_{t_{1}},\cdots, W_{t_{n}}-W_{t_{n-1}}$$ are independent, we know that $$\mathbb{P}[W_{t_{i}}-W_{t_{i-1}}\leq \alpha_{i}, i=1,\cdots,n]=\prod_{i=1}^{n}\dfrac{1}{\sqrt{2\pi(t_{i}-t_{i-1})}}\int_{-\infty}^{\alpha_{i}}e^{\frac{-x^{2}}{2(t_{i}-t_{i-1})}}dx.$$
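As a numerical sanity check of this product formula, one can simulate the independent Gaussian increments and compare the empirical joint probability with the product of one-dimensional Gaussian integrals (a sketch; the times and thresholds below are arbitrary choices, not from the original problem):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
t = [0.0, 0.5, 1.2, 2.0]          # 0 = t_0 < t_1 < t_2 < t_3 (arbitrary)
alpha = [0.3, -0.1, 0.8]          # thresholds alpha_i (arbitrary)
n_paths = 200_000

# Simulate increments W_{t_i} - W_{t_{i-1}} ~ N(0, t_i - t_{i-1}), independent.
dt = np.diff(t)
incr = rng.normal(0.0, np.sqrt(dt), size=(n_paths, len(dt)))

# Empirical joint probability P[W_{t_i} - W_{t_{i-1}} <= alpha_i for all i].
emp = np.mean(np.all(incr <= np.array(alpha), axis=1))

# Product of Gaussian CDFs Phi(alpha_i / sqrt(t_i - t_{i-1})).
Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
exact = np.prod([Phi(a / sqrt(d)) for a, d in zip(alpha, dt)])

print(emp, exact)   # the two numbers should agree to roughly two decimals
```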

Now, I want to do a linear change of variables to get the joint distribution of $(W_{t_{1}},\cdots, W_{t_{n}})$, but I don't know how to do it.

Also, even if we retrieve the joint distribution of $(W_{t_{1}},\cdots, W_{t_{n}})$, it seems that we cannot write out an explicit formula for the characteristic function, since we don't know the density.

What can I do? Is there another way to write out the characteristic function? Thank you!

Edit 1:

Okay I think I progressed a little bit:

Let $0=t_{0}<t_{1}<t_{2}<\cdots<t_{n}<\infty$. Then note that
\begin{align*}
\sum_{j=1}^{n}\lambda_{j}W_{t_{j}}&=W_{t_{1}}\Big(\sum_{j=1}^{n}\lambda_{j}-\sum_{j=2}^{n}\lambda_{j}\Big)+W_{t_{2}}\Big(\sum_{j=2}^{n}\lambda_{j}-\sum_{j=3}^{n}\lambda_{j}\Big)+\cdots+W_{t_{n}}\lambda_{n}\\
&=(W_{t_{1}}-W_{t_{0}})\sum_{j=1}^{n}\lambda_{j}+(W_{t_{2}}-W_{t_{1}})\sum_{j=2}^{n}\lambda_{j}+\cdots+(W_{t_{n}}-W_{t_{n-1}})\lambda_{n},
\end{align*}

where in the second equality we used $W_{t_{0}}=W_{0}=0$.

Therefore, we can write the characteristic function as
\begin{align*}
\varphi_{t_{1},\cdots, t_{n}}(\lambda_{1},\cdots, \lambda_{n})&=\mathbb{E}\exp\Big(i\sum_{j=1}^{n}\lambda_{j}W_{t_{j}}\Big)\\
&=\prod_{k=1}^{n}\mathbb{E}\exp\Big(i(W_{t_{k}}-W_{t_{k-1}})\sum_{j=k}^{n}\lambda_{j}\Big),
\end{align*}

where the second equality uses the independence of the increments (so the characteristic function of the sum factors into the product of the characteristic functions of the increments).

Now I want to use the fact that the increments are Gaussian to write out the formula, but the problem is that each factor involves the sum $\lambda_{k}+\cdots+\lambda_{n}$, and I don't really know how to deal with it.
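For what it's worth, the product decomposition above can be sanity-checked numerically: each Gaussian factor is $e^{-(t_{k}-t_{k-1})\theta^{2}/2}$ evaluated at the tail sum $\theta=\lambda_{k}+\cdots+\lambda_{n}$. A Monte Carlo sketch (arbitrary times and coefficients, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.array([0.4, 1.0, 1.7])        # 0 < t_1 < t_2 < t_3 (arbitrary)
lam = np.array([0.5, -0.3, 0.2])     # lambda_1, ..., lambda_n (arbitrary)
n_paths = 400_000

# Build W_{t_j} from independent Gaussian increments (cumulative sums).
dt = np.diff(np.concatenate(([0.0], t)))
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, len(t))), axis=1)

# Empirical characteristic function E exp(i sum_j lambda_j W_{t_j}).
emp = np.mean(np.exp(1j * (W @ lam)))

# Product over k of exp(-(t_k - t_{k-1}) * (lambda_k + ... + lambda_n)^2 / 2),
# i.e. each increment's Gaussian characteristic function at the tail sum.
tails = np.cumsum(lam[::-1])[::-1]   # tails[k] = lambda_k + ... + lambda_n
closed = np.exp(-0.5 * np.sum(dt * tails**2))

print(emp, closed)
```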

By the way, as I discussed with Nap D. Lover, my final goal is to use the characteristic function to show the consistency of the finite-dimensional distributions; if there is any other way to show this, I will happily give up the current computation 🙂

Edit 2:

After some more attempts, I derived a general result for my final goal: proving the existence of the Wiener process using characteristic functions.

In the middle of the construction, I happened to find a formula for a more general characteristic function: as long as the process has independent increments, I believe this construction works with only minor changes.

This proof is a long one, so I will post it as an answer to my own question.

However, I don't believe this proof is the only possible way to show existence, since it clearly goes a little beyond what I asked.

So please do not hesitate to post your proof if you have another one. I believe any new proof will be better than the one I am going to give 🙂

Thank you guys so much for your upvotes 🙂

Best Answer

Let me firstly introduce my final goal.

I'd like to show that the Wiener process exists by using the characteristic function.

There have been many proofs using explicit constructions, but I'd like to have a measure-theoretic one. To do this, we need the Kolmogorov Consistency Theorem, see here: https://en.wikipedia.org/wiki/Kolmogorov_extension_theorem.

However, the theorem is stated in terms of the measures themselves, whereas I want to work with characteristic functions instead. There are equivalent conditions on the characteristic functions which show that a family of finite-dimensional distributions is consistent. (I will mention these at the end of my proof, where I apply the consistency theorem.)

Now, let us prove!

Let $\{\varphi_{s,t},\ s,t\geq 0\}$ be a family of characteristic functions of probability measures $Q_{s,t}$ on $\mathcal{B}(\mathbb{R})$ with $s,t\geq 0$. That is, for $z\in\mathbb{R}$ and $s,t\geq 0$, we have that $$\varphi_{s,t}(z)=\int_{\mathbb{R}}e^{izx}Q_{s,t}(dx).$$

Now we need a lemma:

Lemma: There exists a stochastic process $X=(X_{t})_{t\geq 0}$ with independent increments such that for all $0\leq s<t$ the characteristic function of $X_{t}-X_{s}$ equals $\varphi_{s,t}$ if and only if $$\varphi_{s,t}=\varphi_{s,u}\varphi_{u,t}$$ for all $0\leq s<u<t<\infty$.


Proof of Lemma:

$(\Rightarrow)$. This direction is immediate: write $X_{t}-X_{s}=(X_{t}-X_{u})+(X_{u}-X_{s})=:Y_{1}+Y_{2}$. Since $Y_{1}$ and $Y_{2}$ are independent increments, $$\varphi_{s,t}=\varphi_{Y_{1}+Y_{2}}=\varphi_{Y_{1}}\varphi_{Y_{2}}=\varphi_{u,t}\varphi_{s,u}.$$

$(\Leftarrow)$. Let $n\in\mathbb{N}$, $0=t_{0}<t_{1}<\cdots<t_{n}<\infty$, $Y:=(X_{t_{0}}, X_{t_{1}}-X_{t_{0}},\cdots, X_{t_{n}}-X_{t_{n-1}})^\intercal$ and $z:=(z_{0},\cdots, z_{n})^{\intercal}$; then the independence of the increments yields $$\varphi_{Y}(z_{0},z_{1},\cdots, z_{n})=\mathbb{E}e^{i\langle z,Y\rangle}=\varphi_{X_{t_{0}}}(z_{0})\varphi_{t_{0},t_{1}}(z_{1})\cdots \varphi_{t_{n-1},t_{n}}(z_{n}),$$ where the distribution of $X_{t_{0}}$ is an arbitrary probability measure $Q_{0}$ on $\mathcal{B}(\mathbb{R})$.

For $X_{t_{0},\cdots, t_{n}}=(X_{t_{0}},\cdots, X_{t_{n}})^{\intercal}$, we have that $X_{t_{0},\cdots, t_{n}}=AY$ where $A=\begin{pmatrix} 1 & 0 & 0 & \cdots & 0\\ 1 & 1 & 0 & \cdots & 0 \\ 1& 1 & 1 & \cdots &0 \\ \vdots& \vdots & \vdots & \ddots & \vdots\\ 1& 1 & 1 & \cdots & 1 \end{pmatrix}$, and thus we have $$\varphi_{X_{t_{0},\cdots, t_{n}}}(z)=\varphi_{AY}(z)=\mathbb{E}e^{i\langle z,AY\rangle}=\mathbb{E}e^{i\langle A^{\intercal}z, Y\rangle}=\varphi_{Y}(A^{\intercal}z).$$

It then follows that the finite-dimensional distribution of $X_{t_{0},\cdots, t_{n}}$ has the characteristic function $$\varphi_{X_{t_{0},\cdots, t_{n}}}(z)=\varphi_{Q_{0}}(m_{0})\varphi_{t_{0},t_{1}}(m_{1})\cdots\varphi_{t_{n-1},t_{n}}(m_{n}),$$ where $m=(m_{0},\cdots, m_{n})^{\intercal}=A^{\intercal}z$, so that $$\begin{cases} m_{0}=z_{0}+z_{1}+\cdots+z_{n},\\ m_{1}=z_{1}+\cdots+z_{n},\\ \quad\vdots\\ m_{n}=z_{n}. \end{cases}$$
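In small dimensions one can verify the matrix identity directly: $A$ maps increments to partial sums, and $A^{\intercal}z$ collects the tail sums $m_{k}=z_{k}+\cdots+z_{n}$. A minimal NumPy check (the dimensions and random vectors are illustrative):

```python
import numpy as np

n = 4                                   # number of times t_1, ..., t_n (plus t_0)
A = np.tril(np.ones((n + 1, n + 1)))    # lower-triangular matrix of ones

rng = np.random.default_rng(2)
z = rng.standard_normal(n + 1)
Y = rng.standard_normal(n + 1)          # stand-in for the increment vector

# <z, A Y> = <A^T z, Y>, so phi_{AY}(z) = phi_Y(A^T z).
assert np.isclose(z @ (A @ Y), (A.T @ z) @ Y)

# A^T z is the vector of tail sums m_k = z_k + ... + z_n.
m = A.T @ z
tails = np.cumsum(z[::-1])[::-1]
assert np.allclose(m, tails)
print("OK")
```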

Hence, $\varphi_{X_{t_{0}}}=\varphi_{Q_{0}}$ and $\varphi_{X_{t_{1},\cdots, t_{n}}}(z_{1},\cdots, z_{n})=\varphi_{X_{t_{0},\cdots, t_{n}}}(0,z_{1},\cdots,z_{n})$ hold for all $z_{i}\in\mathbb{R}$.

To show the existence of such a process $X$, we just construct the family of characteristic functions $$\{\varphi_{t_{0}},\varphi_{t_{0},t_{1},\cdots, t_{n}},\varphi_{t_{1},\cdots, t_{n}},\ 0=t_{0}<t_{1}<\cdots<t_{n}<\infty,\ n\in\mathbb{N}\}$$ from $\varphi_{Q_{0}}$ and $\{\varphi_{s,t},0\leq s<t\}$ as above, so that $$\varphi_{t_{0}}=\varphi_{Q_{0}},\ \varphi_{t_{1},\cdots, t_{n}}(z_{1},\cdots, z_{n})=\varphi_{t_{0},t_{1},\cdots, t_{n}}(0,z_{1},\cdots, z_{n}),\ z_{i}\in\mathbb{R},$$ and $$\varphi_{t_{0},\cdots, t_{n}}(z)=\varphi_{t_{0}}(z_{0}+\cdots+z_{n})\varphi_{t_{0}, t_{1}}(z_{1}+\cdots+z_{n})\cdots\varphi_{t_{n-1},t_{n}}(z_{n}).$$

Now we just check the consistency condition using characteristic function, namely:

(1) $\varphi_{t_{j_{0}},\cdots, t_{j_{n}}}(z_{j_{0}},\cdots, z_{j_{n}})=\varphi_{t_{0},\cdots, t_{n}}(z_{0},\cdots, z_{n})$ under any permutation $j:(0,1,\cdots, n)\mapsto (j_{0}, j_{1},\cdots,j_{n});$

(2) $\varphi_{t_{0},\cdots, t_{\ell-1}, t_{\ell+1},\cdots, t_{n}}(z_{0},\cdots, z_{\ell-1},z_{\ell+1},\cdots, z_{n})=\varphi_{t_{0},\cdots, t_{n}}(z_{0},\cdots, 0,\cdots, z_{n}),$ for all $z_{0},\cdots, z_{n}\in\mathbb{R}$ and for all $\ell\in\{1,\cdots, n\}$.

The first one is obvious, and the second one holds because $$\varphi_{t_{\ell-1},t_{\ell}}(0+z_{\ell+1}+\cdots+z_{n})\varphi_{t_{\ell},t_{\ell+1}}(z_{\ell+1}+\cdots+z_{n})=\varphi_{t_{\ell-1},t_{\ell+1}}(z_{\ell+1}+\cdots+z_{n})$$ by the assumption $\varphi_{s,u}\varphi_{u,t}=\varphi_{s,t}$, for all $\ell\in\{1,\cdots, n\}$.
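Consistency condition (2) can also be checked numerically, at least for the Gaussian family $\varphi_{s,t}(z)=e^{-(t-s)z^{2}/2}$ used later in the proof (an assumption of this sketch; the times and arguments are arbitrary): dropping a time $t_{\ell}$ should give the same characteristic function as setting $z_{\ell}=0$.

```python
import numpy as np

# phi_{s,t}(z) = exp(-(t - s) z^2 / 2): the Wiener case (assumption of this sketch).
phi = lambda s, t, z: np.exp(-0.5 * (t - s) * z**2)

def phi_fdd(times, zs):
    """Characteristic function of (W_{t_1}, ..., W_{t_n}) built from increments:
    product over k of phi_{t_{k-1}, t_k} evaluated at the tail sum z_k + ... + z_n."""
    ts = np.concatenate(([0.0], times))
    tails = np.cumsum(zs[::-1])[::-1]
    return np.prod([phi(ts[k], ts[k + 1], tails[k]) for k in range(len(zs))])

times = np.array([0.5, 1.0, 1.8, 2.5])   # 0 < t_1 < ... < t_n (arbitrary)
zs = np.array([0.4, -0.2, 0.7, 0.1])     # z_1, ..., z_n (arbitrary)

# Condition (2): dropping t_l matches setting z_l = 0.
for l in range(len(times)):
    lhs = phi_fdd(np.delete(times, l), np.delete(zs, l))
    zs0 = zs.copy(); zs0[l] = 0.0
    rhs = phi_fdd(times, zs0)
    assert np.isclose(lhs, rhs)
print("consistency holds")
```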

The existence of $X$ follows immediately from the Kolmogorov Consistency Theorem.


Proof of my goal:

Now, to prove the existence of the Wiener process $W$, the independent Gaussian increments yield $$\varphi_{s,t}(z)=\mathbb{E}e^{iz(W_{t}-W_{s})}=e^{-\frac{(t-s)z^{2}}{2}},\ \ \varphi_{s,u}(z)=\mathbb{E}e^{iz(W_{u}-W_{s})}=e^{-\frac{(u-s)z^{2}}{2}},\ \ \varphi_{u,t}(z)=\mathbb{E}e^{iz(W_{t}-W_{u})}=e^{-\frac{(t-u)z^{2}}{2}},$$ and it is easy to see that for all $0\leq s<u<t$, $$e^{-\frac{(u-s)z^{2}}{2}}e^{-\frac{(t-u)z^{2}}{2}}=e^{-\frac{(t-s)z^{2}}{2}},$$ which implies that $$\varphi_{s,u}(z)\varphi_{u,t}(z)=\varphi_{s,t}(z)\ \text{for all}\ z\in\mathbb{R}.$$
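A quick numerical illustration of both facts, the semigroup identity and the Gaussian characteristic function itself (a sketch with arbitrary $s<u<t$ and $z$):

```python
import numpy as np

rng = np.random.default_rng(3)
s, u, t = 0.3, 0.9, 1.6                  # 0 <= s < u < t (arbitrary)
z = 0.7

# phi_{a,b}(z) = characteristic function of W_b - W_a ~ N(0, b - a).
phi = lambda a, b: np.exp(-0.5 * (b - a) * z**2)

# Semigroup identity phi_{s,t} = phi_{s,u} * phi_{u,t} (exact for Gaussians).
assert np.isclose(phi(s, t), phi(s, u) * phi(u, t))

# Monte Carlo check: E exp(iz (W_t - W_s)) ~= exp(-(t - s) z^2 / 2).
incr = rng.normal(0.0, np.sqrt(t - s), size=500_000)
emp = np.mean(np.exp(1j * z * incr))
print(emp, phi(s, t))
```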

The result follows immediately from the lemma.

To "close" this post, I am gonna accept my own answer for now. But I will definitely change to accept someone else's answer if there is a new proof.