[Math] Convergence of Ornstein-Uhlenbeck process as a scaled Brownian Motion

brownian-motion, convergence-divergence, markov-process, stochastic-processes

Let $W$ be a standard Brownian motion. Let $\alpha,\sigma^2 >0$, and let $X_0$ be an $\mathbb{R}$-valued random variable with distribution $\nu$ that is independent of $\sigma(W_t,t\geq 0)$. Now define the scaled BM $X$ by
$$ X_t = \exp\{-\alpha t\}\left( X_0 + W_{\sigma^2( \exp\{2\alpha t\}-1)/2\alpha}\right).$$

There are a few things I want to show:

(1) I want to show that the defined process is a Markov process which converges in distribution to an $N(0,\sigma^2/2 \alpha)$ distributed random variable.

(2) I'd like to show that if $X_0 \overset{d}{=} N(0,\sigma^2/2 \alpha)$ then $X_t \overset{d}{=} N(0,\sigma^2/2 \alpha)$, in other words it is invariant for the Markov process.

(3) And I want to show that $X_t$ is Gaussian with mean function $m(t)=0$ and covariance function $r(s,t) = \sigma^2\exp\{-\alpha|t-s|\}/2\alpha$.

Now, to start the proof, I would like to pick the "right" filtration, but I have no idea how to choose it, nor how I could then prove that $X$ is a Markov process. The only abstract way I know to prove the Markov property is via transition kernels, yet I don't know how to apply (or define) these here.

My thinking so far: I tried to rewrite $X_t$ in a more useful way, namely $X_t = e^{-\alpha t} X_0 + \sigma e^{-\alpha t} \int_0^t e^{\alpha s}\, dW_s$. But this uses stochastic integration, which I guess I should avoid, as the problem should be solvable in the framework of stochastic processes. Still, the rewritten form exhibits an integral of a nonrandom function $f(s)$ against $dW_s$, which is always a Gaussian random variable with mean zero and variance $\int f(s)^2\, ds$. Is there some other way to explain this? Letting $t$ go to infinity should then give the invariant distribution. However, I still have no clue how to prove that it is a Markov process. Thanks for any help.
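(As a sanity check on this representation, here is a minimal numerical sketch: the time-changed definition of $X_t$ and the Wiener-integral form should give the same Gaussian distribution. All parameter values below are illustrative, not from the problem.)

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, sigma, x, t, n = 1.5, 2.0, 3.0, 1.0, 20_000  # illustrative values

# (a) time-change form: W evaluated at time tau is N(0, tau), so X_t
# can be sampled exactly from the definition.
tau = sigma**2 * (np.exp(2 * alpha * t) - 1) / (2 * alpha)
xa = np.exp(-alpha * t) * (x + rng.normal(0.0, np.sqrt(tau), size=n))

# (b) integral form: left-endpoint Riemann sum of e^{alpha s} dW_s.
m = 500
ds = t / m
sgrid = np.arange(m) * ds
dW = rng.normal(0.0, np.sqrt(ds), size=(n, m))
I = (np.exp(alpha * sgrid) * dW).sum(axis=1)
xb = np.exp(-alpha * t) * x + sigma * np.exp(-alpha * t) * I

# Both variances should be close to sigma^2 (1 - e^{-2 alpha t}) / (2 alpha).
print(xa.var(), xb.var())
```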

Best Answer

Notice that $$ X_t = \exp\{-\alpha t\}\left( X_0 + W_{\sigma^2( \exp\{2\alpha t\}-1)/2\alpha}\right) = e^{-\alpha t} X_0 + \sigma e^{-\alpha t} \int_0^t e^{\alpha s}\, dW_s. $$ The process $X$ is therefore Gaussian, being the sum of a deterministic multiple of $X_0$ and a Wiener integral of a deterministic function. Its mean and variance are now easily calculated via the Itô isometry. If $\mathbb{E} X_t^2 < \infty$ and $X_0 \sim \delta_x$, as in the example, then \begin{align*} \mathbb{E} X_t &= e^{- \alpha t} \mathbb{E} X_0 = e^{- \alpha t} x, \\ \mathbb{E} \left( X_t - \mathbb{E} X_t \right)^2 &= \sigma^2 \int_0^t \left(e^{-\alpha(t -s)} \right)^2 ds = \frac{\sigma^2}{2 \alpha}\left( 1 - e^{-2 \alpha t}\right). \end{align*} Here the Brownian term has mean zero, the variance of $\delta_x$ is zero, and $\mathrm{Var} \left(\int_0^t f(s)\,dW_s \right) = \int_0^t f(s)^2\, ds$. Thus, since $\alpha >0$, $$ X_t \overset{d}{\rightarrow} N\left( 0, \frac{\sigma^2}{2 \alpha} \right),$$ because $X_t$ is Gaussian, its mean tends to zero, and its variance tends to $\frac{\sigma^2}{2 \alpha}$ as $t \rightarrow \infty$.
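These mean and variance formulas are easy to check by Monte Carlo, since the time change lets us sample $X_t$ exactly. A minimal sketch (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, sigma, x, t = 1.5, 2.0, 3.0, 2.0  # illustrative values; X_0 = x
n = 200_000

# W at deterministic time tau is N(0, tau), so X_t is sampled exactly.
tau = sigma**2 * (np.exp(2 * alpha * t) - 1) / (2 * alpha)
X_t = np.exp(-alpha * t) * (x + rng.normal(0.0, np.sqrt(tau), size=n))

mean_theory = np.exp(-alpha * t) * x
var_theory = sigma**2 / (2 * alpha) * (1 - np.exp(-2 * alpha * t))
print(X_t.mean(), mean_theory)  # empirical mean vs e^{-alpha t} x
print(X_t.var(), var_theory)    # empirical var vs (sigma^2/2a)(1 - e^{-2at})
```

For larger $t$ both moments approach those of the limit $N(0, \sigma^2/2\alpha)$, matching the convergence claim.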

To prove that the process is in fact a Markov process, note that $X_t$ is the solution of the Ornstein-Uhlenbeck SDE $$ d X_t = -\alpha X_t\, dt + \sigma\, dW_t. $$ This is a Lipschitz SDE (the coefficients $-\alpha x$ and $\sigma$ are clearly Lipschitz continuous), and such SDEs have Markovian solutions. Hence $X$ must be Markovian.
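The Markov property is also visible in simulation: the transition kernel is Gaussian, $X_{t+h} \mid X_t = y \sim N\big(e^{-\alpha h} y,\ \tfrac{\sigma^2}{2\alpha}(1 - e^{-2\alpha h})\big)$, so each step of a path depends only on the current state. A sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, sigma, h, steps = 1.5, 2.0, 0.02, 500  # illustrative values

decay = np.exp(-alpha * h)
v_h = sigma**2 / (2 * alpha) * (1 - np.exp(-2 * alpha * h))

# Many independent chains, all started at X_0 = 3; each update uses
# only the current state -- the Markov property in simulation form.
x = np.full(20_000, 3.0)
for _ in range(steps):
    x = decay * x + rng.normal(0.0, np.sqrt(v_h), size=x.size)

# After T = steps * h = 10, the chains have essentially reached the
# limiting N(0, sigma^2 / (2 alpha)) distribution.
print(x.mean(), x.var(), sigma**2 / (2 * alpha))
```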

Now if $X_0 \overset{d}{=} N(0,\sigma^2/2\alpha)$, then $X_t$ is still Gaussian, so we only need to calculate its mean and variance. Using the previous calculations, \begin{align*} \mathbb{E} X_t &= e^{-\alpha t} \mathbb{E} X_0 = 0, \\ \mathbb{E} \left( X_t - \mathbb{E} X_t \right)^2 &= \mathrm{Var}(e^{-\alpha t} X_0) + \frac{\sigma^2}{2 \alpha}\left( 1 - e^{-2 \alpha t}\right) \\ &= e^{-2\alpha t} \frac{\sigma^2}{2\alpha} + \frac{\sigma^2}{2 \alpha}\left( 1 - e^{-2 \alpha t}\right) = \frac{\sigma^2}{2 \alpha}. \end{align*} Thus $X_t \overset{d}{=} N(0,\sigma^2/2\alpha)$ for every $t$, so $N(0,\sigma^2/2\alpha)$ is an invariant distribution of $X$.
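The invariance can be checked numerically: start from $N(0, \sigma^2/2\alpha)$, evolve by any time $t$, and the distribution should be unchanged. A minimal sketch (illustrative parameter values):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, sigma, t, n = 1.5, 2.0, 0.7, 200_000  # illustrative values
s2 = sigma**2 / (2 * alpha)                   # stationary variance

x0 = rng.normal(0.0, np.sqrt(s2), size=n)     # X_0 ~ N(0, s2)
tau = sigma**2 * (np.exp(2 * alpha * t) - 1) / (2 * alpha)
xt = np.exp(-alpha * t) * (x0 + rng.normal(0.0, np.sqrt(tau), size=n))

# Mean stays ~0 and variance stays ~s2: the distribution is invariant.
print(xt.mean(), xt.var(), s2)
```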

To calculate the covariance function of $X$, we use a simple rule for Itô integrals of a continuous deterministic function $f$ w.r.t. a BM: if $I_f(t) = \int_0^t f(u)\, dW_u$, then $\mathrm{Cov} \left( I_f(t),I_f(s)\right) = \int_0^{t \wedge s} f^2(u)\, du$. Applying this with $f(u) = e^{\alpha u}$, and assuming $X_0$ has mean $0$ and variance $\tau^2$, we obtain for $s \neq t$ \begin{align*} \mathrm{Cov} \left( X_t, X_s \right) &= e^{- \alpha (s+t)} \mathrm{Var}(X_0) + \sigma^2 e^{-\alpha(s+t)} \int_0^{t \wedge s} e^{2 \alpha u}\, du \\ &= e^{- \alpha (s+t)} \tau^2 + \frac{\sigma^2}{2\alpha} \left( e^{-\alpha |s-t|} - e^{-\alpha (s+t)}\right), \end{align*} since $s + t - 2(s \wedge t) = |s-t|$. Now if $\tau^2=\frac{\sigma^2}{2\alpha}$, the covariance equals $\frac{\sigma^2}{2\alpha} e^{- \alpha |s-t|}$. Furthermore, if $\mathbb{E} X_0 = 0$, then $\mathbb{E} X_t = e^{-\alpha t} \mathbb{E} X_0 = 0$, as the Wiener integral has expectation zero.
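The stationary covariance $r(s,t) = \frac{\sigma^2}{2\alpha} e^{-\alpha|t-s|}$ can likewise be verified by sampling $(X_s, X_t)$ jointly with the exact Gaussian transition kernel. A sketch under illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, sigma, s, t, n = 1.5, 2.0, 0.4, 1.0, 400_000  # illustrative, s < t
s2 = sigma**2 / (2 * alpha)                           # stationary variance

# Stationary start, so X_s ~ N(0, s2); then step forward by d = t - s
# using X_t | X_s = y ~ N(e^{-alpha d} y, s2 (1 - e^{-2 alpha d})).
xs = rng.normal(0.0, np.sqrt(s2), size=n)
d = t - s
xt = np.exp(-alpha * d) * xs + rng.normal(
    0.0, np.sqrt(s2 * (1 - np.exp(-2 * alpha * d))), size=n
)

cov_emp = np.cov(xs, xt)[0, 1]
cov_theory = s2 * np.exp(-alpha * abs(t - s))
print(cov_emp, cov_theory)  # empirical vs (sigma^2/2a) e^{-alpha|t-s|}
```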