[Math] Fokker-Planck equation for Ornstein-Uhlenbeck

stochastic-calculus, stochastic-processes

I am trying to solve the following Ornstein-Uhlenbeck SDE

$dx = -\alpha x dt + \gamma dW$

where $x$ is a stochastic process, $W$ is a standard Wiener process (Brownian motion), and $\alpha$ and $\gamma$ are constants. I want to derive the probability density function $p(x,t)$ by using the Fokker-Planck equation

$\dfrac{\partial p(x,t)}{\partial t} = -\dfrac{\partial}{\partial x}[-\alpha x p(x,t)] + D\dfrac{\partial^2 p(x,t)}{\partial x^2}$

with $D = \gamma^2/2$ and initial condition $p(x,0) = \delta(x-x_0)$. I start by taking the Fourier transform with respect to $x$ of both sides, defined as

$\mathcal{F}\left\{f(x)\right\} = \dfrac{1}{\sqrt{2\pi}} \displaystyle\int_{-\infty}^{\infty} f(x) e^{-ikx}dx$

$\mathcal{F}^{-1}\left\{\hat{f}(k)\right\} = \dfrac{1}{\sqrt{2\pi}} \displaystyle\int_{-\infty}^{\infty} \hat{f}(k) e^{ikx}dk$

and using the identity

$\mathcal{F}\left\{\dfrac{\partial}{\partial x}[x p(x,t)] \right\} = k\dfrac{\partial\hat{p}(k,t)}{\partial k}$

This gives me the transformed PDE

$\dfrac{\partial \hat{p}(k,t)}{\partial t} = \alpha k \dfrac{\partial\hat{p}(k,t)}{\partial k} - k^2 D\hat{p}(k,t)$

with the transformed initial condition

$\hat{p}(k,0) = \mathcal{F}\left\{p(x,0) \right\} = \dfrac{e^{-ix_0 k}}{\sqrt{2\pi}}$
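
(The last equality uses the sifting property of the delta function: $\dfrac{1}{\sqrt{2\pi}}\displaystyle\int_{-\infty}^{\infty} \delta(x - x_0)\, e^{-ikx}\, dx = \dfrac{e^{-ix_0 k}}{\sqrt{2\pi}}$.)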

At this point, I try using the method of characteristics to solve for $\hat{p}(k,t)$ and I always end up with the solution

$\hat{p}(k,t) = \dfrac{1}{\sqrt{2\pi}} \exp\left\{-\dfrac{D}{2\alpha}\left(e^{2\alpha t} - 1\right)k^2 - ix_0 e^{\alpha t}k \right\}$
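
As a check, this expression is consistent with the transformed PDE above and the transformed initial condition: differentiating gives

$\dfrac{\partial\hat{p}(k,t)}{\partial t} = \left(-D e^{2\alpha t}k^2 - i\alpha x_0 e^{\alpha t}k\right)\hat{p}(k,t) = \alpha k \dfrac{\partial\hat{p}(k,t)}{\partial k} - k^2 D\hat{p}(k,t)$

and setting $t = 0$ recovers $\hat{p}(k,0) = e^{-ix_0 k}/\sqrt{2\pi}$, so the method-of-characteristics step itself appears to be self-consistent.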

The problem is that when I inverse Fourier transform this function, I end up with $x_0 e^{\alpha t}$ for the mean of the process $x(t)$, whereas solving for the expectation of the SDE directly gives $\mathbb{E}\left[x(t)\right] = x_0 e^{-\alpha t}$. I can't figure out why there is a sign difference in the exponent. Is there a mistake in one of my steps? Also, would it be possible to obtain $p(x,t)$ from the Kolmogorov backward equation instead? It seems better suited to this problem.
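
For reference, the expectation can be computed directly from the SDE: taking expectations (the Itô integral $\int_0^t \gamma\, dW$ has zero mean) gives

$\dfrac{d}{dt}\mathbb{E}\left[x(t)\right] = -\alpha\, \mathbb{E}\left[x(t)\right], \qquad \mathbb{E}\left[x(0)\right] = x_0,$

so $\mathbb{E}\left[x(t)\right] = x_0 e^{-\alpha t}$.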

Best Answer

Somewhat embarrassingly, I have never familiarized myself with the "forward/backward equation" nomenclature, so I won't comment on that directly here. Instead, observe that if $(X_{t})_{t \geq 0}$ satisfies (taking $\gamma = 1$ for simplicity) \begin{equation*} dX_{t} = - \alpha X_{t} dt + dB_{t}, \end{equation*} then, given a nice function $f : \mathbb{R} \to \mathbb{R}$, Itô's formula shows that the process $(U_{t})_{t \geq 0} = (f(X_{t}))_{t \geq 0}$ satisfies \begin{equation*} dU_{t} = f'(X_{t}) dB_{t} + \left(\frac{1}{2} f''(X_{t}) - \alpha X_{t} f'(X_{t})\right) dt. \end{equation*} Thus, taking expectations (the $dB_{t}$ term has zero expectation), we deduce that the function $u(x,t) = \mathbb{E}^{x}(f(X_{t}))$ satisfies \begin{equation*} \frac{\partial u}{\partial t} = \frac{1}{2} \frac{\partial^{2}u}{\partial x^{2}} - \alpha x \frac{\partial u}{\partial x}. \end{equation*} Note that the adjoint of the generator $L = \frac{1}{2} \frac{\partial^{2}}{\partial x^{2}} - \alpha x \frac{\partial}{\partial x}$ (acting on $L^{2}(\mathbb{R},dx)$) is given by $L^{*}v = \frac{1}{2} \frac{\partial^{2}v}{\partial x^{2}} + \frac{\partial}{\partial x}(\alpha x v)$, as can be verified using integration by parts.
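
Spelling out that integration by parts (for $u$ and $v$ smooth and decaying fast enough that the boundary terms vanish): \begin{equation*} \int_{\mathbb{R}} (Lu)\, v \, dx = \int_{\mathbb{R}} \left(\frac{1}{2} u'' - \alpha x u'\right) v \, dx = \int_{\mathbb{R}} u \left(\frac{1}{2} v'' + \frac{\partial}{\partial x}(\alpha x v)\right) dx = \int_{\mathbb{R}} u \, L^{*}v \, dx. \end{equation*}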

Now if $p$ is the transition kernel, then we know that $u(x,t) = \int_{\mathbb{R}} f(y) p(t,x,y) \, dy$. Therefore, one way to find $p$ is to solve for $u$ using the Fourier transform, which will give an integral formula for $u$ in terms of $f$. Also note that, since the equation solved by $u$ holds for every choice of $f$, the integral representation formula implies $\frac{\partial}{\partial t} p(t,x,y) = \frac{1}{2} \frac{\partial^{2}}{\partial x^{2}} p(t,x,y) - \alpha x \frac{\partial}{\partial x} p(t,x,y)$.
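
For what it's worth, carrying that out (or simply quoting the classical Ornstein-Uhlenbeck result, stated here as a sanity check rather than derived) gives, still with $\gamma = 1$, the Gaussian kernel \begin{equation*} p(t,x,y) = \sqrt{\frac{\alpha}{\pi\left(1 - e^{-2\alpha t}\right)}} \exp\left\{- \frac{\alpha \left(y - x e^{-\alpha t}\right)^{2}}{1 - e^{-2\alpha t}}\right\}, \end{equation*} i.e. a normal density in $y$ with mean $x e^{-\alpha t}$ and variance $\frac{1 - e^{-2\alpha t}}{2\alpha}$, which already exhibits the decaying mean you expected.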

Alternatively, suppose $\varphi$ is a nice function and $v$ solves the adjoint equation \begin{equation*} \left\{ \begin{array}{r l} - \frac{\partial v}{\partial t} = L^{*}v & \text{in} \, \, (0,T) \times \mathbb{R} \\ v(x,T) = \varphi(x) \end{array} \right. \end{equation*} Observe that \begin{align*} \frac{d}{ds} \left\{ \int_{\mathbb{R}} u(x,s) v(x,s) \, dx \right\} &= \int_{\mathbb{R}} \left(\frac{\partial u}{\partial s} v + u \frac{\partial v}{\partial s} \right) \, dx \\ &= \int_{\mathbb{R}} \left( (Lu)\, v - u\, L^{*}v \right) \, dx \\ &= 0, \end{align*} and thus, using $u(x,0) = f(x)$ and $v(x,T) = \varphi(x)$, \begin{equation*} \int_{\mathbb{R}} u(x,T) \varphi(x) \, dx = \int_{\mathbb{R}} f(x) v(x,0) \, dx. \end{equation*} This implies \begin{equation*} \int_{\mathbb{R}}f(x) v(x,0) \, dx = \int_{\mathbb{R}} \int_{\mathbb{R}} f(y) p(T,x,y) \varphi(x) \, dx \, dy. \end{equation*} Since $f$ was arbitrary, we deduce $v(y,0) = \int_{\mathbb{R}} \varphi(x) p(T,x,y) \, dx$. Since there's nothing special about $T$, we see that $v(y,s) = \int_{\mathbb{R}} \varphi(x) p(T - s, x, y) \, dx$. Since this is true independently of the choice of $\varphi$, we see that $p(t,x,y)$ solves, in the variables $(t,y)$, \begin{equation*} \frac{\partial}{\partial t} p(t,x,y) = \frac{1}{2} \frac{\partial^{2}}{\partial y^{2}} p(t,x,y) + \frac{\partial}{\partial y}\left\{\alpha y \, p(t,x,y)\right\}. \end{equation*}
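
For comparison with the question's notation (this is only a relabeling: write $x$ for the forward variable $y$, write $x_0$ for the starting point, and restore a general $\gamma$ so that the diffusion coefficient is $D = \gamma^{2}/2$), the forward equation reads \begin{equation*} \frac{\partial p(x,t)}{\partial t} = -\frac{\partial}{\partial x}\left[-\alpha x\, p(x,t)\right] + D \frac{\partial^{2} p(x,t)}{\partial x^{2}}, \end{equation*} which is exactly the Fokker-Planck equation in the question.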

Long story short, your Fokker-Planck equation for $p$ was fine; it is the transformed equation you were solving that was off by a minus sign. With your transform convention, $\mathcal{F}\left\{\dfrac{\partial}{\partial x}[x p(x,t)]\right\} = ik\,\mathcal{F}\left\{x p(x,t)\right\} = ik \cdot i\dfrac{\partial\hat{p}(k,t)}{\partial k} = -k\dfrac{\partial\hat{p}(k,t)}{\partial k}$, so the transformed PDE should read $\dfrac{\partial \hat{p}(k,t)}{\partial t} = -\alpha k \dfrac{\partial\hat{p}(k,t)}{\partial k} - k^2 D\hat{p}(k,t)$, and inverting its solution gives a Gaussian with mean $x_0 e^{-\alpha t}$, as expected. If you wanted, you could have used the easier-to-remember equation with $L$ instead of the equation with $L^{*}$.