[Math] Different versions of Girsanov theorems

stochastic-calculus, stochastic-processes

I am reading two different versions of Girsanov's theorem, both about changing the probability measure so that a shifted Brownian motion is again a Brownian motion.

Wikipedia has the following Girsanov theorem:

If $X$ is a continuous process and $W$ is a Brownian motion under the measure $P$, then
$$
\tilde W_t = W_t - [W, X]_t
$$
is a Brownian motion under $Q$.

The probability measure $Q$ is defined on $(\Omega,\mathcal{F})$ such that the Radon–Nikodym derivative is
$$
\frac{dQ}{dP} \Big|_{\mathcal{F}_t} = Z_t = \exp \left( X_t - \frac{1}{2} [X]_t \right).
$$
Here $X_t$ is a process with $X_0 = 0$ that is adapted to the filtration of the Brownian motion.

Shreve's Stochastic Calculus for Finance has the following Girsanov theorem:

Let $\Theta(t), t \in [0,T]$, be a stochastic process adapted to the filtration of the Brownian motion $W(t), t \in [0,T]$. Let $P$ be the probability measure of the underlying probability space.

Define
$$
Z(t) := \exp\left( -\int_0^t \Theta(u)\, dW(u) - \frac{1}{2} \int_0^t \Theta^2(u)\, du \right)
$$
Let $\tilde{P}$ be the probability measure that is absolutely continuous with respect to $P$ and whose Radon–Nikodym derivative with respect to $P$ is $Z(T)$.

Then
$$
\tilde{W}(t) = W(t) + \int_0^t \Theta(u)\, du, \quad t \in [0,T]
$$
is a Brownian motion under $\tilde{P}$.

When comparing the two versions, I notice the following things:

  1. Wikipedia's $X$ and Shreve's $\Theta$ play the same role, so why do the definitions of $\tilde{W}$, one in terms of $X$ and the other in terms of $\Theta$, look different?

  2. Why are the Radon–Nikodym derivatives of the new measure with respect to the original measure also different in the two versions?

    The processes $Z_t$ and $Z(t)$ in the two versions play the same role, but why is one defined in terms of $X$ and the other in terms of $\Theta$?

    Also, the Radon–Nikodym derivative in Wikipedia seems to be specified for every $t \in [0, \infty)$, while the one in Shreve is specified only at the terminal time $T$?

I was wondering if someone could point out the relation between the two versions and explain why the above differences exist despite the similarities?

Thanks and regards!

Best Answer

This is somewhat lengthy, but I think you will better understand how Girsanov actually works. The theorems you stated are more applications of Girsanov. The motivation behind Girsanov is the following: you are interested in how semimartingales behave under a change of measure. Since the finite variation part does not change, the question reduces to how local martingales behave under a change of measure. As I was taught, Girsanov answers this question:

Suppose you have $Q\approx P$ and assume for simplicity that the density process $Z$ is continuous. If you have a continuous local martingale $M$ null at zero (w.r.t. $P$), i.e. $M\in \mathcal{M}_{0,loc}^c(P)$, then $$\bar{M}=M-\int\frac{1}{Z}\, d\langle Z, M\rangle = M-\langle L,M\rangle \in \mathcal{M}_{0,loc}^c(Q),$$ where we write $Z=Z_0\mathcal{E}(L)$.
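
To see why the two expressions for the compensator coincide (a one-line computation, not part of the statement itself): $Z=Z_0\mathcal{E}(L)$ means $dZ_t = Z_t\, dL_t$, hence $d\langle Z, M\rangle_t = Z_t\, d\langle L, M\rangle_t$ and
$$
\int_0^t \frac{1}{Z_s}\, d\langle Z, M\rangle_s = \int_0^t \frac{Z_s}{Z_s}\, d\langle L, M\rangle_s = \langle L, M\rangle_t .
$$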

This is what I would refer to as Girsanov's theorem. Note that this implies in particular that $M$ is a $Q$-semimartingale. Of course there are generalizations of this theorem (general $Z$, etc.).

Both of your theorems are the same; they are a special case of Girsanov. Take $M=W$, where $W$ is a $P$-Brownian motion. As an application of Girsanov you get:

If $W$ is a $P$-Brownian motion and $Q\approx P$ has a density process of the form $Z=\mathcal{E}(\int \Theta_s\, dW_s)$ for a predictable process $\Theta$, then under $Q$ the process $W$ is a Brownian motion with drift, i.e. $$ W=\bar{W}+\int\Theta_s\, ds$$ for a $Q$-Brownian motion $\bar{W}$.
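
To see how this follows from the general statement above (a short sketch, using the same bracket computation that appears again further below): take $M=W$ and $L=\int\Theta_s\, dW_s$, so that
$$
\bar W = W - \langle L, W\rangle = W - \int \Theta_s\, d\langle W, W\rangle_s = W - \int \Theta_s\, ds \in \mathcal{M}_{0,loc}^c(Q).
$$
Since an equivalent change of measure does not alter quadratic variation, $\langle \bar W\rangle_t = \langle W\rangle_t = t$, and Lévy's characterization shows that $\bar W$ is a $Q$-Brownian motion; rearranging gives the display above.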

This is an immediate consequence of Girsanov, and the proof is straightforward using Lévy's characterization of Brownian motion. However, in most cases you have to go the other way around: usually you do not already have a $Q\approx P$. This means you start with a probability measure $P$ and want to construct an equivalent probability measure $Q$ such that the density process $Z$ is a stochastic exponential. Hence you start with $L\in\mathcal{M}_{0,loc}^c(P)$ and define $Z:=\mathcal{E}(L)$. You hope that $Z$ can be used to define an equivalent probability measure $Q$ via $\frac{dQ}{dP}=Z_\infty$.

Since $Z=\mathcal{E}(L)$, the process $Z$ is a strictly positive local martingale. Therefore it is a supermartingale on $[0,\infty)$ (use Fatou to prove this; see the sketch below the list). By the supermartingale convergence theorem, $Z_t$ converges $P$-a.s. to some $Z_\infty$. The problem is that $Z_\infty$ can be $0$, or $E[Z_\infty]<1$, or both. As already mentioned, you want to define $\frac{dQ}{dP}:=Z_\infty$, and you want $Q$ to be equivalent to $P$, or at the very least absolutely continuous with respect to $P$, i.e. $Q\ll P$. Hence you need at least

  1. $Z_\infty >0$
  2. $E[Z_\infty]=1$.
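
Here is the Fatou argument alluded to above, as a quick sketch: take a localizing sequence of stopping times $T_n\uparrow\infty$ such that each stopped process $Z^{T_n}$ is a true martingale. Then for $s\le t$, conditional Fatou (which applies since $Z\ge 0$) gives
$$
E[Z_t\mid\mathcal{F}_s] = E\big[\liminf_n Z_{t\wedge T_n}\mid\mathcal{F}_s\big] \le \liminf_n E[Z_{t\wedge T_n}\mid\mathcal{F}_s] = \liminf_n Z_{s\wedge T_n} = Z_s,
$$
which is the supermartingale property.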

A priori, as said before, it may happen that $Z_\infty=0$ and/or $E[Z_\infty]<1$. Hence we must find conditions under which $1.$ and $2.$ hold. For $1.$ we must have $\langle L\rangle_\infty < \infty$ (by the definition $Z_\infty=e^{L_\infty -\frac{1}{2}\langle L \rangle_\infty}$). For $2.$ you can use: $E[Z_\infty]=1$ if and only if $Z$ is a uniformly integrable $P$-martingale on $[0,\infty]$. Now there is a famous condition, called Novikov's condition, which gives a sufficient condition for $Z=\mathcal{E}(L)$ to be a uniformly integrable martingale on $[0,\infty]$, namely $E\big[e^{\frac{1}{2}\langle L\rangle_\infty}\big]<\infty$.
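
As a concrete illustration (not part of the original answer): in Shreve's setting, if $\Theta$ is bounded, say $|\Theta(t)|\le C$ on $[0,T]$, and is set to zero after $T$, then for $L=\int\Theta_s\, dW_s$ (the sign of $\Theta$ does not affect the bracket) we get $\langle L\rangle_\infty = \int_0^T \Theta^2(u)\, du \le C^2 T$, so
$$
E\Big[e^{\frac{1}{2}\langle L\rangle_\infty}\Big] \le e^{\frac{1}{2}C^2 T} < \infty .
$$
Novikov's condition holds, $E[Z_\infty]=1$, and the change of measure is justified.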

Coming back to your question: the theorem from Wikipedia is exactly my second statement, with $X:=\int\Theta_s\, dW_s$. Note that $[W,X]=\langle W,X\rangle = \langle W,\int \Theta_s\, dW_s \rangle = \int \Theta_s\, d\langle W,W\rangle_s = \int\Theta_s\, ds$. Furthermore $Z=\mathcal{E}(X)$. The whole difference between the theorem from Wikipedia and the one from Shreve is that Shreve specifies the process $X$ further: he assumes that $X$ has a particular form, namely a stochastic integral with respect to the Brownian motion. That is the only difference.
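
To make the match with Shreve completely explicit, here is a small computation that also accounts for the sign conventions (Shreve puts a minus sign in the exponent of $Z$, so his setup corresponds to choosing $X_t := -\int_0^t \Theta(u)\, dW(u)$ in the Wikipedia version): then $[X]_t = \int_0^t \Theta^2(u)\, du$ and $[W,X]_t = -\int_0^t \Theta(u)\, du$, so
$$
Z_t = \exp\Big( X_t - \tfrac{1}{2}[X]_t \Big) = \exp\Big( -\int_0^t \Theta(u)\, dW(u) - \tfrac{1}{2}\int_0^t \Theta^2(u)\, du \Big) = Z(t)
$$
and
$$
\tilde W_t = W_t - [W,X]_t = W(t) + \int_0^t \Theta(u)\, du,
$$
which are exactly Shreve's formulas. Moreover, Wikipedia specifies the density process restricted to each $\mathcal{F}_t$, while Shreve only gives the terminal density $Z(T)$; the two are consistent because $Z$ is a martingale, so $\frac{d\tilde P}{dP}\big|_{\mathcal{F}_t} = E[Z(T)\mid\mathcal{F}_t] = Z(t)$.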

In finance you often work on a finite time horizon, i.e. on $[0,T]$. You can easily extend everything to $[0,\infty)$ by setting $\Theta$ equal to zero outside $[0,T]$, so that $Z$ is constant after time $T$.
