Solved – Quasi Maximum Likelihood estimator for GARCH with jumps (compound Poisson process with normal distribution)

Tags: distributions, estimation, garch, maximum-likelihood, r

I have to modify some R code to take jumps into account in my model.

First I simulate a GARCH(1,1) process:

$r_t=\mu+\sqrt{h_t}\epsilon_t$

$h_t=w_0 +\alpha(r_{t-1}-\mu)^2+\beta h_{t-1} $

where $\epsilon_t$ follows a standard normal distribution.
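
The simulation step isn't shown in the question; a minimal sketch of it could look like this (the parameter values are illustrative assumptions, not taken from the question):

```r
# Minimal GARCH(1,1) simulation sketch; parameter values are illustrative
set.seed(42)
n <- 5000
mu <- 0.01; omega0 <- 0.1; alpha <- 0.1; beta <- 0.8
eps <- rnorm(n)                       # standard normal innovations
h <- numeric(n); r <- numeric(n)
h[1] <- omega0 / (1 - alpha - beta)   # start at the unconditional variance
r[1] <- mu + sqrt(h[1]) * eps[1]
for (t in 2:n) {
  h[t] <- omega0 + alpha * (r[t-1] - mu)^2 + beta * h[t-1]
  r[t] <- mu + sqrt(h[t]) * eps[t]
}
```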

I then try to estimate the parameters, so I build and optimise the log-likelihood as follows:

    garch_loglik <- function(para, x, mu) {
      # Parameters
      omega0 <- para[1]
      alpha  <- para[2]
      beta   <- para[3]
      # Volatility and log-likelihood initialisation
      loglik <- 0
      h <- var(x)
      # Recursion over the sample
      for (i in 2:length(x)) {
        h <- omega0 + alpha * (x[i-1] - mu)^2 + beta * h
        loglik <- loglik + dnorm(x[i], mu, sqrt(h), log = TRUE)
      }
      return(-loglik)  # negative log-likelihood, for minimisation
    }
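
To actually fit the parameters, the negative log-likelihood can be passed to `optim`. A self-contained sketch, with a short simulated sample and illustrative starting values and box constraints:

```r
set.seed(1)
# Simulate a short GARCH(1,1) sample with known parameters (illustrative values)
n <- 2000; mu <- 0; true_par <- c(0.1, 0.1, 0.8)
h <- 1; x <- numeric(n); x[1] <- rnorm(1, mu, sqrt(h))
for (t in 2:n) {
  h <- true_par[1] + true_par[2] * (x[t-1] - mu)^2 + true_par[3] * h
  x[t] <- rnorm(1, mu, sqrt(h))
}
# Negative log-likelihood, as in the question but returning a value
garch_negloglik <- function(para, x, mu) {
  omega0 <- para[1]; alpha <- para[2]; beta <- para[3]
  loglik <- 0; h <- var(x)
  for (i in 2:length(x)) {
    h <- omega0 + alpha * (x[i-1] - mu)^2 + beta * h
    loglik <- loglik + dnorm(x[i], mu, sqrt(h), log = TRUE)
  }
  -loglik
}
fit <- optim(c(0.05, 0.05, 0.9), garch_negloglik, x = x, mu = mu,
             method = "L-BFGS-B", lower = c(1e-6, 1e-6, 1e-6),
             upper = c(1, 1, 0.999))
fit$par  # estimated (omega0, alpha, beta)
```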

Now I have a modified GARCH:

$r_t=\mu+\sqrt{h_t}(\epsilon_t+\sum_{i=1}^{N_{t}} x_{i,t})$

$h_t=w_0 +\alpha(r_{t-1}-\mu)^2+\beta h_{t-1}$

where $\epsilon_t$ follows a standard normal distribution, $N_t$ a Poisson distribution with intensity $0.1$, and each $x_{i,t}$ a normal distribution with mean $-0.1$ and standard deviation $0.3$.

I have modified the simulation of my GARCH process accordingly:

    pois <- rpois(n, 0.1)
    # one compound-Poisson jump sum per observation
    jumps <- sapply(pois, function(k) sum(rnorm(k, -0.1, 0.3)))
    eps <- rnorm(n, 0, 1) + jumps
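
Putting the jump innovations into the full recursion, a sketch of the modified simulation (GARCH parameter values again illustrative; the jump parameters are those from the question):

```r
set.seed(7)
n <- 5000
mu <- 0.01; omega0 <- 0.1; alpha <- 0.1; beta <- 0.8
# each innovation is a standard normal plus a compound-Poisson sum of normals
pois <- rpois(n, 0.1)
jumps <- sapply(pois, function(k) sum(rnorm(k, -0.1, 0.3)))
eps <- rnorm(n) + jumps
h <- numeric(n); r <- numeric(n)
h[1] <- omega0 / (1 - alpha - beta)
r[1] <- mu + sqrt(h[1]) * eps[1]
for (t in 2:n) {
  h[t] <- omega0 + alpha * (r[t-1] - mu)^2 + beta * h[t-1]
  r[t] <- mu + sqrt(h[t]) * eps[t]
}
```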

But I struggle to modify the log-likelihood.

From what I understand I have to modify the line with dnorm in the log-likelihood, but I don't know how to find the distribution of a normal variable plus a sum of normals whose number of terms is Poisson-distributed.

Is it possible to compute it directly? (Is there a function to 'mix' distributions?)
Or do I have to build it manually? (Simulating many draws to approximate the distribution.)
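
For the "build it manually" route mentioned in the question, the innovation density can be approximated by simulating many draws and smoothing them with a kernel density estimate (jump parameters as in the question):

```r
set.seed(3)
# Monte Carlo approximation of the innovation density:
# standard normal plus a Poisson(0.1)-compounded sum of N(-0.1, 0.3^2) jumps
N <- 100000
k <- rpois(N, 0.1)
jumps <- sapply(k, function(j) sum(rnorm(j, -0.1, 0.3)))
draws <- rnorm(N) + jumps
dens <- density(draws)  # kernel density estimate of the mixed distribution
# the mixture has mean lambda * mu_jump = 0.1 * (-0.1) = -0.01
mean(draws)
```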

Best Answer

We can show that the conditional density of $Y_t$ given $F_{t-1}$ is:

$$ \sum_{k=0}^{\infty} \frac{\lambda^k e^{-\lambda}}{k!} \frac{1}{\sqrt{2 \pi(1+k\sigma^2)h_t^{*}}} e^{-\frac{(y-(r+m_t^{*}+\sqrt{h_t^{*}}k\mu))^2}{2(1+k\sigma^2)h_t^{*}}} $$

with:

$$ m_t^{*} = m - \frac{\sqrt{h_t^{*}}\lambda\mu}{\sqrt{1+\lambda(\mu^2+\sigma^2)}} $$

$$ h_t^{*} = \frac{h_t}{1 + \lambda(\mu^2 + \sigma^2)}$$
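
A sketch of where the mixture comes from: conditional on $N_t = k$ jumps, the innovation is a sum of $k+1$ independent normal variables, hence itself normal; averaging over the Poisson weights gives

$$ f(y \mid F_{t-1}) = \sum_{k=0}^{\infty} \frac{\lambda^k e^{-\lambda}}{k!}\, \phi\!\left(y;\ r+m_t^{*}+\sqrt{h_t^{*}}\,k\mu,\ (1+k\sigma^2)h_t^{*}\right), $$

where $\phi(y; a, b)$ denotes the normal density with mean $a$ and variance $b$. The rescaling by $h_t^{*}$ standardises the jump-inflated innovation: since $\mathrm{Var}(\epsilon_t + \sum_{i=1}^{N_t} x_{i,t}) = 1 + \lambda(\mu^2+\sigma^2)$, the conditional variance of $Y_t$ remains $h_t$.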

So the following code computes the negative log-likelihood of the model and can be minimised (e.g. with optim) to fit the parameters to a dataset.

    # Conditional density of Y_t: Poisson mixture of normals,
    # truncated at k = 5 (enough for a small jump intensity)
    density_condi <- function(y, r, m, mu, lambda, h, sigma) {
      res <- 0
      for (k in 0:5) {
        res <- res + (exp(-lambda) * lambda^k / factorial(k)) *
          (1 / sqrt(2 * pi * (1 + k * sigma^2) * h)) *
          exp(-((y - (r + m + sqrt(h) * k * mu))^2) / (2 * (1 + k * sigma^2) * h))
      }
      return(res)
    }
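
The infinite Poisson mixture is truncated at k = 5 in density_condi; for a small jump intensity this loses essentially nothing, as a quick check shows:

```r
# Poisson mass neglected by truncating the mixture at k = 5
lambda <- 0.1
tail_mass <- 1 - ppois(5, lambda)   # P(N_t > 5)
tail_mass                            # on the order of 1e-9
```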
    garch_gjr_jumps_loglik <- function(para, Y, r) {
      # Parameters
      omega  <- para[1]
      alpha  <- para[2]
      beta   <- para[3]
      theta  <- para[4]  # GJR leverage term
      mu     <- para[5]  # jump-size mean
      sigma  <- para[6]  # jump-size standard deviation
      lambda <- para[7]  # jump intensity
      m      <- para[8]
      loglik <- 0
      h <- var(Y)
      chi <- 1 + lambda * (mu^2 + sigma^2)
      for (i in 2:length(Y)) {
        h <- omega + alpha * (Y[i-1] - r - m)^2 + beta * h +
          theta * (max(0, Y[i-1] - r - m))^2
        h_star <- h / chi
        m_star <- m - sqrt(h) * lambda * mu / sqrt(chi)
        loglik <- loglik + log(density_condi(Y[i], r, m_star, mu, lambda, h_star, sigma))
      }
      print(-loglik)  # monitor the objective during optimisation
      return(-loglik)
    }

FYI: on the S&P 500 this model gives an alpha of 0.1579 and a theta of -0.1635, which shows the importance of modelling the leverage effect (GJR model); the use of jumps is justified by the non-Gaussianity of the residuals in the simple GJR model.