Solved – Variance-gamma distribution: parameter estimation

Tags: distributions, estimation, maximum-likelihood, r

I have a fairly general question about the variance-gamma distribution: how does one estimate its parameters from a set of training points?

I tried to find the answer on the internet, but surprisingly managed to find only a couple of relevant links:

  1. The VarianceGamma R package (and its manual)
  2. The paper referenced in the R package manual.

I am not an expert in R, but what I saw in the package appears to be maximum likelihood estimation: it performs iterative optimisation with different methods, using a skew-Laplace fit to initialise the estimator.

In the paper, they first estimate the mean, variance, skewness, and kurtosis through sample moments. They then assume the asymmetry parameter $\theta$ is small and set $\theta^2 = \theta^3 = 0$. Finally, they solve the combined equations to obtain all the parameter estimators. That is what I understood.
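To make the moment-based approach concrete, here is a minimal base-R sketch of what such an estimator might look like. The closed-form expressions below are my reading of the small-$\theta$ approximation (under which the variance is roughly $\sigma^2$, the skewness roughly $3\theta\nu/\sigma$, and the kurtosis roughly $3(1+\nu)$ in the $(c, \sigma, \theta, \nu)$ parameterisation); check them against the paper before relying on them:

```r
# Hypothetical moment-based estimator for VG parameters (vgC, sigma, theta, nu),
# using the small-theta approximation (theta^2 = theta^3 = 0):
#   variance ~ sigma^2,  skewness ~ 3*theta*nu/sigma,  kurtosis ~ 3*(1 + nu)
vg_moment_fit <- function(x) {
  m <- mean(x)
  v <- var(x)
  s <- mean((x - m)^3) / v^(3 / 2)  # sample skewness
  k <- mean((x - m)^4) / v^2        # sample kurtosis (not excess)

  nu    <- k / 3 - 1                # requires sample kurtosis > 3
  sigma <- sqrt(v)
  theta <- s * sigma / (3 * nu)
  vgC   <- m - theta

  c(vgC = vgC, sigma = sigma, theta = theta, nu = nu)
}

# Usage: for roughly symmetric, heavy-tailed data, theta should come out
# close to 0 and nu should reflect the excess kurtosis
# vg_moment_fit(my_returns)
```

Note that this breaks down when the sample kurtosis is at or below 3 (the estimate of $\nu$ becomes non-positive), which is one practical limitation of the moment approach.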

So my questions are:

  1. How exactly is the optimisation performed in the R package? Do you know where it is described?
  2. What is the difference between the two methods? Which one is better to use in practice? (Maximum likelihood is presumably more accurate, but it is harder to implement and slower.) How restrictive is the small-$\theta$ assumption in the moment-based method?
  3. Where can I read more about variance-gamma parameter estimation?

I would be really grateful for any relevant replies, papers, and links. Thank you!

Best Answer

1. On p. 16 of the manual they mention that the package uses "BFGS" or "Nelder-Mead", the latter being the default. Take a look at the documentation for those algorithms to see how they differ. The optimisation is performed on the likelihood function; that is, the fitted parameters in the output of vgFit are the maximum likelihood estimates.

2. It is difficult to say which optimisation method is better in general. You can instead compare different methods and check whether the results coincide, using the command optim alongside the command vgFit. Below is code for maximising the likelihood function; with optim you can try several different optimisation methods.

library("VarianceGamma")

# Simulate 100 observations from a variance-gamma distribution
# with parameters (vgC, sigma, theta, nu) = (0, 1, 0, 1)
data <- rvg(100, vgC = 0, sigma = 1, theta = 0, nu = 1)

# Negative log-likelihood, using dvg from the VarianceGamma package;
# sigma and nu must be positive, so return Inf outside that region
ll <- function(par) {
  if (par[2] > 0 && par[4] > 0) {
    -sum(log(dvg(data, vgC = par[1], sigma = par[2],
                 theta = par[3], nu = par[4])))
  } else {
    Inf
  }
}

# Direct minimisation using the command optim; `method` takes a single
# string, so pick one of "Nelder-Mead", "BFGS", "CG", "L-BFGS-B" or "SANN"
# and compare the results ("Brent" is for one-dimensional problems only)
optim(c(0, 1, 0, 1), ll, method = "Nelder-Mead")

# Maximisation using the command vgFit
vgFit(data)

The advantage of vgFit is that you do not need to specify a starting value for the search: it implements three methods for choosing one, "US", "SL" and "MoM", with "SL" as the default. I would not trust these methods blindly; I would rather compare the results from vgFit and optim.
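As a sketch of how you might compare the starting-value strategies directly, the following assumes vgFit accepts a startValues argument taking "US", "SL" or "MoM", and that the fitted parameters are returned in the param component of the result, as described in the package manual (check ?vgFit for the exact interface):

```r
library("VarianceGamma")

set.seed(1)
data <- rvg(500, vgC = 0, sigma = 1, theta = 0, nu = 1)

# Fit with two different starting-value strategies
# (startValues and $param are my reading of the package manual)
fit_sl  <- vgFit(data, startValues = "SL")   # default: skew-Laplace start
fit_mom <- vgFit(data, startValues = "MoM")  # method-of-moments start

# If the optimisation is well behaved, both should land on
# (nearly) the same maximum likelihood estimates
rbind(SL = fit_sl$param, MoM = fit_mom$param)
```

If the two rows disagree noticeably, that is a sign the likelihood surface is awkward for this sample and the result depends on the starting point, which is exactly the situation where cross-checking against a direct optim run is worthwhile.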

3. You can check the references in the manual, for example:

Seneta, E. (2004). Fitting the variance-gamma model to financial data. Journal of Applied Probability, 41A, 177–187.

Kotz, S., Kozubowski, T. J., and Podgórski, K. (2001). The Laplace Distribution and Generalizations. Birkhäuser, Boston.

Madan, D. B. and Seneta, E. (1990). The variance gamma (V.G.) model for share market returns. Journal of Business, 63, 511–524.
