I am a newbie to `R` and am trying to learn it. I have some financial data which is normally distributed. The standard Student's t-density is given by

$$g_v(z):= \frac{\Gamma ( \frac{v+1}{2})}{ \sqrt{ v\pi}\Gamma (\frac v2)}\left(1+\frac{z^2}{v} \right)^{- \frac{v+1}{2} },$$

and the location-scale density by

$$ g_{v, \mu ,\sigma}(z):= \frac{1}{ \sigma}g_v \left( \frac{z-\mu}{\sigma}\right) \quad \quad z \in \mathbb{R}.$$

I want to estimate the parameters $v, \mu, \sigma$ in this model with the maximum likelihood method, using the function `optim` in `R`. (The Student's t-distribution is implemented in `R` as the `dt` function.)

I am familiar with `optim`, but I have tried for days to figure out how to estimate these parameters. What function do I need to pass to `optim`, and how do I decide on the degrees of freedom for `dt`?

Any tips, explanations or examples would be greatly appreciated; I have searched for similar problems and could not find anything that helped.

## Best Answer

The first thing you need to do is write out the log-likelihood function for the analysis. Given observed data $z_1,...,z_n \sim \text{IID } g_{v, \mu, \sigma}$ you have the log-likelihood function:

$$\begin{align} \ell_\mathbf{z}(\mu, \sigma, v) &= -\frac{n}{2} \log(\pi) + n \log \Gamma(\tfrac{v+1}{2}) - n \log \Gamma(\tfrac{v}{2}) - \frac{n}{2} \log(v) - n \log(\sigma) \\[6pt] &\quad - \frac{v+1}{2} \sum_{i=1}^n \log \bigg( 1 + \frac{(z_i-\mu)^2}{v \sigma^2} \bigg). \end{align}$$
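One can check this expression numerically against `R`'s built-in `dt` function, since the location-scale log-density is `dt((z - mu)/sigma, df = v, log = TRUE) - log(sigma)`. A small sanity check (the parameter values below are arbitrary choices for illustration):

```r
# Log-likelihood written out term by term, following the formula above
loglike <- function(mu, sigma, v, z) {
  n <- length(z)
  -n/2 * log(pi) + n * lgamma((v + 1)/2) - n * lgamma(v/2) -
    n/2 * log(v) - n * log(sigma) -
    (v + 1)/2 * sum(log(1 + (z - mu)^2 / (v * sigma^2)))
}

# Compare against the built-in t density, shifted and scaled
z <- c(-1.3, 0.2, 4.7, 2.1)
a <- loglike(mu = 2, sigma = 1.5, v = 6, z = z)
b <- sum(dt((z - 2)/1.5, df = 6, log = TRUE) - log(1.5))
all.equal(a, b)   # TRUE
```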

Once you have the function you want to maximise, you can find the MLE using standard calculus methods or an automated optimisation function. To find the MLE with an optimisation function (e.g., `optim` in `R`) you program this function and then apply the optimiser using a stipulated starting point for the iteration. If you want the computation to perform better you can first determine the gradient vector and Hessian matrix for the log-likelihood function and build these into the optimisation (rather than relying on numerical estimation of these derivatives).

**Implementation in R:** Here is some code to implement this MLE analysis in `R` in a simple way, without analytic computation of the gradient or Hessian. First we program the log-likelihood function above, then we program a function to optimise it for an input vector of data. In order to use unconstrained optimisation, I have chosen to parameterise the objective function in terms of the unconstrained parameter vector $(\mu, \log \sigma, \log v) \in \mathbb{R}^3$. (This is a useful trick to use when programming optimisation routines, since it makes your functions more stable and less complicated. It can also improve efficiency in some cases.) Once these functions are programmed, we can see whether we get reasonable estimates on some mock data generated from your model, where the true parameters are known.
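A minimal sketch of such an implementation (the wrapper name `MLE.STUDENT.T`, the `BFGS` method, and the starting values are choices made here for illustration; `NEGLOGLIKE` is the name referred to below):

```r
# Negative log-likelihood of the location-scale Student's t model,
# parameterised by the unconstrained vector (mu, log(sigma), log(v))
NEGLOGLIKE <- function(par, z) {
  mu    <- par[1]
  sigma <- exp(par[2])
  v     <- exp(par[3])
  # dt() is the standard t density; shift and scale for the general form
  -sum(dt((z - mu)/sigma, df = v, log = TRUE) - log(sigma))
}

# MLE via optim, reporting estimates on the original parameter scale
MLE.STUDENT.T <- function(z, start = c(mean(z), log(sd(z)), log(10))) {
  fit <- optim(par = start, fn = NEGLOGLIKE, z = z, method = "BFGS")
  c(mu = fit$par[1], sigma = exp(fit$par[2]), v = exp(fit$par[3]))
}

# Mock data from the model with known true parameters
set.seed(1)
n     <- 1000
mu    <- 5
sigma <- 2
v     <- 4
z <- mu + sigma * rt(n, df = v)

est <- MLE.STUDENT.T(z)
print(est)
```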

As you can see, the MLE function appears to work well on this mock data, and it is able to estimate the parameters well. Have a read through this code and make sure you understand what each part is doing. Try rewriting your own code bit by bit to get it working and then compare your finished code with this code above to see how your version differs from the above.

In terms of answering your specific queries: as you can see, the input function for `optim` is the objective function you want to optimise. Since `optim` minimises its objective function by default, the input I have used here is the negative of the log-likelihood function, which I have called `NEGLOGLIKE`. The degrees-of-freedom is one of the variables being optimised, so we don't put in a fixed value for it (other than a starting value for the numerical optimisation).