I will give you an answer from my experience developing the R package stsm that is introduced in this document. By default the package uses the function `optimize`, which combines a golden section search with successive parabolic interpolation. Other line search algorithms, such as the zoom line search, will do the job similarly well. The algorithm in `optimize` is convenient because it does not require the gradient of the function to be minimized. Here is some pseudo code to compute the optimal step size:

```
fcn.step <- function(step, model, direction)
{
  # parameters after moving 'step' units along the search direction
  trial.pars <- model$pars + step * direction
  # value to be minimized: the negative log-likelihood at the trial parameters
  -logLik(model, pars = trial.pars)
}
# one-dimensional search for the step size over [0, 1],
# using 'optimize' as described above
s <- optimize(f = fcn.step, interval = c(0, 1),
  model = model, direction = direction)$minimum
```

`fcn.step` is the function to be minimized; its only free argument is the step size, since the parameters of the model are fixed at the values from the last iteration of the BFGS optimization algorithm. The negative of the log-likelihood function is minimized with respect to the step size; `direction` is the search direction computed at the current iteration of the BFGS algorithm, $\tilde{\pi}(\psi)$ in your notation.

The choice of the upper limit of the interval over which the optimal step size `s` is searched is also important. By default, the package `stsm` calculates the maximum value of `s` that is compatible with positive values of the variance parameters updated in equation (3). If this value is lower than $1$, it is taken as the upper bound passed to the line search algorithm; otherwise the bound is set equal to $1$. You may look at the function `step.maxsize` in the package, or the function `calc.constraint` in this implementation of the BFGS algorithm.
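As a rough sketch of the kind of bound just described (the function name and details below are my own assumptions for illustration, not the package's actual code), the largest step that keeps every variance positive along the search direction can be computed componentwise:

```r
# Largest step 's' such that pars + s * direction stays strictly positive,
# capped at 'cap' (the default upper bound of 1 mentioned above).
max.step <- function(pars, direction, cap = 1)
{
  # only components moving downward can drive a variance to zero
  neg <- direction < 0
  if (!any(neg))
    return(cap)
  # pars + s * direction = 0  =>  s = -pars / direction (componentwise)
  bound <- min(-pars[neg] / direction[neg])
  min(bound, cap)
}

# example: variances c(0.5, 2) with direction c(-1, 1):
# the first variance hits zero at s = 0.5, so that is the bound
max.step(c(0.5, 2), c(-1, 1))  # 0.5
```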

With this approach you can apply the BFGS algorithm without the risk of converging to a local optimum with negative variance parameters. Otherwise, I would recommend either the L-BFGS-B algorithm, which allows setting a lower bound of zero on the parameter space where the algorithm searches for the optimum, or a reparameterization of the model, e.g. `variance = theta^2` or `variance = exp(theta)`, maximizing the likelihood with respect to `theta`.
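As a minimal sketch of the `variance = exp(theta)` reparameterization (a toy zero-mean Gaussian model for illustration, not the stsm API), the likelihood is maximized over an unconstrained `theta` while the implied variance stays positive by construction:

```r
set.seed(1)
x <- rnorm(100, sd = 2)

# negative log-likelihood of a zero-mean Gaussian, parameterized by theta
negloglik <- function(theta, x)
{
  variance <- exp(theta)  # always positive, so no bound is needed
  -sum(dnorm(x, mean = 0, sd = sqrt(variance), log = TRUE))
}

# unconstrained BFGS over theta; the MLE of the variance is mean(x^2)
fit <- optim(par = 0, fn = negloglik, x = x, method = "BFGS")
exp(fit$par)  # estimated variance, close to mean(x^2)
```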

As regards the EM algorithm, as you mention, it is slow to converge. Strangely enough, it converges more slowly as it approaches the local optimum. For some insight into this issue and a modified EM algorithm you may look at this document; I am its author, so you may send me an e-mail if you have any questions about it or if you want a copy (the link is not always available).

Optimization of the likelihood function of state space models can be hard in practice. Some enhancements to the general-purpose optimization algorithm are recommended. For example, one of the parameters of the model can be concentrated out of the likelihood function; maximum likelihood in the frequency domain enjoys some advantages from a computational point of view and may provide good values to be used as starting values in the optimization of the time domain likelihood function.
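To illustrate the idea of concentrating a parameter out of the likelihood, here is a toy sketch for a plain Gaussian sample (deliberately much simpler than a state space model): for a fixed mean, the MLE of the variance is available in closed form, so it can be substituted back and the numerical search runs over one fewer parameter.

```r
set.seed(2)
x <- rnorm(200, mean = 5, sd = 3)
n <- length(x)

# profile (concentrated) negative log-likelihood over the mean:
# for fixed mu the MLE of the variance is mean((x - mu)^2), which is
# plugged back into the Gaussian log-likelihood
neg.profile.loglik <- function(mu)
{
  s2.hat <- mean((x - mu)^2)
  0.5 * n * (log(2 * pi) + log(s2.hat) + 1)
}

# one-dimensional search instead of a two-parameter optimization
fit <- optimize(neg.profile.loglik, interval = range(x))
fit$minimum  # close to mean(x)
```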

In other words, does it mean that if I know the function is concave,
then I can tell where the global minimum of the error is?

Yes, you got the gist of it. You compute the first two derivatives of the function, which is effectively a way to fit a parabola to the loss function. Once you know the parabola's equation, you locate its minimum and jump right there, hoping the true minimum is somewhere in the vicinity. You keep repeating the procedure until you're close enough to a minimum.

There are variations of second order approaches, of course. For instance, you might know the exact second derivative (Hessian), or you might make a couple of steps and numerically estimate the curvature etc.
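A minimal sketch of this parabola-fitting idea is one-dimensional Newton's method; the test function below is my own choice for illustration:

```r
# Newton's method in one dimension: at each iterate, fit a parabola from the
# first two derivatives and jump to its minimum.
newton1d <- function(grad, hess, x0, tol = 1e-8, maxit = 100)
{
  x <- x0
  for (i in seq_len(maxit)) {
    step <- grad(x) / hess(x)  # minimum of the local parabola
    x <- x - step
    if (abs(step) < tol) break
  }
  x
}

# f(x) = exp(x) - 2x has its minimum where f'(x) = exp(x) - 2 = 0,
# i.e. at x = log(2)
est <- newton1d(grad = function(x) exp(x) - 2,
                hess = function(x) exp(x),
                x0 = 0)
est  # close to log(2)
```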

## Best Answer

A real-valued function $f$ is convex if, for all $x, y$ and $\alpha \in [0, 1]$, the inequality below holds

$$ f(\alpha x +(1-\alpha)y) \leq \alpha f(x) +(1-\alpha) f(y) $$

It is strictly convex if the inequality holds with $<$ for $x \neq y$ and $\alpha \in (0, 1)$. Now, the second-order condition can only be used for twice-differentiable functions (after all, you need to be able to compute its second derivatives): $f$ is convex if $\nabla_{\boldsymbol{x}}^2 f(\boldsymbol{x}) \succeq \mathbf{0}$ and strictly convex if $\nabla_{\boldsymbol{x}}^2 f(\boldsymbol{x}) \succ \mathbf{0}$.

Finally, the second-order condition does not fully overlap the first-order one: a linear function, for example, has a Hessian that is identically $\mathbf{0}$, so it satisfies $\succeq \mathbf{0}$ and is convex, but it meets the defining inequality only with equality and is never strictly convex.
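As a quick numerical illustration of the defining inequality (a sketch, with $f(x) = x^2$ and the endpoints chosen arbitrarily):

```r
# check f(alpha*x + (1-alpha)*y) <= alpha*f(x) + (1-alpha)*f(y)
# on a grid of alpha values for the convex function f(x) = x^2
f <- function(x) x^2
x <- -1
y <- 3
alpha <- seq(0, 1, by = 0.01)
lhs <- f(alpha * x + (1 - alpha) * y)     # the function along the segment
rhs <- alpha * f(x) + (1 - alpha) * f(y)  # the chord between f(x) and f(y)
all(lhs <= rhs)  # TRUE: the chord lies above the function
```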