I will give you an answer from my experience developing the R package stsm that is introduced in this document. By default the package uses the function optimize, which combines a golden section search with successive parabolic interpolation. Other line search algorithms, such as the zoom line search, will do the job similarly well. The algorithm in optimize is convenient because it does not require the gradient of the function being minimized. Here is a sketch of the code to compute the optimal step size:
fcn.step <- function(step, model, direction)
{
  # negative log-likelihood at the parameters obtained by moving "step"
  # units along the search direction from the current parameter values
  trial.pars <- model$pars + step * direction
  -logLik(model, pars = trial.pars)
}
# arguments after the interval are passed on to fcn.step
s <- optimize(f = fcn.step, interval = c(0, 1),
  model = model, direction = direction)$minimum
fcn.step is the function to be minimized; the only argument over which the search is done is the step size, while the parameters of the model are fixed at the values from the last iteration of the BFGS optimization algorithm. The negative of the log-likelihood function is minimized with respect to the step size; direction is the optimal direction vector computed at the current iteration of the BFGS algorithm, $\tilde{\pi(\psi)}$ in your notation.
The choice of the upper limit of the interval in which the optimal step size s is searched is also important. By default, the package stsm calculates the maximum value of s that is compatible with positive values for the variance parameters updated in equation (3). If this value is lower than $1$, it is taken as the upper bound passed to the line search algorithm; otherwise the bound is set equal to $1$. You may look at the function step.maxsize in the package or the function calc.constraint in this implementation of the BFGS algorithm.
With this approach you can apply the BFGS algorithm without the risk of reaching a local optimum with negative variance parameters. Otherwise, I would recommend either the L-BFGS-B algorithm, which allows setting a lower bound of zero on the parameter space where the algorithm searches for the optimum, or a reparameterization of the model, e.g. variance = theta^2 or variance = exp(theta), maximizing the likelihood with respect to theta.
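A minimal sketch of the reparameterization option, assuming the same model object and logLik method as in the code above and the mapping variance = exp(theta):

# Hypothetical sketch: optimize over theta so that the variances exp(theta) stay positive
fcn.theta <- function(theta, model)
{
  -logLik(model, pars = exp(theta))
}
# starting values assume the current variance parameters are strictly positive
res <- optim(par = log(model$pars), fn = fcn.theta, model = model, method = "BFGS")
exp(res$par)   # estimated variance parameters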
As regards the EM algorithm, as you mention, it is slow to converge. Strangely enough, it converges more slowly as it approaches the local optimum. For some insight into this issue and a modified EM algorithm you may look at this document. I am the author of that document; you may send me an e-mail if you have any questions about it or if you want a copy (the link is not always available).
Optimization of the likelihood function of state space models can be hard in practice. Some enhancements to the general-purpose optimization algorithm are recommended. For example, one of the parameters of the model can be concentrated out of the likelihood function; maximum likelihood in the frequency domain enjoys some advantages from a computational point of view and may provide good starting values for the optimization of the time domain likelihood function.
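As an illustration of the first point (a standard result for Gaussian state space models, stated here under the assumption that all variances are specified relative to a common scale $\sigma^2$): with one-step prediction errors $v_t$ and scaled prediction error variances $f_t$ from the Kalman filter, the log-likelihood is
$$ \log L = -\frac{n}{2}\log 2\pi - \frac{n}{2}\log \sigma^2 - \frac{1}{2}\sum_{t=1}^{n}\log f_t - \frac{1}{2\sigma^2}\sum_{t=1}^{n}\frac{v_t^2}{f_t}, \qquad \hat{\sigma}^2 = \frac{1}{n}\sum_{t=1}^{n}\frac{v_t^2}{f_t}, $$
and substituting $\hat{\sigma}^2$ back into $\log L$ leaves a concentrated likelihood with one parameter fewer to optimize numerically.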
One simple way to see what you've described is to compare logistic regression with the equivalent two-class multinomial regression:
In the case of logistic regression, your $f(x, w)$ has one output, let's call it $\hat{y}$.
$$ p(Y=true) = \frac{1}{1 + e^{-\hat{y}}}$$
$$ p(Y=false) = 1 - p(Y=true) = \frac{e^{-\hat{y}}}{1 + e^{-\hat{y}}} $$
$$ \log({odds}) = \log{\frac{p(Y=true)}{p(Y=false)}} = \log{e^{\hat{y}}} = \hat{y} $$
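A quick numeric check of this identity in R (plogis is the standard logistic CDF, i.e. the sigmoid above; the value of y.hat is arbitrary):

y.hat <- 0.7
p.true <- plogis(y.hat)        # 1 / (1 + exp(-y.hat))
log(p.true / (1 - p.true))     # recovers y.hat, the log-odds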
In the case of two-class multinomial regression, your $f(x, w)$ now has two outputs, let's call them $\hat{y}_1$ and $\hat{y}_2$.
$$ p(Y=true) = \frac{e^{\hat{y}_1}}{e^{\hat{y}_1} + e^{\hat{y}_2}} $$
$$ p(Y=false) = \frac{e^{\hat{y}_2}}{e^{\hat{y}_1} + e^{\hat{y}_2}} $$
$$ \log(odds) = \log{\frac{p(Y=true)}{p(Y=false)}} = \log{e^{\hat{y}_1 - \hat{y}_2}} = \hat{y}_1 - \hat{y}_2 $$
In this case, the log-odds of $Y$ are represented by the difference of the two outputs: $\hat{y}_1 - \hat{y}_2$. This system is under-constrained, since you get the same probabilities if you add the same constant to both $\hat{y}_1$ and $\hat{y}_2$.
It's possible to pin one output, e.g. $\hat{y}_1 = 0$, so that all other classes are constrained in relation to it. But in practice the problem goes away once you add a prior on your $f$ such that a single parameterization (i.e. $w = 0$) is more likely a priori.
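A short R check of that shift invariance, using a hypothetical pair of outputs:

softmax <- function(y) exp(y) / sum(exp(y))
y <- c(1.3, -0.4)                        # hypothetical outputs y1, y2
all.equal(softmax(y), softmax(y + 5))    # TRUE: shifting both outputs changes nothing
plogis(y[1] - y[2]) - softmax(y)[1]      # ~0: fixing y2 = 0 recovers the logistic form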
Best Answer
With linear regression, BFGS and LBFGS would be a major step backwards. That's because the solution can be directly written as
$\hat \beta = (X^T X)^{-1} X^T Y$
It's worth noting that directly using the above equation to calculate $\hat \beta$ (i.e. inverting $X^T X$ and then multiplying by $X^T Y$) is itself a poor way to calculate $\hat \beta$ numerically; solving the least squares problem via a QR decomposition of $X$, as R's lm does, is more stable.
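A small R illustration on simulated data (purely for demonstration; on this well-conditioned example the two agree to machine precision, but the QR route is the numerically safer one in general):

set.seed(1)
X <- cbind(1, matrix(rnorm(100 * 3), 100, 3))    # design matrix with intercept
y <- X %*% c(2, -1, 0.5, 0) + rnorm(100)
beta.normal <- solve(t(X) %*% X) %*% t(X) %*% y  # the textbook formula
beta.qr <- qr.solve(X, y)                        # QR-based least squares solution
all.equal(c(beta.normal), c(beta.qr))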
Also, gradient descent is only recommended for linear regression in extremely special cases, so I wouldn't say gradient descent is "related" to linear regression.