How does the Adam method of stochastic gradient descent work?

adam, gradient descent, neural networks, optimization

I'm familiar with basic gradient descent algorithms for training neural networks. I've read the paper proposing Adam, "Adam: A Method for Stochastic Optimization".

While I've definitely gained some insights (at least), the paper seems too high level for me overall. For example, a cost function $J(\theta)$ is often a sum of many different functions, so a vast amount of computation is needed to optimize its value; stochastic gradient descent – as far as I understand the topic – computes the update using only a subset of these functions. To me it is unclear how Adam does this and why it results in a decreased training error for the whole of $J(\theta)$.

I think Adam updates its gradient by taking the previous gradient into account. They call it something like utilizing momentum? What exactly is this momentum? According to the algorithm on page two of the paper, it is some kind of moving average, like estimates of the first and second moments of the "regular" gradient?
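For reference, this is how I read the update rule on page two (with $g_t$ the gradient at step $t$), in case I'm misreading it:

$$m_t = \beta_1 m_{t-1} + (1-\beta_1)\, g_t, \qquad v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2,$$

$$\hat{m}_t = \frac{m_t}{1-\beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1-\beta_2^t}, \qquad \theta_t = \theta_{t-1} - \alpha\, \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}.$$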

Practically, I would suspect that Adam enables one to use larger effective step sizes, and therefore to decrease the training error faster, in combination with the stochastic approximation. Thus, the resulting update vector should "jump" around more in spatial dimensions, rather than tracing out some curve like normal gradient descent algorithms would do.

Can someone demystify how Adam works? In particular, why does Adam's method converge, and what exactly is the benefit?

Best Answer

The Adam paper says, "...many objective functions are composed of a sum of subfunctions evaluated at different subsamples of data; in this case optimization can be made more efficient by taking gradient steps w.r.t. individual subfunctions..." Here, they just mean that the objective function is a sum of errors over training examples, and training can be done on individual examples or minibatches. This is the same as in stochastic gradient descent (SGD), which is more efficient for large scale problems than batch training because parameter updates are more frequent.
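To make that concrete, here is a minimal sketch of plain minibatch SGD (the function names and data are hypothetical, just to show where the "subfunctions" come in):

```python
import numpy as np

def sgd(theta, grad_fn, data, lr=0.01, batch_size=32, epochs=10):
    """Plain minibatch SGD: each update uses only a small subsample of the data."""
    n = len(data)
    for _ in range(epochs):
        np.random.shuffle(data)
        for i in range(0, n, batch_size):
            batch = data[i:i + batch_size]              # one "subfunction" of the total objective
            theta = theta - lr * grad_fn(theta, batch)  # frequent, cheap parameter updates
    return theta
```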

As for why Adam works, it uses a few tricks.

One of these tricks is momentum, which can give faster convergence. Imagine an objective function that's shaped like a long, narrow canyon that gradually slopes toward a minimum. Say we want to minimize this function using gradient descent. If we start from some point on the canyon wall, the negative gradient will point in the direction of steepest descent, i.e. mostly toward the canyon floor. This is because the canyon walls are much steeper than the gradual slope of the canyon toward the minimum. If the learning rate (i.e. step size) is small, we could descend to the canyon floor, then follow it toward the minimum. But, progress would be slow. We could increase the learning rate, but this wouldn't change the direction of the steps. In this case, we'd overshoot the canyon floor and end up on the opposite wall. We would then repeat this pattern, oscillating from wall to wall while making slow progress toward the minimum. Momentum can help in this situation.

Momentum simply means that some fraction of the previous update is added to the current update, so that repeated updates in a particular direction compound; we build up momentum, moving faster and faster in that direction. In the case of the canyon, we'd build up momentum in the direction of the minimum, since all updates have a component in that direction. In contrast, moving back and forth across the canyon walls involves constantly reversing direction, so momentum would help to damp the oscillations in those directions.
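In code, classical momentum is just a running "velocity" that accumulates the updates; a minimal sketch (hypothetical names, assuming NumPy-style arrays):

```python
def momentum_step(theta, velocity, grad, lr=0.01, beta=0.9):
    """Classical momentum: keep a fraction of the previous update and add the new gradient step."""
    velocity = beta * velocity - lr * grad   # updates in a consistent direction compound
    theta = theta + velocity                 # oscillating components largely cancel out
    return theta, velocity
```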

Another trick that Adam uses is to adaptively select a separate learning rate for each parameter. Parameters that would ordinarily receive smaller or less frequent updates receive larger updates with Adam (the reverse is also true). This speeds learning in cases where the appropriate learning rates vary across parameters. For example, in deep networks, gradients can become small at early layers, and it makes sense to increase learning rates for the corresponding parameters. Another benefit of this approach is that, because learning rates are adjusted automatically, manual tuning becomes less important. Standard SGD requires careful tuning (and possibly online adjustment) of learning rates, but this is less true with Adam and related methods. It's still necessary to select hyperparameters, but performance is less sensitive to them than to SGD learning rates.
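Putting the two ideas together gives the Adam update from the paper: the first-moment estimate plays the role of momentum, and dividing by the square root of the second-moment estimate gives each parameter its own effective step size. A sketch of one Adam step (element-wise on the parameter vector, hypothetical names):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: moving-average moment estimates, bias correction, per-parameter scaling."""
    m = beta1 * m + (1 - beta1) * grad      # first moment (momentum-like term)
    v = beta2 * v + (1 - beta2) * grad**2   # second moment (per-parameter scale)
    m_hat = m / (1 - beta1**t)              # bias correction for the zero initialization
    v_hat = v / (1 - beta2**t)
    # parameters with small or infrequent gradients get relatively larger steps
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```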

Related methods:

Momentum is often used with standard SGD. An improved version is called Nesterov momentum or Nesterov accelerated gradient. Other methods that use automatically tuned learning rates for each parameter include: Adagrad, RMSprop, and Adadelta. RMSprop and Adadelta solve a problem with Adagrad that could cause learning to stop. Adam is similar to RMSprop with momentum. Nadam modifies Adam to use Nesterov momentum instead of classical momentum.
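For comparison, an RMSprop step keeps only the second-moment average, which is why Adam is often described as roughly RMSprop plus momentum (sketch, hypothetical names):

```python
import numpy as np

def rmsprop_step(theta, grad, cache, lr=0.001, decay=0.9, eps=1e-8):
    """RMSprop: adaptive per-parameter scaling, but no momentum term."""
    cache = decay * cache + (1 - decay) * grad**2
    theta = theta - lr * grad / (np.sqrt(cache) + eps)
    return theta, cache
```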

References:

Kingma and Ba (2014). Adam: A Method for Stochastic Optimization.

Goodfellow et al. (2016). Deep Learning, chapter 8.

Slides from Geoff Hinton's course.

Dozat (2016). Incorporating Nesterov Momentum into Adam.
