Adam

From Cornell University Computational Optimization Open Textbook - Optimization Wiki

Authors: Nicholas Kincaid (CHEME 6800 Fall 2020)
Steward: Fengqi You

Introduction

Adam is a variant of gradient descent that has become widely popular in the machine learning community. Presented in 2015, the Adam algorithm is often recommended as the default algorithm for training neural networks as it has shown improved performance over other variants of gradient descent algorithms for a wide range of problems. Adam's name is derived from adaptive moment estimation because uses estimates of the first and second moments of the gradient to perform updates, which can be seen as incorporating gradient descent with momentum (the first-order moment) and RMSProp algorithm (the second-order moment).

Background

Batch Gradient Descent

In standard batch gradient descent, the parameters, $\theta$, of the objective function, $J(\theta)$, are updated based on the gradient of $J(\theta)$ with respect to $\theta$ for the entire training dataset, as

$\theta_{t+1} = \theta_t - \alpha \nabla_\theta J(\theta_t)$    (1)

where $\alpha$ is defined as the learning rate and is a hyper-parameter of the optimization algorithm, and $t$ is the iteration number.
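
To make the update concrete, the following minimal Python sketch applies Equation (1); the function grad_J, which returns the gradient over the full training set, is a placeholder assumed for illustration and is not part of the original article.

    def batch_gradient_descent(grad_J, theta, alpha=0.01, num_iters=100):
        # Repeatedly apply Equation (1): theta_{t+1} = theta_t - alpha * grad J(theta_t).
        # theta is a NumPy array of parameters; grad_J(theta) is assumed to
        # return the gradient of the objective over the full training set.
        for t in range(num_iters):
            theta = theta - alpha * grad_J(theta)
        return theta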

Stochastic Gradient Descent

Another variant of gradient descent is stochastic gradient descent (SGD), in which the gradient is computed and the parameters are updated as in Equation (1), but for each individual training sample in the training set rather than for the entire dataset.

Mini-Batch Gradient Descent

In between batch gradient descent and stochastic gradient descent, mini-batch gradient descent computes parameter updates from the gradient of a subset of the training set, where the size of the subset is often referred to as the batch size. Key challenges of these standard gradient descent methods are the tendency to get stuck in local minima and/or saddle points of the objective function, as well as the choice of a proper learning rate, $\alpha$, which can lead to poor convergence.
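
As a hedged sketch (not from the original article), the loop below shows how mini-batch updates are typically organized; grad_J here is a placeholder that returns the gradient on one mini-batch, and setting batch_size to 1 or to the full dataset size recovers SGD or batch gradient descent, respectively.

    import numpy as np

    def minibatch_gradient_descent(grad_J, X, y, theta, alpha=0.01,
                                   batch_size=32, num_epochs=10):
        # grad_J(theta, X_batch, y_batch) is assumed to return the gradient
        # of the objective computed on one mini-batch.
        n = len(X)
        for epoch in range(num_epochs):
            perm = np.random.permutation(n)            # shuffle the data each epoch
            for start in range(0, n, batch_size):
                idx = perm[start:start + batch_size]   # indices of the current mini-batch
                theta = theta - alpha * grad_J(theta, X[idx], y[idx])
        return theta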

Adam Algorithm

The Adam algorithm first computes the gradient, $g_t$, of the objective function with respect to the parameters, $\theta_t$, but then computes and stores first and second order moments of the gradient, $m_t$ and $v_t$ respectively, as

$m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t$
$v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2$

where $\beta_1$ and $\beta_2$ are hyper-parameters that are $\in [0, 1)$. These parameters can be seen as exponential decay rates of the estimated moments, since the previous value is successively multiplied by a value less than 1 in each iteration. The authors of the original paper suggest the values $\beta_1 = 0.9$ and $\beta_2 = 0.999$. In the current notation, the first iteration of the algorithm is at $t = 1$, and both $m_0$ and $v_0$ are initialized to zero. Since both moments are initialized to zero, their estimates are biased towards zero at early time steps. To counter this, the authors proposed bias-corrected estimates, $\hat{m}_t$ and $\hat{v}_t$, computed as

$\hat{m}_t = \dfrac{m_t}{1 - \beta_1^t}$
$\hat{v}_t = \dfrac{v_t}{1 - \beta_2^t}$

Finally, the parameter update is computed as

$\theta_{t+1} = \theta_t - \dfrac{\alpha \, \hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}$

where $\epsilon$ is a small constant included for numerical stability. The authors recommend a value of $\epsilon = 10^{-8}$.
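
Putting the moment estimates, the bias correction, and the parameter update together, one Adam step can be sketched in Python as below; the function and argument names are illustrative rather than taken from the original paper, which presents the method in pseudocode.

    import numpy as np

    def adam_step(grad, theta, m, v, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
        # grad: gradient g_t of the objective at theta
        # m, v: running first- and second-moment estimates (initialized to zero)
        # t:    iteration number, starting at 1
        m = beta1 * m + (1 - beta1) * grad           # biased first-moment estimate
        v = beta2 * v + (1 - beta2) * grad**2        # biased second-moment estimate
        m_hat = m / (1 - beta1**t)                   # bias-corrected first moment
        v_hat = v / (1 - beta2**t)                   # bias-corrected second moment
        theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
        return theta, m, v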

Numerical Example

[Figure 1: Contour plot of the loss function showing the trajectory of the Adam algorithm from the initial point.]
[Figure 2: Plot showing the original data points and the resulting model fit from the Adam algorithm.]


To illustrate how updates occur in the Adam algorithm, consider a linear, least-squares regression problem formulation. The table below shows a sample data-set of student exam grades and the number of hours spent studying for the exam. The goal of this example will be to generate a linear model to predict exam grades as a function of time spent studying.

Table 1: Sample dataset of hours spent studying and the corresponding exam grades.

Hours Studying:  9.0   4.9   1.6   1.9   7.9   2.0   11.5  3.9   1.1   1.6   5.1   8.2   7.3   10.4  11.2
Exam Grade:     88.0  72.3  66.5  65.1  79.5  60.8  94.3  66.7  65.4  63.8  68.4  82.5  75.9  87.8  85.2

The hypothesized model function will be

$h_\theta(x) = \theta_0 + \theta_1 x$

The cost function is defined as

$J(\theta) = \dfrac{1}{2} \sum_{i=1}^{n} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$

where the coefficient $\frac{1}{2}$ is used only to make the derivatives cleaner. The optimization problem can then be formulated as finding the values of $\theta_0$ and $\theta_1$ that minimize the squared residuals between $h_\theta(x)$ and $y$.

For simplicity, the parameters will be updated after every data point, i.e. with a batch size of 1. For a single data point, the derivatives of the cost function with respect to $\theta_0$ and $\theta_1$ are

$\dfrac{\partial J}{\partial \theta_0} = h_\theta(x^{(i)}) - y^{(i)}$
$\dfrac{\partial J}{\partial \theta_1} = \left( h_\theta(x^{(i)}) - y^{(i)} \right) x^{(i)}$
The initial values of $\theta$ will be set to $[50, 1]$, the learning rate, $\alpha$, is set to 0.1, and the suggested values for $\beta_1$, $\beta_2$, and $\epsilon$ are used. The values for the first two parameter updates are given in Table 2.
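
As a sketch of how these updates could be reproduced (the number of passes over the data below is arbitrary and not specified in the original example), the regression can be run with the data of Table 1, the initial values above, and the per-sample gradients derived earlier:

    import numpy as np

    # Data from Table 1: hours spent studying and exam grades.
    x = np.array([9.0, 4.9, 1.6, 1.9, 7.9, 2.0, 11.5, 3.9,
                  1.1, 1.6, 5.1, 8.2, 7.3, 10.4, 11.2])
    y = np.array([88.0, 72.3, 66.5, 65.1, 79.5, 60.8, 94.3, 66.7,
                  65.4, 63.8, 68.4, 82.5, 75.9, 87.8, 85.2])

    alpha, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8   # learning rate and suggested defaults
    theta = np.array([50.0, 1.0])                      # initial [theta_0, theta_1]
    m, v = np.zeros(2), np.zeros(2)
    t = 0

    for epoch in range(100):                           # number of passes chosen for illustration
        for xi, yi in zip(x, y):                       # batch size of 1: update after every point
            t += 1
            residual = (theta[0] + theta[1] * xi) - yi
            grad = np.array([residual, residual * xi]) # [dJ/dtheta_0, dJ/dtheta_1]
            m = beta1 * m + (1 - beta1) * grad
            v = beta2 * v + (1 - beta2) * grad**2
            m_hat = m / (1 - beta1**t)
            v_hat = v / (1 - beta2**t)
            theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)

    print(theta)                                       # fitted [theta_0, theta_1]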

Figures 1 and 2 show the trajectory of the Adam algorithm over a contour plot of the objective function and the resulting model fit. It should be noted that the stochastic gradient descent algorithm with a learning rate of 0.1 diverges. Even with a rate of 0.01, SGD oscillates around the global minimum due to the large magnitudes of the gradient in the $\theta_1$ direction.


Application

The Adam optimization algorithm has been widely used in machine learning applications for training neural networks with a variety of structures and applications. In their original paper, the authors present three different training examples: logistic regression, multi-layer neural networks for classification of MNIST images, and a convolutional neural network (CNN).

Variants of Adam

AdaMax

AdaMax, introduced in the original Adam paper, is a variant that replaces the $L_2$-norm-based second-moment estimate with one based on the infinity norm of the gradients.

Nadam

Nadam (Nesterov-accelerated Adam) modifies Adam by incorporating Nesterov momentum into the first-moment estimate.

Conclusion

Adam is a variant of the gradient descent algorithm that has been widely adopted in the machine learning community. Adam can be seen as the combination of two other variants of gradient descent, SGD with momentum and RMSProp. Adam uses estimates of the first and second-order moments of the gradient to adapt the parameter update. These moment estimates are computed via exponential moving averages, $m_t$ and $v_t$, of the gradient and the squared gradient, respectively. In a variety of neural network training applications, Adam has shown improved convergence and robustness over other gradient descent algorithms and is often recommended as the default optimizer for training.

References

  1. Kingma, Diederik P., and Jimmy Lei Ba. Adam: A Method for Stochastic Optimization. 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings, 2015, pp. 1–15.
  2. Ruder, Sebastian. An Overview of Gradient Descent Optimization Algorithms, 2016, pp. 1–14, http://arxiv.org/abs/1609.04747.
  3. Tieleman, Tijmen, and Hinton, Geoffrey. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude, COURSERA: Neural Networks for Machine Learning, 2012.
  4. Dozat, Timothy. Incorporating Nesterov Momentum into Adam. ICLR Workshop, no. 1, 2016, pp. 2013–16.
  5. Nesterov, Yuri. A method of solving a convex programming problem with convergence rate O(1/k^2). In Soviet Mathematics Doklady, 1983, pp. 372-376.