AdaGrad

From Cornell University Computational Optimization Open Textbook - Optimization Wiki

Revision as of 17:20, 11 December 2021

Author: Daniel Villarraga (SYSEN 6800 Fall 2021)

Introduction

AdaGrad is a family of sub-gradient algorithms for stochastic optimization. The algorithms in this family are similar to second-order stochastic gradient descent with an approximation of the Hessian of the optimized function. AdaGrad's name comes from Adaptive Gradient: intuitively, it adapts the learning rate for each feature depending on the estimated geometry of the problem; in particular, it tends to assign higher learning rates to infrequent features, which ensures that the parameter updates rely less on frequency and more on relevance.

AdaGrad was introduced by Duchi et al.[1] in a highly cited paper published in the Journal of Machine Learning Research in 2011. It is arguably one of the most popular algorithms for machine learning (particularly for training deep neural networks), and it influenced the development of the Adam algorithm[2].

Theory

The objective of AdaGrad is to minimize the expected value of a stochastic objective function with respect to a set of parameters, given a sequence of realizations of the function. As with other sub-gradient-based methods, it does so by updating the parameters in the opposite direction of the sub-gradients. While standard sub-gradient methods use update rules with step-sizes that ignore the information from past observations, AdaGrad adapts the learning rate for each parameter individually using the sequence of gradient estimates.

Definitions

<math>f(\theta)</math>: Stochastic objective function with parameters <math>\theta</math>.

<math>f_t(\theta)</math>: Realization of the stochastic objective at time step <math>t</math>. For simplicity, <math>f_t</math>.

<math>g_t</math>: The gradient of <math>f_t(\theta)</math> with respect to <math>\theta</math>, formally <math>\nabla_\theta f_t(\theta)</math>. For simplicity, <math>g_t</math>.

<math>\theta_t</math>: Parameters at time step <math>t</math>.

<math>G_t</math>: The outer product of all previous subgradients, given by <math display="block">G_t = \sum_{\tau=1}^{t} g_\tau g_\tau^{\top}</math>

Standard Sub-gradient Update

Standard sub-gradient algorithms update parameters according to the following rule:

<math display="block">\theta_{t+1} = \theta_t - \eta g_t</math>

where <math>\eta</math> denotes the step-size, often referred to as the learning rate. Expanding each term in the previous equation, the vector of parameters is updated as follows:

<math display="block">\begin{bmatrix} \theta_{t+1,1} \\ \vdots \\ \theta_{t+1,n} \end{bmatrix} = \begin{bmatrix} \theta_{t,1} \\ \vdots \\ \theta_{t,n} \end{bmatrix} - \eta \begin{bmatrix} g_{t,1} \\ \vdots \\ g_{t,n} \end{bmatrix}</math>
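As a minimal sketch (the function and variable names below are illustrative, not from the original), the fixed-step rule can be written in a few lines of Python:

```python
import numpy as np

def sgd_step(theta, grad, eta):
    """Standard sub-gradient update: the same step-size eta for every parameter."""
    return theta - eta * grad

# One update from the origin with an illustrative gradient.
theta = np.array([0.0, 0.0])
grad = np.array([-19.68, -7.68])
theta = sgd_step(theta, grad, eta=0.1)  # -> [1.968, 0.768]
```

Every coordinate moves with the same multiplier <math>\eta</math>, regardless of how often or how strongly that coordinate's feature appears in the data.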

AdaGrad Update

The general AdaGrad update rule is given by:

<math display="block">\theta_{t+1} = \theta_t - \eta G_t^{-1/2} g_t</math>

where <math>G_t^{-1/2}</math> is the inverse of the square root of <math>G_t</math>. A simplified version of the update rule takes the diagonal elements of <math>G_t</math> instead of the whole matrix:

<math display="block">\theta_{t+1} = \theta_t - \eta \, \mathrm{diag}(G_t)^{-1/2} g_t</math>

which can be computed in linear time. In practice, a small quantity <math>\epsilon</math> is added to each diagonal element in <math>G_t</math> to avoid singularity problems; the resulting update rule is given by:

<math display="block">\theta_{t+1} = \theta_t - \eta \, (\mathrm{diag}(G_t) + \epsilon I)^{-1/2} g_t</math>

where <math>I</math> denotes the identity matrix. An expanded form of the previous update is presented below,

<math display="block">\begin{bmatrix} \theta_{t+1,1} \\ \vdots \\ \theta_{t+1,n} \end{bmatrix} = \begin{bmatrix} \theta_{t,1} \\ \vdots \\ \theta_{t,n} \end{bmatrix} - \begin{bmatrix} \eta \frac{1}{\sqrt{G_{t,11} + \epsilon}} \\ \vdots \\ \eta \frac{1}{\sqrt{G_{t,nn} + \epsilon}} \end{bmatrix} \odot \begin{bmatrix} g_{t,1} \\ \vdots \\ g_{t,n} \end{bmatrix}</math>

where the operator <math>\odot</math> denotes the Hadamard product between matrices of the same dimension, and <math>G_{t,ii}</math> is the <math>i</math>-th element in the diagonal of <math>G_t</math>. From the last expression, it is clear that the update rule for AdaGrad adapts the step-size for each parameter according to <math>G_{t,ii}</math>, while standard sub-gradient methods use a fixed step-size for every parameter.
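The diagonal update with the <math>\epsilon</math> correction can be sketched as follows (the function and variable names are illustrative); only the running diagonal of <math>G_t</math> needs to be stored:

```python
import numpy as np

def adagrad_step(theta, grad, g_sq_sum, eta, eps=1e-8):
    """Diagonal AdaGrad update.

    g_sq_sum holds the diagonal of G_t (the running sum of squared gradients),
    so each parameter i gets its own step-size eta / sqrt(G_t[i,i] + eps).
    """
    g_sq_sum = g_sq_sum + grad ** 2
    theta = theta - eta / np.sqrt(g_sq_sum + eps) * grad
    return theta, g_sq_sum

# First step from theta = (0, 0) with the gradient used in the numerical example below.
theta, g_sq = adagrad_step(np.array([0.0, 0.0]),
                           np.array([-19.68, -7.68]),
                           np.zeros(2), eta=5.0)  # theta -> [5.0, 5.0]
```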

Adaptive Learning Rate Effect

An estimate for the uncentered second moment of the objective function's gradient is given by the following expression:

<math display="block">\mathbb{E}\left[g g^{\top}\right] \approx \frac{1}{t} \sum_{\tau=1}^{t} g_\tau g_\tau^{\top}</math>

which is similar to the definition of the matrix <math>G_t</math> used in AdaGrad's update rule. In this sense, AdaGrad adapts the learning rate for each parameter proportionally to the inverse of the gradient's variance for that parameter. This leads to the main advantages of AdaGrad:

  1. Parameters associated with low-frequency features tend to have larger learning rates than parameters associated with high-frequency features.
  2. Step-sizes in directions with high gradient variance are lower than step-sizes in directions with low gradient variance. Geometrically, the step-sizes tend to decrease proportionally to the curvature of the stochastic objective function.

Both properties favor the convergence rate of the algorithm.
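The first advantage can be seen with a small toy computation (the gradient counts below are made up for illustration): a feature that rarely produces a nonzero gradient accumulates a small <math>G_{t,ii}</math>, so its effective step-size <math>\eta/\sqrt{G_{t,ii}+\epsilon}</math> stays large.

```python
import numpy as np

eta, eps = 0.1, 1e-8

# Hypothetical accumulated squared gradients after 100 steps:
# a frequent feature gets a unit gradient at every step,
# an infrequent one at only 4 of the 100 steps.
G_frequent = np.sum(np.ones(100) ** 2)   # 100.0
G_rare = np.sum(np.ones(4) ** 2)         # 4.0

step_frequent = eta / np.sqrt(G_frequent + eps)  # ~0.01
step_rare = eta / np.sqrt(G_rare + eps)          # ~0.05

# The rare feature keeps a 5x larger effective learning rate.
```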

Algorithm

The general version of the AdaGrad algorithm is presented in the pseudocode below. The update step within the for loop can be modified with the version that uses the diagonal of <math>G_t</math>.
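The loop described above can be sketched in Python (diagonal variant; `grad_fn` and the quadratic objective used for the demonstration are illustrative assumptions, not part of the original):

```python
import numpy as np

def adagrad(grad_fn, theta0, eta=0.1, eps=1e-8, T=1000):
    """Run T iterations of diagonal AdaGrad starting from theta0.

    grad_fn(theta, t) returns the (sub)gradient g_t at time step t.
    """
    theta = np.array(theta0, dtype=float)    # copy so theta0 is not mutated
    g_sq_sum = np.zeros_like(theta)          # running diagonal of G_t
    for t in range(T):
        g = grad_fn(theta, t)
        g_sq_sum += g ** 2                   # accumulate diag(G_t)
        theta -= eta / np.sqrt(g_sq_sum + eps) * g
    return theta

# Demonstration on a deterministic quadratic f(theta) = 0.5 * ||theta - c||^2,
# whose gradient is theta - c; the minimizer is c.
c = np.array([1.0, -2.0])
theta_hat = adagrad(lambda th, t: th - c, theta0=np.zeros(2), eta=0.5, T=2000)
```

On this simple objective the iterates approach <math>c</math>; in the stochastic setting, `grad_fn` would instead return a gradient estimate from a sampled observation.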

Regret Bound

The regret is defined as:

<math display="block">R(T) = \sum_{t=1}^{T} \left[ f_t(\theta_t) - f_t(\theta^*) \right]</math>

where <math>\theta^*</math> is the set of optimal parameters. AdaGrad has a regret bound of order <math>O(\sqrt{T})</math>, which leads to a convergence rate of order <math>O(1/\sqrt{T})</math> and the convergence guarantee <math>\lim_{T \to \infty} R(T)/T = 0</math>. The detailed proof and assumptions for this bound can be found in the original journal paper[1].

Numerical Example

Figure 1. This figure shows a trajectory of parameter updates for the example presented in this section using AdaGrad.

To illustrate how the parameter updates work in AdaGrad, take the following numerical example. The dataset consists of randomly generated observations <math>x, y</math> that follow the linear relationship:

<math display="block">y = 20 \cdot x + \epsilon</math>

where <math>\epsilon</math> is random noise. The first 5 observations are shown in the table below.

{| class="wikitable"
!<math>x</math>
!<math>y</math>
|-
|0.39
|9.83
|-
|0.10
|2.27
|-
|0.30
|5.10
|-
|0.35
|6.32
|-
|0.85
|15.50
|}

The cost function is defined as:

<math display="block">f(a,b) = \frac{1}{N}\sum_{i=1}^{N} \left(y_i - (a + b \cdot x_i)\right)^2</math>

And an observation of the cost function at time step <math>t</math> is given by <math>f_t(a,b) = \left(y_t - (a + b \cdot x_t)\right)^2</math>, where <math>x_t, y_t</math> are sampled from the observations. Finally, the subgradient is determined by:

<math display="block">g_t = \begin{bmatrix} \frac{\partial f_t}{\partial a} \\[4pt] \frac{\partial f_t}{\partial b} \end{bmatrix} = \begin{bmatrix} -2\left(y_t - (a + b \cdot x_t)\right) \\ -2x_t\left(y_t - (a + b \cdot x_t)\right) \end{bmatrix}</math>

Take a learning rate <math>\eta = 5</math>, initial parameters <math>(a_1, b_1) = (0, 0)</math>, and a small <math>\epsilon</math>. For the first iteration of AdaGrad the subgradient is equal to:

<math display="block">g_1 = \begin{bmatrix} -19.68 \\ -7.68 \end{bmatrix}</math>

and <math>G_1</math> is:

<math display="block">G_1 = \sum_{\tau=1}^1 g_1 g_1^{\top} = \begin{bmatrix} -19.68 \\ -7.68 \end{bmatrix} \begin{bmatrix} -19.68 & -7.68 \end{bmatrix} = \begin{bmatrix} 387.30 & 151.14 \\ 151.14 & 58.98 \end{bmatrix}</math>

So the first parameter update is calculated as follows:

<math display="block">\begin{bmatrix} a_2 \\ b_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} - \begin{bmatrix} 5 \cdot \frac{1}{\sqrt{387.30}} \\ 5 \cdot \frac{1}{\sqrt{58.98}} \end{bmatrix} \odot \begin{bmatrix} -19.68 \\ -7.68 \end{bmatrix} = \begin{bmatrix} 5 \\ 5 \end{bmatrix}</math>

This process is repeated until convergence or for a fixed number of iterations <math>T</math>. An example AdaGrad update trajectory for this example is presented in Figure 1. Note that in Figure 1, the algorithm converges to a region close to <math>(a,b) = (0,20)</math>, which is the set of parameters that generated the observations in this example.
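The first update above can be checked numerically (a sketch using the diagonal of <math>G_1</math>, with the values from the example):

```python
import numpy as np

eta = 5.0
theta = np.array([0.0, 0.0])            # initial parameters (a_1, b_1)
g1 = np.array([-19.68, -7.68])          # first subgradient from the example

G1_diag = g1 ** 2                       # diagonal of G_1 = g_1 g_1^T: ~[387.30, 58.98]
theta = theta - eta / np.sqrt(G1_diag) * g1
# theta is now [5.0, 5.0], matching the first update of the example
```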

Applications

The AdaGrad family of algorithms is typically used in machine learning applications. Mainly, it is a good choice for deep learning models with sparse input features[1]. However, it can be applied to any optimization problem with a differentiable cost function. Given its popularity and proven performance, different versions of AdaGrad are implemented in leading deep learning frameworks like TensorFlow and PyTorch. Nevertheless, in practice, AdaGrad tends to be replaced by the Adam algorithm, since, for a given choice of hyperparameters, Adam is equivalent to AdaGrad[2].

Conclusion

AdaGrad is a family of algorithms for stochastic optimization that uses a Hessian approximation of the cost function in its update rule. It uses that information to adapt a different learning rate for the parameters associated with each feature. Its two main advantages are that it assigns larger learning rates to parameters related to low-frequency features, and that updates in directions of high curvature tend to be smaller than those in directions of low curvature. Therefore, it works especially well for problems where the input features are sparse. AdaGrad is reasonably popular in the machine learning community, and it is implemented in the primary deep learning frameworks. In practice, the Adam algorithm is usually preferred over AdaGrad since, for a given choice of hyperparameters, Adam is equivalent to AdaGrad.

References

  1. Duchi, J., Hazan, E., & Singer, Y. (2011). Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(7).
  2. Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.