AdaGrad
Author: Daniel Villarraga (SYSEN 6800 Fall 2021)
Introduction
AdaGrad is a family of sub-gradient algorithms for stochastic optimization. The algorithms belonging to that family are similar to second-order stochastic gradient descent with an approximation for the Hessian of the optimized function. AdaGrad's name comes from Adaptive Gradient. Intuitively, it adapts the learning rate for each feature depending on the estimated geometry of the problem; in particular, it tends to assign higher learning rates to infrequent features, which ensures that the parameter updates rely less on frequency and more on relevance.
AdaGrad was introduced by Duchi et al.[1] in a highly cited paper published in the Journal of Machine Learning Research in 2011. It is arguably one of the most popular algorithms for machine learning (particularly for training deep neural networks), and it influenced the development of the Adam algorithm[2].
Theory
The objective of AdaGrad is to minimize the expected value of a stochastic objective function with respect to a set of parameters, given a sequence of realizations of the function. As with other sub-gradient-based methods, it does so by updating the parameters in the opposite direction of the sub-gradients. While standard sub-gradient methods use update rules with step-sizes that ignore the information from past observations, AdaGrad adapts the learning rate for each parameter individually using the sequence of gradient estimates.
Definitions
<math>f(w)</math>: Stochastic objective function with parameters <math>w</math>.
<math>f_t(w)</math>: Realization of the stochastic objective at time step <math>t</math>. For simplicity <math>f_t</math>.
<math>g_t</math>: The gradient of <math>f_t(w)</math> with respect to <math>w</math>, formally <math>\nabla_w f_t(w)</math>. For simplicity, <math>g_t</math>.
<math>w_t</math>: Parameters at time step <math>t</math>.
<math>G_t</math>: The outer product of all previous subgradients, given by <math>G_t = \sum_{\tau=1}^{t} g_\tau g_\tau^{\top}</math>.
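To make the role of <math>G_t</math> concrete, the following minimal NumPy sketch accumulates the outer products of a short stream of subgradients; the first gradient is taken from the numerical example below and the remaining values are made up purely for illustration.

<syntaxhighlight lang="python">
import numpy as np

# A short stream of subgradients g_1, ..., g_t for two parameters.
grads = [np.array([-19.68, -7.68]),   # from the numerical example below
         np.array([-4.50, -1.20]),    # made-up value for illustration
         np.array([-2.10, -0.40])]    # made-up value for illustration

G = np.zeros((2, 2))              # G_0 = 0
for g in grads:
    G += np.outer(g, g)           # G_t = G_{t-1} + g_t g_t^T

print(G)                          # full matrix of accumulated outer products
print(np.diag(G))                 # its diagonal: accumulated squared gradients
</syntaxhighlight>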
Standard Sub-gradient Update
Standard sub-gradient algorithms update parameters according to the following rule:
<math display="block">w_{t+1} = w_t - \eta g_t</math>
where <math>\eta</math> denotes a fixed step-size (learning rate).
AdaGrad Update
The general AdaGrad update rule is given by:
<math display="block">w_{t+1} = w_t - \eta\, G_t^{-1/2} g_t</math>
where <math>G_t^{-1/2}</math> is the inverse of the square root of the matrix of accumulated outer products <math>G_t</math>. To keep the computations tractable, a version that uses only the diagonal of <math>G_t</math> is commonly used:
<math display="block">w_{t+1} = w_t - \eta\, \operatorname{diag}(G_t)^{-1/2} \odot g_t</math>
where <math>\odot</math> denotes element-wise multiplication.
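As a quick illustration of the difference between the two rules, the NumPy sketch below applies one standard sub-gradient step and one diagonal AdaGrad step to the same made-up gradient; the small eps constant is a common practical safeguard against division by zero and is not part of the update rule as stated above.

<syntaxhighlight lang="python">
import numpy as np

eta = 0.1                        # step-size
eps = 1e-10                      # small constant to avoid division by zero

w = np.zeros(2)                  # current parameters w_t
g = np.array([4.0, 0.5])         # illustrative subgradient g_t
G_diag = np.array([64.0, 1.0])   # illustrative accumulated squared gradients, diag(G_t)

# Standard sub-gradient step: the same step-size for every coordinate.
w_sgd = w - eta * g

# Diagonal AdaGrad step: per-coordinate step-size eta / sqrt(diag(G_t)).
w_adagrad = w - eta / (np.sqrt(G_diag) + eps) * g

print(w_sgd)       # approximately [-0.40, -0.05]
print(w_adagrad)   # approximately [-0.05, -0.05]
</syntaxhighlight>

The coordinate with the large accumulated squared gradient receives a much smaller effective step under AdaGrad, while the other coordinate is barely affected.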
Adaptive Learning Rate Effect
An estimate for the uncentered second moment of the objective function's gradient is given by the following expression:
<math display="block">\mathbb{E}\left[g g^{\top}\right] \approx \frac{1}{t}\sum_{\tau=1}^{t} g_\tau g_\tau^{\top}</math>
which is similar to the definition of the matrix <math>G_t</math> used in AdaGrad's update rule. In that sense, AdaGrad scales the learning rate for each parameter inversely with the square root of an estimate of that parameter's uncentered gradient second moment. This leads to the main advantages of AdaGrad:
- Parameters associated with low-frequency features tend to have larger learning rates than parameters associated with high-frequency features.
- Step-sizes in directions with high gradient variance are lower than step-sizes in directions with low gradient variance. Geometrically, the step-sizes tend to decrease proportionally to the curvature of the stochastic objective function.
Both properties favor the convergence rate of the algorithm, as the short sketch below illustrates.
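The following NumPy sketch illustrates both points on made-up data: one coordinate receives a large gradient at every step (a frequent, high-variance direction), the other only occasionally receives a small gradient (an infrequent direction), and the resulting effective step-sizes <math>\eta / \sqrt{(G_t)_{ii}}</math> differ accordingly.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
eta, eps, T = 1.0, 1e-10, 100

G_diag = np.zeros(2)                 # accumulated squared gradients per coordinate
for t in range(T):
    g_frequent = rng.normal(0.0, 5.0)                        # large gradient every step
    g_rare = rng.normal(0.0, 0.5) if t % 10 == 0 else 0.0    # small gradient, 1 step in 10
    g = np.array([g_frequent, g_rare])
    G_diag += g ** 2

# Effective per-coordinate step-sizes after T steps.
print(eta / (np.sqrt(G_diag) + eps))
# The infrequent, low-variance coordinate keeps a much larger effective
# step-size than the frequent, high-variance one.
</syntaxhighlight>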
Algorithm
The general version of the AdaGrad algorithm proceeds as follows: given a step-size <math>\eta</math>, initialize <math>w_1</math> and <math>G_0 = 0</math>; then, for <math>t = 1, \dots, T</math>, compute the subgradient <math>g_t</math>, accumulate <math>G_t = G_{t-1} + g_t g_t^{\top}</math>, and update <math>w_{t+1} = w_t - \eta\, G_t^{-1/2} g_t</math>. The update step within the loop can be replaced with the version that uses the diagonal of <math>G_t</math>.
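A minimal Python implementation of the diagonal version is sketched below; the gradient oracle grad_fn, the initial parameters, and the hyperparameter values are placeholders chosen for illustration, and the eps term is a common practical safeguard against division by zero.

<syntaxhighlight lang="python">
import numpy as np

def adagrad(grad_fn, w0, eta=0.1, T=1000, eps=1e-10):
    """Diagonal AdaGrad: grad_fn(w, t) returns the subgradient g_t at w."""
    w = np.asarray(w0, dtype=float).copy()
    G_diag = np.zeros_like(w)                     # diag(G_0) = 0
    for t in range(1, T + 1):
        g = grad_fn(w, t)                         # subgradient of the realization f_t at w_t
        G_diag += g ** 2                          # accumulate the diagonal of G_t
        w -= eta / (np.sqrt(G_diag) + eps) * g    # per-coordinate adaptive step
    return w

# Example usage on a toy deterministic objective f(w) = ||w - 3||^2 (made up for illustration):
w_star = adagrad(lambda w, t: 2 * (w - 3.0), w0=np.zeros(2), eta=1.0, T=2000)
print(w_star)   # approaches [3. 3.]
</syntaxhighlight>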
Regret Bound
The regret is defined as:
<math display="block">R(T) = \sum_{t=1}^{T}\left[ f_t(w_t) - f_t(w^{*}) \right]</math>
where <math>w^{*}</math> is the set of optimal parameters. AdaGrad has a regret bound of order <math>O(\sqrt{T})</math>, which leads to a convergence rate of order <math>O(1/\sqrt{T})</math> and to the convergence guarantee <math>\lim_{T \to \infty} R(T)/T = 0</math>. The detailed proof and assumptions for this bound can be found in the original journal paper[1].
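The connection between the <math>O(\sqrt{T})</math> regret bound and the <math>O(1/\sqrt{T})</math> convergence rate follows from dividing the regret by the number of time steps; with <math>C</math> denoting the constant hidden in the bound,
<math display="block">\frac{R(T)}{T} = \frac{1}{T}\sum_{t=1}^{T}\left[ f_t(w_t) - f_t(w^{*}) \right] \leq \frac{C\sqrt{T}}{T} = \frac{C}{\sqrt{T}} \xrightarrow{T \to \infty} 0.</math>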
Numerical Example
To illustrate how the parameter updates work in AdaGrad, take the following numerical example. The dataset consists of randomly generated observations that follow the linear relationship <math>y = a + bx + \epsilon</math>, where <math>\epsilon</math> is a random noise term and the true parameters are <math>(a, b) = (0, 20)</math>:
x | y
0.39 | 9.83
0.10 | 2.27
0.30 | 5.10
0.35 | 6.32
0.85 | 15.50
The cost function is defined as <math>L(a,b) = \frac{1}{N}\sum_{i=1}^{N}\left(a + bx_i - y_i\right)^2</math>. An observation of the cost function at time step <math>t</math> is given by <math>L_t(a,b) = \left(a + bx_t - y_t\right)^2</math>, where <math>(x_t, y_t)</math> is sampled from the observations. Finally, the subgradient is determined by:
<math display="block">g_t = \nabla L_t(a,b) = \begin{bmatrix} 2\left(a + bx_t - y_t\right) \\ 2x_t\left(a + bx_t - y_t\right) \end{bmatrix}</math>
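The loss and subgradient above translate directly into code. The short NumPy sketch below evaluates them on the tabulated data; evaluating the subgradient at <math>(a,b) = (0,0)</math> on the first observation reproduces, up to rounding of the tabulated values, the gradient used in the first iteration below.

<syntaxhighlight lang="python">
import numpy as np

# Observations (x_i, y_i) from the table above.
x = np.array([0.39, 0.10, 0.30, 0.35, 0.85])
y = np.array([9.83, 2.27, 5.10, 6.32, 15.50])

def full_cost(a, b):
    """L(a, b): mean squared error over the whole dataset."""
    return np.mean((a + b * x - y) ** 2)

def subgradient(a, b, t):
    """g_t: gradient of the single-observation loss L_t at (a, b)."""
    residual = a + b * x[t] - y[t]
    return np.array([2 * residual, 2 * x[t] * residual])

print(subgradient(0.0, 0.0, t=0))
# approximately [-19.66, -7.67]; the small difference from the (-19.68, -7.68)
# quoted below comes from rounding of the tabulated observations.
</syntaxhighlight>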
Take a learning rate <math>\eta = 5</math> and initial parameters <math>(a_1, b_1) = (0, 0)</math>. For the first iteration of AdaGrad the subgradient is equal to:
<math display="block">g_1 = \begin{bmatrix} -19.68 \\ -7.68 \end{bmatrix}</math>
and <math>G_1</math> is:
<math display="block">G_1 = \sum_{\tau=1}^1 g_1g_1^{\top} = \begin{bmatrix} -19.68 \\ -7.68 \end{bmatrix} \begin{bmatrix} -19.68 & -7.68 \end{bmatrix} = \begin{bmatrix} 387.30 & 151.14\\ 151.14 & 58.98\end{bmatrix} </math>
So the first parameter update is calculated as follows:
<math display="block">\begin{bmatrix} a_2 \\ b_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} - \begin{bmatrix} 5 \cdot \frac{1}{\sqrt{387.30}} \\ 5\cdot \frac{1}{\sqrt{ 58.98}} \end{bmatrix} \odot \begin{bmatrix} -19.68 \\ -7.68 \end{bmatrix} = \begin{bmatrix} 5 \\ 5\end{bmatrix}</math>
This process is repeated until convergence or for a fixed number of iterations <math>T</math>. An example AdaGrad update trajectory for this example is presented in Figure 1. Note that in Figure 1, the algorithm converges to a region close to <math>(a,b) = (0,20)</math>, which is the set of parameters that originated the observations in this example.
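The first iteration can also be checked numerically. The NumPy sketch below reproduces the numbers above using the diagonal AdaGrad step; continuing the loop over sampled observations traces a trajectory like the one described for Figure 1.

<syntaxhighlight lang="python">
import numpy as np

eta = 5.0
g1 = np.array([-19.68, -7.68])            # subgradient at (a_1, b_1) = (0, 0)

G1_diag = g1 ** 2                          # diagonal of G_1 = g_1 g_1^T
w1 = np.zeros(2)                           # (a_1, b_1)
w2 = w1 - eta / np.sqrt(G1_diag) * g1      # diagonal AdaGrad update

print(G1_diag)   # [387.3024  58.9824], matching the diagonal of G_1 above
print(w2)        # [5. 5.], matching (a_2, b_2)
</syntaxhighlight>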
Applications
The AdaGrad family of algorithms is typically used in machine learning applications. Mainly, it is a good choice for deep learning models with sparse input features. However, it can be applied to any optimization problem with a differentiable cost function. Given its popularity and proven performance, different versions of AdaGrad are implemented in the leading deep learning frameworks, such as TensorFlow and PyTorch. Nevertheless, in practice, AdaGrad is often replaced by the Adam algorithm, since, for a particular choice of hyperparameters, Adam is equivalent to AdaGrad[2].
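As an illustration of the framework support mentioned above, the sketch below fits the linear model from the numerical example with PyTorch's built-in Adagrad optimizer; the learning rate and iteration count are arbitrary illustrative choices.

<syntaxhighlight lang="python">
import torch

# Observations from the numerical example.
x = torch.tensor([0.39, 0.10, 0.30, 0.35, 0.85])
y = torch.tensor([9.83, 2.27, 5.10, 6.32, 15.50])

a = torch.zeros(1, requires_grad=True)   # intercept
b = torch.zeros(1, requires_grad=True)   # slope

optimizer = torch.optim.Adagrad([a, b], lr=5.0)   # built-in AdaGrad implementation

for step in range(500):
    optimizer.zero_grad()
    loss = torch.mean((a + b * x - y) ** 2)       # full-batch cost L(a, b)
    loss.backward()
    optimizer.step()

print(a.item(), b.item())
# Converges toward the least-squares fit of the five observations,
# which lies in the region around (a, b) = (0, 20) discussed above.
</syntaxhighlight>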