Momentum

Authors: Thomas Lee (tcl74), Greta Gasswint (gg496), and Elizabeth Henning (eh672) (SYSEN5800 Fall 2021)

== Introduction ==
'''Momentum''' is an extension to the [https://optimization.cbe.cornell.edu/index.php?title=Stochastic_gradient_descent gradient descent] optimization algorithm that builds inertia in a search direction to overcome local minima and the oscillation of noisy gradients<ref name=":0">https://machinelearningmastery.com/gradient-descent-with-momentum-from-scratch/</ref>. It is based on the concept of momentum in physics: a classic example is a ball rolling down a hill that gathers enough momentum to overcome a plateau region and reach the global minimum instead of getting stuck at a local minimum. Momentum adds history to the parameter updates of descent problems, which significantly accelerates the optimization process. The amount of history included in the update equation is determined via a hyperparameter<ref name=":0" />. This hyperparameter is a value ranging from 0 to 1, where a momentum value of 0 is equivalent to gradient descent without momentum<ref name=":0" />. A higher momentum value means more gradients from the past (history) are considered<ref name=":1">https://towardsdatascience.com/gradient-descent-with-momentum-59420f626c8f</ref>.
== Theory, methodology, and/or algorithmic discussions ==

=== Algorithm ===
The main idea behind momentum is to compute an exponentially weighted average of the gradients and use it to update the weights. By taking past gradients into account, the steps of gradient descent are smoothed out, which can reduce the "zig zagging" seen across iterations (where the iterative solutions bounce between points above and below the true curve).<ref>Murphy, Kevin. ''Machine Learning''. <nowiki>https://eds-p-ebscohost-com.proxy.library.cornell.edu/eds/ebookviewer/ebook/bmxlYmtfXzQ4MDk2OF9fQU41?sid=d9abaf85-26da-4cc7-87fc-020b7509ddf7@redis&vid=0&format=EB&rid=1</nowiki></ref>


In (stochastic) gradient descent without momentum, the update rule at each iteration is given by:


<math>W = W - \alpha \, dW</math>
Where:
*''W'' denotes the parameters to the cost function
*''dW'' is the gradient, indicating the direction in which to decrease the cost function
*''<math>\alpha</math>'' is the hyperparameter representing the learning rate
In (stochastic) gradient descent with momentum, the update rule at each iteration is given by:


<math>V_{dW} = \beta V_{dW} + (1-\beta) \, dW</math>

<math>W = W - \alpha V_{dW}</math>

Where:
*''β'' is a new hyperparameter that denotes the momentum constant

Momentum can be applied to other gradient descent variations such as batch gradient descent and mini-batch gradient descent. Regardless of the gradient descent variation, the momentum hyperparameter ''β'' will need to be tuned in addition to the learning rate hyperparameter ''<math>\alpha</math>''. ''β'' can take any value between 0 and 1. Generally, the larger the momentum value, the more past gradients are taken into account, which results in a smoother update. However, choosing a ''β'' value that is too large will result in over-smoothing, and a large ''β'' value can even overshoot the target. As ''β'' gets closer to 0, momentum behaves more like steepest descent<ref>https://www.biostat.wisc.edu/~craven/cs760/lectures/DNNs-2.pdf</ref>. A value of 0 is essentially gradient descent without momentum, and a momentum value of 0.9 tends to be a good default choice<ref name=":2" />. Finding the optimal ''β'' is a matter of trial and error to see what works best for the model such that the cost function is minimized.
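As an illustration, the following is a minimal Python sketch of the two update rules above, assuming a toy quadratic cost <math>f(W) = W^2</math> (so <math>dW = 2W</math>); the cost function and constants are chosen only for demonstration and are not from a specific reference:

<syntaxhighlight lang="python">
# Minimal sketch of gradient descent with momentum on the assumed toy
# cost f(W) = W^2, whose gradient is dW = 2W.
def grad(W):
    return 2 * W

alpha = 0.1       # learning rate
beta = 0.9        # momentum constant; beta = 0 recovers plain gradient descent
W, V = 5.0, 0.0   # initial parameter and velocity

for _ in range(100):
    dW = grad(W)
    V = beta * V + (1 - beta) * dW   # exponentially weighted average of gradients
    W = W - alpha * V                # parameter update using the velocity

print(W)  # approaches the minimizer W = 0
</syntaxhighlight>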


=== Problems with Gradient Descent ===
With gradient descent, a weight update at time ''t'' is given by the learning rate and gradient at that exact moment<ref name=":1" />. This means that previous steps are not considered when searching for the next iteration's solution.
[[File:Gradient descent momentum.png|thumb|Gradient descent with and without momentum.<ref name=":3">https://cedar.buffalo.edu/~srihari/CSE676/8.3%20BasicOptimizn.pdf</ref>]]
This results in two issues:


# Unlike convex functions, a non-convex cost function can have many local minima, meaning the first local minimum found is not guaranteed to be the global minimum. At a local minimum, the gradient of the cost function becomes very small, resulting in no weight updates. Because of this, gradient descent gets stuck and fails to find the globally optimal solution.
# Gradient descent can be noisy with many oscillations, which results in a larger number of iterations needed to reach convergence.
Momentum is able to solve both of these issues by using an exponentially weighted average of the gradients to update the weights at each iteration. This also prevents the gradients of previous iterations from being weighted equally; with an exponentially weighted average, recent gradients are given more weight than older ones.

The graph to the right shows a comparison of gradient descent with and without momentum. Using momentum, the oscillations are significantly reduced and convergence to the minimum is achieved in fewer iterations.<ref name=":3" />
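To see why recent gradients dominate, the momentum recursion can be unrolled (taking <math>V_{dW} = 0</math> initially): after <math>t</math> iterations,

<math>V_{dW}^{(t)} = (1-\beta) \sum_{k=0}^{t-1} \beta^{k} \, dW^{(t-k)}</math>

so a gradient from <math>k</math> iterations ago is discounted by a factor of <math>\beta^{k}</math>.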


== Numerical Example ==
To demonstrate the use of momentum in the context of gradient descent, minimize the following function:

=== SGD without Momentum ===
Solving this objective function using stochastic gradient descent, starting at ''x'' = -2.8 with a learning rate of 0.05, gives a solution of ''y'' = -1.46 at ''x'' = -1.59. This example uses Python to construct a loss function and optimizer with PyTorch[7]; a sketch of this setup appears at the end of this section. The following graph depicts how stochastic gradient descent stops at a local minimum instead of finding the global minimum of the function. The magenta line represents the objective function and the blue line shows the points at each iteration.

''Stochastic gradient descent without momentum stops at a local minimum.''

=== SGD with Momentum ===
By adding momentum, the solver is able to overcome the hill past the local minimum and settle at the global minimum. The following example uses a momentum value of 0.7 and the same parameters as the previous example: a learning rate of 0.05 and a starting point of ''x'' = -2.8. Note that momentum values less than 0.7 are too low to overcome the hill past the local minimum. Using momentum, the solver gives the correct solution for the global minimum at ''y'' = -5.61 and ''x'' = 2.04, shown in the graph below. The following table lists the values of ''x'' and ''y'' at each iteration, starting from ''x'' = -2.8.

{| class="wikitable"
! Iteration !! ''x'' !! ''y''
|-
| 1 || -2.800 || 7.194
|-
| 2 || -1.885 || -1.140
|-
| 3 || -1.126 || -1.011
|-
| 4 || -0.676 || -0.279
|-
| ... || ... || ...
|-
| 99 || 2.042 || -5.608
|}

''Stochastic gradient descent with momentum stops at the global minimum.''
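The following is a minimal sketch of the PyTorch setup described above (learning rate 0.05, momentum 0.7, starting point ''x'' = -2.8), with a hypothetical stand-in objective <code>f</code>, since the example's actual function is not reproduced here. Note that PyTorch's SGD uses the variant update <math>V = \beta V + dW</math>, without the <math>(1-\beta)</math> damping of the equations above.

<syntaxhighlight lang="python">
import torch

# Sketch of the SGD-with-momentum setup described in this example.
# The objective f is a hypothetical placeholder; substitute the
# example's actual non-convex function.
def f(x):
    return x * torch.sin(x)  # placeholder objective, for illustration only

x = torch.tensor(-2.8, requires_grad=True)
optimizer = torch.optim.SGD([x], lr=0.05, momentum=0.7)

for _ in range(99):
    optimizer.zero_grad()  # clear the previous iteration's gradient
    loss = f(x)
    loss.backward()        # compute df/dx
    optimizer.step()       # momentum update of x

print(x.item(), f(x).item())
</syntaxhighlight>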
== Applications ==


Momentum is widely used in the machine learning community for training non-convex models such as deep neural networks<ref name=":2">https://arxiv.org/pdf/2010.00406.pdf</ref>. Empirically, momentum methods outperform traditional stochastic gradient descent approaches<ref name=":2" />. In deep learning, SGD is widely prevalent and is the underlying basis for many optimizers such as Adam, Adadelta, and RMSProp<ref>https://www.pyimagesearch.com/2021/05/05/gradient-descent-algorithms-and-variations/</ref>. The momentum extension for optimization algorithms is available in many popular machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn. Generally, any problem that can be solved with stochastic gradient descent can benefit from the application of momentum. These are often unconstrained optimization problems or, in line with momentum's application to machine learning, model-training problems. Common SGD applications where momentum may be applied include ridge regression, logistic regression, and support vector machines.<ref>https://cseweb.ucsd.edu//~akmenon/ResearchExamTalk.pdf</ref>
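As a brief sketch of how the momentum hyperparameter surfaces in those frameworks (illustrative values; assumes all three libraries are installed):

<syntaxhighlight lang="python">
import torch
import tensorflow as tf
from sklearn.neural_network import MLPClassifier

# The same momentum hyperparameter exposed by three frameworks.
params = [torch.zeros(3, requires_grad=True)]  # toy parameters for illustration
sgd_torch = torch.optim.SGD(params, lr=0.01, momentum=0.9)
sgd_tf = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
mlp = MLPClassifier(solver="sgd", momentum=0.9, learning_rate_init=0.01)
</syntaxhighlight>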
 
Momentum is also very useful in objective functions that have a large amount of curvature<ref name=":0" />, i.e., areas where the gradient changes rapidly. Since momentum leverages an exponentially weighted average, smaller steps are taken in these high-curvature regions, which dampens the oscillations.

Overall, momentum may have farther-reaching applicability in accelerating second-order methods, and deeper connections to other methods, such as the ellipsoid algorithm.<ref>https://distill.pub/2017/momentum/</ref>


== Conclusion ==
Momentum improves on gradient descent by reducing the zig-zag effects of noisy gradients and acting as an accelerator for optimization problem solving. It also makes the solver more likely to escape local optima and reach the global optimum. Because of these advantages, momentum is commonly used in machine learning and underlies many SGD-based optimizers. Though the momentum hyperparameter must be chosen with care and requires some trial and error, it ultimately addresses common issues seen in gradient descent problems. As deep learning continues to advance, momentum will allow models to be trained and problems to be solved faster than methods without it.


== References ==
