Nadam

Author: Vinamr Jain (vj89) (ChemE 6800 Fall 2024)

Stewards: Nathan Preuss, Wei-Han Chen, Tianqi Xiao, Guoqing Hu
Introduction
Nesterov-accelerated Adaptive Moment Estimation (Nadam) is an optimization algorithm introduced by Timothy Dozat in his 2016 paper, “Incorporating Nesterov Momentum into Adam” [1]. Nadam enhances the Adaptive Moment Estimation (Adam) optimizer by integrating Nesterov momentum, aiming to improve convergence speed and performance in training deep learning models. The algorithm has been applied across various domains, including computer vision and natural language processing, to optimize neural network training.
Gradient descent optimization algorithms are fundamental tools in machine learning, often employed as black-box optimizers that minimize loss functions by iteratively adjusting model parameters. The basic form, Gradient Descent, updates parameters by moving in the direction of the negative gradient of the loss function, aiming to find the minimum. However, this approach can be slow to converge, especially in regions with flat or elongated curvature [2]. To address these challenges, several variants have been developed:
• Momentum incorporates inertia into updates, accelerating convergence by accumulating a decaying sum of past gradients, which helps navigate ravines and reduces oscillations [1].
• Nesterov’s Accelerated Gradient (NAG), or Nesterov momentum, improves upon classical momentum by calculating the gradient at a projected future position rather than the current one, allowing for corrective adjustments and often leading to faster convergence.
• RMSprop addresses the issue of using a single learning rate for all parameters by adapting the learning rate for each parameter individually, normalizing gradients by a decaying average of their recent magnitudes, which helps in dealing with non-stationary objectives and improves convergence [3].
• Adaptive Moment Estimation (Adam) [4] combines the benefits of momentum and adaptive learning rates by computing individual adaptive learning rates for different parameters from estimates of the first and second moments of the gradients. However, Adam can sometimes converge to suboptimal solutions because its adaptive learning rate mechanism slows down the search before it reaches the global optimum [2][3].
Nadam extends Adam by incorporating Nesterov momentum, which looks ahead at the projected future position to calculate the gradient, thereby improving convergence speed and accuracy.
Background
Gradient Descent is a first-order iterative optimization algorithm used to find a local minimum of a differentiable function. The parameter update rule at iteration t is:
\theta_{t+1}=\theta_t-\eta\nabla_\theta J\left(\theta_t\right)
Where \theta_t denotes the parameter vector at iteration t, \eta is the learning rate, and \nabla_\theta J\left(\theta_t\right) is the gradient of the cost function J with respect to \theta at iteration t. Momentum accelerates Gradient Descent by accumulating an exponentially decaying moving average of past gradients. This accelerates gradient descent along stable gradient directions and slows it in oscillating directions [1].
m_t=\gamma m_{t-1}+\eta\nabla_\theta J\left(\theta_t\right)
\theta_{t+1}=\theta_t-m_t
Here, in addition to the gradient component, the update includes a momentum vector m_t, where \gamma is the momentum coefficient. Nesterov Momentum improves upon standard Momentum by calculating the gradient at the anticipated future position, leading to a more responsive update.
m_t=\gamma m_{t-1}+\eta\nabla_\theta J\left(\theta_t-\gamma m_{t-1}\right)
\theta_{t+1}=\theta_t-m_t
Alternatively, the look-ahead momentum vector can be applied directly to update the current parameters:
m_t=\gamma m_{t-1}+\eta\nabla_\theta J\left(\theta_t\right)
\theta_{t+1}=\theta_t-\left(\gamma m_t+\eta\nabla_\theta J\left(\theta_t\right)\right)
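The distinction between the two momentum schemes can be made concrete with a minimal Python sketch, assuming a user-supplied function grad(theta) that returns \nabla_\theta J(\theta); the learning rate and momentum coefficient below are illustrative defaults.

```python
def momentum_step(theta, m, grad, lr=0.01, gamma=0.9):
    """Classical momentum: gradient evaluated at the current parameters."""
    m = gamma * m + lr * grad(theta)
    return theta - m, m

def nesterov_step(theta, m, grad, lr=0.01, gamma=0.9):
    """Nesterov momentum: gradient evaluated at the look-ahead position."""
    m = gamma * m + lr * grad(theta - gamma * m)
    return theta - m, m

# Example on J(theta) = 0.5 * theta^2, whose gradient is simply theta.
grad = lambda theta: theta
theta, m = 5.0, 0.0
for _ in range(3):
    theta, m = nesterov_step(theta, m, grad)
```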
AdaGrad (Adaptive Gradient Algorithm) adapts the learning rate for each parameter individually by scaling it inversely proportional to the square root of the sum of all historical squared values of the gradients. This method is particularly useful for dealing with sparse data [3].
v_t=v_{t-1}+\left[\nabla_\theta J\left(\theta_t\right)\right]^2
\theta_{t+1}=\theta_t-\eta\frac{\nabla_\theta J\left(\theta_t\right)}{\sqrt{v_t}+\epsilon}
v_t is the accumulated sum of squared gradients up to time t, and \epsilon is a small constant for numerical stability (e.g., {10}^{-8}). AdaGrad effectively scales down the learning rate for parameters associated with frequently occurring features, allowing for larger updates for infrequent features [1].
RMSprop (Root Mean Square Propagation) is an optimization algorithm that builds upon AdaGrad by modifying the way it accumulates squared gradients. While AdaGrad accumulates all past squared gradients, leading to a continually decreasing learning rate, RMSprop introduces a decay parameter to the accumulated squared gradients v_t, which prevents the learning rate from diminishing too quickly.
v_t=\gamma v_{t-1}+\left(1-\gamma\right)\left[\nabla_\theta J\left(\theta_t\right)\right]^2
\theta_{t+1}=\theta_t-\eta\frac{\nabla_\theta J\left(\theta_t\right)}{\sqrt{v_t}+\epsilon}
\gamma is the decay rate (typically \gamma=0.9). By introducing the decay parameter in the v_t term, RMSprop limits the influence of older gradients, ensuring that the algorithm adapts quickly to the most recent gradients.
Adam (Adaptive Moment Estimation) is an optimization algorithm that combines ideas from both Momentum and RMSprop. It extends RMSprop by not only adapting the learning rates per parameter but also incorporating momentum through an exponentially decaying average of past gradients (the first moment). Adam is designed to work well with sparse gradients and noisy problems, which are common in training deep neural networks.
m_t=\beta_1m_{t-1}+\left(1-\beta_1\right)\nabla_\theta J\left(\theta_t\right)
v_t=\beta_2v_{t-1}+\left(1-\beta_2\right)\left[\nabla_\theta J\left(\theta_t\right)\right]^2
Here, m_t is the first moment estimate (an exponentially decaying average of past gradients), v_t is the second moment estimate (an exponentially decaying average of past squared gradients), and \beta_1 and \beta_2 are the exponential decay rates for the moment estimates (commonly \beta_1=0.9, \beta_2=0.999).
Because m_t and v_t are initialized as zero vectors, they are biased towards zero, especially during the initial time steps and particularly when the decay rates are close to one (i.e., \beta_1,\beta_2\approx1). This bias can slow down the convergence of the algorithm because the moment estimates underestimate the true mean and variance of the gradients. To counteract these biases, the authors of Adam introduced bias-corrected moment estimates:
\widehat{m_t}=\frac{m_t}{1-\beta_1^t}
\widehat{v_t}=\frac{v_t}{1-\beta_2^t}
The denominators 1-\beta_1^t and 1-\beta_2^t approach 1 as t increases, reducing the impact of the bias over time. The bias correction compensates for the initialization at zero, ensuring that the moment estimates are unbiased. The parameters are then updated using the bias-corrected moment estimates:
\theta_{t+1}=\theta_t-\eta\frac{\widehat{m_t}}{\sqrt{\widehat{v_t}}+\epsilon}
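The Adam update above can be written compactly in code. The following is a minimal NumPy sketch of a single Adam step, where grad is the gradient of J at the current parameters and the default hyperparameters follow the common values noted above.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; returns the new parameters and moment estimates."""
    m = beta1 * m + (1 - beta1) * grad          # first moment (momentum term)
    v = beta2 * v + (1 - beta2) * grad**2       # second moment (RMSprop term)
    m_hat = m / (1 - beta1**t)                  # bias corrections
    v_hat = v / (1 - beta2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```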
Algorithm
Nadam modifies the parameter update step of Adam by integrating the Nesterov accelerated gradient. In Adam, the update relies solely on the bias-corrected first moment estimate \widehat{m_t}. Nadam, however, adjusts this estimate by adding a term that accounts for the current gradient, effectively looking ahead to where the parameters are heading. This adjustment is similar to the Nesterov momentum in NAG, which computes the gradient at the approximate future position of the parameters. First, expand the bias-corrected first moment \widehat{m_t} using the definition of m_t [3]:
\widehat{m_t}=\frac{m_t}{1-\beta_1^t}=\frac{\beta_1m_{t-1}+\left(1-\beta_1\right)g_t}{1-\beta_1^t}
Where g_t=\nabla_\theta J\left(\theta_t\right). The update rule of Adam can be rewritten as
\theta_{t+1}=\theta_t-\frac{\eta}{\left(\sqrt{\widehat{v_t}}+\epsilon\right)}\left[\frac{\beta_1m_{t-1}}{\left(1-\beta_1^t\right)}+\frac{\left(1-\beta_1\right)g_t}{\left(1-\beta_1^t\right)}\right]
The term \frac{\beta_1m_{t-1}}{1-\beta_1^t} involves the momentum vector from the previous time step, whose bias-corrected estimate is denoted:
\widehat{m_{t-1}}=\frac{m_{t-1}}{1-\beta_1^{t-1}}
Since \beta_1^{t-1} is close to \beta_1^t for large t, we approximate:
\frac{\beta_1m_{t-1}}{1-\beta_1^t}\approx\beta_1\widehat{m_{t-1}}
Thus, the update rule becomes:
\theta_{t+1}=\theta_t-\eta\frac{\beta_1\widehat{m_{t-1}}+\frac{\left(1-\beta_1\right)g_t}{1-\beta_1^t}}{\sqrt{\widehat{v_t}}+\epsilon}
This form resembles the momentum term in Nesterov Accelerated Gradient (NAG). To incorporate Nesterov momentum into Adam, we replace the previous bias-corrected momentum estimate \widehat{m_{t-1}} with the current estimate \widehat{m_t}  . This aligns with the NAG approach of using the look-ahead momentum vector for updating parameters.
The Nadam update rule becomes:
\theta_{t+1}=\theta_t-\eta\frac{\beta_1\widehat{m_t}+\frac{\left(1-\beta_1\right)g_t}{1-\beta_1^t}}{\sqrt{\widehat{v_t}}+\epsilon}
By combining the bias-corrected first moment estimate \widehat{m_t} and the current gradient g_t , weighted appropriately, Nadam aims to improve the responsiveness of the optimizer to recent gradient information, potentially enhancing convergence speed.
The pseudocode for Nadam is as follows:
Algorithm Nadam Optimization
Input: Initial parameters \theta_0, learning rate \eta, decay rates \beta_1, \beta_2, small constant \epsilon
Initialize m_0\gets0,v_0\gets0,t\gets0
While not converged do
t\ \gets t\ +\ 1
Compute gradient: g_t=\nabla_\theta J\left(\theta_{t-1}\right)
Update biased first moment: m_t\gets\beta_1\cdot m_{t-1}+\left(1-\beta_1\right)\cdot g_t
Update biased second moment: v_t\gets\beta_2\cdot v_{t-1}+\left(1-\beta_2\right)\cdot g_t^2
Bias correction: \widehat{m_t}\gets\frac{m_t}{1-\beta_1^t}, \widehat{v_t}\gets\frac{v_t}{1-\beta_2^t}
Nesterov adjustment: m_t^\prime\gets\beta_1\cdot\widehat{m_t}+\frac{\left(1-\beta_1\right)\cdot g_t}{1-\beta_1^t}
Update parameters: \theta_t\gets\theta_{t-1}-\eta\cdot\frac{m_t^\prime}{\sqrt{\widehat{v_t}}+\epsilon}
Output: Optimized parameters \theta_t
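A direct Python translation of this pseudocode is sketched below, assuming grad_fn returns \nabla_\theta J at the given parameters; for simplicity, the convergence test is replaced by a fixed number of iterations.

```python
import numpy as np

def nadam(grad_fn, theta0, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8, n_steps=1000):
    """Minimal Nadam loop following the pseudocode above."""
    theta = np.asarray(theta0, dtype=float)
    m = np.zeros_like(theta)
    v = np.zeros_like(theta)
    for t in range(1, n_steps + 1):
        g = grad_fn(theta)
        m = beta1 * m + (1 - beta1) * g                # biased first moment
        v = beta2 * v + (1 - beta2) * g**2             # biased second moment
        m_hat = m / (1 - beta1**t)                     # bias corrections
        v_hat = v / (1 - beta2**t)
        m_prime = beta1 * m_hat + (1 - beta1) * g / (1 - beta1**t)  # Nesterov adjustment
        theta = theta - lr * m_prime / (np.sqrt(v_hat) + eps)
    return theta
```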
Numerical Examples
In this section, we provide numerical examples to illustrate how the Nadam algorithm operates. We choose a simple quadratic function to make the calculations straightforward and easy to follow, and then we demonstrate the code implementation for a linear regression task.
Minimizing a Quadratic Function
Consider the cost function:
J\left(\theta\right)=\frac{1}{2}\theta^2
which has its global minimum at \theta=0. We aim to minimize this function using the Nadam optimizer, starting from an initial parameter value of \theta_0=5.
We set the hyperparameters as follows: learning rate \eta=0.1, exponential decay rates \beta_1=0.9 and \beta_2=0.999, numerical stability constant \epsilon={10}^{-8}, and initialize the first and second moment estimates and the time step as m_0=0, v_0=0, t=0.
Iteration 1
1. Time Step Update: t=t+1=1
2. Compute Gradient at Current Parameter: g_1=\nabla_\theta J\left(\theta_0\right)=\theta_0=5
3. Update Biased First Moment Estimate:
m_1=\beta_1m_0+\left(1-\beta_1\right)g_1=0.9\times0+0.1\times5=0.5
4. Update Biased Second Moment Estimate:
v_1=\beta_2v_0+\left(1-\beta_2\right)g_1^2=0.999\times0+0.001\times25=0.025
5. Compute Bias-Corrected Estimates:
\widehat{m_1}=\frac{m_1}{1-\beta_1^t}=\frac{0.5}{1-{0.9}^1}=\frac{0.5}{0.1}=5
\widehat{v_1}=\frac{v_1}{1-\beta_2^t}=\frac{0.025}{1-{0.999}^1}=\frac{0.025}{0.001}=25
6. Nesterov Adjustment for First Moment:
m_1^\prime=\beta_1\widehat{m_1}+\frac{\left(1-\beta_1\right)g_1}{1-\beta_1^t}=0.9\times5+\frac{0.1\times5}{0.1}=4.5+5=9.5
7. Parameter Update:
\theta_1=\theta_0-\eta\frac{m_1^\prime}{\sqrt{\widehat{v_1}}+\epsilon}=5-0.1\times\frac{9.5}{\sqrt{25}+{10}^{-8}}=5-0.1\times1.9=5-0.19=4.81
Iteration 2
1. Time Step Update: t=t+1=2
2. Compute Gradient at Current Parameter: g_2=\nabla_\theta J\left(\theta_1\right)=\theta_1=4.81
3. Update Biased First Moment Estimate:
m_2=\beta_1m_1+\left(1-\beta_1\right)g_2=0.9\times0.5+0.1\times4.81=0.45+0.481=0.931
4. Update Biased Second Moment Estimate:
v_2=\beta_2v_1+\left(1-\beta_2\right)g_2^2=0.999\times0.025+0.001\times\left(4.81\right)^2=0.024975+0.0231361=0.0481111
5. Compute Bias-Corrected Estimates:
\widehat{m_2}=\frac{m_2}{1-\beta_1^t}=\frac{0.931}{1-{0.9}^2}=\frac{0.931}{0.19}=4.9
\widehat{v_2}=\frac{v_2}{1-\beta_2^t}=\frac{0.0481111}{1-{0.999}^2}=\frac{0.0481111}{0.001999}\approx24.0676
6. Nesterov Adjustment for First Moment:
m_2^\prime=\beta_1\widehat{m_2}+\frac{\left(1-\beta_1\right)g_2}{1-\beta_1^t}=0.9\times4.9+\frac{0.1\times4.81}{0.19}=4.41+2.5316=6.9416
7. Parameter Update:
\theta_2=\theta_1-\eta\frac{m_2^\prime}{\sqrt{\widehat{v_2}}+\epsilon}=4.81-0.1\times\frac{6.9416}{\sqrt{24.0676}+{10}^{-8}}=4.81-0.1\times1.4150=4.81-0.1415=4.6685
Iteration 3
1. Time Step Update: t=t+1=3
2. Compute Gradient at Current Parameter: g_3=\nabla_\theta J\left(\theta_2\right)=\theta_2=4.6685
3. Update Biased First Moment Estimate:
m_3=\beta_1m_2+\left(1-\beta_1\right)g_3=0.9\times0.931+0.1\times4.6685=0.8379+0.46685=1.30475
4. Update Biased Second Moment Estimate:
v_3=\beta_2v_2+\left(1-\beta_2\right)g_3^2=0.999\times0.0481111+0.001\times\left(4.6685\right)^2=0.0480630+0.0217949=0.0698579
5. Compute Bias-Corrected Estimates:
\widehat{m_3}=\frac{m_3}{1-\beta_1^t}=\frac{1.30475}{1-{0.9}^3}=\frac{1.30475}{0.271}\approx4.8146
\widehat{v_3}=\frac{v_3}{1-\beta_2^t}=\frac{0.0698579}{1-{0.999}^3}=\frac{0.0698579}{0.002997001}\approx23.3093
6. Nesterov Adjustment for First Moment:
m_3^\prime=\beta_1\widehat{m_3}+\frac{\left(1-\beta_1\right)g_3}{1-\beta_1^t}=0.9\times4.8146+\frac{0.1\times4.6685}{0.271}=4.3331+1.7227=6.0558
7. Parameter Update:
\theta_3=\theta_2-\eta\frac{m_3^\prime}{\sqrt{\widehat{v_3}}+\epsilon}=4.6685-0.1\times\frac{6.0558}{\sqrt{23.3093}+{10}^{-8}}=4.6685-0.1\times1.2543=4.6685-0.1254=4.5431
After three iterations, the parameter values have been updated as follows:
\theta_1=4.81
\theta_2=4.6685
\theta_3=4.5431
The parameter \theta is gradually moving towards the global minimum at \theta=0, demonstrating how the Nadam optimizer adjusts the parameter efficiently.
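These hand calculations can be verified with a short Python loop that applies the same update equations with the same hyperparameters and prints the parameter after each iteration.

```python
theta, m, v = 5.0, 0.0, 0.0
lr, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8

for t in range(1, 4):
    g = theta                                    # gradient of J(theta) = 0.5 * theta^2
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g**2
    m_hat = m / (1 - beta1**t)
    v_hat = v / (1 - beta2**t)
    m_prime = beta1 * m_hat + (1 - beta1) * g / (1 - beta1**t)
    theta -= lr * m_prime / (v_hat**0.5 + eps)
    print(f"iteration {t}: theta = {theta:.4f}")  # approx. 4.8100, 4.6685, 4.5431
```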
Linear Regression Task
We’ll use a simple linear regression task where we predict the price of a house based on its size. We’ll define the hypothesis function, cost function, optimization problem, and provide Python code to implement and visualize Nadam on the dataset.
Dataset
Consider the following dataset representing the size (in square feet) and price (in thousands of dollars) of several houses:
Size (sq ft):  50   60   70   80   90   100  110  120  130  140  150
Price ($K):    150  180  200  240  300  330  360  400  450  500  550
We aim to model the relationship between the size of a house and its price using a linear hypothesis function:
f_\theta\left(x\right)=\theta_0+\theta_1x
The cost function measures the discrepancy between the predicted prices and the actual prices. We use one-half of the sum of squared errors as the cost function:
J\left(\theta\right)=\frac{1}{2}\sum_{i=1}^{n}\left(f_\theta\left(x_i\right)-y_i\right)^2
The goal is to find the optimal parameters \theta=\left(\theta_0,\theta_1\right) that minimize the cost function:
\mathrm{argmin}_\theta\frac{1}{2}\sum_{i=1}^{n}\left(f_\theta\left(x_i\right)-y_i\right)^2
To minimize the cost function using gradient-based optimization, we compute the partial derivatives with respect to \theta_0 and \theta_1:
\frac{\partial J\left(\theta\right)}{\partial\theta_0}=\sum_{i=1}^{n}\left(f_\theta\left(x_i\right)-y_i\right)
\frac{\partial J\left(\theta\right)}{\partial\theta_1}=\sum_{i=1}^{n}{\left(f_\theta\left(x_i\right)-y_i\right)x_i}
Python code
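One possible implementation is sketched below: it runs Nadam on the cost and gradients defined above and plots the fitted line together with the cost history. The learning rate and iteration count are illustrative choices rather than values prescribed here.

```python
import numpy as np
import matplotlib.pyplot as plt

# House size (sq ft) and price ($K) from the table above.
x = np.array([50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150], dtype=float)
y = np.array([150, 180, 200, 240, 300, 330, 360, 400, 450, 500, 550], dtype=float)

def cost(theta):
    return 0.5 * np.sum((theta[0] + theta[1] * x - y) ** 2)

def gradient(theta):
    residual = theta[0] + theta[1] * x - y
    return np.array([residual.sum(), (residual * x).sum()])

theta = np.zeros(2)                     # [theta_0, theta_1]
m = np.zeros(2)
v = np.zeros(2)
lr, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8
history = []

for t in range(1, 20001):
    g = gradient(theta)
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g**2
    m_hat = m / (1 - beta1**t)
    v_hat = v / (1 - beta2**t)
    m_prime = beta1 * m_hat + (1 - beta1) * g / (1 - beta1**t)
    theta -= lr * m_prime / (np.sqrt(v_hat) + eps)
    history.append(cost(theta))

print(f"theta_0 = {theta[0]:.2f}, theta_1 = {theta[1]:.2f}")

# Visualize the fitted line and the cost history.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.scatter(x, y, label="data")
ax1.plot(x, theta[0] + theta[1] * x, color="red", label="Nadam fit")
ax1.set_xlabel("Size (sq ft)")
ax1.set_ylabel("Price ($K)")
ax1.legend()
ax2.plot(history)
ax2.set_xlabel("Iteration")
ax2.set_ylabel("Cost J(theta)")
plt.tight_layout()
plt.show()
```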
Applications
Nadam’s integration of adaptive moment estimation with Nesterov accelerated gradients makes it a powerful tool for both convex and non-convex optimization problems, particularly in training complex neural networks. It has been widely applied in diverse domains: for instance, Nadam has been used to optimize convolutional neural networks (CNNs) in tasks like image denoising [5] and human chromosome classification [6], demonstrating enhanced performance and faster convergence. In the context of time-series prediction, Nadam has been employed to optimize Long Short-Term Memory (LSTM) networks for predicting pressure in coal mining working faces [7]. Furthermore, it has shown effectiveness in text sentiment analysis by training LSTMs, where its combination with L2 regularization achieved higher accuracy in fewer epochs [8]. Nadam’s robustness and adaptability have led to its integration into major deep learning libraries such as TensorFlow [9] and PyTorch [10], making it a readily accessible optimizer for a wide range of machine learning tasks.
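In practice these built-in implementations are typically used directly rather than hand-coded. As an illustrative example, PyTorch's NAdam optimizer [10] can be attached to a model in a few lines; the toy model and synthetic data below are placeholders, not taken from any of the cited applications. The TensorFlow counterpart is tf.keras.optimizers.Nadam(learning_rate=0.001) [9].

```python
import torch
import torch.nn as nn

model = nn.Linear(1, 1)                                    # toy model: y = w*x + b
optimizer = torch.optim.NAdam(model.parameters(), lr=2e-3)
loss_fn = nn.MSELoss()

x = torch.randn(128, 1)
y = 3.0 * x + 0.5 + 0.1 * torch.randn(128, 1)              # synthetic data

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```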
Conclusion
Nadam combines the adaptive learning rate adjustment of Adam with the anticipatory gradient updates of Nesterov momentum, making it a powerful extension of Adam. By incorporating the Nesterov approach, Nadam calculates gradients at a projected future position, which allows for more responsive and accurate updates, improving convergence speed and stability. This makes Nadam a robust tool for optimization tasks, particularly for training deep neural networks in both convex and non-convex settings. While Nadam has been effectively applied in areas like image classification, time-series prediction, and natural language processing, its suitability for specific problem types remains a topic for further exploration and research. Overall, Nadam offers several advantages, including faster convergence, improved handling of noisy gradients, and better generalization compared to traditional methods like SGD, Momentum, or Adam. These strengths make Nadam a versatile and reliable optimizer in modern machine learning applications.
References
[1] T. Dozat, “Incorporating Nesterov Momentum into Adam,” 2016.
[2] J. Brownlee, “Gradient Descent Optimization With Nadam From Scratch,” MachineLearningMastery.com. Accessed: Oct. 27, 2024. [Online]. Available: https://www.machinelearningmastery.com/gradient-descent-optimization-with-nadam-from-scratch/
[3] S. Ruder, “An overview of gradient descent optimization algorithms,” arXiv:1609.04747, Jun. 2017. doi: 10.48550/arXiv.1609.04747.
[4] D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” arXiv:1412.6980, Jan. 2017. doi: 10.48550/arXiv.1412.6980.
[5] L. Fan and Z. Long, “Optimization of Nadam algorithm for image denoising based on convolutional neural network,” in 2020 7th International Conference on Information Science and Control Engineering (ICISCE), Dec. 2020, pp. 957–961. doi: 10.1109/ICISCE50968.2020.00197.
[6] D. Menaka and S. G. Vaidyanathan, “Chromenet: a CNN architecture with comparison of optimizers for classification of human chromosome images,” Multidimens. Syst. Signal Process., vol. 33, no. 3, pp. 747–768, Sep. 2022, doi: 10.1007/s11045-022-00819-x.
[7] J. Lu, Z. Liu, W. Zhang, J. Zheng, and C. Han, “Pressure Prediction Study of Coal Mining Working Face Based on Nadam-LSTM,” IEEE Access, vol. 11, pp. 83867–83880, 2023, doi: 10.1109/ACCESS.2023.3302516.
[8] J. Wang and Z. Cao, “Chinese text sentiment analysis using LSTM network based on L2 and Nadam,” in 2017 IEEE 17th International Conference on Communication Technology (ICCT), Oct. 2017, pp. 1891–1895. doi: 10.1109/ICCT.2017.8359958.
[9] “tf.keras.optimizers.Nadam | TensorFlow v2.16.1,” TensorFlow. Accessed: Dec. 03, 2024. [Online]. Available: https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Nadam
[10] “NAdam — PyTorch 2.5 documentation.” Accessed: Dec. 03, 2024. [Online]. Available: https://pytorch.org/docs/stable/generated/torch.optim.NAdam.html
