AdamW

From Cornell University Computational Optimization Open Textbook - Optimization Wiki

Author: Yufeng Hao (yh2295), Zhengdao Tang (zt278), Yixiao Tian (yt669), Yijie Zhang (yz3384), Zheng Zhou (zz875) (ChemE 6800 Fall 2024)

Stewards: Nathan Preuss, Wei-Han Chen, Tianqi Xiao, Guoqing Hu

Introduction

AdamW is an influential optimization algorithm in deep learning, developed as a modification to the Adam optimizer to decouple weight decay from gradient-based updates[1]. This decoupling was introduced to address overfitting issues that often arise when using standard Adam, especially for large-scale neural network models.

By applying weight decay separately from the adaptive updates of parameters, AdamW achieves more effective regularization while retaining Adam’s strengths, such as adaptive learning rates and computational efficiency. This characteristic enables AdamW to achieve superior convergence and generalization compared to its predecessor, making it particularly advantageous for complex tasks involving large transformer-based architectures like BERT and GPT [2][3].

As deep learning models grow in scale and complexity, AdamW has become a preferred optimizer due to its robust and stable convergence properties. Research has shown that AdamW can yield improved validation accuracy, faster convergence, and better generalization compared to both standard Adam and stochastic gradient descent (SGD) with momentum, especially in large-scale applications[1] [2] [4].

Algorithm Discussion

The standard Adam optimizer integrates weight decay by adding a term proportional to the parameters directly to the gradient, effectively acting as an L2 regularization term. This approach can interfere with Adam’s adaptive learning rates, leading to suboptimal convergence characteristics[1].

AdamW addresses this shortcoming by decoupling the weight decay step from the gradient-based parameter updates. Weight decay is applied after the parameter update is performed, preserving the integrity of the adaptive learning rate mechanism while maintaining effective regularization. This decoupling leads to more stable and predictable training dynamics, which is critical for large-scale models prone to overfitting[1].
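
The distinction can be written out schematically, using the notation introduced in the next section ($\hat{m}_t$ and $\hat{v}_t$ are the bias-corrected moment estimates, $\alpha$ the learning rate, $\lambda$ the weight decay coefficient, and $\epsilon$ a small constant). Adam with L2 regularization folds the decay term into the gradient, so it is rescaled by the adaptive denominator:

$$g_t = \nabla f(\theta_{t-1}) + \lambda \theta_{t-1}, \qquad \theta_t = \theta_{t-1} - \alpha\,\frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}$$

AdamW instead leaves the gradient untouched and applies the decay directly to the parameters:

$$g_t = \nabla f(\theta_{t-1}), \qquad \theta_t = \theta_{t-1} - \alpha\left(\frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon} + \lambda \theta_{t-1}\right)$$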

Algorithm Steps

Given the parameters $\theta$, a learning rate $\alpha$, and a weight decay coefficient $\lambda$, AdamW follows these steps:

  • Initialize:
    • Initialize the parameters $\theta_0$, the first-moment estimate $m_0 = 0$, and the second-moment estimate $v_0 = 0$.
    • Set hyperparameters:
      • $\alpha$: learning rate
      • $\beta_1$: exponential decay rate for the first moment
      • $\beta_2$: exponential decay rate for the second moment
      • $\epsilon$: small constant to avoid division by zero
  • For each time step $t = 1, 2, \dots$:
    • Compute Gradient:
      • Calculate the gradient of the objective function: $g_t = \nabla_\theta f_t(\theta_{t-1})$
    • Update First Moment Estimate:
      • Update the exponentially decaying average of past gradients: $m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t$
    • Update Second Moment Estimate:
      • Update the exponentially decaying average of squared gradients (element-wise square): $v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t \odot g_t$, where $\odot$ denotes element-wise multiplication of $g_t$ with itself.
    • Bias Correction:
      • Compute bias-corrected first and second moment estimates: $\hat{m}_t = m_t / (1 - \beta_1^t)$ and $\hat{v}_t = v_t / (1 - \beta_2^t)$
    • Parameter Update with Weight Decay:
      • Update the parameters with weight decay applied separately from the gradient step: $\theta_t = \theta_{t-1} - \alpha \left( \hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon) + \lambda \theta_{t-1} \right)$
      • This form highlights that weight decay is applied as a separate additive term in the parameter update, reinforcing the decoupling concept.

Pseudocode for AdamW

Initialize $\theta_0$, $m_0 \leftarrow 0$, $v_0 \leftarrow 0$

Set hyperparameters: $\alpha$, $\beta_1$, $\beta_2$, $\epsilon$, $\lambda$

For $t = 1$ to $T$:

  # Compute gradient
  $g_t \leftarrow \nabla_\theta f_t(\theta_{t-1})$

  # Update biased moment estimates
  $m_t \leftarrow \beta_1 m_{t-1} + (1 - \beta_1)\, g_t$
  $v_t \leftarrow \beta_2 v_{t-1} + (1 - \beta_2)\, g_t \odot g_t$

  # Bias correction
  $\hat{m}_t \leftarrow m_t / (1 - \beta_1^t)$
  $\hat{v}_t \leftarrow v_t / (1 - \beta_2^t)$

  # Update parameters without weight decay
  $\theta_t \leftarrow \theta_{t-1} - \alpha\, \hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon)$

  # Apply decoupled weight decay
  $\theta_t \leftarrow \theta_t - \alpha \lambda \theta_{t-1}$
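
Translated into code, the same update fits in a short NumPy routine. The following is a minimal sketch; the function name adamw_update and the variable names are chosen here for illustration, and the default hyperparameter values simply follow common practice:

import numpy as np

def adamw_update(theta, m, v, grad, t, lr=1e-3, beta1=0.9, beta2=0.999,
                 eps=1e-8, weight_decay=1e-2):
    # One AdamW step for a parameter vector theta; t is the 1-based step count.
    m = beta1 * m + (1 - beta1) * grad              # first moment estimate
    v = beta2 * v + (1 - beta2) * grad * grad       # second moment estimate
    m_hat = m / (1 - beta1 ** t)                    # bias correction
    v_hat = v / (1 - beta2 ** t)
    # Adaptive gradient step plus decoupled weight decay, applied together
    theta = theta - lr * (m_hat / (np.sqrt(v_hat) + eps) + weight_decay * theta)
    return theta, m, v

# Example usage: minimize f(theta) = sum(theta**2)
theta = np.array([2.0, -1.5])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 201):
    grad = 2 * theta                                # gradient of the quadratic objective
    theta, m, v = adamw_update(theta, m, v, grad, t, lr=0.1)
print(theta)                                        # entries are driven toward 0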

Numerical Examples

To demonstrate the functionality of the AdamW algorithm, a straightforward numerical example is presented. This example utilizes small dimensions and simplified values to clearly illustrate the key calculations and steps involved in the algorithm.

Example Setup

Consider the following:

  • Initial parameter: $\theta_0$
  • Learning rate: $\alpha$
  • Weight decay: $\lambda$
  • First-moment decay rate: $\beta_1$
  • Second-moment decay rate: $\beta_2$
  • Small constant: $\epsilon$
  • Objective function gradient: $g_t = \nabla f(\theta_{t-1})$

For this example, assume a simple quadratic objective of the form $f(\theta) = \theta^2$, so that the gradient is $\nabla f(\theta) = 2\theta$.

Step-by-Step Calculation

Initialization

  • First moment estimate: $m_0 = 0$
  • Second moment estimate: $v_0 = 0$
  • Initial parameter: $\theta_0$

Iteration 1

  • Step 1: Compute Gradient: $g_1 = \nabla f(\theta_0) = 2\theta_0$
  • Step 2: Update First Moment Estimate: $m_1 = \beta_1 m_0 + (1 - \beta_1)\, g_1 = (1 - \beta_1)\, g_1$
  • Step 3: Update Second Moment Estimate: $v_1 = \beta_2 v_0 + (1 - \beta_2)\, g_1^2 = (1 - \beta_2)\, g_1^2$
  • Step 4: Bias Correction for First Moment: $\hat{m}_1 = m_1 / (1 - \beta_1) = g_1$
  • Step 5: Bias Correction for Second Moment: $\hat{v}_1 = v_1 / (1 - \beta_2) = g_1^2$
  • Step 6: Parameter Update with Weight Decay:
    • Gradient Update: $\hat{m}_1 / (\sqrt{\hat{v}_1} + \epsilon) = g_1 / (|g_1| + \epsilon)$
    • Simplify the denominator: $\sqrt{\hat{v}_1} + \epsilon = |g_1| + \epsilon$
    • Compute the update: $-\alpha\, \hat{m}_1 / (\sqrt{\hat{v}_1} + \epsilon) \approx -\alpha\, \operatorname{sign}(g_1)$
    • Weight Decay: $-\alpha \lambda \theta_0$
    • Updated Parameter: $\theta_1 = \theta_0 - \alpha \left( \hat{m}_1 / (\sqrt{\hat{v}_1} + \epsilon) + \lambda \theta_0 \right) \approx \theta_0 - \alpha \operatorname{sign}(g_1) - \alpha \lambda \theta_0$

Iteration 2

  • Step 1: Compute Gradient: $g_2 = \nabla f(\theta_1) = 2\theta_1$
  • Step 2: Update First Moment Estimate: $m_2 = \beta_1 m_1 + (1 - \beta_1)\, g_2$
  • Step 3: Update Second Moment Estimate: $v_2 = \beta_2 v_1 + (1 - \beta_2)\, g_2^2$
  • Step 4: Bias Correction for First Moment: $\hat{m}_2 = m_2 / (1 - \beta_1^2)$
  • Step 5: Bias Correction for Second Moment: $\hat{v}_2 = v_2 / (1 - \beta_2^2)$
  • Step 6: Parameter Update with Weight Decay:
    • Gradient Update: $\hat{m}_2 / (\sqrt{\hat{v}_2} + \epsilon)$
    • Simplify the denominator: $\sqrt{\hat{v}_2} + \epsilon$
    • Compute the update: $-\alpha\, \hat{m}_2 / (\sqrt{\hat{v}_2} + \epsilon)$
    • Weight Decay: $-\alpha \lambda \theta_1$
    • Updated Parameter: $\theta_2 = \theta_1 - \alpha \left( \hat{m}_2 / (\sqrt{\hat{v}_2} + \epsilon) + \lambda \theta_1 \right)$

Explanations for Each Step

  • Step 1: The gradient is calculated based on the current parameter value. For the quadratic function $f(\theta) = \theta^2$, the gradient $2\theta$ is the slope of the function at the current value of $\theta$.
  • Steps 2 and 3: The first and second moment estimates ($m_t$ and $v_t$) are updated using exponentially decaying averages of past gradients and squared gradients, respectively. These updates help the optimizer adjust the learning rate dynamically for each parameter, improving efficiency.
  • Steps 4 and 5: Bias correction is applied to the moment estimates to address their initial bias toward zero. This correction is particularly important during the early stages of optimization, ensuring more accurate estimates.
  • Step 6: The parameter is updated in two key parts:
    • Gradient Update: The parameter is adjusted in the opposite direction of the gradient. This adjustment is scaled by the learning rate and adapted using the corrected moment estimates.
    • Weight Decay: A regularization term is applied by reducing the parameter's value slightly. This encourages smaller parameter values, which helps to prevent overfitting.

By repeatedly performing these steps, the AdamW optimizer effectively moves the parameters closer to the function's minimum while controlling overfitting through the use of decoupled weight decay.
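
Because the numerical values at each step depend on the chosen starting point and hyperparameters, the two iterations can also be traced with a short script. The settings below ($\theta_0 = 2$, $\alpha = 0.1$, $\lambda = 0.01$, $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$) and the objective $f(\theta) = \theta^2$ are illustrative assumptions rather than prescribed values:

import math

# Illustrative settings (assumed for this trace, not prescribed by the algorithm)
theta, lr, wd = 2.0, 0.1, 0.01
beta1, beta2, eps = 0.9, 0.999, 1e-8
m, v = 0.0, 0.0

for t in (1, 2):
    g = 2 * theta                                 # Step 1: gradient of f(theta) = theta**2
    m = beta1 * m + (1 - beta1) * g               # Step 2: first moment estimate
    v = beta2 * v + (1 - beta2) * g * g           # Step 3: second moment estimate
    m_hat = m / (1 - beta1 ** t)                  # Step 4: bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)                  # Step 5: bias-corrected second moment
    step = lr * m_hat / (math.sqrt(v_hat) + eps)  # Step 6: adaptive gradient step
    decay = lr * wd * theta                       # Step 6: decoupled weight decay term
    theta = theta - step - decay
    print(f"t={t}: g={g:.4f}, m_hat={m_hat:.4f}, v_hat={v_hat:.4f}, theta={theta:.4f}")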

Application

Areas of Application

AdamW is commonly used to optimize large-scale deep learning models in areas such as natural language processing (NLP), computer vision, reinforcement learning, and generative modeling[2][3][4].
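
In practice AdamW is rarely implemented by hand; deep learning frameworks provide it directly. The sketch below shows a typical PyTorch training setup, where the model, the synthetic batch, and the hyperparameter values are placeholders chosen purely for illustration:

import torch
import torch.nn as nn

# Placeholder model; any torch.nn.Module is handled the same way.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4,
                              betas=(0.9, 0.999), eps=1e-8, weight_decay=0.01)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(32, 128)              # synthetic batch of 32 feature vectors
targets = torch.randint(0, 10, (32,))      # synthetic class labels

for step in range(3):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()                       # adaptive step plus decoupled weight decay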

Natural Language Processing (NLP)

AdamW has been effectively employed in training large-scale transformer models like BERT and GPT. For BERT, improved downstream performance on NLP benchmarks has been reported compared to earlier optimizers [2]. Similarly, GPT-3’s training benefited from AdamW-like optimization for stable and efficient training [3].

Computer Vision

Vision Transformers (ViT) utilize AdamW to achieve state-of-the-art results in image classification tasks. Training with AdamW improved top-1 accuracy on ImageNet compared to traditional optimizers, contributing to the success of ViT models [4].

Reinforcement Learning

AdamW has been used in reinforcement learning scenarios where stable policy convergence is important. Empirical findings have demonstrated that AdamW leads to more predictable and stable training dynamics than standard Adam[1].

Generative Models

Generative models, including variants of GANs and VAEs, benefit from AdamW’s improved regularization properties. Evaluations have indicated that AdamW can result in more stable training and improved generative quality [1].

Time-Series Forecasting and Finance

Financial applications, such as stock price prediction, have employed AdamW to enhance training stability and predictive performance of deep learning models. Empirical studies have reported lower validation errors and reduced overfitting when using AdamW compared to standard Adam[5].

Advantages over Other Approaches

Quantitative studies have supported the superiority of AdamW over traditional Adam and other optimizers. The original AdamW paper demonstrated improved test accuracy and more stable validation losses [1]. Devlin et al. [2] reported that AdamW contributed to BERT’s superior performance on the GLUE benchmark, and Dosovitskiy et al. [4] showed that ViT models trained with AdamW achieved higher accuracy than models trained with classical optimizers like SGD with momentum.

Conclusion

AdamW is a highly effective optimization algorithm for training large-scale deep learning models. Its key innovation—decoupling weight decay from gradient-based parameter updates—preserves the adaptive learning rate mechanism, leading to improved generalization and stable convergence [1]. These properties make AdamW well-suited for modern architectures, including transformer-based models in NLP and computer vision, as well as for applications in reinforcement learning, generative modeling, and time-series forecasting [2] [4] [5].

As deep learning continues to evolve, AdamW is likely to remain a critical tool. Future work may involve integrating AdamW with learning rate schedules, second-order optimization techniques, or further algorithmic refinements to improve efficiency and robustness under varied and challenging training conditions.

References


  1. Loshchilov, I., & Hutter, F. (2017). Decoupled Weight Decay Regularization. arXiv preprint arXiv:1711.05101. https://arxiv.org/abs/1711.05101.
  2. Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). https://arxiv.org/abs/1810.04805.
  3. Brown, T. B., Mann, B., Ryder, N., Subbiah, M., et al. (2020). Language Models are Few-Shot Learners. Advances in Neural Information Processing Systems (NeurIPS). https://arxiv.org/abs/2005.14165.
  4. Dosovitskiy, A., Beyer, L., Kolesnikov, A., et al. (2021). An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. International Conference on Learning Representations (ICLR). https://arxiv.org/abs/2010.11929.
  5. Chen, X., Zhan, Y., Wu, W., Yang, Y., & Yang, Y. (2021). Improving Stock Movement Prediction with Adversarial Training and AdamW. IEEE Access, 9, 25842–25850. https://doi.org/10.1109/ACCESS.2021.3057083.