Adamax

Author: Chengcong Xu (cx253), Jessica Liu (hl2482), Xiaolin Bu (xb58), Qiaoyue Ye (qy252), Haoru Feng (hf352) (ChemE 6800 Fall 2024)

Stewards: Nathan Preuss, Wei-Han Chen, Tianqi Xiao, Guoqing Hu

Introduction

Adamax is an optimization algorithm introduced by Kingma and Ba in their Adam optimizer paper (2014). It improves upon the Adam algorithm by replacing the second moment's root mean square (RMS) norm with the infinity norm (<math>L_\infty</math>). This change makes Adamax more robust and numerically stable, especially when handling sparse gradients, noisy updates, or optimization problems with significant gradient variations.

Adamax dynamically adjusts learning rates for individual parameters, making it well-suited for training deep neural networks, large-scale machine learning models, and tasks involving high-dimensional parameter spaces.

Algorithm Discussion

The Adamax optimizer, a variant of the Adam optimizer, adapts the learning rate for each parameter based on the first moment estimate and the infinity norm of past gradients. This approach makes it particularly robust for handling sparse gradients and stable under certain training conditions. By replacing the second moment estimate with the infinity norm, Adamax simplifies the parameter update while retaining the core benefits of adaptive learning rates.
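
The "infinity norm" name reflects how this quantity arises in Kingma and Ba's derivation: the exponentially weighted <math>L^p</math> norm of past gradients converges, as <math>p \to \infty</math>, to a simple recursive maximum:

   <math>u_t = \lim_{p \to \infty} \left( \beta_2^p \, u_{t-1}^p + (1 - \beta_2^p)\, |g_t|^p \right)^{1/p} = \max\left(\beta_2 \cdot u_{t-1},\ |g_t|\right)</math>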

Given the parameters <math>\theta</math>, a learning rate <math>\alpha</math>, and decay rates <math>\beta_1</math> and <math>\beta_2</math>, Adamax follows these steps:

Initialize:

  • Initialize parameters <math>\theta_0</math>, the first-moment estimate <math>m_0 = 0</math>, and the exponentially weighted infinity norm <math>u_0 = 0</math>.
  • Set hyperparameters:
  <math>\alpha</math>: Learning rate
  <math>\beta_1</math>: Exponential decay rate for the first moment
  <math>\beta_2</math>: Exponential decay rate for the infinity norm
  <math>\epsilon</math>: Small constant to avoid division by zero

For each time step <math>t</math>:

1. Compute Gradient: <math>g_t = \nabla_{\theta} J(\theta_{t-1})</math>

2. Update First Moment Estimate: <math>m_t = \beta_1 \cdot m_{t-1} + (1 - \beta_1) \cdot g_t</math>

3. Update Infinity Norm: <math>u_t = \max(\beta_2 \cdot u_{t-1}, |g_t|)</math>

4. Bias Correction for the First Moment: <math>\hat{m}_t = \frac{m_t}{1 - \beta_1^t}</math>

5. Parameter Update: <math>\theta_t = \theta_{t-1} - \alpha \cdot \frac{\hat{m}_t}{u_t + \epsilon}</math>

Pseudocode for Adamax

For <math>t = 1</math> to <math>T</math>:

   <math>g_t = \nabla_{\theta} J(\theta_{t-1})</math>
   <math>m_t = \beta_1 \cdot m_{t-1} + (1 - \beta_1) \cdot g_t</math>
   <math>u_t = \max(\beta_2 \cdot u_{t-1}, |g_t|)</math>
   <math>\hat{m}_t = \frac{m_t}{1 - \beta_1^t}</math>
   <math>\theta_t = \theta_{t-1} - \alpha \cdot \frac{\hat{m}_t}{u_t + \epsilon}</math>
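
The pseudocode above translates directly into a few lines of Python. The sketch below is illustrative only, not library code; the function name adamax_update, the scalar-parameter assumption, and the default hyperparameter values are choices made here for clarity.

<syntaxhighlight lang="python">
def adamax_update(theta, m, u, t, grad, alpha=0.002, beta1=0.9, beta2=0.999, eps=1e-8):
    """Perform one Adamax step for a scalar parameter (illustrative sketch).

    theta : current parameter value
    m     : first-moment estimate from the previous step
    u     : exponentially weighted infinity norm from the previous step
    t     : current time step (1-based)
    grad  : gradient of the objective evaluated at theta
    """
    m = beta1 * m + (1 - beta1) * grad         # update first-moment estimate
    u = max(beta2 * u, abs(grad))              # update infinity norm
    m_hat = m / (1 - beta1 ** t)               # bias-correct the first moment
    theta = theta - alpha * m_hat / (u + eps)  # parameter update
    return theta, m, u
</syntaxhighlight>

Calling this function in a loop with <math>g_t = 2x_{t-1}</math> and <math>\alpha = 0.1</math> reproduces the worked example in the next section.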

Numerical Examples

To illustrate the Adamax optimization algorithm, we will minimize the quadratic function <math>f(x) = x^2</math> with step-by-step calculations.

Problem Setup

  • Optimization Objective: Minimize <math>f(x) = x^2</math>, which reaches its minimum at <math>x = 0</math> with <math>f(0) = 0</math>.
  • Initial Parameter: Start with <math>x_0 = 2.0</math>.
  • Gradient Formula: <math>g_t = 2x_{t-1}</math>, which determines the direction and rate of parameter change.
  • Hyperparameters:

Learning Rate: <math>\alpha = 0.1</math>, controls the step size.

First Moment Decay Rate: <math>\beta_1 = 0.9</math>, determines how past gradients influence the current gradient estimate.

Infinity Norm Decay Rate: <math>\beta_2 = 0.999</math>, governs the decay of the infinity norm used for scaling updates.

Numerical Stability Constant: <math>\epsilon = 10^{-8}</math>, prevents division by zero.

  • Initialization: <math>m_0 = 0, u_0 = 0, t = 0</math>

Step-by-Step Calculations

Iteration 1: <math>t = 1</math>

  • Gradient Calculation: <math>g_1 = 2x_0 = 2 \cdot 2.0 = 4.0</math>

The gradient indicates the steepest direction and magnitude for reducing <math>f(x)</math>. A positive gradient shows <math>x_0</math> must decrease to minimize the function.

  • First Moment Update: <math>m_1 = \beta_1 \cdot m_0 + (1 - \beta_1) \cdot g_1 = 0.9 \cdot 0 + 0.1 \cdot 4.0 = 0.4</math>

The first moment is a running average of past gradients, smoothing out fluctuations.

  • Infinity Norm Update: <math>u_1 = \max(\beta_2 \cdot u_0, |g_1|) = \max(0.999 \cdot 0, 4.0) = 4.0</math>

The infinity norm ensures updates are scaled by the largest observed gradient, stabilizing step sizes.

  • Bias-Corrected Learning Rate: <math>\alpha_1 = \frac{\alpha}{1 - \beta_1^1} = \frac{0.1}{1 - 0.9} = 1.0</math>

The learning rate is corrected for the bias introduced by initializing <math>m_0 = 0</math>, ensuring effective parameter updates; folding this correction into the step size is equivalent to applying the bias-corrected first moment <math>\hat{m}_t</math> from the algorithm above.

  • Parameter Update: <math>x_1 = x_0 - \alpha_1 \cdot \frac{m_1}{u_1 + \epsilon} = 2.0 - 1.0 \cdot \frac{0.4}{4.0 + 10^{-8}} \approx 1.9</math>

The parameter moves closer to the function's minimum at <math>x = 0</math>.

Iteration 2: <math>t = 2</math>

  • Gradient Calculation: <math>g_2 = 2x_1 = 2 \cdot 1.9 = 3.8</math>
  • First Moment Update: <math>m_2 = \beta_1 \cdot m_1 + (1 - \beta_1) \cdot g_2 = 0.9 \cdot 0.4 + 0.1 \cdot 3.8 = 0.74</math>
  • Infinity Norm Update: <math>u_2 = \max(\beta_2 \cdot u_1, |g_2|) = \max(0.999 \cdot 4.0, 3.8) = 3.996</math>
  • Bias-Corrected Learning Rate: <math>\alpha_2 = \frac{\alpha}{1 - \beta_1^2} = \frac{0.1}{1 - 0.81} \approx 0.5263</math>
  • Parameter Update: <math>x_2 = x_1 - \alpha_2 \cdot \frac{m_2}{u_2 + \epsilon} = 1.9 - 0.5263 \cdot \frac{0.74}{3.996 + 10^{-8}} \approx 1.8025</math>

The parameter continues to approach the minimum at <math>x = 0</math>.
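
The two iterations can be verified with a short script that applies the same updates. The hyperparameter values below are the ones assumed in the problem setup, and the bias correction is folded into the step size, matching the presentation above.

<syntaxhighlight lang="python">
# Reproduce the two hand-computed Adamax iterations for f(x) = x^2.
alpha, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8
x, m, u = 2.0, 0.0, 0.0  # initial parameter, first moment, infinity norm

for t in range(1, 3):
    g = 2 * x                                          # gradient of f(x) = x^2
    m = beta1 * m + (1 - beta1) * g                    # first-moment update
    u = max(beta2 * u, abs(g))                         # infinity-norm update
    step = (alpha / (1 - beta1 ** t)) * m / (u + eps)  # bias-corrected step
    x -= step
    print(f"t={t}: g={g:.4f}, m={m:.4f}, u={u:.4f}, x={x:.4f}")

# Prints x = 1.9000 after t = 1 and x = 1.8025 after t = 2 (to four decimals).
</syntaxhighlight>

Continuing the loop for additional iterations keeps moving <math>x</math> toward the minimum, with step sizes that shrink as the gradients decrease.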

Summary

Through these two iterations, Adamax effectively adjusts the parameter based on the computed gradients, moving it closer to the minimum. The use of the infinity norm stabilizes the updates, ensuring smooth convergence.

Applications

Natural Language Processing

Adamax is particularly effective in training transformer-based models like BERT and GPT. Its stability with sparse gradients makes it ideal for tasks such as text classification, machine translation, and named entity recognition.
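
As a usage illustration, Adamax is available as a built-in optimizer in common deep learning frameworks such as PyTorch. The sketch below shows how it might be selected for a small model; the toy model, random data, and hyperparameters are placeholders rather than a configuration from the text.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Toy classifier and random data standing in for a real NLP model and dataset.
model = nn.Linear(10, 2)
inputs = torch.randn(32, 10)
targets = torch.randint(0, 2, (32,))

optimizer = torch.optim.Adamax(model.parameters(), lr=0.002, betas=(0.9, 0.999), eps=1e-8)
loss_fn = nn.CrossEntropyLoss()

for _ in range(5):  # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
</syntaxhighlight>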

Computer Vision

In computer vision, Adamax optimizes deep CNNs for tasks like image classification and object detection. Its smooth convergence behavior has been observed to enhance performance in models like ResNet and DenseNet.

Reinforcement Learning

Adamax has been applied in training reinforcement learning agents, particularly in environments where gradient updates are inconsistent or noisy, such as robotic control and policy optimization.

Generative Models

For training generative models, including GANs and VAEs, Adamax provides robust optimization, improving stability and output quality during adversarial training.

Time-Series Forecasting

Adamax is used in financial and economic forecasting, where it handles noisy gradients effectively, resulting in stable and accurate time-series predictions.

Advantages over Other Approaches

  • Stability: Because updates are scaled by a running maximum of gradient magnitudes, the size of each parameter update is bounded by roughly the learning rate <math>\alpha</math>, so large gradient variations do not produce erratic steps.
  • Sparse Gradient Handling: Adamax is robust in scenarios with zero or near-zero gradients, common in NLP tasks.
  • Efficiency: Adamax is computationally efficient for high-dimensional optimization problems.

Conclusion

Adamax is a robust and efficient variant of the Adam optimizer that replaces the RMS norm with the infinity norm. Its ability to handle sparse gradients, noisy updates, and large parameter spaces makes it a widely used optimization method in natural language processing, computer vision, reinforcement learning, and generative modeling.

Future advancements may involve integrating Adamax with learning rate schedules and regularization techniques to further enhance its performance.

References