Adafactor

Author: Aolei Cao (ac3237), Ziyang Li (zl986), Junjia Liang (jl4439) (ChemE 6800 Fall 2024)

Stewards: Nathan Preuss, Wei-Han Chen, Tianqi Xiao, Guoqing Hu

Introduction

Adafactor is an efficient, adaptive learning rate optimization algorithm proposed by Noam Shazeer and Mitchell Stern from Google Research in 2018.[1]

Unlike the traditional Adam optimizer, Adafactor does not store complete second-order moment matrices. Instead, it employs a factorization approach that maintains gradient statistics only for the rows and columns of parameter matrices, significantly reducing memory usage. Moreover, Adafactor uses an adaptive learning rate, allowing it to dynamically adjust step sizes without a manually set global learning rate or heavy hyperparameter tuning. By default it also omits bias correction, yet it remains stable in large-batch training scenarios.[1] This efficiency makes it an ideal choice for training ultra-large-scale models such as T5.[2]

Adafactor’s efficient memory usage and strong performance make it widely applicable in areas such as Natural Language Processing (NLP). Compared to the Adam optimizer, Adafactor significantly reduces memory and computational resource requirements while maintaining comparable performance when training large-scale language and vision models.[3][6]

Problem formulation

1. Objective

Minimize the loss function $f(x)$, where $x \in \mathbb{R}^n$ is the weight vector to be optimized.

2. Parameters

  • Gradient: $G_t = \nabla f_t(X_{t-1})$

  • Second moment estimate: $\hat{V}_t = \hat{\beta}_{2t} \hat{V}_{t-1} + (1 - \hat{\beta}_{2t})\left(G_t^2 + \epsilon_1 1_n\right)$

  • Where:
    • $\hat{V}_t$ is the running average of the squared gradient.
    • $\hat{\beta}_{2t}$ is the corrected decay parameter.
    • $\epsilon_1$ is a regularization constant.
  • Step size: $\alpha_t = \max\left(\epsilon_2, \text{RMS}(X_{t-1})\right)\rho_t$

  • Where:
    • $\rho_t$ is the relative step size.
    • $\epsilon_2$ is a regularization constant.
    • $\text{RMS}$ is the root mean square, defined as: $\text{RMS}(U_t) = \sqrt{\text{Mean}_{x \in X}\left(u_{xt}^2\right)}$

3. Algorithms

Adafactor for Weighted Vectors

Inputs:

  • Initial point: $X_0 \in \mathbb{R}^n$
  • Relative step sizes: $\rho_t$ for $t = 1$ to $T$
  • Second moment decay: $\hat{\beta}_{2t}$ for $t = 1$ to $T$, with $\hat{\beta}_{21} = 0$
  • Regularization constants: $\epsilon_1$, $\epsilon_2$
  • Clipping threshold: $d$

Algorithm:

  • For $t = 1$ to $T$:
    • Compute adaptive step size: $\alpha_t = \max\left(\epsilon_2, \text{RMS}(X_{t-1})\right)\rho_t$
    • Compute gradient: $G_t = \nabla f_t(X_{t-1})$
    • Update second moment estimate: $\hat{V}_t = \hat{\beta}_{2t} \hat{V}_{t-1} + (1 - \hat{\beta}_{2t})\left(G_t^2 + \epsilon_1 1_n\right)$
    • Compute normalized gradient: $U_t = \dfrac{G_t}{\sqrt{\hat{V}_t}}$
    • Apply clipping: $\hat{U}_t = \dfrac{U_t}{\max\left(1, \text{RMS}(U_t)/d\right)}$
    • Update parameter: $X_t = X_{t-1} - \alpha_t \hat{U}_t$
  • End for
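
To make the vector variant concrete, here is a minimal sketch in Python/NumPy. The function name, the toy quadratic loss, and the training loop are illustrative assumptions rather than part of the original algorithm; the update itself follows the steps listed above together with the proposed schedules for $\rho_t$ and $\hat{\beta}_{2t}$.

```python
import numpy as np

def adafactor_vector_step(x, grad, v_hat, t, eps1=1e-30, eps2=1e-3, d=1.0):
    """One Adafactor update for a weight vector (hypothetical helper)."""
    rho_t = min(1e-2, 1.0 / np.sqrt(t))      # relative step size schedule
    beta2_t = 1.0 - t ** (-0.8)              # corrected second-moment decay

    # Adaptive step size: alpha_t = max(eps2, RMS(x_{t-1})) * rho_t
    alpha_t = max(eps2, np.sqrt(np.mean(x ** 2))) * rho_t

    # Running average of the squared gradient.
    v_hat = beta2_t * v_hat + (1.0 - beta2_t) * (grad ** 2 + eps1)

    # Normalized gradient, then clip its RMS to the threshold d.
    u = grad / np.sqrt(v_hat)
    u_hat = u / max(1.0, np.sqrt(np.mean(u ** 2)) / d)

    return x - alpha_t * u_hat, v_hat


# Hypothetical usage on the toy loss f(x) = 0.5 * ||x||^2 (gradient = x).
x, v_hat = np.array([1.0, -2.0, 3.0]), np.zeros(3)
for t in range(1, 6):
    x, v_hat = adafactor_vector_step(x, grad=x, v_hat=v_hat, t=t)
```

Since $\hat{\beta}_{21} = 0$, the first update depends only on the current squared gradient, which is why no separate bias correction of the second moment is needed.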

Adafactor for Weighted Matrices

Inputs:

  • Initial point: $X_0 \in \mathbb{R}^{n \times m}$
  • Relative step sizes: $\rho_t$ for $t = 1$ to $T$
  • Second moment decay: $\hat{\beta}_{2t}$ for $t = 1$ to $T$, with $\hat{\beta}_{21} = 0$
  • Regularization constants: $\epsilon_1$, $\epsilon_2$
  • Clipping threshold: $d$

Algorithm:

  • For $t = 1$ to $T$:
    • Compute adaptive step size: $\alpha_t = \max\left(\epsilon_2, \text{RMS}(X_{t-1})\right)\rho_t$
    • Compute gradient: $G_t = \nabla f_t(X_{t-1})$
    • Update row-wise second moment: $R_t = \hat{\beta}_{2t} R_{t-1} + (1 - \hat{\beta}_{2t})\left(G_t^2 + \epsilon_1 1_n 1_m^\top\right) 1_m$
    • Update column-wise second moment: $C_t = \hat{\beta}_{2t} C_{t-1} + (1 - \hat{\beta}_{2t})\, 1_n^\top \left(G_t^2 + \epsilon_1 1_n 1_m^\top\right)$
    • Update overall second moment estimate: $\hat{V}_t = \dfrac{R_t C_t}{1_n^\top R_t}$
    • Compute normalized gradient: $U_t = \dfrac{G_t}{\sqrt{\hat{V}_t}}$
    • Apply clipping: $\hat{U}_t = \dfrac{U_t}{\max\left(1, \text{RMS}(U_t)/d\right)}$
    • Update parameter: $X_t = X_{t-1} - \alpha_t \hat{U}_t$
  • End for
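
The memory saving of the matrix variant comes from persistently storing only the row statistics $R_t$ (length $n$) and column statistics $C_t$ (length $m$) instead of a full $n \times m$ second-moment matrix. A minimal Python/NumPy sketch of one factored update, with hypothetical function and variable names, is shown below.

```python
import numpy as np

def adafactor_matrix_step(X, G, R, C, t, eps1=1e-30, eps2=1e-3, d=1.0):
    """One Adafactor update for an n x m weight matrix (hypothetical helper)."""
    rho_t = min(1e-2, 1.0 / np.sqrt(t))      # relative step size schedule
    beta2_t = 1.0 - t ** (-0.8)              # corrected second-moment decay

    alpha_t = max(eps2, np.sqrt(np.mean(X ** 2))) * rho_t

    G2 = G ** 2 + eps1
    # Row and column statistics: only n + m numbers of state, not n * m.
    R = beta2_t * R + (1.0 - beta2_t) * G2.sum(axis=1)
    C = beta2_t * C + (1.0 - beta2_t) * G2.sum(axis=0)

    # Rank-1 reconstruction V_hat = R C^T / (1^T R), formed only transiently.
    V_hat = np.outer(R, C) / R.sum()

    U = G / np.sqrt(V_hat)
    U_hat = U / max(1.0, np.sqrt(np.mean(U ** 2)) / d)
    return X - alpha_t * U_hat, R, C


# Hypothetical usage with a toy 3 x 2 weight matrix and loss 0.5 * ||X||_F^2.
X = np.array([[0.1, -0.2], [0.3, 0.4], [-0.5, 0.6]])
R, C = np.zeros(3), np.zeros(2)
for t in range(1, 4):
    X, R, C = adafactor_matrix_step(X, G=X, R=R, C=C, t=t)
```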

4. Proposed Hyperparameters for Adafactor

  • Regularization constant 1: $\epsilon_1 = 10^{-30}$
  • Regularization constant 2: $\epsilon_2 = 10^{-3}$
  • Clipping threshold: $d = 1$
  • Relative step size: $\rho_t = \min\left(10^{-2}, \dfrac{1}{\sqrt{t}}\right)$
  • Second moment decay: $\hat{\beta}_{2t} = 1 - t^{-0.8}$
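
To see how these defaults behave, the short illustrative snippet below (Python/NumPy, hypothetical names) tabulates the two schedules over the step count $t$.

```python
import numpy as np

def rho(t):
    """Proposed relative step size schedule."""
    return min(1e-2, 1.0 / np.sqrt(t))

def beta2_hat(t):
    """Proposed corrected second-moment decay schedule."""
    return 1.0 - t ** (-0.8)

for t in [1, 100, 10_000, 1_000_000]:
    print(f"t={t:>7}  rho_t={rho(t):.6f}  beta2_hat_t={beta2_hat(t):.6f}")
```

The relative step size stays at $10^{-2}$ until $t > 10^4$ and then decays as $1/\sqrt{t}$, while $\hat{\beta}_{2t}$ starts at 0 and rises toward 1, so the running second-moment average reacts quickly at first and changes more slowly later in training.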

Numerical Examples

The following walks step by step through the computations of the first Adafactor iteration.

Problem setup

Initial weights ($X_0$):

Gradient for the first iteration ($G_1$), i.e., the gradient of the loss function with respect to $X$:

Hyperparameter setup

$\epsilon_2 = 10^{-3}$ (minimum learning rate scaling factor)

$\epsilon_1 = 10^{-30}$ (regularization constant)

$d = 1$ (clipping threshold)

$\rho_t = \min\left(10^{-2}, \dfrac{1}{\sqrt{t}}\right)$ (relative step size)

$\hat{\beta}_{2t} = 1 - t^{-0.8}$ (second moment decay)

Step 1: Learning Rate Scaling

Define the relative step size for the first iteration: $\rho_1 = \min\left(10^{-2}, \dfrac{1}{\sqrt{1}}\right) = 10^{-2}$

Step 1.1: Root Mean Square (RMS) calculation for $X_0$

RMS formula: $\text{RMS}(X_0) = \sqrt{\dfrac{1}{nm}\sum_{i,j} x_{ij}^2}$

Substitute the initial weights to obtain $\text{RMS}(X_0)$.

Step 1.2: Find the Learning Rate Scaling ($\alpha_1$)

Learning rate formula: $\alpha_1 = \max\left(\epsilon_2, \text{RMS}(X_0)\right)\rho_1$

Substitute the RMS value computed above to obtain $\alpha_1$.
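
Because the example's specific weight values are not reproduced here, the sketch below uses a hypothetical $X_0$ (placeholder numbers only) to show how $\text{RMS}(X_0)$ and $\alpha_1$ are computed in this step.

```python
import numpy as np

# Placeholder initial weights (illustrative only, not the example's actual X_0).
X0 = np.array([[0.1, 0.2],
               [0.3, 0.4]])

eps2 = 1e-3                              # minimum learning rate scaling factor
rho_1 = min(1e-2, 1.0 / np.sqrt(1))      # relative step size at t = 1 -> 0.01

rms_X0 = np.sqrt(np.mean(X0 ** 2))       # RMS of the initial weights ~ 0.2739
alpha_1 = max(eps2, rms_X0) * rho_1      # learning rate scaling ~ 2.74e-3
```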


Step 2: Compute $G_1^2$ (Element-wise Square of the Gradient)

Compute the squared value of each element in the gradient matrix $G_1$ to obtain $G_1^2$.



Step 3: Find the moment estimate

Compute the exponential moving average of the squared gradients to capture the variance or scale of the gradients.

Step 3.1: Compute row moments ($R_1$)

The row-wise second moments ($R_t$) are an exponential moving average of the past moments ($R_{t-1}$) and the current row-wise mean of the squared gradients ($G_t^2$), with the balance controlled by the decay parameter ($\hat{\beta}_{2t}$):

$R_t = \hat{\beta}_{2t} R_{t-1} + (1 - \hat{\beta}_{2t})\,\text{Mean}_{\text{row}}\!\left(G_t^2 + \epsilon_1\right)$

For $t = 1$:

Since $\hat{\beta}_{21} = 1 - 1^{-0.8} = 0$, the first iteration reduces to $R_1 = \text{Mean}_{\text{row}}\!\left(G_1^2 + \epsilon_1\right)$, and because $\epsilon_1 = 10^{-30}$ is negligibly small it can be ignored. The update of $R_1$ is:

Row-wise mean ($R_1$):

Step 3.2: Compute column moments ($C_1$)

The process is the same as for the row moments, but using the column-wise mean of the squared gradients:

$C_t = \hat{\beta}_{2t} C_{t-1} + (1 - \hat{\beta}_{2t})\,\text{Mean}_{\text{col}}\!\left(G_t^2 + \epsilon_1\right)$

Column-wise mean ($C_1$):

Step 3.3: Second Moment Estimate ($\hat{V}_1$)

The second moment estimate is calculated from the outer product of the row moments ($R_1$) and the column moments ($C_1$), normalized by the overall mean of the squared gradients:

$\hat{V}_1 = \dfrac{R_1 C_1^\top}{\text{Mean}\left(G_1^2\right)}$
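
The sketch below works through Steps 2–3 with a hypothetical gradient $G_1$ (placeholder numbers, not the example's actual values). Dividing the outer product by the overall mean of $G_1^2$ is equivalent to the sum-based form $R_t C_t^\top / (1_n^\top R_t)$ used in the algorithm section above.

```python
import numpy as np

# Placeholder first-iteration gradient (illustrative only).
G1 = np.array([[1.0, -0.1],
               [0.1,  0.2]])

G1_sq = G1 ** 2                    # Step 2: element-wise square of the gradient

# Step 3: beta2_hat_1 = 1 - 1**(-0.8) = 0 and eps1 = 1e-30 is negligible,
# so the first-iteration moments are just the current row/column means.
R1 = G1_sq.mean(axis=1)            # row-wise mean, one entry per row
C1 = G1_sq.mean(axis=0)            # column-wise mean, one entry per column

# Step 3.3: rank-1 second moment estimate.
V1_hat = np.outer(R1, C1) / G1_sq.mean()
```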


Step 4: Update the Vector ($U_1$)

The update vector is computed by scaling the gradient matrix $G_1$ element-wise with the inverse square root of the second moment estimate ($\hat{V}_1$).

Step 4.1: Find the value of $U_1$

Formula of $U_1$: $U_1 = \dfrac{G_1}{\sqrt{\hat{V}_1}}$

Substitute $G_1$ and $\hat{V}_1$.

Step 4.2: Clipped Update Vector ($\hat{U}_1$)

Scale the update vector ($U_1$) so that its RMS value does not exceed the predefined clipping threshold ($d$), maintaining stability in the updates.

Formula of $\hat{U}_1$: $\hat{U}_1 = \dfrac{U_1}{\max\left(1, \text{RMS}(U_1)/d\right)}$

Compute $\text{RMS}(U_1)$.

Since $\text{RMS}(U_1) > d$, scale $U_1$ by $\dfrac{d}{\text{RMS}(U_1)}$.


Step 5: Weight Update ($X_1$)

Adjust the weights ($X_0$) by subtracting the product of the learning rate ($\alpha_1$) and the clipped update vector ($\hat{U}_1$): $X_1 = X_0 - \alpha_1 \hat{U}_1$

This gives the result of the first iteration.
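
Continuing with the placeholder values from the earlier sketches (not the example's actual numbers), Steps 4–5 can be written as follows.

```python
import numpy as np

X0 = np.array([[0.1, 0.2], [0.3, 0.4]])       # placeholder weights from Step 1
G1 = np.array([[1.0, -0.1], [0.1, 0.2]])      # placeholder gradient from Step 2
G1_sq = G1 ** 2
V1_hat = np.outer(G1_sq.mean(axis=1), G1_sq.mean(axis=0)) / G1_sq.mean()
alpha_1, d = 2.74e-3, 1.0                     # alpha_1 as computed in Step 1's sketch

U1 = G1 / np.sqrt(V1_hat)                     # Step 4.1: normalized gradient
rms_U1 = np.sqrt(np.mean(U1 ** 2))            # ~ 2.15 for these placeholders
U1_hat = U1 / max(1.0, rms_U1 / d)            # Step 4.2: scaled down since RMS > d

X1 = X0 - alpha_1 * U1_hat                    # Step 5: weights after one iteration
```

With these placeholder numbers $\text{RMS}(U_1) \approx 2.15 > d$, so the update is shrunk by $d / \text{RMS}(U_1)$ before being applied, which is exactly the stabilizing effect the clipping step is designed to provide.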




Applications

Conclusion

References