Author: Aolei Cao (ac3237), Ziyang Li (zl986), Junjia Liang (jl4439) (ChemE 6800 Fall 2024)
Stewards: Nathan Preuss, Wei-Han Chen, Tianqi Xiao, Guoqing Hu
Introduction
Adafactor is a stochastic gradient-based optimization algorithm proposed by Shazeer and Stern (2018). It retains the per-parameter adaptivity of Adam while sharply reducing optimizer memory: instead of storing a full second-moment estimate for every weight matrix, it stores only per-row and per-column statistics and reconstructs the estimate from their outer product. It further replaces Adam's absolute step sizes with relative step sizes scaled by the magnitude of the parameters, and stabilizes training through update clipping.
Problem formulation
1. Objective
Minimize the loss function <math>f(X)</math>, where <math>f:\mathbb{R}^n \to \mathbb{R}</math> and <math>X \in \mathbb{R}^n</math> is the weight vector to be optimized.
2. Parameters
- Second moment estimate: <math>\hat{V}_t = \hat{\beta}_{2t}\hat{V}_{t-1} + (1-\hat{\beta}_{2t})\left(G_t^2 + \epsilon_1 1_n\right)</math>
- Where:
- <math>\hat{V}_t</math> is the running average of the squared gradient.
- <math>\hat{\beta}_{2t}</math> is the corrected decay parameter.
- <math>\epsilon_1</math> is a regularization constant.
- Learning rate scaling: <math>\alpha_t = \max\left(\epsilon_2, RMS(X_{t-1})\right)\rho_t</math>
- Where:
- <math>\rho_t</math> is the relative step size.
- <math>\epsilon_2</math> is a regularization constant.
- <math>RMS</math> is the root mean square, defined as <math>RMS(X) = \sqrt{\tfrac{1}{n}\sum_{i=1}^{n} x_i^2}</math>, as sketched below.
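As a quick illustration of this definition, here is a minimal NumPy sketch; the helper name <code>rms</code> is our own for illustration, not part of the algorithm's notation.
<syntaxhighlight lang="python">
import numpy as np

def rms(x):
    # Root mean square: square root of the mean of the squared entries.
    return np.sqrt(np.mean(np.square(x)))

# Example: a 3-element weight vector
print(rms(np.array([0.3, -0.4, 0.5])))  # sqrt(0.5 / 3) ~= 0.408
</syntaxhighlight>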
3. Algorithms
Adafactor for Weighted Vectors
Inputs:
- Initial point: <math>X_0 \in \mathbb{R}^n</math>
- Relative step sizes: <math>\rho_t</math> for <math>t = 1</math> to <math>T</math>
- Second moment decay: <math>\hat{\beta}_{2t}</math> for <math>t = 1</math> to <math>T</math>, with <math>\hat{\beta}_{21} = 0</math>
- Regularization constants: <math>\epsilon_1, \epsilon_2</math>
- Clipping threshold: <math>d</math>
Algorithm:
- For <math>t = 1</math> to <math>T</math>:
- Compute adaptive step size: <math>\alpha_t = \max\left(\epsilon_2, RMS(X_{t-1})\right)\rho_t</math>
- Compute gradient: <math>G_t = \nabla f_t(X_{t-1})</math>
- Update second moment estimate: <math>\hat{V}_t = \hat{\beta}_{2t}\hat{V}_{t-1} + (1-\hat{\beta}_{2t})\left(G_t^2 + \epsilon_1 1_n\right)</math>
- Compute normalized gradient: <math>U_t = G_t/\sqrt{\hat{V}_t}</math>
- Apply clipping: <math>\hat{U}_t = U_t/\max\left(1, RMS(U_t)/d\right)</math>
- Update parameter: <math>X_t = X_{t-1} - \alpha_t\hat{U}_t</math>
- End for
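The loop above translates almost line-for-line into NumPy. The following is a minimal sketch, not a production implementation; the function name, argument order, and use of the proposed default schedules are our own choices for illustration.
<syntaxhighlight lang="python">
import numpy as np

def adafactor_vector_step(x, grad, v, t, eps1=1e-30, eps2=1e-3, d=1.0):
    # One Adafactor step for a weight vector x, given the gradient grad,
    # the running second moment v, and the iteration counter t (1-indexed).
    rms = lambda a: np.sqrt(np.mean(np.square(a)))
    rho = min(1e-2, 1.0 / np.sqrt(t))        # relative step size schedule
    beta2 = 1.0 - t ** (-0.8)                # decay schedule; equals 0 at t = 1
    alpha = max(eps2, rms(x)) * rho          # adaptive step size
    v = beta2 * v + (1.0 - beta2) * (grad ** 2 + eps1)  # second moment estimate
    u = grad / np.sqrt(v)                    # normalized gradient
    u_hat = u / max(1.0, rms(u) / d)         # update clipping
    return x - alpha * u_hat, v
</syntaxhighlight>
Because the decay is zero at <math>t = 1</math>, the state <math>v</math> can simply be initialized to zeros.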
Adafactor for Weighted Matrices
Inputs:
- Initial point: <math>X_0 \in \mathbb{R}^{n \times m}</math>
- Relative step sizes: <math>\rho_t</math> for <math>t = 1</math> to <math>T</math>
- Second moment decay: <math>\hat{\beta}_{2t}</math> for <math>t = 1</math> to <math>T</math>, with <math>\hat{\beta}_{21} = 0</math>
- Regularization constants: <math>\epsilon_1, \epsilon_2</math>
- Clipping threshold: <math>d</math>
Algorithm:
- For <math>t = 1</math> to <math>T</math>:
- Compute adaptive step size: <math>\alpha_t = \max\left(\epsilon_2, RMS(X_{t-1})\right)\rho_t</math>
- Compute gradient: <math>G_t = \nabla f_t(X_{t-1})</math>
- Update row-wise second moment: <math>R_t = \hat{\beta}_{2t}R_{t-1} + (1-\hat{\beta}_{2t})\left(G_t^2 + \epsilon_1 1_n 1_m^T\right)1_m</math>
- Update column-wise second moment: <math>C_t = \hat{\beta}_{2t}C_{t-1} + (1-\hat{\beta}_{2t})\,1_n^T\left(G_t^2 + \epsilon_1 1_n 1_m^T\right)</math>
- Update overall second moment estimate: <math>\hat{V}_t = R_t C_t / (1_n^T R_t)</math>
- Compute normalized gradient: <math>U_t = G_t/\sqrt{\hat{V}_t}</math>
- Apply clipping: <math>\hat{U}_t = U_t/\max\left(1, RMS(U_t)/d\right)</math>
- Update parameter: <math>X_t = X_{t-1} - \alpha_t\hat{U}_t</math>
- End for
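A matching sketch for the matrix case follows. Note that only the length-<math>n</math> row statistic and length-<math>m</math> column statistic are stored, which is the source of Adafactor's memory savings; as above, the interface is illustrative rather than canonical.
<syntaxhighlight lang="python">
import numpy as np

def adafactor_matrix_step(x, grad, r, c, t, eps1=1e-30, eps2=1e-3, d=1.0):
    # One Adafactor step for an n-by-m weight matrix x; r (length n) and
    # c (length m) are the running row and column second-moment statistics.
    rms = lambda a: np.sqrt(np.mean(np.square(a)))
    rho = min(1e-2, 1.0 / np.sqrt(t))
    beta2 = 1.0 - t ** (-0.8)
    alpha = max(eps2, rms(x)) * rho
    sq = grad ** 2 + eps1
    r = beta2 * r + (1.0 - beta2) * sq.sum(axis=1)  # row-wise second moment
    c = beta2 * c + (1.0 - beta2) * sq.sum(axis=0)  # column-wise second moment
    v = np.outer(r, c) / r.sum()        # factored estimate R C^T / (1^T R)
    u = grad / np.sqrt(v)               # normalized gradient
    u_hat = u / max(1.0, rms(u) / d)    # update clipping
    return x - alpha * u_hat, r, c
</syntaxhighlight>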
4. Proposed Hyperparameters for Adafactor
- Regularization constant 1: <math>\epsilon_1 = 10^{-30}</math>
- Regularization constant 2: <math>\epsilon_2 = 10^{-3}</math>
- Clipping threshold: <math>d = 1</math>
- Relative step size: <math>\rho_t = \min\left(10^{-2}, 1/\sqrt{t}\right)</math>
- Second moment decay: <math>\hat{\beta}_{2t} = 1 - t^{-0.8}</math>
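The two schedules are easy to tabulate; here is a small sketch of how they evolve with the iteration counter:
<syntaxhighlight lang="python">
import numpy as np

# Proposed schedules as functions of the iteration counter t
for t in (1, 10, 100, 10_000, 1_000_000):
    rho = min(1e-2, 1.0 / np.sqrt(t))   # relative step size
    beta2 = 1.0 - t ** (-0.8)           # second moment decay
    print(f"t={t:>7}  rho={rho:.5f}  beta2={beta2:.5f}")
# beta2 starts at 0 (the first step ignores any prior second-moment state)
# and approaches 1, so the running average looks further back as t grows.
</syntaxhighlight>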
Numerical Examples
Step-by-step instructions for determining the result of the first iteration.
Problem setup
Initial weights (<math>X_0</math>): a <math>3 \times 3</math> weight matrix. Its specific entries are not reproduced here; only its root mean square, <math>RMS(X_0) = 0.806</math>, is needed in Step 1.
Gradient for the first iteration (<math>G_1</math>), i.e. the gradient of the loss function with respect to <math>X</math>:
<math>G_1 = \begin{bmatrix} 0.3&-0.2&0.4\\ -0.5&0.6&-0.1\\ 0.2&-0.4&0.3 \end{bmatrix}</math>
Hyperparameters setup
<math>\epsilon_2 = 0.001</math> (minimum learning rate scaling factor)
<math>\epsilon_1 = 10^{-30}</math> (regularization constant)
<math>d = 1</math> (clipping threshold)
<math>\rho_1 = 0.01</math> (relative step size)
<math>\hat{\beta}_{21} = 0</math> (second moment decay)
Step 1: Learning Rate Scaling
Define the relative step size <math>\rho_1 = 0.01</math>.
Step 1.1: Root Mean Square (RMS) calculation for <math>X_0</math>
RMS formula:
<math>RMS(X_0) = \sqrt{\tfrac{1}{n}\sum_{i} x_i^2}</math>
Substitute the initial weights:
<math>RMS(X_0) = 0.806</math>
Step 1.2: Find the Learning Rate Scaling (<math>\alpha_1</math>):
Learning rate formula:
<math>\alpha_t = \max\left(\epsilon_2, RMS(X_{t-1})\right)\cdot\rho_t</math>
Substitute the RMS:
<math>\alpha_1 = \max(0.001, 0.806)\cdot 0.01 = 0.00806</math>
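This arithmetic can be checked in a couple of lines, using <math>RMS(X_0) = 0.806</math> as given in the setup:
<syntaxhighlight lang="python">
# Step 1 check: adaptive step size for the first iteration
eps2, rho_1, rms_x0 = 1e-3, 1e-2, 0.806
alpha_1 = max(eps2, rms_x0) * rho_1
print(alpha_1)  # 0.00806
</syntaxhighlight>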
Step 2: Compute <math>G^{2}_1</math> (Element-wise Square of Gradient)
Compute the squared value of each element in the gradient matrix <math>G_1</math>:
<math>G^{2}_1 = \begin{bmatrix} 0.3^2&(-0.2)^2&0.4^2\\ (-0.5)^2&0.6^2&(-0.1)^2\\ 0.2^2&(-0.4)^2&0.3^2 \end{bmatrix} = \begin{bmatrix} 0.09&0.04&0.16\\ 0.25&0.36&0.01\\ 0.04&0.16&0.09 \end{bmatrix}</math>
Step 3: Find the moment estimate
Compute the exponential moving average of the squared gradients to capture the variance or scale of the gradients.
Step 3.1: Compute row moments (<math>R_1</math>)
This equation computes the row-wise second moments (<math>R_t</math>) as an exponential moving average of the past moments (<math>R_{t-1}</math>) and the current row-wise mean of the squared gradients, with the balance controlled by the decay parameter (<math>\hat{\beta}_{2t}</math>):
<math>R_t = \hat{\beta}_{2t}R_{t-1} + (1-\hat{\beta}_{2t})\,\mathrm{Mean}_{\mathrm{row}}\left(G_t^2 + \epsilon_1\right)</math>
For <math>t = 1</math>: since <math>\hat{\beta}_{21} = 0</math>, the first iteration reduces to <math>R_1 = \mathrm{Mean}_{\mathrm{row}}\left(G_1^2 + \epsilon_1\right)</math>. And because <math>\epsilon_1 = 10^{-30}</math> is too small to matter, we can ignore it. The update of <math>R_1</math> is the row-wise mean:
<math>R_1 = \begin{bmatrix} \tfrac{0.09+0.04+0.16}{3}\\ \tfrac{0.25+0.36+0.01}{3}\\ \tfrac{0.04+0.16+0.09}{3} \end{bmatrix} = \begin{bmatrix} 0.0967\\ 0.2067\\ 0.0967 \end{bmatrix}</math>
Step 3.2: Compute column moments (<math>C_1</math>)
The process is the same as for the row moments. Column-wise mean:
<math>C_1 = \begin{bmatrix} \tfrac{0.09+0.25+0.04}{3}\\ \tfrac{0.04+0.36+0.16}{3}\\ \tfrac{0.16+0.01+0.09}{3} \end{bmatrix} = \begin{bmatrix} 0.1267\\ 0.1867\\ 0.0867 \end{bmatrix}</math>
Step 3.3: Second Moment Estimate (<math>\hat{V}_1</math>)
The second moment estimate is calculated as the outer product of the row moments (<math>R_1</math>) and the column moments (<math>C_1</math>). (Using means here, rather than the sums in the algorithm statement above, rescales <math>\hat{V}_1</math> by a constant factor only; since the update is renormalized during clipping in Step 4.2, and the RMS exceeds <math>d</math> under either scaling in this example, the final update is unaffected.)
<math>\hat{V}_1 = R_1 C_1^{T} = \begin{bmatrix} 0.0122&0.0180&0.0084\\ 0.0262&0.0386&0.0179\\ 0.0122&0.0180&0.0084 \end{bmatrix}</math>
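The row means, column means, and their outer product can be verified with NumPy; this reproduces the matrices above (rounded to four decimal places):
<syntaxhighlight lang="python">
import numpy as np

G1 = np.array([[ 0.3, -0.2,  0.4],
               [-0.5,  0.6, -0.1],
               [ 0.2, -0.4,  0.3]])
G1_sq = G1 ** 2                 # Step 2: element-wise square
R1 = G1_sq.mean(axis=1)         # Step 3.1: row-wise means
C1 = G1_sq.mean(axis=0)         # Step 3.2: column-wise means
V1 = np.outer(R1, C1)           # Step 3.3: outer-product estimate
print(np.round(R1, 4))          # [0.0967 0.2067 0.0967]
print(np.round(C1, 4))          # [0.1267 0.1867 0.0867]
print(np.round(V1, 4))          # matches the V_1 matrix above
</syntaxhighlight>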
Step 4: Update the vector (<math>U_t</math>)
The update vector is computed by scaling the gradient matrix element-wise with the inverse square root of the second moment estimate (<math>\hat{V}_t</math>).
Step 4.1: Find the vector value of <math>U_1</math>
Formula of <math>U_t</math>:
<math>U_t = \frac{G_t}{\sqrt{\hat{V}_t}}</math>
Substitute <math>G_1</math> and <math>\hat{V}_1</math>:
<math>U_1 = \begin{bmatrix} 2.711&-1.489&4.370\\ -3.090&3.055&-0.747\\ 1.807&-2.978&3.278 \end{bmatrix}</math>
Step 4.2: Clipped Update Vector <math>\hat{U}_t</math>
Scale the update vector (<math>U_t</math>) so that its RMS value does not exceed the predefined clipping threshold (<math>d</math>), maintaining stability in the updates.
Formula of <math>\hat{U}_t</math>:
<math>\hat{U}_t = \frac{U_t}{\max\left(1, RMS(U_t)/d\right)}</math>
Compute the RMS of <math>U_1</math>:
<math>RMS(U_1) = \sqrt{\tfrac{1}{9}\sum_{i,j}(U_1)_{ij}^2} \approx 2.808</math>
Since <math>RMS(U_1) > d</math>, scale <math>U_1</math> by <math>1/2.808</math>:
<math>\hat{U}_1 = \begin{bmatrix} 0.965&-0.530&1.556\\ -1.100&1.088&-0.266\\ 0.644&-1.060&1.167 \end{bmatrix}</math>
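Continuing the check for the normalization and clipping steps (written self-contained so it can be run on its own):
<syntaxhighlight lang="python">
import numpy as np

G1 = np.array([[ 0.3, -0.2,  0.4],
               [-0.5,  0.6, -0.1],
               [ 0.2, -0.4,  0.3]])
V1 = np.outer((G1 ** 2).mean(axis=1), (G1 ** 2).mean(axis=0))
U1 = G1 / np.sqrt(V1)                # Step 4.1: normalized gradient
rms_u = np.sqrt(np.mean(U1 ** 2))    # ~2.808, which exceeds d = 1
U1_hat = U1 / max(1.0, rms_u / 1.0)  # Step 4.2: clip so RMS is at most d
print(np.round(U1, 3))               # matches the U_1 matrix above
print(np.round(U1_hat, 3))           # matches the clipped matrix above
</syntaxhighlight>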
Step 5: Weight Update (<math>X_1</math>)
Adjust the weights (<math>X_0</math>) by subtracting the product of the learning rate (<math>\alpha_1</math>) and the clipped update vector (<math>\hat{U}_1</math>):
<math>X_1 = X_0 - \alpha_1\hat{U}_1 = X_0 - 0.00806\,\hat{U}_1</math>
This is the result of the first iteration.
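For completeness, here is the final update in code. Since the example's initial weight matrix is not reproduced above, the <code>X0</code> below is a hypothetical stand-in chosen only to have the right RMS (0.806); it is not the source's actual matrix.
<syntaxhighlight lang="python">
import numpy as np

X0 = np.full((3, 3), 0.806)        # hypothetical stand-in: RMS(X0) = 0.806
U1_hat = np.array([[ 0.965, -0.530,  1.556],
                   [-1.100,  1.088, -0.266],
                   [ 0.644, -1.060,  1.167]])
alpha_1 = 0.00806
X1 = X0 - alpha_1 * U1_hat         # Step 5: first-iteration weight update
print(np.round(X1, 4))
</syntaxhighlight>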
Applications
Adafactor is used primarily for training large neural networks, most notably large Transformer models: it was the optimizer used to train the T5 family of models, and implementations are available in common libraries such as Hugging Face Transformers and Optax. Its factored second moment reduces optimizer memory from <math>O(nm)</math> to <math>O(n+m)</math> per weight matrix, which matters when optimizer state for billions of parameters must fit in accelerator memory.
Conclusion
Adafactor shows that the benefits of adaptive optimizers such as Adam can be retained at a fraction of the memory cost. A rank-one (outer product) factorization of the second moment, combined with relative step sizes and update clipping, yields a stable optimizer whose per-matrix state is sublinear in the number of parameters, as the worked example above illustrates step by step.
Reference
Shazeer, N., & Stern, M. (2018). Adafactor: Adaptive Learning Rates with Sublinear Memory Cost. Proceedings of the 35th International Conference on Machine Learning (ICML). arXiv:1804.04235.