Quasi-Newton methods

From Cornell University Computational Optimization Open Textbook - Optimization Wiki

== Theory and Algorithm ==

=== DFP Algorithm ===
# Given the starting point <math>x_0</math>; the convergence tolerance <math>\epsilon>0</math>; the initial estimate of the inverse Hessian matrix <math>D_0=I</math>; set <math>k=0</math>.
# Compute the search direction <math>d_k=-D_k g_k</math>.
# Compute the step length <math>\lambda_k=\arg\min_{\lambda \in \mathbb{R}} f(x_k+\lambda d_k)</math>, then set <math>s_k=\lambda_k d_k</math> and <math>x_{k+1}=x_k+s_k</math>.
# If <math>\|g_{k+1}\|<\epsilon</math>, stop the iteration; otherwise continue with step 5.
# Compute <math>y_k=g_{k+1}-g_k</math>.
# Update the inverse Hessian estimate with <math>D_{k+1}=D_k+\frac{s_k s_k^T}{s_k^T y_k}-\frac{D_k y_k y_k^T D_k}{y_k^T D_k y_k}</math>.
# Set <math>k=k+1</math> and go back to step 2.
 
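The DFP steps above can be sketched in Python. This is a minimal illustration rather than the textbook's own code: a simple backtracking (Armijo) search stands in for the exact line minimization of step 3, and the names `dfp`, `f`, and `grad` are illustrative choices.

```python
import numpy as np

def dfp(f, grad, x0, eps=1e-6, max_iter=100):
    """Minimize f with the DFP quasi-Newton update (sketch).

    D approximates the *inverse* Hessian, so the search direction
    in step 2 is a plain matrix-vector product, no solve needed.
    """
    x = np.asarray(x0, dtype=float)
    D = np.eye(len(x))                      # step 1: D_0 = I
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < eps:         # step 4: convergence test
            break
        d = -D @ g                          # step 2: search direction
        lam = 1.0                           # step 3: backtracking (Armijo) search
        while f(x + lam * d) > f(x) + 1e-4 * lam * (g @ d):
            lam *= 0.5
        s = lam * d                         # s_k = lambda_k d_k
        x_new = x + s                       # x_{k+1} = x_k + s_k
        g_new = grad(x_new)
        y = g_new - g                       # step 5: y_k = g_{k+1} - g_k
        if s @ y > 1e-12:                   # curvature guard keeps D positive definite
            Dy = D @ y
            D = D + np.outer(s, s) / (s @ y) - np.outer(Dy, Dy) / (y @ Dy)  # step 6
        x, g = x_new, g_new                 # step 7: next iteration
    return x
```

On a convex quadratic such as the numerical example below, this loop drives the gradient norm below the tolerance in a handful of iterations.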
=== BFGS Algorithm ===
# Given the starting point <math>x_0</math>; the convergence tolerance <math>\epsilon>0</math>; the initial estimate of the Hessian matrix <math>B_0=I</math>; set <math>k=0</math>.
# Compute the search direction <math>d_k=-B_k^{-1} g_k</math>.
# Compute the step length <math>\lambda_k=\arg\min_{\lambda \in \mathbb{R}} f(x_k+\lambda d_k)</math>, then set <math>s_k=\lambda_k d_k</math> and <math>x_{k+1}=x_k+s_k</math>.
# If <math>\|g_{k+1}\|<\epsilon</math>, stop the iteration; otherwise continue with step 5.
# Compute <math>y_k=g_{k+1}-g_k</math>.
# Update the Hessian estimate with <math>B_{k+1}=B_k+\frac{y_k y_k^T}{y_k^T s_k}-\frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k}</math>.
# Set <math>k=k+1</math> and go back to step 2.
 
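A single BFGS iteration can be sketched the same way (the function name `bfgs_step` and the backtracking search are illustrative assumptions). Note that step 2 is implemented by solving <math>B_k d_k = -g_k</math> rather than forming <math>B_k^{-1}</math> explicitly:

```python
import numpy as np

def bfgs_step(f, grad, x, B, eps=1e-6):
    """One iteration of the BFGS loop above (sketch).

    B approximates the Hessian itself; returns (x_next, B_next, converged).
    """
    g = grad(x)
    if np.linalg.norm(g) < eps:             # step 4, checked up front here
        return x, B, True
    d = np.linalg.solve(B, -g)              # step 2: d_k = -B_k^{-1} g_k
    lam = 1.0                               # step 3: backtracking (Armijo) search
    while f(x + lam * d) > f(x) + 1e-4 * lam * (g @ d):
        lam *= 0.5
    s = lam * d                             # s_k = lambda_k d_k
    x_new = x + s                           # x_{k+1} = x_k + s_k
    y = grad(x_new) - g                     # step 5: y_k
    if s @ y > 1e-12:                       # curvature guard keeps B positive definite
        Bs = B @ s
        B = B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)  # step 6
    return x_new, B, False
```

Calling this in a loop (step 7) until the converged flag is set reproduces the full algorithm.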
 
== Numerical Example ==

<math>f(x_1, x_2) = x_1^2 +\frac{1}{2}x_2^2+3</math>

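This example function is a strictly convex quadratic, <math>f(x)=\tfrac{1}{2}x^T A x + 3</math> with <math>A=\operatorname{diag}(2,1)</math>, so the exact line search in step 3 of DFP has a closed form, and with exact line searches DFP reaches the minimizer <math>(0,0)</math> (where <math>f=3</math>) in at most two steps, with <math>D_2=A^{-1}</math>. A short numerical sketch (the starting point is an illustrative choice):

```python
import numpy as np

# f(x1, x2) = x1^2 + 0.5*x2^2 + 3, i.e. f(x) = 0.5 x^T A x + 3 with A = diag(2, 1)
A = np.diag([2.0, 1.0])
grad = lambda x: A @ x

x = np.array([1.0, 1.0])                    # starting point (illustrative choice)
D = np.eye(2)                               # D_0 = I
for k in range(2):                          # n = 2 variables => at most 2 exact steps
    g = grad(x)
    d = -D @ g                              # search direction
    lam = -(g @ d) / (d @ A @ d)            # exact line search for a quadratic
    s = lam * d
    x = x + s
    y = grad(x) - g
    Dy = D @ y
    D = D + np.outer(s, s) / (s @ y) - np.outer(Dy, Dy) / (y @ Dy)

print(x)   # ~ [0, 0], so f(x) = 3
print(D)   # ~ [[0.5, 0], [0, 1]], the true inverse Hessian A^{-1}
```

The two-step termination and the recovery of the exact inverse Hessian are the classical finite-termination property of DFP with exact line searches on quadratics.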
Revision as of 23:42, 20 November 2020

Author: Jianmin Su (ChemE 6800 Fall 2020)

Steward: Allen Yang, Fengqi You

Quasi-Newton methods are a class of methods for solving nonlinear optimization problems. They are based on Newton's method, but serve as an alternative when the objective function is not twice-differentiable, so that the Hessian matrix is unavailable, or when the Hessian and its inverse are too expensive to compute.

== Introduction ==

The first quasi-Newton algorithm was developed by W.C. Davidon in the mid-1950s, and it proved to be a milestone in nonlinear optimization. Davidon was working on a long optimization calculation that he could not complete with the original method, given the limited computing power of the time, which led him to devise the quasi-Newton approach. Fletcher and Powell later demonstrated that the new algorithm was more efficient and more reliable than the other existing methods.

During the following years, numerous variants were proposed, including Broyden's method (1965), the SR1 formula (Davidon 1959; Broyden 1967), the DFP method (Davidon 1959; Fletcher and Powell 1963), and the BFGS method (Broyden 1969; Fletcher 1970; Goldfarb 1970; Shanno 1970).

In optimization problems, Newton's method uses first and second derivatives, the gradient and the Hessian in the multivariate case, to find the optimal point: applied to a twice-differentiable function <math>f</math>, it finds the roots of the first derivative (solutions to <math>\nabla f(x)=0</math>), also known as the stationary points of <math>f</math>. Quasi-Newton methods are similar to Newton's method, with one key difference: they do not calculate the Hessian matrix. Instead, they introduce a matrix that estimates the Hessian, avoiding the time-consuming computation of the Hessian and its inverse. The many variants of quasi-Newton methods differ simply in the exact method they use to estimate the Hessian.
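
One standard way to make "estimating the Hessian" concrete: both the DFP and BFGS updates are built so that the new estimate satisfies the secant equation, <math>B_{k+1}s_k=y_k</math> for the Hessian form and <math>D_{k+1}y_k=s_k</math> for the inverse form, so the estimate acts like the true Hessian along the most recent step. A quick numerical check (the vectors <math>s</math> and <math>y</math> are arbitrary illustrative data, not from the textbook):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
s = rng.standard_normal(n)                  # a hypothetical step s_k
y = s + 0.1 * rng.standard_normal(n)        # a hypothetical gradient change y_k

# BFGS update of a Hessian estimate B, starting from the identity
B = np.eye(n)
Bs = B @ s
B_new = B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)

# DFP update of an inverse-Hessian estimate D, starting from the identity
D = np.eye(n)
Dy = D @ y
D_new = D + np.outer(s, s) / (s @ y) - np.outer(Dy, Dy) / (y @ Dy)

print(np.allclose(B_new @ s, y))   # True: secant equation B_{k+1} s_k = y_k
print(np.allclose(D_new @ y, s))   # True: inverse form D_{k+1} y_k = s_k
```

The check is an algebraic identity: substituting <math>s</math> into the BFGS update, the first correction term contributes exactly <math>y</math> and the second cancels <math>Bs</math>, and symmetrically for DFP.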
