Quasi-Newton methods

From Cornell University Computational Optimization Open Textbook - Optimization Wiki

Author: Jianmin Su (ChemE 6800 Fall 2020)

Steward: Allen Yang, Fengqi You

Quasi-Newton methods are a class of methods for solving nonlinear optimization problems. They are based on Newton's method, but serve as an alternative to it when the objective function is not twice-differentiable, which means the Hessian matrix is unavailable, or when computing the Hessian matrix and its inverse is too expensive.

Introduction

The first quasi-Newton algorithm was developed by W.C. Davidon in the mid-1950s, and it turned out to be a milestone in nonlinear optimization. He was trying to solve a long optimization calculation but failed to get a result with the original method because of the limited computing power available at the time, so he devised the quasi-Newton method to solve it. Later, Fletcher and Powell showed that the new algorithm was more efficient and more reliable than the other existing methods.

During the following years, numerous variants were proposed, including Broyden's method (1965), the SR1 formula (Davidon, 1959; Broyden, 1967), the DFP method (Davidon, 1959; Fletcher and Powell, 1963), and the BFGS method (Broyden, 1969; Fletcher, 1970; Goldfarb, 1970; Shanno, 1970).

In optimization problems, Newton's method uses the first and second derivatives (the gradient and the Hessian in multivariate scenarios) to find the optimal point. It is applied to a twice-differentiable function <math>f</math> to find the roots of the first derivative (solutions to <math>f'(x)=0</math>), also known as the stationary points of <math>f</math>.

The iteration of Newton's method is usually written as <math>x_{k+1}=x_k-H^{-1}\nabla f(x_k)</math>, where <math>k</math> is the iteration number, <math>H</math> is the Hessian matrix, and <math>H=\nabla^2 f(x_k)</math>.

The iteration stops when it satisfies a convergence criterion such as <math>{df \over dx}=0</math>, <math>||\nabla f(x)||<\epsilon</math>, or <math>|f(x_{k+1})-f(x_k)|<\epsilon</math>.
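
As a concrete illustration of the Newton iteration above, here is a minimal Python sketch; the function name <code>newton_method</code>, the quadratic example objective, and the tolerance are illustrative assumptions rather than part of the original article.

<syntaxhighlight lang="python">
import numpy as np

def newton_method(grad, hess, x0, eps=1e-8, max_iter=100):
    """Newton iteration: x_{k+1} = x_k - H^{-1} * grad f(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < eps:            # convergence criterion ||grad f(x)|| < eps
            break
        x = x - np.linalg.solve(hess(x), g)    # solve H d = grad f instead of forming H^{-1}
    return x

# Example: minimize f(x, y) = (x - 1)^2 + 10 * (y + 2)^2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 20.0]])
print(newton_method(grad, hess, [0.0, 0.0]))   # approximately [1, -2]
</syntaxhighlight>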

Though we can solve an optimization problem quickly with Newton's method, it has two obvious disadvantages:

  1. The objective function must be twice-differentiable and the Hessian matrix must be positive definite.
  2. The calculation is costly because it requires computing the Jacobian matrix, the Hessian matrix, and its inverse, which is time-consuming when dealing with large-scale optimization problems.

However, we can use quasi-Newton methods to avoid these two disadvantages.


Quasi-Newton methods are similar to Newton's method, but with one key difference: they do not calculate the Hessian matrix directly. Instead, they introduce a matrix that approximates the Hessian, so that the time-consuming calculation of the Hessian and its inverse is avoided. The many variants of quasi-Newton methods differ simply in how they construct and update this approximation of the Hessian matrix.

Theory and Algorithm

To illustrate the basic idea behind quasi-Newton methods, we start by building a quadratic model of the objective function at the current iterate <math>x_k</math>:

<math>m_k(p)=f_k+\nabla f_k^Tp+\frac{1}{2}p^TB_kp</math>   (1.1)

where <math>f_k=f(x_k)</math>, <math>\nabla f_k=\nabla f(x_k)</math>, and <math>B_k</math> is an <math>n\times n</math> symmetric positive definite matrix that will be updated at every iteration.

The minimizer of this convex quadratic model is:

<math>p_k=-B_k^{-1}\nabla f_k</math>   (1.2)

which is also used as the search direction.

Then the new iterate could be written as:

<math>x_{k+1}=x_k+\alpha_kp_k</math>   (1.3)

where <math>\alpha_k</math> is the step length, which should satisfy the Wolfe conditions. The iteration is similar to Newton's method, but we use the approximate Hessian <math>B_k</math> instead of the true Hessian.
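
The sketch below shows how one iteration of (1.2) and (1.3) might look in Python, assuming SciPy's <code>scipy.optimize.line_search</code> is used to find a step length satisfying the Wolfe conditions; the helper name <code>quasi_newton_step</code> and the fallback step length are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import line_search

def quasi_newton_step(f, grad, x_k, B_k):
    """One quasi-Newton step: p_k = -B_k^{-1} grad f_k, then x_{k+1} = x_k + alpha_k p_k."""
    g_k = grad(x_k)
    p_k = -np.linalg.solve(B_k, g_k)                        # search direction (1.2)
    alpha_k = line_search(f, grad, x_k, p_k, gfk=g_k)[0]    # Wolfe-condition line search
    if alpha_k is None:                                     # line search can fail; fall back to a small step
        alpha_k = 1e-3
    return x_k + alpha_k * p_k                              # new iterate (1.3)
</syntaxhighlight>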

To carry the curvature information gathered at the previous iteration over to <math>B_{k+1}</math>, we generate a new iterate <math>x_{k+1}</math> and a new quadratic model of the form:

<math>m_{k+1}(p)=f_{k+1}+\nabla f_{k+1}^Tp+\frac{1}{2}p^TB_{k+1}p</math>   (1.4)

To construct the relationship between (1.1) and (1.4), we require that at <math>p=0</math> the function value and gradient of the model match <math>f_k</math> and <math>\nabla f_k</math>, and that the gradient of <math>m_{k+1}</math> should match the gradient of the objective function at the latest two iterates <math>x_k</math> and <math>x_{k+1}</math>. Then we can get:

<math>\nabla m_{k+1}(-\alpha_kp_k)=\nabla f_{k+1}-\alpha_kB_{k+1}p_k=\nabla f_k</math>   (1.5)

and with some rearrangement:

<math>B_{k+1}\alpha_kp_k=\nabla f_{k+1}-\nabla f_k</math>   (1.6)

Define

<math>s_k=x_{k+1}-x_k, \qquad y_k=\nabla f_{k+1}-\nabla f_k</math>   (1.7)

so that (1.6) becomes <math>B_{k+1}s_k=y_k</math>, which is the secant equation.

To make sure <math>B_{k+1}</math> is still a symmetric positive definite matrix, we need the curvature condition <math>s_k^Ty_k>0</math> (equivalently, <math>s_k^TB_{k+1}s_k>0</math>), which is guaranteed when the step length satisfies the Wolfe conditions.
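
As a quick numerical check of the secant equation and the curvature condition, the snippet below uses an assumed two-dimensional quadratic objective (not taken from the article); for a quadratic function the true Hessian satisfies <math>Bs_k=y_k</math> exactly.

<syntaxhighlight lang="python">
import numpy as np

# Quadratic example f(x) = 0.5 * x^T A x, so grad f(x) = A x and the Hessian is A.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad = lambda x: A @ x

x_k  = np.array([1.0, -1.0])
x_k1 = np.array([0.4, -0.7])
s_k = x_k1 - x_k                    # s_k = x_{k+1} - x_k
y_k = grad(x_k1) - grad(x_k)        # y_k = grad f_{k+1} - grad f_k

print(np.allclose(A @ s_k, y_k))    # True: the Hessian satisfies the secant equation
print(s_k @ y_k > 0)                # True: curvature condition s_k^T y_k > 0
</syntaxhighlight>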

To further preserve the properties of <math>B_{k+1}</math> and determine it uniquely, we choose the matrix closest to <math>B_k</math> among all symmetric matrices that satisfy the secant equation:

<math>B_{k+1}=\underset{B}{\arg\min}\,||B-B_k|| \quad \text{subject to } B=B^T,\; Bs_k=y_k</math>

Using different matrix norms in this problem leads to the different formulas used to update <math>B_{k+1}</math>.

A common choice is the weighted Frobenius norm <math>||A||_W=||W^{1/2}AW^{1/2}||_F</math>, where the norm on the right is the ordinary Frobenius norm and the weight matrix <math>W</math> is the average (mean) Hessian matrix over the step.

Setting <math>H_k=B_k^{-1}</math>, the Sherman-Morrison formula lets us convert an update of <math>B_k</math> into an equivalent update of the inverse approximation <math>H_k</math>, so the search direction can be computed as a matrix-vector product instead of by solving a linear system.

In the DFP method, we use <math>H_k</math> to estimate the inverse of the Hessian matrix, with the standard DFP update

<math>H_{k+1}=H_k-\frac{H_ky_ky_k^TH_k}{y_k^TH_ky_k}+\frac{s_ks_k^T}{y_k^Ts_k}</math>

In the BFGS method, we use <math>B_k</math> to estimate the Hessian matrix, with the standard BFGS update

<math>B_{k+1}=B_k-\frac{B_ks_ks_k^TB_k}{s_k^TB_ks_k}+\frac{y_ky_k^T}{y_k^Ts_k}</math>
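
A minimal sketch of these two update formulas in Python (the function names <code>dfp_update</code> and <code>bfgs_update</code> are illustrative):

<syntaxhighlight lang="python">
import numpy as np

def dfp_update(H, s, y):
    """DFP update of the inverse-Hessian approximation H_k -> H_{k+1}."""
    Hy = H @ y
    return H - np.outer(Hy, Hy) / (y @ Hy) + np.outer(s, s) / (y @ s)

def bfgs_update(B, s, y):
    """BFGS update of the Hessian approximation B_k -> B_{k+1}."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
</syntaxhighlight>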


DFP Algorithm

  1. Given the starting point <math>x_0</math>; the convergence tolerance <math>\epsilon>0</math>; the initial estimate of the inverse Hessian matrix <math>H_0</math> (a common choice is <math>H_0=I</math>); set <math>k=0</math>.
  2. Compute the search direction <math>p_k=-H_k\nabla f_k</math>.
  3. Compute the step length <math>\alpha_k</math> with a line search satisfying the Wolfe conditions, and then set <math>s_k=\alpha_kp_k</math> and <math>x_{k+1}=x_k+s_k</math>.
  4. If <math>||\nabla f_{k+1}||<\epsilon</math>, end the iteration; otherwise continue with step 5.
  5. Compute <math>y_k=\nabla f_{k+1}-\nabla f_k</math>.
  6. Update the inverse Hessian approximation with <math>H_{k+1}=H_k-\frac{H_ky_ky_k^TH_k}{y_k^TH_ky_k}+\frac{s_ks_k^T}{y_k^Ts_k}</math>.
  7. Update <math>k=k+1</math> and go back to step 2.
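
The following Python sketch puts the DFP steps above together, assuming SciPy's <code>line_search</code> for the Wolfe conditions; the function name <code>dfp</code>, the iteration cap, and the fallback step length are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import line_search

def dfp(f, grad, x0, eps=1e-6, max_iter=200):
    """DFP method: maintains H, an approximation of the inverse Hessian."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(len(x))                                # step 1: H_0 = I
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < eps:                   # step 4: convergence test
            break
        p = -H @ g                                    # step 2: search direction
        alpha = line_search(f, grad, x, p, gfk=g)[0]  # step 3: Wolfe line search
        if alpha is None:
            alpha = 1e-3                              # fall back to a small step if the search fails
        s = alpha * p
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g                                 # step 5
        Hy = H @ y
        H = H - np.outer(Hy, Hy) / (y @ Hy) + np.outer(s, s) / (y @ s)  # step 6: DFP update
        x, g = x_new, g_new                           # step 7
    return x
</syntaxhighlight>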

BFGS Algorithm

  1. Given the starting point <math>x_0</math>; the convergence tolerance <math>\epsilon>0</math>; the initial estimate of the Hessian matrix <math>B_0</math> (a common choice is <math>B_0=I</math>); set <math>k=0</math>.
  2. Compute the search direction <math>p_k=-B_k^{-1}\nabla f_k</math>.
  3. Compute the step length <math>\alpha_k</math> with a line search satisfying the Wolfe conditions, and then set <math>s_k=\alpha_kp_k</math> and <math>x_{k+1}=x_k+s_k</math>.
  4. If <math>||\nabla f_{k+1}||<\epsilon</math>, end the iteration; otherwise continue with step 5.
  5. Compute <math>y_k=\nabla f_{k+1}-\nabla f_k</math>.
  6. Update the Hessian approximation with <math>B_{k+1}=B_k-\frac{B_ks_ks_k^TB_k}{s_k^TB_ks_k}+\frac{y_ky_k^T}{y_k^Ts_k}</math>.
  7. Update <math>k=k+1</math> and go back to step 2.
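
The BFGS steps above can be sketched in Python as follows; this version maintains <math>B_k</math> and solves a linear system for the search direction, again assuming SciPy's <code>line_search</code> for the Wolfe conditions. The function name <code>bfgs</code> and the Rosenbrock test function in the usage example are illustrative assumptions, not part of the article.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import line_search

def bfgs(f, grad, x0, eps=1e-6, max_iter=200):
    """BFGS method: maintains B, an approximation of the Hessian."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))                                # step 1: B_0 = I
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < eps:                   # step 4: convergence test
            break
        p = np.linalg.solve(B, -g)                    # step 2: solve B_k p_k = -grad f_k
        alpha = line_search(f, grad, x, p, gfk=g)[0]  # step 3: Wolfe line search
        if alpha is None:
            alpha = 1e-3                              # fall back to a small step if the search fails
        s = alpha * p
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g                                 # step 5
        Bs = B @ s
        B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)  # step 6: BFGS update
        x, g = x_new, g_new                           # step 7
    return x

# Usage example on the Rosenbrock function; the minimizer is [1, 1].
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(bfgs(f, grad, np.array([-1.2, 1.0])))           # should converge near [1, 1]
</syntaxhighlight>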

Numerical Example

Application

Conclusion

References