Quasi-Newton methods

From Cornell University Computational Optimization Open Textbook - Optimization Wiki

Author: Jianmin Su (ChemE 6800 Fall 2020)

Steward: Allen Yang, Fengqi You


Quasi-Newton methods are a class of methods used to solve nonlinear optimization problems. They are based on Newton's method, yet they can serve as an alternative to Newton's method when the objective function is not twice-differentiable, which means the Hessian matrix is unavailable, or when the Hessian matrix and its inverse are too expensive to calculate.


== Introduction ==
The first quasi-Newton algorithm was developed by W. C. Davidon in the mid-1950s, and it turned out to be a milestone in nonlinear optimization. He was trying to solve a long optimization calculation but failed to get a result with the original method because of the low performance of the computers of that time, so he devised the quasi-Newton method to solve it. Fletcher and Powell later proved that the new algorithm was more efficient and more reliable than the other existing methods.


In the following years, numerous variants were proposed, including Broyden's method (1965), the SR1 formula (Davidon, 1959; Broyden, 1967), the DFP method (Davidon, 1959; Fletcher and Powell, 1963), and the BFGS method (Broyden, 1969; Fletcher, 1970; Goldfarb, 1970; Shanno, 1970).


In optimization problems, Newton's method uses the first and second derivatives, the gradient and the Hessian in the multivariate case, to find the optimal point. It is applied to a twice-differentiable function <math>f</math> to find the roots of the first derivative (solutions to <math>f'(x)=0</math>), also known as the stationary points of <math>f</math>. Quasi-Newton methods are similar to Newton's method but differ in one key idea: they do not calculate the Hessian matrix. Instead, they introduce a matrix '''<math>B</math>''' that estimates the Hessian, so that the time-consuming calculation of the Hessian and its inverse is avoided. The many variants of quasi-Newton methods differ simply in how they construct this estimate.
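
To make the key idea concrete, the sketch below implements a generic quasi-Newton loop with the BFGS update of an inverse-Hessian estimate. This is a minimal illustration under stated assumptions, not a reference implementation: the function name, the simple backtracking line search, and the numerical tolerances are all choices made for this example (practical BFGS codes use a line search satisfying the Wolfe conditions).

<syntaxhighlight lang="python">
import numpy as np

def quasi_newton_bfgs(f, grad, x0, tol=1e-6, max_iter=100):
    """Minimal quasi-Newton sketch: BFGS update of an inverse-Hessian estimate H.

    Only gradients are used; the true Hessian is never formed or inverted.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                      # initial estimate of the inverse Hessian
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:    # near a stationary point: stop
            break
        p = -H @ g                     # quasi-Newton search direction
        # Backtracking line search on the Armijo condition (a simplification);
        # p is a descent direction as long as H stays positive definite.
        alpha = 1.0
        while f(x + alpha * p) > f(x) + 1e-4 * alpha * (g @ p):
            alpha *= 0.5
        x_new = x + alpha * p
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g    # step taken and change in gradient
        if s @ y > 1e-12:              # curvature condition keeps H positive definite
            rho = 1.0 / (s @ y)
            I = np.eye(n)
            H = ((I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s))
                 + rho * np.outer(s, s))
        x, g = x_new, g_new
    return x
</syntaxhighlight>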


== Theory and Algorithm ==




== Numerical Example ==
<math>f(x_1, x_2) = x_1^2 + \frac{1}{2}x_2^2 + 3</math>
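
As a quick numerical check of this example, the snippet below minimizes <math>f</math> with an off-the-shelf BFGS solver (SciPy's implementation here; the starting point <math>(1, 1)</math> is an arbitrary choice for illustration). The iterates converge to the stationary point <math>(0, 0)</math>, where <math>f = 3</math>.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

# Objective f(x1, x2) = x1^2 + (1/2) x2^2 + 3 and its gradient.
def f(x):
    return x[0]**2 + 0.5 * x[1]**2 + 3.0

def grad(x):
    return np.array([2.0 * x[0], x[1]])

# BFGS never evaluates the Hessian -- only f and its gradient.
res = minimize(f, x0=np.array([1.0, 1.0]), jac=grad, method="BFGS")
print(res.x)    # approximately [0, 0], the unique minimizer
print(res.fun)  # approximately 3, the minimum value
</syntaxhighlight>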
