Quasi-Newton methods

Author: Jianmin Su (ChemE 6800 Fall 2020)


'''Quasi-Newton Methods''' are a class of methods used to solve nonlinear optimization problems. They are based on Newton's method yet can be an alternative to Newton's method when the objective function is not twice-differentiable, which means the Hessian matrix is unavailable, or when it is too expensive to calculate the Hessian matrix and its inverse.


== Introduction ==
The first quasi-Newton algorithm was developed by [[wikipedia:William_C._Davidon|W.C. Davidon]] in the mid-1950s, and it turned out to be a milestone in nonlinear optimization. He was trying to solve a long optimization calculation, but the original method failed to produce a result with the limited computing power available at the time, so he devised the quasi-Newton method to solve it. Later, Fletcher and Powell proved that the new algorithm was more efficient and more reliable than the other existing methods.


During the following years, numerous variants were proposed, including '''[https://en.wikipedia.org/wiki/Broyden%27s_method Broyden's method]''' (1965), the '''[https://en.wikipedia.org/wiki/Symmetric_rank-one SR1 formula]''' (Davidon 1959, Broyden 1967), the '''[https://en.wikipedia.org/wiki/Davidon%E2%80%93Fletcher%E2%80%93Powell_formula DFP method]''' (Davidon, 1959; Fletcher and Powell, 1963), and the '''[https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm BFGS method]''' (Broyden, 1969; Fletcher, 1970; Goldfarb, 1970; Shanno, 1970)<ref>Hennig, Philipp, and Martin Kiefel. "Quasi-Newton method: A new direction." Journal of Machine Learning Research 14.Mar (2013): 843-865.</ref>.


In optimization problems, Newton's method uses the first and second derivatives, the gradient and the Hessian in multivariate scenarios, to find the optimal point. It is applied to a twice-differentiable function <math>f(x)</math> to find the roots of the first derivative (solutions to <math>f'(x)=0</math>), also known as the stationary points of <math>f(x)</math><ref>''Newton’s Method'', 8.Dec (2020)‎. Retrieved from: https://en.wikipedia.org/wiki/Quasi-Newton_method</ref>.


The iteration of Newton's method is usually written as <math>x_{k+1}=x_k-H^{-1}\cdot\bigtriangledown f(x_k) </math>, where <math>k  </math> is the iteration number, <math>H</math> is the Hessian matrix, and <math>H=[\bigtriangledown ^2 f(x_k)]</math>.
The iteration stops when it satisfies a convergence criterion such as <math>||\bigtriangledown f(x_k)||<\epsilon</math>.
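As a minimal illustration, a single Newton update can be written in a few lines of Python. This is a sketch, assuming the gradient and Hessian are available in closed form; the quadratic function shown is the one used in the numerical example later in this article.

<syntaxhighlight lang="python">
import numpy as np

def newton_step(x, grad, hess):
    """One Newton iteration: x_{k+1} = x_k - H^{-1} * grad f(x_k)."""
    g = grad(x)
    H = hess(x)
    # Solve H p = grad f(x_k) instead of forming H^{-1} explicitly.
    p = np.linalg.solve(H, g)
    return x - p

# Example on f(x1, x2) = x1^2 + 0.5*x2^2 + 3.
grad = lambda x: np.array([2 * x[0], x[1]])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 1.0]])
print(newton_step(np.array([1.0, 2.0]), grad, hess))  # [0. 0.]: one step suffices for a quadratic
</syntaxhighlight>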
Though we can solve an optimization problem quickly with Newton's method, it has two obvious disadvantages:


# The objective function must be twice-differentiable, and the Hessian matrix must be positive definite.
# The calculation is costly because it requires computing the Jacobian matrix, the Hessian matrix, and its inverse, which is time-consuming when dealing with a large-scale optimization problem.


However, we can use Quasi-Newton methods to avoid these two disadvantages.


 
Quasi-Newton methods are similar to Newton's method, but with one key difference: they don't calculate the Hessian matrix. Instead, they introduce a matrix '''<math>B</math>''' to estimate the Hessian matrix, so that they can avoid the time-consuming calculation of the Hessian matrix and its inverse. The many variants of quasi-Newton methods differ simply in the exact way they estimate the Hessian matrix.


== Theory and Algorithm ==
To illustrate the basic idea behind quasi-Newton methods, we start with building a quadratic model of the objective function at the current iterate <math>x_k</math>:


<math>m_k(p)=f_k+\bigtriangledown f_k^Tp+\frac{1}{2}p^TB_kp</math> (1.1),

where <math>B_k </math> is an <math>n\times n </math> symmetric positive definite matrix that will be updated at every iteration.


The minimizer of this convex quadratic model is:


<math>p_k=-B_k^{-1}\bigtriangledown f_k </math> (1.2),

which is also used as the search direction.
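For completeness, (1.2) follows from setting the gradient of the quadratic model (1.1) with respect to <math>p</math> to zero:

<math>\bigtriangledown m_k(p)=\bigtriangledown f_k+B_kp=0 \quad\Rightarrow\quad p_k=-B_k^{-1}\bigtriangledown f_k</math>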


Then the new iterate could be written as: <math> x_{k+1}=x_{k}+\alpha _kp_k</math> (1.3),


where <math>\alpha _k</math> is the step length, which should satisfy the Wolfe conditions. The iteration is similar to Newton's method, but we use the approximate Hessian <math>B_k</math> instead of the true Hessian.
To maintain the curvature information we obtained from the previous iteration in <math>B_{k+1}</math>, we generate a new iterate <math>x_{k+1}</math> and a new quadratic model of the form:


<math>m_{k+1}(p)=f_{k+1}+\bigtriangledown f_{k+1}^Tp+\frac{1}{2}p^TB_{k+1}p</math> (1.4).


To construct the relationship between (1.1) and (1.4), we require that in (1.1) at <math>p=0</math> the function value and gradient match <math>f_k</math> and <math>\bigtriangledown f_k</math>, and that the gradient of <math>m_{k+1}</math> should match the gradient of the objective function at the latest two iterates <math>x_k</math> and <math>x_{k+1}</math>; then we can get:


<math>\bigtriangledown m_{k+1}(-\alpha _kp_k)=\bigtriangledown f_{k+1}-\alpha _kB_{k+1}p_k=\bigtriangledown f_k </math> (1.5)


and with some rearrangement:


<math>B_{k+1}\alpha _k p_k=\bigtriangledown f_{k+1}-\bigtriangledown f_k </math> (1.6)


Define:

<math> s_k=x_{k+1}-x_k</math>, <math> y_k=\bigtriangledown f_{k+1}-\bigtriangledown f_k</math> (1.7)

So that (1.6) becomes: <math>B_{k+1}s_k=y_k </math> (1.8), which is the '''secant equation.'''

To make sure <math>B_{k+1}</math> is still a symmetric positive definite matrix, we need the curvature condition <math>y_k^Ts_k>0</math> (1.9).
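The secant equation mirrors the first-order Taylor expansion of the gradient, which the true Hessian satisfies approximately:

<math>\bigtriangledown f_{k+1}\approx \bigtriangledown f_k+\bigtriangledown^2 f(x_k)\,s_k</math>

so requiring <math>B_{k+1}s_k=y_k</math> forces the approximation to reproduce the curvature of <math>f</math> along the most recent step.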


To further preserve the properties of <math>B_{k+1}</math> and determine <math>B_{k+1}</math> uniquely, we assume that among all symmetric matrices satisfying the secant equation, <math> B_{k+1}</math> is closest to the current matrix <math> B_k</math>, which leads to a minimization problem:


<math>B_{k+1}=\underset{B}{\arg\min}||B-B_k|| </math> (1.10)

s.t. <math> B=B^T</math>, <math> Bs_k=y_k</math>,


where <math>s_k</math> and <math>y_k</math> satisfy (1.9) and <math>B_k</math> is symmetric and positive definite.
Different matrix norms applied in (1.10) result in different quasi-Newton methods. The weighted Frobenius norm can help us get an easy solution to the minimization problem: <math> ||A||_W=||W^\frac{1}{2}AW^\frac{1}{2}|| _F</math> (1.11).


The weight matrix <math> W</math> can be any matrix that satisfies the relation <math> Wy_k=s_k</math>.


We skip the procedure of solving the minimization problem (1.10); its unique solution is:


<math> B_{k+1}=(I-\rho y_ks_k^T)B_k(I-\rho s_ky_k^T)+\rho y_ky_k^T</math> (1.12)


where <math>\rho=\frac{1}{y_k^Ts_k}</math> (1.13)


Finally, we get the updated <math>B_{k+1}</math>. However, according to (1.2) and (1.3), we also need the inverse of <math>B_{k+1}</math> in the next iteration.


To get the inverse of <math>B_{k+1}</math>, we can apply the Sherman-Morrison formula to avoid the complicated calculation of the inverse.

Set <math>M_k=B_k^{-1} </math>; with the Sherman-Morrison formula we can get:

<math>M_{k+1}=M_k+\frac{s_k s_k^T}{s_k^T y_k}-\frac{M_k y_k y_k^T M_k}{y_k^T M_k y_k} </math> (1.14)

With the derivation<ref>Nocedal, Jorge, and Stephen Wright. Numerical optimization. Springer Science & Business Media, 2006.</ref> above, we can now understand how quasi-Newton methods get rid of calculating the Hessian matrix and its inverse. We can directly estimate the inverse of the Hessian and use (1.14) to update that approximation, which leads to the DFP method, or we can directly estimate the Hessian matrix itself, which is the main idea of the BFGS method.
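As a quick numerical sanity check, the Python sketch below (with randomly generated data chosen so that <math>y_k^Ts_k>0</math>) builds <math>B_{k+1}</math> from (1.12) and <math>M_{k+1}</math> from (1.14) and verifies that the secant equation holds and that the two updates are inverses of each other:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A random symmetric positive definite B_k and its inverse M_k.
A = rng.standard_normal((n, n))
B = A @ A.T + n * np.eye(n)
M = np.linalg.inv(B)

# A random step s_k and gradient difference y_k with positive curvature y_k^T s_k > 0.
s = rng.standard_normal(n)
y = B @ s + 0.1 * s
rho = 1.0 / (y @ s)
I = np.eye(n)

# (1.12): update of the Hessian approximation.
B_new = (I - rho * np.outer(y, s)) @ B @ (I - rho * np.outer(s, y)) + rho * np.outer(y, y)
# (1.14): update of the inverse approximation.
M_new = M + np.outer(s, s) / (s @ y) - (M @ np.outer(y, y) @ M) / (y @ M @ y)

print(np.allclose(B_new @ s, y))      # secant equation (1.8) holds
print(np.allclose(B_new @ M_new, I))  # (1.14) is the inverse of (1.12)
</syntaxhighlight>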


=== DFP method ===


The DFP method, which is also known as the Davidon–Fletcher–Powell formula, is named after W.C. Davidon, Roger Fletcher, and Michael J.D. Powell. It was first proposed by Davidon in 1959 and then improved by Fletcher and Powell. The DFP method uses an <math>n\times n </math> symmetric positive definite matrix <math>D_k </math> to estimate the inverse of the Hessian matrix, and its algorithm is shown below<ref>''Davidon–Fletcher–Powell formula'', 7.June (2020). Retrieved from: https://en.wikipedia.org/wiki/Davidon%E2%80%93Fletcher%E2%80%93Powell_formula</ref>.


==== DFP Algorithm ====
 
To avoid confusion, we use <math>D</math> to represent the approximation of the inverse of the Hessian matrix.


# Given the starting point <math>x_0</math>; convergence tolerance <math>\epsilon>0</math>; the initial estimation of the inverse Hessian matrix <math>D_0=I</math>; <math>k=0</math>.
# Compute the search direction <math>d_k=-D_k\cdot \bigtriangledown f_k</math>.
# Compute the step length <math>\lambda_k</math> with a line search procedure that satisfies the Wolfe conditions, and then set <br /> <math>s_k={\lambda}_k d_k</math>, <br /> <math>x_{k+1}=x_k+s_k</math>
# If <math>||\bigtriangledown f_{k+1}||<\epsilon</math>, stop; otherwise continue to step 5.
# Compute <math>y_k=\bigtriangledown f_{k+1}-\bigtriangledown f_k</math>.
# Update <math>D_{k+1}</math> with <br /> <math>D_{k+1}=D_k+\frac{s_k s_k^T}{s_k^T y_k}-\frac{D_k y_k y_k^T D_k}{y_k^T D_k y_k} </math>
# Update <math>k</math> with <math>k=k+1</math> and go back to step 2.
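A minimal Python sketch of this procedure is shown below. It is an illustrative simplification rather than a reference implementation: the Wolfe line search is replaced by a basic backtracking (Armijo) search, and the example function, gradient, starting point, and tolerance are placeholders chosen for demonstration.

<syntaxhighlight lang="python">
import numpy as np

def backtracking(f, grad_f, x, d, alpha=1.0, beta=0.5, c=1e-4):
    """Simple Armijo backtracking line search (a stand-in for a full Wolfe search)."""
    g = grad_f(x)
    while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
        alpha *= beta
    return alpha

def dfp(f, grad_f, x0, eps=1e-6, max_iter=200):
    x = np.asarray(x0, dtype=float)
    D = np.eye(len(x))                         # D_0: approximation of the inverse Hessian
    g = grad_f(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < eps:
            break
        d = -D @ g                             # step 2: search direction
        lam = backtracking(f, grad_f, x, d)    # step 3: step length
        s = lam * d
        x_new = x + s
        g_new = grad_f(x_new)
        y = g_new - g                          # step 5
        if y @ s > 1e-12:                      # curvature condition keeps D positive definite
            D = D + np.outer(s, s) / (s @ y) \
                - (D @ np.outer(y, y) @ D) / (y @ D @ y)   # step 6: DFP update
        x, g = x_new, g_new
    return x

# Example usage on the quadratic from the numerical example below.
f = lambda x: x[0]**2 + 0.5 * x[1]**2 + 3
grad_f = lambda x: np.array([2 * x[0], x[1]])
print(dfp(f, grad_f, [1.0, 2.0]))              # approximately [0, 0]
</syntaxhighlight>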


=== BFGS method ===
The BFGS method is named after its four discoverers: Broyden, Fletcher, Goldfarb, and Shanno. It is considered the most effective quasi-Newton algorithm. Unlike the DFP method, the BFGS method uses an <math>n\times n </math> symmetric positive definite matrix <math>B_k </math> to estimate the Hessian matrix<ref>''Broyden–Fletcher–Goldfarb–Shanno algorithm'', 12.Dec (2020). Retrieved from: https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm</ref>.


==== BFGS Algorithm ====


# Given the starting point <math>x_0</math>; convergence tolerance <math>\epsilon>0</math>; the initial estimation of the Hessian matrix <math>B_0=I</math>; <math>k=0</math>.
# Compute the search direction <math>d_k=-B_k^{-1}\cdot \bigtriangledown f_k</math>.
# Compute the step length <math>\lambda_k</math> with a line search procedure that satisfies the Wolfe conditions, and then set <br /> <math>s_k={\lambda}_k d_k</math>, <br /> <math>x_{k+1}=x_k+s_k</math>
# If <math>||\bigtriangledown f_{k+1}||<\epsilon</math>, stop; otherwise continue to step 5.
# Compute <math>y_k=\bigtriangledown f_{k+1}-\bigtriangledown f_k</math>.
# Update <math>B_{k+1}</math> with <math>B_{k+1}=B_k+\frac{y_k y_k^T}{y_k^T s_k}-\frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} </math> <br /> Since we need <math>B_{k+1}^{-1}</math> in the next iteration, we can apply the Sherman-Morrison formula to avoid the complicated calculation of the inverse: <br /> <math> B_{k+1}^{-1}=(I-\rho s_ky_k^T)B_k^{-1}(I-\rho y_ks_k^T)+\rho s_ks_k^T</math>, <math>\rho=\frac{1}{y_k^Ts_k}</math>
# Update <math>k</math> with <math>k=k+1</math> and go back to step 2.
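Similarly, a minimal Python sketch of the BFGS iteration is shown below. It is again an illustrative simplification: it maintains the inverse approximation <math>B_k^{-1}</math> directly through the Sherman-Morrison form in step 6 and replaces the Wolfe line search with basic backtracking.

<syntaxhighlight lang="python">
import numpy as np

def bfgs(f, grad_f, x0, eps=1e-6, max_iter=200):
    x = np.asarray(x0, dtype=float)
    n = len(x)
    Binv = np.eye(n)                   # approximation of the inverse Hessian B_k^{-1}
    I = np.eye(n)
    g = grad_f(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < eps:
            break
        d = -Binv @ g                  # step 2: search direction
        # step 3: simple Armijo backtracking in place of a full Wolfe line search
        lam = 1.0
        while f(x + lam * d) > f(x) + 1e-4 * lam * (g @ d):
            lam *= 0.5
        s = lam * d
        x_new = x + s
        g_new = grad_f(x_new)
        y = g_new - g                  # step 5
        if y @ s > 1e-12:              # curvature condition keeps Binv positive definite
            rho = 1.0 / (y @ s)
            # step 6: Sherman-Morrison form of the BFGS inverse update
            Binv = (I - rho * np.outer(s, y)) @ Binv @ (I - rho * np.outer(y, s)) \
                   + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Example usage on the same quadratic as in the numerical example below.
f = lambda x: x[0]**2 + 0.5 * x[1]**2 + 3
grad_f = lambda x: np.array([2 * x[0], x[1]])
print(bfgs(f, grad_f, [1.0, 2.0]))     # approximately [0, 0]
</syntaxhighlight>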


== Numerical Example ==
The following is an example to show how to solve an unconstrained nonlinear optimization problem with the DFP method.
 
<math>\text{min }\begin{align} f(x_1, x_2) & = x_1^2 +\frac{1}{2}x_2^2+3\end{align}</math>
 
<math>x_0=(1,2)^T </math>
 
'''Step 1:'''
 
Usually, we set the approximation of the inverse of the Hessian matrix to an identity matrix with the same dimension as the Hessian matrix. In this case, <math>D_0</math> is a <math>2\times2</math> identity matrix:

<math>D_0=\begin{pmatrix}
1 & 0 \\
0 & 1
\end{pmatrix}</math>

The gradient is:

<math>\bigtriangledown f(x)=\begin{pmatrix}
2x_1 \\
x_2
\end{pmatrix}</math>
 
<math>\epsilon=10^{-5}</math>
 
<math>k=0</math>
 
For convenience, we can set <math>\lambda=1</math>.
 
'''Step 2:'''
 
<math>d_0=-D_0\bigtriangledown f_0=-\begin{pmatrix}
1 & 0 \\
0 & 1
\end{pmatrix}\begin{pmatrix}
2 \\
2
\end{pmatrix}=\begin{pmatrix}
-2 \\
-2
\end{pmatrix}</math>
 
'''Step 3:'''
 
<math>s_0=\lambda d_0=d_0</math>
 
<math>x_1=x_0+s_0</math><math>=\begin{pmatrix}
1 \\
2
\end{pmatrix}</math><math>+\begin{pmatrix}
-2 \\
-2
\end{pmatrix}</math><math>=\begin{pmatrix}
-1 \\
0
\end{pmatrix}</math>
 
'''Step 4:'''
 
<math>\bigtriangledown f_1=\begin{pmatrix}
-2 \\
0
\end{pmatrix}</math>

Since <math>||\bigtriangledown f_1||</math> is not less than <math>\epsilon</math>, we need to continue.
 
'''Step 5:'''
 
<math>y_0=\bigtriangledown f_1-\bigtriangledown f_0</math><math>=\begin{pmatrix}
-4 \\
-2
\end{pmatrix}</math>
 
'''Step 6:'''

<math>D_1=D_0+\frac{s_0 s_0^T}{s_0^T y_0}-\frac{D_0 y_0 y_0^T D_0}{y_0^T D_0 y_0}=\begin{pmatrix}
1 & 0 \\
0 & 1
\end{pmatrix}+\frac{1}{12}\begin{pmatrix}
4 & 4 \\
4 & 4
\end{pmatrix}-\frac{1}{20}\begin{pmatrix}
16 & 8 \\
8 & 4
\end{pmatrix}=\begin{pmatrix}
0.5333 & -0.0667 \\
-0.0667 & 1.1333
\end{pmatrix}</math>
 
Then we go back to Step 2 with the updated <math>D_1</math> to start a new iteration, until <math>||\bigtriangledown f_k||<\epsilon</math>.
 
We carry out the remaining iterations in Python (see the sketch after this example), and the results are listed below:
 
Iteration times: 0 Result:[-1.  0.]
 
Iteration times: 1 Result:[ 0.06666667 -0.13333333]
 
Iteration times: 2 Result:[0.00083175 0.01330805]
 
Iteration times: 3 Result:[-0.00018037 -0.00016196]
 
Iteration times: 4 Result:[ 3.74e-06 -5.60e-07]
 
After a few iterations, the iterates converge to the optimal solution <math>x_1\approx 0, x_2\approx 0</math>, and the minimum of the objective function is 3.
 
As we can see from the calculation in Step 6, though the update formula for <math>D_1</math> looks complicated, it actually is not. The quantities <math>s_0^T y_0</math> and <math>y_0^T D_0 y_0</math> are scalars, and <math>s_0 s_0^T</math> and <math>D_0 y_0 y_0^T D_0</math> are matrices with the same dimensions as <math>D_1</math>. Therefore, the calculation in quasi-Newton methods is faster and simpler, since it involves only basic matrix operations such as inner and outer products.
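For reference, a short Python script along the lines of the calculation above is sketched below (an illustrative reconstruction, not the author's original code). With the fixed step length <math>\lambda=1</math> chosen in Step 1, it reproduces the iterates listed above.

<syntaxhighlight lang="python">
import numpy as np

f = lambda x: x[0]**2 + 0.5 * x[1]**2 + 3
grad = lambda x: np.array([2 * x[0], x[1]])

eps = 1e-5
lam = 1.0                         # fixed step length, as in the example
x = np.array([1.0, 2.0])          # x_0
D = np.eye(2)                     # D_0: approximation of the inverse Hessian
g = grad(x)

for k in range(50):
    d = -D @ g                    # search direction
    s = lam * d
    x = x + s
    print(f"Iteration times: {k} Result:{x}")
    g_new = grad(x)
    if np.linalg.norm(g_new) < eps:
        break
    y = g_new - g
    D = D + np.outer(s, s) / (s @ y) - (D @ np.outer(y, y) @ D) / (y @ D @ y)
    g = g_new

print("Minimum value:", f(x))     # approximately 3
</syntaxhighlight>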


== Application ==
Quasi-Newton methods are applied in various areas such as physics, biology, engineering, geophysics, chemistry, and industry to solve nonlinear systems of equations because of their fast calculation. '''''The ICUM (Inverse Column-Updating Method)''''', one type of quasi-Newton method, is not only efficient in solving large-scale sparse nonlinear systems but also performs well in not-necessarily-large-scale systems in real applications. It is used to solve '''''the two-point ray tracing problem''''' in geophysics: a two-point ray tracing problem consists of constructing a ray that joins two given points in the domain, and it can be formulated as a nonlinear system. ICUM can also be applied to '''''estimate the transmission coefficients for AIDS and for tuberculosis''''' in biology, and in '''''multiple-target 3D location airborne ultrasonic systems'''''.<ref>Pérez, Rosana, and Véra Lucia Rocha Lopes. "Recent applications and numerical implementation of quasi-Newton methods for solving nonlinear systems of equations." Numerical Algorithms 35.2-4 (2004): 261-285.</ref>
Moreover, quasi-Newton methods have been extended to the deep learning area as sampled quasi-Newton methods that make use of more reliable information.<ref>Berahas, Albert S., Majid Jahani, and Martin Takáč. "Quasi-newton methods for deep learning: Forget the past, just sample." arXiv preprint arXiv:1901.09997 (2019).</ref> The proposed methods sample points randomly around the current iterate at each iteration to create Hessian or inverse-Hessian approximations, which is different from the classical variants of quasi-Newton methods. As a result, the constructed approximations use more reliable (recent and local) information and do not depend on past iterate information that could be significantly stale. In that work, numerical tests on a toy classification problem and on popular benchmark neural network training tasks show that the methods outperform their classical variants.
Besides, to make quasi-Newton methods more accessible, they have been integrated into programming languages and libraries so that people can use them to solve nonlinear optimization problems conveniently, for example, [http://reference.wolfram.com/mathematica/tutorial/UnconstrainedOptimizationQuasiNewtonMethods.html Mathematica (quasi-Newton solvers)], [http://www.mathworks.com/help/toolbox/optim/ug/fminunc.html MATLAB (Optimization Toolbox)], [http://finzi.psych.upenn.edu/R/library/stats/html/optim.html R], and the [http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html SciPy] extension to Python.
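For instance, SciPy exposes a BFGS solver through <code>scipy.optimize.minimize</code>; a minimal usage sketch on the quadratic from the numerical example above:

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]**2 + 0.5 * x[1]**2 + 3
grad = lambda x: np.array([2 * x[0], x[1]])

# method='BFGS' selects the quasi-Newton BFGS solver; supplying the gradient
# avoids finite-difference approximations.
res = minimize(f, x0=[1.0, 2.0], jac=grad, method='BFGS', options={'gtol': 1e-5})
print(res.x, res.fun)   # approximately [0, 0] and 3
</syntaxhighlight>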


== Conclusion ==
Quasi-Newton methods are a milestone in solving nonlinear optimization problems. They are more efficient than Newton's method in large-scale optimization problems because they do not need to compute second derivatives, which makes each iteration less costly. Because of their efficiency, they can be applied in many different areas and remain appealing.


== References ==
<references />
