Conjugate gradient methods


Author: Alexandra Roberts, Anye Shi, Yue Sun (SYSEN6800 Fall 2021)

Introduction

The conjugate gradient method (CG) was originally invented to minimize a quadratic function:

<math>\min f(\textbf{x}) = \frac{1}{2}\textbf{x}^{T}\textbf{A}\textbf{x} - \textbf{b}^{T}\textbf{x}</math>

where A is an n × n symmetric positive definite matrix, and x and b are n × 1 vectors.
The solution to the minimization problem is equivalent to solving the linear system, i.e. determining x such that <math>\nabla f(\textbf{x}) = \textbf{A}\textbf{x} - \textbf{b} = 0</math>, i.e. <math>\textbf{A}\textbf{x} = \textbf{b}</math>.
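
To make this equivalence concrete, here is a minimal NumPy sketch (the matrix A, vector b, and function names are illustrative choices, not part of the original article): the point that solves the linear system is exactly the point where the gradient of the quadratic vanishes.

<syntaxhighlight lang="python">
import numpy as np

# Small illustrative symmetric positive definite system (not from the article)
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

def f(x):
    """Quadratic objective f(x) = 1/2 x^T A x - b^T x."""
    return 0.5 * x @ A @ x - b @ x

def grad_f(x):
    """Gradient of f: A x - b."""
    return A @ x - b

x_star = np.linalg.solve(A, b)            # solution of the linear system A x = b
print(x_star)                             # approximately [0.0909, 0.6364]
print(np.allclose(grad_f(x_star), 0.0))   # True: the gradient vanishes at x*
print(f(x_star) <= f(x_star + 0.1))       # True: nearby points have a larger objective
</syntaxhighlight>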

The conjugate gradient method is often implemented as an iterative algorithm and can be considered as being between Newton’s method, a second-order method that incorporates the Hessian and the gradient, and the method of steepest descent, a first-order method that uses only the gradient [1]. Newton’s method usually reduces the number of iterations needed, but calculating the Hessian matrix and its inverse increases the computation required for each iteration. Steepest descent takes repeated steps in the opposite direction of the gradient of the function at the current point. It often takes steps in the same direction as earlier ones, resulting in slow convergence (Figure 1). To avoid the high computational cost of Newton’s method and to accelerate the convergence rate of steepest descent, the conjugate gradient method was developed.

The idea of the CG method is to pick n mutually conjugate (A-orthogonal) search directions first and, in each search direction, take exactly one step, with the step size chosen so that the step reaches the proposed solution x in that direction. The solution is reached after at most n steps since, theoretically, the number of iterations needed by the CG method is at most the number of distinct eigenvalues of A, which is at most n. This makes it attractive for large and sparse problems. The method can be used to solve least-squares problems and can also be generalized to a minimization method for general smooth functions [2].

Theory

The definition of A-conjugate direction

Let A be a symmetric positive definite matrix. The nonzero vectors <math>\textbf{d}_{i}, \textbf{d}_{j}</math> are orthogonal (conjugate) to each other with respect to A if

<math>\textbf{d}_{i}^{T}\textbf{A}\textbf{d}_{j} = 0, \quad i \neq j</math>.

Note that if A = 0, any two vectors will be conjugate to each other. If A = I, conjugacy is equivalent to the conventional notion of orthogonality. If <math>\textbf{d}_{0},\textbf{d}_{1},..., \textbf{d}_{n-1}</math> are A-conjugate to each other, then the set of vectors is linearly independent.
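
The following small NumPy check (an illustrative sketch; this particular A and the vectors d0, d1 are arbitrary choices, not from the article) shows a pair of A-conjugate vectors that are not orthogonal in the conventional sense and verifies that they are linearly independent.

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])          # symmetric positive definite

d0 = np.array([1.0, 0.0])
d1 = np.array([1.0, -4.0])          # chosen so that d0^T A d1 = 0

print(d0 @ A @ d1)                  # 0.0 -> d0 and d1 are A-conjugate
print(d0 @ d1)                      # 1.0 -> not orthogonal in the conventional sense
print(np.linalg.matrix_rank(np.column_stack([d0, d1])))  # 2 -> linearly independent
</syntaxhighlight>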

The motivation of A-conjugacy

As <math>\left\{ \textbf{d}_{0},\textbf{d}_{1},..., \textbf{d}_{n-1}\right\}</math> is a set of n A-conjugate vectors, it can be used as a basis to express the solution <math>\textbf{x}^{*}</math> of <math>\textbf{A}\textbf{x} = \textbf{b}</math>:

<math>\textbf{x}^{*} = \sum_{i=0}^{n-1}\alpha_i\textbf{d}_i</math>

Then multiplying <math>\textbf{d}_{k}^{T}\textbf{A}</math> on both sides gives

<math>\textbf{d}_{k}^{T}\textbf{A}\textbf{x}^{*} = \sum_{i=0}^{n-1}\alpha_i\textbf{d}_{k}^{T}\textbf{A}\textbf{d}_i</math>

Because <math>\textbf{A}\textbf{x}^{*} = \textbf{b}</math> and the A-conjugacy of <math>\left\{ \textbf{d}_{0},\textbf{d}_{1},..., \textbf{d}_{n-1}\right\}</math>, i.e. <math>\textbf{d}_{k}^{T}\textbf{A}\textbf{d}_{i} = 0</math> for <math>i \neq k</math>, the multiplication cancels out all the terms except for term k:

<math>\textbf{d}_{k}^{T}\textbf{b} = \alpha_k\textbf{d}_{k}^{T}\textbf{A}\textbf{d}_k \quad\Rightarrow\quad \alpha_k = \frac{\textbf{d}_{k}^{T}\textbf{b}}{\textbf{d}_{k}^{T}\textbf{A}\textbf{d}_k}</math>

Then the solution <math>\textbf{x}^{*}</math> will be

<math>\textbf{x}^{*} = \sum_{k=0}^{n-1}\frac{\textbf{d}_{k}^{T}\textbf{b}}{\textbf{d}_{k}^{T}\textbf{A}\textbf{d}_k}\textbf{d}_k</math>

Because A is a symmetric and positive definite matrix, the term <math>\textbf{d}^{T}\textbf{A}\textbf{d}</math> defines an inner product, and therefore there is no need to calculate the inverse of the matrix A.
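
Below is a minimal NumPy sketch of this expansion (the A-conjugate basis is an illustrative choice for this particular A, not taken from the article): the solution is assembled from inner products only, with no matrix inverse.

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

# An A-conjugate basis for this A (d0^T A d1 = 0)
directions = [np.array([1.0, 0.0]), np.array([1.0, -4.0])]

# x* = sum_k (d_k^T b) / (d_k^T A d_k) * d_k : only inner products, no inverse of A
x_star = sum((d @ b) / (d @ A @ d) * d for d in directions)

print(x_star)                        # approximately [0.0909, 0.6364]
print(np.allclose(A @ x_star, b))    # True: x* solves A x = b
</syntaxhighlight>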

Conjugate Direction Theorem

Let <math>\left\{ \textbf{d}_{0},\textbf{d}_{1},..., \textbf{d}_{n-1}\right\}</math> be a set of n A-conjugate vectors and <math>\textbf{x}_0</math> be a random starting point. Then, for k = 0, 1, ..., n-1,

<math>\textbf{x}_{k+1} = \textbf{x}_{k}+\alpha_k\textbf{d}_k</math>

<math>\alpha_k = \frac{\textbf{g}_{k}^{T}\textbf{d}_{k}}{\textbf{d}_{k}^{T}\textbf{A}\textbf{d}_k}, \quad \textbf{g}_{k} = \textbf{b} - \textbf{A}\textbf{x}_{k}</math>

After n steps, <math>\textbf{x}_n = \textbf{x}^{*}</math>.

Proof:
Given that the A-conjugate vectors form a basis, write

<math>\textbf{x}^{*} - \textbf{x}_{0} = \sum_{i=0}^{n-1}\delta_i\textbf{d}_i</math>

Multiplying both sides by <math>\textbf{d}_{k}^{T}\textbf{A}</math> and using the A-conjugacy gives

<math>\delta_k = \frac{\textbf{d}_{k}^{T}\textbf{A}(\textbf{x}^{*} - \textbf{x}_{0})}{\textbf{d}_{k}^{T}\textbf{A}\textbf{d}_k}</math>

From the update formula, <math>\textbf{x}_{k} - \textbf{x}_{0} = \sum_{i=0}^{k-1}\alpha_i\textbf{d}_i</math>, so by A-conjugacy <math>\textbf{d}_{k}^{T}\textbf{A}(\textbf{x}_{k} - \textbf{x}_{0}) = 0</math>.

Therefore

<math>\delta_k = \frac{\textbf{d}_{k}^{T}\textbf{A}(\textbf{x}^{*} - \textbf{x}_{k})}{\textbf{d}_{k}^{T}\textbf{A}\textbf{d}_k} = \frac{\textbf{d}_{k}^{T}(\textbf{b} - \textbf{A}\textbf{x}_{k})}{\textbf{d}_{k}^{T}\textbf{A}\textbf{d}_k} = \frac{\textbf{g}_{k}^{T}\textbf{d}_{k}}{\textbf{d}_{k}^{T}\textbf{A}\textbf{d}_k} = \alpha_k</math>

which is exactly the step size used above, so after n steps <math>\textbf{x}_n = \textbf{x}_{0} + \sum_{i=0}^{n-1}\alpha_i\textbf{d}_i = \textbf{x}^{*}</math>.

The conjugate gradient method

The conjugate gradient method is a conjugate direction method in which the successive direction vectors are chosen as conjugate versions of the successive gradients obtained as the method progresses. The conjugate directions are not specified beforehand but rather are determined sequentially at each step of the iteration [3]. If the conjugate vectors are chosen carefully, not all of them may be needed to obtain the solution. Therefore, the conjugate gradient method is regarded as an iterative method. This also allows approximate solutions to systems where n is so large that a direct method would require too much time.

Algorithm

Given a set of n A-conjugate vectors <math>\left\{ \textbf{d}_{1},\textbf{d}_{2},..., \textbf{d}_{n}\right\}</math>, <math>f(\textbf{x})</math> can be minimized by stepping from <math>\textbf{x}_0</math> along <math>\textbf{d}_1</math> to the minimum <math>\textbf{x}_1</math>, stepping from <math>\textbf{x}_1</math> along <math>\textbf{d}_2</math> to the minimum <math>\textbf{x}_2</math>, etc. Let <math>\textbf{x}_0</math> be randomly chosen; then the algorithm is the following:

Alg 1: Pick <math>\textbf{d}_{1},\textbf{d}_{2},..., \textbf{d}_{n}</math> mutually A-conjugate, and from a random <math>\textbf{x}_0</math>,
For k = 1 to n
{

  1. <math>\alpha_k=(\textbf{g}_{k-1}^{T}\textbf{d}_{k})/(\textbf{d}_{k}^{T}\textbf{A}\textbf{d}_k)</math>, where <math>\textbf{g}_{k-1} = \textbf{b} - \textbf{A}\textbf{x}_{k-1}</math>;
  2. <math>\textbf{x}_k = \textbf{x}_{k-1} + \alpha_k\textbf{d}_k</math>;

}
Return <math>\textbf{x}_n</math>
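
A short Python sketch of Alg 1 is given below (the function name, the test system, and the supplied conjugate directions are illustrative assumptions, not from the article): each iteration performs one exact line search along one of the prescribed A-conjugate directions, and the solution is reached after n steps.

<syntaxhighlight lang="python">
import numpy as np

def conjugate_direction_method(A, b, directions, x0):
    """Alg 1: step along each A-conjugate direction exactly once."""
    x = x0.astype(float)
    for d in directions:                      # k = 1, ..., n
        g = b - A @ x                         # residual at the current iterate
        alpha = (g @ d) / (d @ A @ d)         # exact line search step size
        x = x + alpha * d                     # x_k = x_{k-1} + alpha_k d_k
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
directions = [np.array([1.0, 0.0]), np.array([1.0, -4.0])]   # mutually A-conjugate

x_n = conjugate_direction_method(A, b, directions, x0=np.zeros(2))
print(x_n)                                    # approximately [0.0909, 0.6364]
print(np.allclose(A @ x_n, b))                # True: solution reached in n = 2 steps
</syntaxhighlight>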

Here Alg 1 is with a particular choice of <math>\left\{ \textbf{d}_{1},\textbf{d}_{2},..., \textbf{d}_{n}\right\}</math>. Let <math>\textbf{g}_{k} = \textbf{b} - \textbf{A}\textbf{x}_{k}</math> be the residual (the negative gradient) at <math>\textbf{x}_k</math>. A practical way to enforce conjugacy is by requiring that the next search direction be built out of the current residual and all previous search directions. The CG method picks <math>\textbf{d}_{k+1}</math> as the component of <math>\textbf{g}_k</math> that is A-conjugate to <math>\left\{ \textbf{d}_{1},\textbf{d}_{2},..., \textbf{d}_{k}\right\}</math>:

<math>\textbf{d}_{k+1} = \textbf{g}_k-\sum_{i=1}^{k}\frac{\textbf{g}_{k}^{T}\textbf{A}\textbf{d}_{i}}{\textbf{d}_{i}^{T}\textbf{A}\textbf{d}_{i}}\textbf{d}_i</math>

As <math>\textbf{g}_{k}^{T}\textbf{A}\textbf{d}_{i} = 0</math> for i = 1,...,k-1, only the last term of the sum survives, giving the following CG algorithm:
Alg 2: From a random <math>\textbf{x}_0</math>,
For k = 1 to n
{

  1. <math>\textbf{g}_{k-1} = \textbf{b} - \textbf{A}\textbf{x}_{k-1}</math>;
  2. if <math>\textbf{g}_{k-1} = 0</math> return <math>\textbf{x}_{k-1}</math>;
  3. if (k > 1) <math>\beta_k = (\textbf{g}_{k-1}^{T}\textbf{A}\textbf{d}_{k-1})/(\textbf{d}_{k-1}^{T}\textbf{A}\textbf{d}_{k-1})</math>;
  4. if (k = 1) <math>\textbf{d}_k = \textbf{g}_0</math>;
  5. else <math>\textbf{d}_{k} = \textbf{g}_{k-1}-\beta_k\textbf{d}_{k-1}</math>;
  6. <math>\alpha_k=(\textbf{g}_{k-1}^{T}\textbf{d}_{k})/(\textbf{d}_{k}^{T}\textbf{A}\textbf{d}_k)</math>;
  7. <math>\textbf{x}_k = \textbf{x}_{k-1} + \alpha_k\textbf{d}_k</math>;

}
Return <math>\textbf{x}_n</math>
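
Below is a minimal Python sketch of Alg 2 (the function name and test system are illustrative assumptions, not from the article): the search directions are generated on the fly from the residuals, with β computed from the A-inner products as in step 3.

<syntaxhighlight lang="python">
import numpy as np

def cg_alg2(A, b, x0):
    """Alg 2: conjugate gradient with beta computed from A-inner products."""
    n = len(b)
    x = x0.astype(float)
    d = None
    for k in range(1, n + 1):
        g = b - A @ x                              # step 1: residual g_{k-1}
        if np.allclose(g, 0.0):                    # step 2: already at the solution
            return x
        if k == 1:
            d = g                                  # step 4: d_1 = g_0
        else:
            beta = (g @ A @ d) / (d @ A @ d)       # step 3: beta_k
            d = g - beta * d                       # step 5: d_k = g_{k-1} - beta_k d_{k-1}
        alpha = (g @ d) / (d @ A @ d)              # step 6: exact line search
        x = x + alpha * d                          # step 7: x_k = x_{k-1} + alpha_k d_k
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(cg_alg2(A, b, np.zeros(2)))                  # approximately [0.0909, 0.6364]
</syntaxhighlight>

Computing β this way costs an extra product with A; the derivation that follows shows how it reduces to a ratio of residual norms.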

The formulas in Alg 2 can be simplified as follows:

<math>\textbf{x}_i = \textbf{x}_{i-1}+\alpha_i\textbf{d}_i</math>

<math>\textbf{b}-\textbf{A}\textbf{x}_i = \textbf{b}-\textbf{A}\textbf{x}_{i-1}-\alpha_i\textbf{A}\textbf{d}_i</math>

<math>\textbf{g}_i = \textbf{g}_{i-1}-\alpha_i\textbf{A}\textbf{d}_i</math>

Then <math>\beta_i</math> and <math>\alpha_i</math> can be simplified by multiplying the above residual formula by <math>\textbf{g}_i</math> and <math>\textbf{g}_{i-1}</math>, and using the fact that successive residuals are orthogonal (<math>\textbf{g}_{i}^{T}\textbf{g}_{i-1} = 0</math>):

<math>\textbf{g}_{i}^{T}\textbf{g}_i = -\alpha_i\textbf{g}_{i}^{T}\textbf{A}\textbf{d}_i</math>

<math>\textbf{g}_{i-1}^{T}\textbf{g}_{i-1} = \alpha_i\textbf{g}_{i-1}^{T}\textbf{A}\textbf{d}_i</math>

As <math>\textbf{g}_{i-1} = \textbf{d}_i+\beta_i\textbf{d}_{i-1}</math>, we have

<math>\textbf{g}_{i-1}^{T}\textbf{g}_{i-1} = \alpha_i\textbf{g}_{i-1}^{T}\textbf{A}\textbf{d}_i=\alpha_i\textbf{d}_{i}^{T}\textbf{A}\textbf{d}_i</math>

and hence <math>\alpha_i = (\textbf{g}_{i-1}^{T}\textbf{g}_{i-1})/(\textbf{d}_{i}^{T}\textbf{A}\textbf{d}_i)</math>. Therefore

<math>\beta_{i+1} = \frac{\textbf{g}_{i}^{T}\textbf{A}\textbf{d}_{i}}{\textbf{d}_{i}^{T}\textbf{A}\textbf{d}_{i}} = -\frac{\textbf{g}_{i}^{T}\textbf{g}_{i}}{\textbf{g}_{i-1}^{T}\textbf{g}_{i-1}}</math>

This gives the following simplified version of Alg 2:
Alg 3: From a random <math>\textbf{x}_0</math>, and set <math>\textbf{g}_0 = \textbf{b} - \textbf{A}\textbf{x}_0</math>,
For k = 1 to n
{

  1. if <math>\textbf{g}_{k-1} = 0</math> return <math>\textbf{x}_{k-1}</math>;
  2. if (k > 1) <math>\beta_k = -(\textbf{g}_{k-1}^{T}\textbf{g}_{k-1})/(\textbf{g}_{k-2}^{T}\textbf{g}_{k-2})</math>;
  3. if (k = 1) <math>\textbf{d}_k = \textbf{g}_0</math>;
  4. else <math>\textbf{d}_{k} = \textbf{g}_{k-1}-\beta_k\textbf{d}_{k-1}</math>;
  5. <math>\alpha_k=(\textbf{g}_{k-1}^{T}\textbf{g}_{k-1})/(\textbf{d}_{k}^{T}\textbf{A}\textbf{d}_k)</math>;
  6. <math>\textbf{x}_k = \textbf{x}_{k-1} + \alpha_k\textbf{d}_k</math>;
  7. <math>\textbf{g}_{k}=\textbf{g}_{k-1}-\alpha_k\textbf{A}\textbf{d}_k</math>;

}
Return <math>\textbf{x}_n</math>
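
Here is a minimal Python sketch of Alg 3 (the function name, tolerance parameter, and test system are illustrative assumptions): β and α are computed from residual norms, the residual is updated recursively, and each iteration requires only one matrix-vector product with A.

<syntaxhighlight lang="python">
import numpy as np

def cg_alg3(A, b, x0, tol=1e-12):
    """Alg 3: conjugate gradient with the simplified beta and alpha formulas."""
    n = len(b)
    x = x0.astype(float)
    g = b - A @ x                                  # g_0 = b - A x_0
    d = g.copy()                                   # d_1 = g_0 (step 3)
    for k in range(1, n + 1):
        if np.linalg.norm(g) <= tol:               # step 1: residual (numerically) zero
            return x
        if k > 1:
            beta = -(g @ g) / (g_prev @ g_prev)    # step 2: beta_k
            d = g - beta * d                       # step 4: d_k = g_{k-1} - beta_k d_{k-1}
        Ad = A @ d                                 # the only matrix-vector product per iteration
        alpha = (g @ g) / (d @ Ad)                 # step 5: alpha_k
        x = x + alpha * d                          # step 6: x_k = x_{k-1} + alpha_k d_k
        g_prev = g
        g = g - alpha * Ad                         # step 7: g_k = g_{k-1} - alpha_k A d_k
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(cg_alg3(A, b, np.zeros(2)))                  # approximately [0.0909, 0.6364]
</syntaxhighlight>

For this 2 × 2 example the loop terminates after n = 2 iterations, as the theory predicts; in floating-point arithmetic and for large sparse systems, a tolerance-based stopping test is normally used instead of running exactly n steps.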

Numerical example

Application

Conclusion

References

J. Shewchuk, “An Introduction to the Conjugate Gradient Method Without the Agonizing Pain,” 1994.

  1. “Conjugate gradient method,” Wikipedia. Nov. 25, 2021. Accessed: Nov. 26, 2021. [Online]. Available: https://en.wikipedia.org/w/index.php?title=Conjugate_gradient_method&oldid=1057033318
  2. W. Stuetzle, “The Conjugate Gradient Method.” 2001. [Online]. Available: https://sites.stat.washington.edu/wxs/Stat538-w03/conjugate-gradients.pdf
  3. A. Singh and P. Ravikumar, “Conjugate Gradient Descent.” 2012. [Online]. Available: http://www.cs.cmu.edu/~pradeepr/convexopt/Lecture_Slides/conjugate_direction_methods.pdf