Conjugate gradient methods

From Cornell University Computational Optimization Open Textbook - Optimization Wiki

Author: Alexandra Roberts, Anye Shi, Yue Sun (SYSEN 6800 Fall 2021)

Introduction

Figure 1. A comparison of the convergence of gradient descent (in red) and the conjugate gradient method (in green) for minimizing a quadratic function. In theory, the conjugate gradient method converges in at most $n$ steps, where $n$ is the size of the matrix $A$ of the system (here $n = 2$).[1]

The conjugate gradient method (CG) was originally invented to minimize a quadratic function:

$f(x) = \frac{1}{2} x^T A x - b^T x$

where $A$ is an $n \times n$ symmetric positive definite matrix, and $x$ and $b$ are vectors in $\mathbb{R}^n$.
The solution to the minimization problem is equivalent to solving the linear system, i.e. determining $x^*$ such that $\nabla f(x^*) = A x^* - b = 0$, i.e. $A x^* = b$.
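
As a quick illustration of this equivalence, the following sketch (with an arbitrarily chosen small symmetric positive definite matrix, not taken from the article) checks numerically that the minimizer of the quadratic coincides with the solution of the linear system:

  import numpy as np
  from scipy.optimize import minimize

  # Arbitrary 2x2 symmetric positive definite matrix and right-hand side (illustrative only)
  A = np.array([[4.0, 1.0],
                [1.0, 3.0]])
  b = np.array([1.0, 2.0])

  def f(x):                      # quadratic objective f(x) = 1/2 x^T A x - b^T x
      return 0.5 * x @ A @ x - b @ x

  def grad(x):                   # gradient of f: A x - b
      return A @ x - b

  x_min = minimize(f, x0=np.zeros(2), jac=grad).x   # numerical minimizer of f
  x_lin = np.linalg.solve(A, b)                     # solution of A x = b
  print(x_min, x_lin)                               # the two answers agree to solver tolerance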

The conjugate gradient method is often implemented as an iterative algorithm and can be considered as being between Newton's method, a second-order method that incorporates the Hessian and the gradient, and the method of steepest descent, a first-order method that uses only the gradient. Newton's method usually reduces the number of iterations needed, but the calculation of the Hessian matrix and its inverse increases the computation required for each iteration. Steepest descent takes repeated steps in the opposite direction of the gradient of the function at the current point. It often takes steps in the same direction as earlier ones, resulting in slow convergence (Figure 1). To avoid the high computational cost of Newton's method and to accelerate the convergence rate of steepest descent, the conjugate gradient method was developed.

The idea of the CG method is to pick a set of orthogonal (conjugate) search directions first and, in each search direction, take exactly one step, with the step size chosen so that the iterate reaches the proposed solution along that direction. The solution is reached after $n$ steps [2]; theoretically, the number of iterations needed by the CG method is equal to the number of distinct eigenvalues of $A$, i.e. at most $n$. This makes it attractive for large and sparse problems. The method can be used to solve least-squares problems and can also be generalized to a minimization method for general smooth functions [2].
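
The claim about distinct eigenvalues can be checked with a small numerical sketch; the matrix below is an arbitrary illustrative construction with exactly three distinct eigenvalues, not an example from the article. SciPy's cg solver is used and a callback counts the iterations:

  import numpy as np
  from scipy.sparse.linalg import cg

  rng = np.random.default_rng(0)
  # 6x6 SPD matrix with only three distinct eigenvalues {1, 2, 3} (illustrative)
  Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
  A = Q @ np.diag([1.0, 1.0, 2.0, 2.0, 3.0, 3.0]) @ Q.T
  b = rng.standard_normal(6)

  iterations = []
  x, info = cg(A, b, callback=lambda xk: iterations.append(xk.copy()))
  print(len(iterations))   # at most 3: the number of distinct eigenvalues of A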

Theory

The definition of A-conjugate direction

Figure 2. The method of conjugate directions. Here, we use the coordinate axes as search directions.[3]

Let $A \in \mathbb{R}^{n \times n}$ be a symmetric positive definite matrix. The search direction vectors $d_1, d_2, \ldots, d_n$ are orthogonal (conjugate) to each other with respect to $A$ if $d_i^T A d_j = 0$ for all $i \neq j$. Note that if $A = 0$, any two vectors are conjugate to each other, and if $A = I$, conjugacy is equivalent to the conventional notion of orthogonality. If the nonzero vectors $d_1, d_2, \ldots, d_n$ are $A$-conjugate to each other, then the set of vectors is linearly independent.
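
As a quick check of this definition (using an arbitrary small SPD matrix, chosen only for illustration), the eigenvectors of a symmetric matrix form an $A$-conjugate set, since $v_i^T A v_j = \lambda_j v_i^T v_j = 0$ for $i \neq j$:

  import numpy as np

  # Arbitrary symmetric positive definite matrix (illustrative only)
  A = np.array([[4.0, 1.0, 0.0],
                [1.0, 3.0, 1.0],
                [0.0, 1.0, 2.0]])

  # Eigenvectors of a symmetric matrix are mutually orthogonal, hence A-conjugate
  _, V = np.linalg.eigh(A)
  D = V.T @ A @ V                        # Gram matrix of the directions in the A-inner product
  off_diagonal = D - np.diag(np.diag(D))
  print(np.max(np.abs(off_diagonal)))    # ~0: the columns of V are A-conjugate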

To illustrate the idea of conjugacy, the coordinate axes can be specified as search directions (Figure 2). The first step (horizontal) leads to the correct first coordinate of the solution $x^*$. The second step (vertical) will reach the correct second coordinate of the solution $x^*$ and then finish searching. We define the error $e_k = x_k - x^*$ as a vector that indicates how far $x_k$ is from the solution. The residual $r_k = b - A x_k$ indicates how far $A x_k$ is from the correct value of $b$. Notice that after each step the new error is orthogonal to the search direction just used. In general, for each step we choose a point

$x_{k+1} = x_k + \alpha_k d_k$.

To find the value of $\alpha_k$, use the fact that $e_{k+1}$ should be orthogonal to $d_k$, so that there is no need to step in the direction of $d_k$ again in the later search. Using this condition, we have

$d_k^T e_{k+1} = 0$
$d_k^T (e_k + \alpha_k d_k) = 0$
$\alpha_k = -\frac{d_k^T e_k}{d_k^T d_k}$.
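
A minimal sketch of this coordinate-axes search (using an arbitrary 2x2 SPD system for illustration, and computing the exact solution up front only so that the error $e_k$ is available, exactly as the formula above assumes):

  import numpy as np

  A = np.array([[3.0, 2.0],
                [2.0, 6.0]])          # illustrative SPD matrix
  b = np.array([2.0, -8.0])
  x_star = np.linalg.solve(A, b)      # needed here only to form the error e_k

  x = np.array([-2.0, -2.0])          # arbitrary starting point
  for d in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):   # coordinate axes as directions
      e = x - x_star                  # current error
      alpha = -(d @ e) / (d @ d)      # step size that makes the new error orthogonal to d
      x = x + alpha * d
  print(x, x_star)                    # after two steps x equals the exact solution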

The motivation of A-conjugacy[4]

Notice from the above equation that the error $e_k$ must be given first in order to compute $\alpha_k$. But if $e_k$ were known, the solution $x^*$ would already be known, which is problematic. So, to compute $\alpha_k$ without knowing $e_k$, let $\{d_1, d_2, \ldots, d_n\}$ be a set of $A$-conjugate vectors; then they can be used as a basis to express the solution $x^*$ of $Ax = b$:

$x^* = \sum_{i=1}^{n} \alpha_i d_i$

Then apply the transformation $A$ on both sides

$A x^* = \sum_{i=1}^{n} \alpha_i A d_i$

Then multiply $d_k^T$ on both sides

$d_k^T A x^* = \sum_{i=1}^{n} \alpha_i d_k^T A d_i$

Because $A x^* = b$ and the $A$-conjugacy of the $d_i$, i.e. $d_k^T A d_i = 0$ for $i \neq k$, the multiplication cancels out all the terms except for the $k$-th term

$d_k^T b = \alpha_k d_k^T A d_k$

So $\alpha_k$ can be obtained as the following:

$\alpha_k = \frac{d_k^T b}{d_k^T A d_k}$

Then the solution will be

$x^* = \sum_{k=1}^{n} \frac{d_k^T b}{d_k^T A d_k} d_k$.

It takes $n$ steps to reach $x^*$. Additionally, because $A$ is a symmetric and positive definite matrix, the term $d_k^T A d_k$ defines an inner product, and therefore there is no need to calculate the inverse of the matrix $A$.
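
The expansion above can be checked directly. The sketch below (an arbitrary small SPD system, illustrative only) builds an $A$-conjugate basis by Gram-Schmidt orthogonalization in the $A$-inner product and verifies that $\sum_k \frac{d_k^T b}{d_k^T A d_k} d_k$ reproduces the solution of $Ax = b$ without any matrix inversion:

  import numpy as np

  rng = np.random.default_rng(2)
  n = 5
  M = rng.standard_normal((n, n))
  A = M @ M.T + n * np.eye(n)          # illustrative SPD matrix
  b = rng.standard_normal(n)

  # Build an A-conjugate basis from the standard basis by Gram-Schmidt in the A-inner product
  D = []
  for e in np.eye(n):
      d = e.copy()
      for dj in D:
          d -= (dj @ A @ e) / (dj @ A @ dj) * dj
      D.append(d)

  # x* = sum_k (d_k^T b / d_k^T A d_k) d_k  -- no inverse of A is needed
  x_star = sum((d @ b) / (d @ A @ d) * d for d in D)
  print(np.max(np.abs(x_star - np.linalg.solve(A, b))))   # ~0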

Conjugate Direction Theorem

To demonstrate that the above procedure does compute $x^*$ in $n$ steps, we introduce the conjugate direction theorem. Similar to the above, let $\{d_1, d_2, \ldots, d_n\}$ be a set of $A$-conjugate vectors and $x_1$ be a random starting point. Then the theorem is the following: iterate

$g_k = A x_k - b$
$\alpha_k = -\frac{g_k^T d_k}{d_k^T A d_k}$
$x_{k+1} = x_k + \alpha_k d_k$

After $n$ steps, $x_{n+1} = x^*$.

Proof:
The error from the starting point $x_1$ to the solution $x^*$ can be formulated as an expansion in the basis $\{d_i\}$:

$x^* - x_1 = \sum_{i=1}^{n} \delta_i d_i$

And the traversed steps from the starting point $x_1$ to the point $x_k$ can be expressed as the following:

$x_k - x_1 = \sum_{i=1}^{k-1} \alpha_i d_i$

The residual term from the solution point $x^*$ to the point $x_k$ can be expressed as the following:

$x^* - x_k = (x^* - x_1) - (x_k - x_1) = \sum_{i=1}^{n} \delta_i d_i - \sum_{i=1}^{k-1} \alpha_i d_i$

Therefore, multiplying the expansion of $x^* - x_1$ by $d_k^T A$ and using the $A$-conjugacy of the $d_i$, the coefficient $\delta_k$ is as the following:

$d_k^T A (x^* - x_1) = \sum_{i=1}^{n} \delta_i d_k^T A d_i = \delta_k d_k^T A d_k$
$\delta_k = \frac{d_k^T A (x^* - x_1)}{d_k^T A d_k}$

But, still, the solution $x^*$ has to be known first to calculate $\delta_k$. So, the following manipulations get rid of $x^*$ in the above expression. Since $x_k - x_1$ lies in the span of $d_1, \ldots, d_{k-1}$,

$d_k^T A (x_k - x_1) = \sum_{i=1}^{k-1} \alpha_i d_k^T A d_i = 0$
$\delta_k = \frac{d_k^T A (x^* - x_1) - d_k^T A (x_k - x_1)}{d_k^T A d_k} = \frac{d_k^T A (x^* - x_k)}{d_k^T A d_k} = \frac{d_k^T (b - A x_k)}{d_k^T A d_k}$

Therefore

$\delta_k = -\frac{g_k^T d_k}{d_k^T A d_k} = \alpha_k$,

so after $n$ steps, $x_{n+1} = x_1 + \sum_{i=1}^{n} \alpha_i d_i = x_1 + \sum_{i=1}^{n} \delta_i d_i = x^*$.

The conjugate gradient method

The conjugate gradient method is a conjugate direction method in which the successive direction vectors are obtained as a conjugate version of the successive gradients computed as the method progresses. The conjugate directions are not specified beforehand but rather are determined sequentially at each step of the iteration [4]. If the conjugate vectors are carefully chosen, then not all of them may be needed to obtain a good approximation of the solution. Therefore, the conjugate gradient method is regarded as an iterative method. This also allows approximate solutions to systems where $n$ is so large that a direct method would require too much time [2].

Algorithm[2]

Given a set of $n$ $A$-conjugate vectors $\{d_1, d_2, \ldots, d_n\}$, $f(x)$ can be minimized by stepping from $x_1$ along $d_1$ to the minimizer $x_2$ of $f$ on that line, stepping from $x_2$ along $d_2$ to the minimizer $x_3$, etc. Let $x_1$ be randomly chosen; then the algorithm is the following:

Alg 1: Pick $d_1, \ldots, d_n$ mutually $A$-conjugate, and start from a random $x_1$:

Set a starting point $x_1$

Set $k = 1$

Set the maximum iteration $n$

While $k \leq n$:
    $g_k = A x_k - b$
    If $g_k = 0$:
        Break
    End if
    $\alpha_k = -\frac{g_k^T d_k}{d_k^T A d_k}$
    $x_{k+1} = x_k + \alpha_k d_k$
    $k = k + 1$
End while

Return $x_k$, $f(x_k)$

Equivalently, in compact form [2]:

Alg 1: Pick $d_1, \ldots, d_n$ mutually $A$-conjugate, and from a random $x_1$,
For k = 1 to n
{

  1. $\alpha_k = -\frac{g_k^T d_k}{d_k^T A d_k}$, where $g_k = A x_k - b$;
  2. $x_{k+1} = x_k + \alpha_k d_k$;

}
Return $x_{n+1}$
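
Alg 1 can be checked numerically. The sketch below (all matrices and vectors are arbitrary illustrative choices, not taken from the article) uses the eigenvectors of $A$ as a convenient $A$-conjugate direction set and verifies that the iteration reaches the exact solution after $n$ steps, as the conjugate direction theorem states:

  import numpy as np

  rng = np.random.default_rng(3)
  n = 4
  M = rng.standard_normal((n, n))
  A = M @ M.T + n * np.eye(n)      # illustrative SPD matrix
  b = rng.standard_normal(n)

  _, V = np.linalg.eigh(A)         # eigenvectors: one convenient A-conjugate direction set
  x = rng.standard_normal(n)       # random starting point x_1
  for k in range(n):
      d = V[:, k]
      g = A @ x - b                             # gradient at the current iterate
      alpha = -(g @ d) / (d @ A @ d)            # step size from Alg 1
      x = x + alpha * d
  print(np.max(np.abs(x - np.linalg.solve(A, b))))   # ~0 after n steps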

The conjugate gradient method is Alg 1 with a particular choice of the directions $d_k$. Let $g_k = A x_k - b$ be the gradient at $x_k$. A practical way to enforce conjugacy is to build the next search direction out of the current gradient and all previous search directions. The CG method picks $d_{k+1}$ as the component of $-g_{k+1}$ that is $A$-conjugate to $d_1, \ldots, d_k$:

$d_{k+1} = -g_{k+1} + \sum_{i=1}^{k} \frac{g_{k+1}^T A d_i}{d_i^T A d_i} d_i$

As $g_{k+1}^T A d_i = 0$ for $i = 1, \ldots, k-1$, only the last term of the sum survives, giving the following CG algorithm:
Alg 2: From a random $x_1$,
For k = 1 to n
{

  1. $g_k = A x_k - b$;
  2. if $g_k = 0$ return $x_k$;
  3. if (k > 1) $\beta_{k-1} = \frac{g_k^T A d_{k-1}}{d_{k-1}^T A d_{k-1}}$;
  4. if (k = 1) $d_1 = -g_1$;
  5. else $d_k = -g_k + \beta_{k-1} d_{k-1}$;
  6. $\alpha_k = -\frac{g_k^T d_k}{d_k^T A d_k}$;
  7. $x_{k+1} = x_k + \alpha_k d_k$;

}
Return $x_{n+1}$
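
The conjugacy property invoked above (the new gradient is $A$-conjugate to every previous direction except the most recent one, so the Gram-Schmidt sum collapses to a single term) can be verified numerically. The sketch below runs Alg 2 on an arbitrary SPD system (an illustrative choice, not from the article) and reports the largest value of $|g_{k+1}^T A d_i|$ over $i \le k-1$:

  import numpy as np

  rng = np.random.default_rng(5)
  n = 6
  M = rng.standard_normal((n, n))
  A = M @ M.T + n * np.eye(n)      # illustrative SPD matrix
  b = rng.standard_normal(n)

  x = rng.standard_normal(n)
  g = A @ x - b
  d = -g
  directions, worst = [], 0.0
  for k in range(1, n + 1):
      Ad = A @ d
      alpha = -(g @ d) / (d @ Ad)           # Alg 2, step 6
      x = x + alpha * d                     # Alg 2, step 7
      g_new = A @ x - b                     # gradient at the next iterate
      directions.append(d)
      # g_{k+1} should be A-conjugate to every stored direction except the most recent one
      if len(directions) > 1:
          worst = max(worst, max(abs(g_new @ (A @ di)) for di in directions[:-1]))
      beta = (g_new @ Ad) / (d @ Ad)        # Alg 2, step 3
      d = -g_new + beta * d                 # Alg 2, step 5
      g = g_new
  print(worst)   # ~0 (roundoff level): the sum in the Gram-Schmidt formula collapses to one term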

The formulas in Alg 2 can be simplified as the following. Since $x_{k+1} = x_k + \alpha_k d_k$, the gradient satisfies

$g_{k+1} = A x_{k+1} - b = A(x_k + \alpha_k d_k) - b = g_k + \alpha_k A d_k$

Then $\alpha_k$ and $\beta_k$ can be simplified by multiplying the above gradient formula by $g_{k+1}^T$ and $d_k^T$ as the following:

$g_{k+1}^T g_{k+1} = g_{k+1}^T g_k + \alpha_k\, g_{k+1}^T A d_k, \qquad d_k^T g_{k+1} = d_k^T g_k + \alpha_k\, d_k^T A d_k$

As $g_{k+1}^T g_k = 0$ and $d_k^T g_{k+1} = 0$, and since $d_k^T g_k = -g_k^T g_k$ (from $d_k = -g_k + \beta_{k-1} d_{k-1}$ and $g_k^T d_{k-1} = 0$),
so we have

$\alpha_k = \frac{g_k^T g_k}{d_k^T A d_k}, \qquad g_{k+1}^T A d_k = \frac{g_{k+1}^T g_{k+1}}{\alpha_k} = \frac{g_{k+1}^T g_{k+1}}{g_k^T g_k}\, d_k^T A d_k$

Therefore

$\beta_k = \frac{g_{k+1}^T A d_k}{d_k^T A d_k} = \frac{g_{k+1}^T g_{k+1}}{g_k^T g_k}$
This gives the following simplified version of Alg 2:
Alg 3: From a random $x_1$, and set $g_1 = A x_1 - b$,
For k = 1 to n
{

  1. if $g_k = 0$ return $x_k$;
  2. if (k > 1) $\beta_{k-1} = \frac{g_k^T g_k}{g_{k-1}^T g_{k-1}}$;
  3. if (k = 1) $d_1 = -g_1$;
  4. else $d_k = -g_k + \beta_{k-1} d_{k-1}$;
  5. $\alpha_k = \frac{g_k^T g_k}{d_k^T A d_k}$;

  6. $x_{k+1} = x_k + \alpha_k d_k$; $g_{k+1} = g_k + \alpha_k A d_k$;

}
Return $x_{n+1}$
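
A direct Python translation of Alg 3 is sketched below using NumPy. Variable names follow the pseudocode; the loop body is reordered slightly so that $\beta$ and the next direction are computed at the end of each iteration, and the test matrix is an arbitrary illustrative choice:

  import numpy as np

  def conjugate_gradient(A, b, x, tol=1e-12):
      """Alg 3: CG for a symmetric positive definite A, starting from x (x_1 in the pseudocode)."""
      g = A @ x - b                          # g_1 = A x_1 - b
      d = -g                                 # d_1 = -g_1
      for _ in range(len(b)):
          if np.linalg.norm(g) <= tol:       # stop once the gradient vanishes
              return x
          Ad = A @ d
          alpha = (g @ g) / (d @ Ad)         # alpha_k = g_k^T g_k / (d_k^T A d_k)
          x = x + alpha * d                  # x_{k+1} = x_k + alpha_k d_k
          g_new = g + alpha * Ad             # g_{k+1} = g_k + alpha_k A d_k
          beta = (g_new @ g_new) / (g @ g)   # beta_k = g_{k+1}^T g_{k+1} / (g_k^T g_k)
          d = -g_new + beta * d              # d_{k+1} = -g_{k+1} + beta_k d_k
          g = g_new
      return x

  # Illustrative usage on a random SPD system
  rng = np.random.default_rng(4)
  n = 6
  M = rng.standard_normal((n, n))
  A = M @ M.T + n * np.eye(n)
  b = rng.standard_normal(n)
  x = conjugate_gradient(A, b, np.zeros(n))
  print(np.max(np.abs(x - np.linalg.solve(A, b))))   # ~0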

Numerical example

Consider the linear system

 .

The initial starting point is set to be

 .

Implement the conjugate gradient method to approximate the solution to the system.

Solution:
The exact solution is given below for later reference:

 .

Step 1:

 

Step 2:

 

Step 3:

 

Step 4:

 

Step 5:

 

Step 6:

 

Step 7:

 

Step 8:

 

Therefore, the iterate obtained in the final step above is the approximate solution of the system.
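
Such a step-by-step trace can be generated for any small SPD system with a few lines of code. The 2x2 system and starting point below are hypothetical stand-ins chosen for illustration, not the system of this example:

  import numpy as np

  A = np.array([[4.0, 1.0],
                [1.0, 3.0]])            # hypothetical SPD system, for illustration only
  b = np.array([1.0, 2.0])
  x = np.array([2.0, 1.0])              # hypothetical starting point

  g = A @ x - b
  d = -g
  for k in range(1, len(b) + 1):
      Ad = A @ d
      alpha = (g @ g) / (d @ Ad)
      x = x + alpha * d
      g_new = g + alpha * Ad
      beta = (g_new @ g_new) / (g @ g)
      print(f"step {k}: alpha={alpha:.6f}, x={x}, ||g||={np.linalg.norm(g_new):.2e}, beta={beta:.6f}")
      d = -g_new + beta * d
      g = g_new
  print("exact solution:", np.linalg.solve(A, b))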

Application

Conjugate gradient methods have often been used to solve a wide variety of numerical problems, including linear and nonlinear algebraic equations, eigenvalue problems and minimization problems. These applications have been similar in that they involve large numbers of variables or dimensions. In these circumstances any method of solution which involves storing a full matrix of this large order, becomes inapplicable. Thus recourse to the conjugate gradient method may be the only alternative [5]. Here we show an example of image reconstruction.

Iterative image reconstruction

The conjugate gradient method is used to solve for the update in iterative image reconstruction problems. For example, in the magnetic resonance imaging (MRI) contrast known as quantitative susceptibility mapping (QSM), the reconstructed image is solved for iteratively from magnetic field data via the relation [6]

 

where the system matrix expresses convolution with the dipole kernel in the Fourier domain. Given that the problem is ill-posed, a physical prior is used in the reconstruction, which is framed as a constrained L1-norm minimization

 
 

A detailed treatment of the functions appearing in this formulation can be found in [6]. This problem can be expressed as an unconstrained minimization problem via the Lagrange multiplier method

 

Where

 

The first-order conditions require the derivatives of the Lagrangian with respect to the image and with respect to the Lagrange multiplier to vanish. These conditions yield the corresponding stationarity equations, and the update can be solved for via fixed-point iteration [6].

 

and can be expressed as a quasi-Newton problem, which is more robust to round-off error [6]

 

which is solved with the CG method until the norm of the residual falls below a specified tolerance $\epsilon$.
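
The specific QSM formulation above is not reproduced here, but the general pattern of such reconstructions can be sketched: the system matrix is never formed explicitly, and CG is applied to a regularized normal-equation operator supplied as a matrix-free LinearOperator. The forward model, the Tikhonov-style regularization, and all values below are illustrative assumptions, not the formulation of [6]:

  import numpy as np
  from scipy.sparse.linalg import LinearOperator, cg

  rng = np.random.default_rng(1)
  n = 200
  F = rng.standard_normal((n, n)) / np.sqrt(n)   # stand-in forward model (not the dipole kernel)
  y = rng.standard_normal(n)                     # stand-in measured data
  lam = 0.1                                      # illustrative regularization weight

  # Matrix-free application of the regularized normal-equation operator (F^T F + lam I) x
  def matvec(x):
      return F.T @ (F @ x) + lam * x

  A_op = LinearOperator((n, n), matvec=matvec)
  rhs = F.T @ y

  x_rec, info = cg(A_op, rhs)                        # CG never needs the matrix explicitly
  print(info, np.linalg.norm(matvec(x_rec) - rhs))   # info == 0 means convergence to tolerance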

Conclusion

The conjugate gradient method was invented to avoid the high computational cost of Newton's method and to accelerate the convergence rate of steepest descent. As an iterative method, each step only requires a matrix-vector multiplication with $A$ and does not require storing or factorizing the matrix $A$ explicitly. The selected direction vectors are a conjugate version of the successive gradients obtained while the method progresses. The method therefore monotonically improves the approximation to the exact solution and, in the absence of round-off error, may reach the required tolerance after a relatively small (compared to the problem size) number of iterations, which makes it widely used for solving large and sparse problems.

References

  1. Refsnæs, Runar Heggelien. "A Brief Introduction to the Conjugate Gradient Method." 2009.
  2. W. Stuetzle, "The Conjugate Gradient Method." 2001. [Online]. Available: https://sites.stat.washington.edu/wxs/Stat538-w03/conjugate-gradients.pdf
  3. Jonathan Shewchuk, "An Introduction to the Conjugate Gradient Method Without the Agonizing Pain," 1994.
  4. A. Singh and P. Ravikumar, "Conjugate Gradient Descent." 2012. [Online]. Available: http://www.cs.cmu.edu/~pradeepr/convexopt/Lecture_Slides/conjugate_direction_methods.pdf
  5. R. Fletcher, "Conjugate gradient methods for indefinite systems," in Numerical Analysis, Berlin, Heidelberg, 1976, pp. 73-89. doi: 10.1007/BFb0080116.
  6. J. Liu et al., "Morphology enabled dipole inversion for quantitative susceptibility mapping using structural consistency between the magnitude image and the susceptibility map," NeuroImage, vol. 59, no. 3, pp. 2560-2568, Feb. 2012. doi: 10.1016/j.neuroimage.2011.08.082.