Conjugate gradient methods

From Cornell University Computational Optimization Open Textbook - Optimization Wiki

Author: Alexandra Roberts, Anye Shi, Yue Sun (CHEME/SYSEN 6800, Fall 2021)

Introduction

Figure 1. A comparison of the convergence of gradient descent (in red) and the conjugate gradient method (in green) for minimizing a quadratic function. In theory, the conjugate gradient method converges in at most $n$ steps, where $n$ is the size of the matrix of the system (here $n = 2$).[1]

The conjugate gradient method (CG) was originally invented to minimize a quadratic function:

$f(x) = \frac{1}{2} x^T A x - b^T x$

where $A$ is an $n \times n$ symmetric positive definite matrix, and $x$ and $b$ are $n$-dimensional vectors. The solution to the minimization problem is equivalent to solving the linear system, i.e. determining $x$ such that $\nabla f(x) = A x - b = 0$, i.e. $A x = b$.
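To make this equivalence concrete, the following minimal Python sketch (an illustrative addition, with an arbitrarily chosen symmetric positive definite matrix $A$ and vector $b$) checks that the gradient $Ax - b$ vanishes at the solution of the linear system:

```python
import numpy as np

# An arbitrary symmetric positive definite matrix and right-hand side
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

def f(x):
    """Quadratic objective f(x) = 1/2 x^T A x - b^T x."""
    return 0.5 * x @ A @ x - b @ x

def grad_f(x):
    """Gradient of the quadratic: A x - b."""
    return A @ x - b

x_star = np.linalg.solve(A, b)   # solve the linear system A x = b
print(grad_f(x_star))            # ~ [0, 0]: the minimizer of f solves A x = b
```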

The conjugate gradient method is often implemented as an iterative algorithm and can be considered as being between Newton's method, a second-order method that incorporates the Hessian and the gradient, and the method of steepest descent, a first-order method that uses only the gradient. Newton's method usually reduces the number of iterations needed, but the calculation of the Hessian matrix and its inverse increases the computational cost of each iteration. Steepest descent takes repeated steps in the opposite direction of the gradient of the function at the current point. It often takes steps in the same direction as earlier ones, resulting in slow convergence (Figure 1). To avoid the high computational cost of Newton's method and to accelerate the convergence rate of steepest descent, the conjugate gradient method was developed.

The idea of the CG method is to pick $n$ mutually orthogonal search directions and, in each search direction, take exactly one step, with the step length chosen so that the iterate reaches the exact component of the solution along that direction. The solution is reached after at most $n$ steps[2]; in theory, the number of iterations needed by the CG method equals the number of distinct eigenvalues of $A$, i.e. at most $n$. This makes it attractive for large and sparse problems. The method can be used to solve least-squares problems and can also be generalized to a minimization method for general smooth functions[2].

Theory

The definition of A-conjugate direction

Figure 2. The method of conjugate directions. Here, we use the coordinate axes as search directions.[3]

Let $A$ be an $n \times n$ symmetric positive definite matrix. The search direction vectors $d_i$ and $d_j$ are orthogonal (conjugate) to each other with respect to $A$ if $d_i^T A d_j = 0$ for $i \neq j$. Note that if $A = 0$, any two vectors are conjugate to each other, and if $A = I$, conjugacy is equivalent to the conventional notion of orthogonality. If the nonzero vectors $d_0, d_1, \dots, d_{n-1}$ are $A$-conjugate to each other, then the set of vectors is linearly independent.

To illustrate the idea of conjugacy, the coordinate axes can be specified as search directions (Figure 2). The first step (horizontal) leads to the correct first coordinate of the solution $x^*$. The second step (vertical) reaches the correct second coordinate of $x^*$ and finishes the search. Define the error $e_i = x_i - x^*$ as a vector that indicates how far the iterate $x_i$ is from the solution, and the residual $r_i = b - A x_i$ as a vector that indicates how far $A x_i$ is from the correct value $b$. Notice that $e_1$ is orthogonal to $d_0$. In general, at each step we choose a point

$x_{i+1} = x_i + \alpha_i d_i$.

To find the value of $\alpha_i$, use the fact that $e_{i+1}$ should be orthogonal to $d_i$, so that there will be no need to step in the direction of $d_i$ again later in the search. Using this condition, we have

$d_i^T e_{i+1} = 0$

$d_i^T (e_i + \alpha_i d_i) = 0$

$\alpha_i = -\frac{d_i^T e_i}{d_i^T d_i}$.
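The coordinate-axes construction can be checked directly. The short Python sketch below (an illustration with arbitrary numbers) steps along the coordinate axes with the step length derived above; note that it needs the error, and therefore the solution itself, which is exactly the issue addressed by the $A$-conjugate formulation in the next subsection:

```python
import numpy as np

# Known solution x* (for illustration only; in practice x* is unknown, which is
# why the A-conjugate formulation below is needed).
x_star = np.array([2.0, -1.0])
x = np.array([-3.0, 3.0])          # starting point x_0

for d in np.eye(2):                # coordinate axes as orthogonal search directions
    e = x - x_star                 # error e_i = x_i - x*
    alpha = -(d @ e) / (d @ d)     # step length that zeroes the error along d
    x = x + alpha * d
print(x)                           # equals x* after n = 2 steps
```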

The motivation of A-conjugacy[4]

Notice from the above equation that the error $e_i$ has to be known to compute $\alpha_i$, but knowing $e_i$ would mean that the solution itself is already known, which is problematic. So, to compute $\alpha_i$ without knowing $e_i$, let $\{d_0, d_1, \dots, d_{n-1}\}$ be a set of $A$-conjugate vectors; they can then be used as a basis in which the solution $x^*$ of $Ax = b$ is expressed as

$x^* = \sum_{i=0}^{n-1} \alpha_i d_i$

Multiplying both sides by $A$ gives

$A x^* = \sum_{i=0}^{n-1} \alpha_i A d_i$

Then multiplying both sides by $d_k^T$ gives

$d_k^T A x^* = \sum_{i=0}^{n-1} \alpha_i\, d_k^T A d_i$

Because $A x^* = b$ and because of the $A$-conjugacy of the $d_i$, i.e. $d_k^T A d_i = 0$ for $i \neq k$, the multiplication cancels out all the terms except for the $i = k$ term:

$d_k^T b = \alpha_k\, d_k^T A d_k$

So $\alpha_k$ can be obtained as follows:

$\alpha_k = \frac{d_k^T b}{d_k^T A d_k}$

Then the solution is

$x^* = \sum_{k=0}^{n-1} \frac{d_k^T b}{d_k^T A d_k}\, d_k$.

It takes $n$ steps to reach $x^*$. Additionally, because $A$ is a symmetric positive-definite matrix, the term $d_i^T A d_j$ defines an inner product, and there is no need to calculate the inverse of the matrix $A$.
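The following Python sketch illustrates this closed-form construction; the small symmetric positive definite system is arbitrary, and the $A$-conjugate directions are built from the standard basis by Gram-Schmidt conjugation, which is one standard choice and not prescribed above:

```python
import numpy as np

# Arbitrary symmetric positive definite system (illustrative values)
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
n = len(b)

# Build A-conjugate directions d_0, ..., d_{n-1} from the standard basis
# by Gram-Schmidt conjugation: subtract the A-projections onto earlier d_i.
d = []
for k in range(n):
    u = np.eye(n)[k]
    for di in d:
        u = u - (di @ A @ u) / (di @ A @ di) * di
    d.append(u)

# Check A-conjugacy: d_i^T A d_j = 0 for i != j
for i in range(n):
    for j in range(i + 1, n):
        assert abs(d[i] @ A @ d[j]) < 1e-10

# Closed-form solution x* = sum_k (d_k^T b / d_k^T A d_k) d_k
x_star = sum((dk @ b) / (dk @ A @ dk) * dk for dk in d)
print(x_star, np.linalg.solve(A, b))   # the two agree
```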

Conjugate Direction Theorem

To demonstrate that the above procedure does compute $x^*$ in $n$ steps, we introduce the conjugate direction theorem. As above, let $\{d_0, d_1, \dots, d_{n-1}\}$ be a set of $A$-conjugate vectors and let $x_0$ be an arbitrary starting point. The theorem is the following:

$x_{k+1} = x_k + \alpha_k d_k$

$r_k = b - A x_k$

$\alpha_k = \frac{d_k^T r_k}{d_k^T A d_k}$

After $n$ steps, $x_n = x^*$.

Proof:
The error from the starting point $x_0$ to the solution $x^*$ can be expanded in the basis of $A$-conjugate directions as follows:

$x^* - x_0 = \sum_{i=0}^{n-1} \alpha_i d_i$

And the steps traversed from the starting point $x_0$ to the point $x_k$ can be expressed as follows:

$x_k - x_0 = \sum_{i=0}^{k-1} \alpha_i d_i$

The remaining term from the point $x_k$ to the solution $x^*$ can be expressed as follows:

$x^* - x_k = \sum_{i=k}^{n-1} \alpha_i d_i$

Therefore, multiplying the first expansion by $d_k^T A$ and using the $A$-conjugacy of the directions, the coefficient $\alpha_k$ is

$d_k^T A (x^* - x_0) = \sum_{i=0}^{n-1} \alpha_i\, d_k^T A d_i = \alpha_k\, d_k^T A d_k$

$\alpha_k = \frac{d_k^T A (x^* - x_0)}{d_k^T A d_k}$

But, still, the solution $x^*$ has to be known to calculate $\alpha_k$. The following manipulations remove $x^*$ from the above expression:

$d_k^T A (x_k - x_0) = \sum_{i=0}^{k-1} \alpha_i\, d_k^T A d_i = 0$

$d_k^T A (x^* - x_0) = d_k^T A (x^* - x_k) + d_k^T A (x_k - x_0) = d_k^T A (x^* - x_k) = d_k^T (b - A x_k) = d_k^T r_k$

Therefore

$\alpha_k = \frac{d_k^T r_k}{d_k^T A d_k}$

which is exactly the step size used in the theorem, so after $n$ steps the full expansion of $x^* - x_0$ has been accumulated and $x_n = x^*$.

The conjugate gradient method

The conjugate gradient method is a conjugate direction method in which the successive direction vectors are obtained as a conjugate version of the successive gradients computed as the method progresses. The conjugate directions are not specified beforehand but rather are determined sequentially at each step of the iteration[4]. If the conjugate vectors are chosen carefully, not all of them may be needed to obtain a good approximation of the solution; therefore, the conjugate gradient method is regarded as an iterative method. This also allows approximate solutions to systems where $n$ is so large that a direct method would require too much time[2].

Derivation of Algorithm[2]

Given a set of $A$-conjugate vectors $\{d_0, d_1, \dots, d_{n-1}\}$, $f(x)$ can be minimized by stepping from $x_0$ along $d_0$ to the minimum point $x_1$ along that direction, then stepping from $x_1$ along $d_1$ to the minimum point $x_2$, and so on. Let $x_0$ be chosen arbitrarily; the algorithm (Alg 1) is then the following:

Set a starting point $x_0$

Set $k = 0$

Set the maximum number of iterations $n$

While $k < n$:

$r_k = b - A x_k$

$\alpha_k = \frac{d_k^T r_k}{d_k^T A d_k}$

$x_{k+1} = x_k + \alpha_k d_k$

$k = k + 1$

End while

Return $x_n$
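A minimal Python sketch of Alg 1 is given below, assuming the $A$-conjugate directions $d_0, \dots, d_{n-1}$ have already been constructed (for example, by the Gram-Schmidt conjugation shown earlier); the diagonal test system in the usage example is an arbitrary illustration:

```python
import numpy as np

def conjugate_direction_method(A, b, x0, directions):
    """Alg 1: step once along each of n given A-conjugate directions."""
    x = x0.astype(float)
    for d in directions:               # one step per direction
        r = b - A @ x                  # residual at the current iterate
        alpha = (d @ r) / (d @ A @ d)  # exact step length along d
        x = x + alpha * d
    return x

# Usage with a diagonal A, for which the coordinate axes are already A-conjugate
A = np.diag([4.0, 2.0, 1.0])
b = np.array([1.0, 2.0, 3.0])
dirs = list(np.eye(3))
print(conjugate_direction_method(A, b, np.zeros(3), dirs))  # -> [0.25, 1.0, 3.0]
```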


The conjugate gradient algorithm is Alg 1 with a particular choice of the search directions. Let $r_k = b - A x_k$ be the residual (equivalent to the negative of the gradient of $f$) at $x_k$. A practical way to enforce conjugacy is to build the next search direction out of the current residual and all previous search directions: the CG method picks $d_k$ as the component of $r_{k-1}$ that is $A$-conjugate to $d_1, \dots, d_{k-1}$,

$d_k = r_{k-1} - \sum_{i=1}^{k-1} \frac{d_i^T A r_{k-1}}{d_i^T A d_i}\, d_i$

As it turns out, $d_i^T A r_{k-1} = 0$ for $i < k-1$, so only the last term of the sum survives, giving the following CG algorithm (Alg 2):

Set a starting point $x_0$

Set $k = 1$

Set the maximum number of iterations $n$

While $k \le n$:

$r_{k-1} = b - A x_{k-1}$

if $r_{k-1} = 0$: return $x_{k-1}$

if ($k > 1$): $\beta_k = -\frac{d_{k-1}^T A r_{k-1}}{d_{k-1}^T A d_{k-1}}$

if ($k = 1$): $d_k = r_0$

else: $d_k = r_{k-1} + \beta_k d_{k-1}$

$\alpha_k = \frac{d_k^T r_{k-1}}{d_k^T A d_k}$

$x_k = x_{k-1} + \alpha_k d_k$

$k = k + 1$

End while

Return $x_n$


The formulas in Alg 2 can be simplified as follows:

$r_k = b - A x_k = b - A (x_{k-1} + \alpha_k d_k) = r_{k-1} - \alpha_k A d_k$

$d_k^T r_{k-1} = (r_{k-1} + \beta_k d_{k-1})^T r_{k-1} = r_{k-1}^T r_{k-1}$  (since $d_{k-1}^T r_{k-1} = 0$ by the exact line search of the previous step)

$r_{k-1}^T A d_k = (d_k - \beta_k d_{k-1})^T A d_k = d_k^T A d_k$  (by the $A$-conjugacy of $d_{k-1}$ and $d_k$)

$\alpha_k = \frac{d_k^T r_{k-1}}{d_k^T A d_k} = \frac{r_{k-1}^T r_{k-1}}{d_k^T A d_k}$

Then $\beta_{k+1}$ can be simplified by multiplying the above gradient (residual) formula by $r_{k-1}^T$ and $r_k^T$:

$r_{k-1}^T r_k = r_{k-1}^T r_{k-1} - \alpha_k\, r_{k-1}^T A d_k = r_{k-1}^T r_{k-1} - \alpha_k\, d_k^T A d_k = 0$

$r_k^T r_k = r_k^T r_{k-1} - \alpha_k\, r_k^T A d_k$

As $r_k^T r_{k-1} = 0$, we have

$d_k^T A r_k = -\frac{r_k^T r_k}{\alpha_k}, \qquad d_k^T A d_k = \frac{r_{k-1}^T r_{k-1}}{\alpha_k}$

Therefore

$\beta_{k+1} = -\frac{d_k^T A r_k}{d_k^T A d_k} = \frac{r_k^T r_k}{r_{k-1}^T r_{k-1}}$
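The equivalence of the two forms of $\beta$ can be checked numerically. The short Python sketch below (an illustration; the random symmetric positive definite matrix and right-hand side are arbitrary) performs the first CG step starting from $d_1 = r_0$ and compares the two expressions:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)         # a random symmetric positive definite matrix
b = rng.standard_normal(5)

r0 = b.copy()                       # residual at x_0 = 0
d1 = r0.copy()                      # first search direction d_1 = r_0
alpha1 = (r0 @ r0) / (d1 @ A @ d1)  # step length along d_1
r1 = r0 - alpha1 * (A @ d1)         # next residual

beta_gram_schmidt = -(d1 @ A @ r1) / (d1 @ A @ d1)  # form used in Alg 2
beta_simplified = (r1 @ r1) / (r0 @ r0)             # simplified form
print(beta_gram_schmidt, beta_simplified)           # agree up to round-off
```

In exact arithmetic the two expressions coincide, which is what the simplified algorithm below exploits.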

This gives the following simplified version of Alg 2:

Set a starting point $x_0$

Set $r_0 = b - A x_0$

Set $k = 1$

Set the maximum number of iterations $n$

While $k \le n$:

if $r_{k-1} = 0$: return $x_{k-1}$

if ($k > 1$): $\beta_k = \frac{r_{k-1}^T r_{k-1}}{r_{k-2}^T r_{k-2}}$

if ($k = 1$): $d_k = r_0$

else: $d_k = r_{k-1} + \beta_k d_{k-1}$

$\alpha_k = \frac{r_{k-1}^T r_{k-1}}{d_k^T A d_k}$

$x_k = x_{k-1} + \alpha_k d_k$

$r_k = r_{k-1} - \alpha_k A d_k$

$k = k + 1$

End while

Return $x_n$
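Below is a minimal, self-contained Python sketch of the simplified algorithm (Alg 3); it uses 0-based array indexing and a small residual tolerance in place of the exact test $r = 0$, which are implementation choices rather than part of the algorithm above:

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive definite A with the CG method (Alg 3)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    max_iter = n if max_iter is None else max_iter
    r = b - A @ x                   # initial residual r_0
    d = r.copy()                    # initial direction d_0 = r_0
    rs_old = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs_old) <= tol * np.linalg.norm(b):
            break                   # residual small enough: stop
        Ad = A @ d
        alpha = rs_old / (d @ Ad)   # alpha = r^T r / d^T A d
        x = x + alpha * d           # x_{k+1} = x_k + alpha d_k
        r = r - alpha * Ad          # r_{k+1} = r_k - alpha A d_k
        rs_new = r @ r
        beta = rs_new / rs_old      # beta = r_{k+1}^T r_{k+1} / r_k^T r_k
        d = r + beta * d            # d_{k+1} = r_{k+1} + beta d_k
        rs_old = rs_new
    return x

# Usage: a small SPD system (illustrative values)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))     # ~ [0.0909, 0.6364]
```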

Numerical example

Consider the linear system $A x = b$ given by

$A x = \begin{bmatrix} 4 & 1 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$.

The initial starting point is set to be

$x_0 = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$.

Implement the conjugate gradient method to approximate the solution to the system.

Solution:
The exact solution is given below for later reference:

$x^* = \begin{bmatrix} 1/11 \\ 7/11 \end{bmatrix} \approx \begin{bmatrix} 0.0909 \\ 0.6364 \end{bmatrix}$.

Step 1: since this is the first iteration, use the residual vector $r_0$ as the initial search direction $d_1$. The method of selecting the search direction will change in further iterations.

$d_1 = r_0 = b - A x_0 = \begin{bmatrix} 1 \\ 2 \end{bmatrix} - \begin{bmatrix} 4 & 1 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} 2 \\ 1 \end{bmatrix} = \begin{bmatrix} -8 \\ -3 \end{bmatrix}$

Step 2: the scalar $\alpha_1$ can be calculated using the relationship

$\alpha_1 = \frac{r_0^T r_0}{d_1^T A d_1} = \frac{(-8)^2 + (-3)^2}{\begin{bmatrix} -8 & -3 \end{bmatrix} \begin{bmatrix} 4 & 1 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} -8 \\ -3 \end{bmatrix}} = \frac{73}{331} \approx 0.2205$

Step 3: $x_1$ can now be reached by the following formula, and this step finishes the first iteration, the result being an "improved" approximate solution to the system, $x_1$:

$x_1 = x_0 + \alpha_1 d_1 = \begin{bmatrix} 2 \\ 1 \end{bmatrix} + 0.2205 \begin{bmatrix} -8 \\ -3 \end{bmatrix} \approx \begin{bmatrix} 0.2356 \\ 0.3384 \end{bmatrix}$

Step 4: now move on and compute the next residual vector $r_1$ using the formula

$r_1 = r_0 - \alpha_1 A d_1 = \begin{bmatrix} -8 \\ -3 \end{bmatrix} - 0.2205 \begin{bmatrix} -35 \\ -17 \end{bmatrix} \approx \begin{bmatrix} -0.2810 \\ 0.7492 \end{bmatrix}$

Step 5: compute the scalar $\beta_2$ that will eventually be used to determine the next search direction $d_2$:

$\beta_2 = \frac{r_1^T r_1}{r_0^T r_0} \approx \frac{(-0.2810)^2 + (0.7492)^2}{73} \approx 0.0088$

Step 6: use this scalar $\beta_2$ to compute the next search direction $d_2$:

$d_2 = r_1 + \beta_2 d_1 \approx \begin{bmatrix} -0.2810 \\ 0.7492 \end{bmatrix} + 0.0088 \begin{bmatrix} -8 \\ -3 \end{bmatrix} \approx \begin{bmatrix} -0.3511 \\ 0.7229 \end{bmatrix}$

Step 7: now compute the scalar $\alpha_2$ using the newly acquired $d_2$, by the same method as that used for $\alpha_1$:

$\alpha_2 = \frac{r_1^T r_1}{d_2^T A d_2} \approx \frac{0.6403}{1.5534} \approx 0.4122$

Step 8: finally, find $x_2$ using the same method as that used to find $x_1$:

$x_2 = x_1 + \alpha_2 d_2 \approx \begin{bmatrix} 0.2356 \\ 0.3384 \end{bmatrix} + 0.4122 \begin{bmatrix} -0.3511 \\ 0.7229 \end{bmatrix} \approx \begin{bmatrix} 0.0909 \\ 0.6364 \end{bmatrix}$

Therefore, $x_2 \approx \begin{bmatrix} 0.0909 \\ 0.6364 \end{bmatrix}$ is the approximate solution of the system; since $n = 2$, the method has converged to the exact solution in two iterations.
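The two iterations above can be verified with a few lines of numpy, assuming the system $A$, $b$, and starting point $x_0$ given in the problem statement:

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.array([2.0, 1.0])             # starting point x_0

r = b - A @ x                        # r_0, also the first search direction
d = r.copy()
for k in range(2):                   # n = 2, so two iterations suffice
    alpha = (r @ r) / (d @ A @ d)    # step length
    x = x + alpha * d                # new iterate
    r_new = r - alpha * (A @ d)      # new residual
    beta = (r_new @ r_new) / (r @ r) # conjugacy coefficient
    d = r_new + beta * d             # next search direction
    r = r_new
    print(k + 1, x)                  # prints x_1 and x_2
```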

Application

Conjugate gradient methods have been used to solve a wide variety of numerical problems, including linear and nonlinear algebraic equations, eigenvalue problems, and minimization problems. These applications have been similar in that they involve large numbers of variables or dimensions. In these circumstances, any method of solution that involves storing a full matrix of this large order becomes inapplicable, so recourse to the conjugate gradient method may be the only alternative[5]. Here we demonstrate three application examples of the CG method.

Iterative image reconstruction

The conjugate gradient method is used to solve for the update in iterative image reconstruction problems. For example, in the magnetic resonance imaging (MRI) contrast known as quantitative susceptibility mapping (QSM), the reconstructed susceptibility image $\chi$ is iteratively solved for from the measured magnetic field data $b$ through the relation[6]

$b = D \chi$

where $D$ is the matrix expressing convolution with the dipole kernel in the Fourier domain. Given that the problem is ill-posed, a physical prior is used in the reconstruction, which is framed as a constrained L1-norm minimization

$\min_{\chi} \; \lVert M G \chi \rVert_1$

$\text{subject to} \; \lVert W (D \chi - b) \rVert_2^2 \le \epsilon$

A detailed treatment of the weighting matrices $W$ and $M$ and the gradient operator $G$ can be found in [6]. This problem can be expressed as an unconstrained minimization problem via the Lagrange multiplier method

$L(\chi, \lambda) = \lVert M G \chi \rVert_1 + \lambda \left( \lVert W (D \chi - b) \rVert_2^2 - \epsilon \right)$

where $\lambda$ is the Lagrange multiplier. The first-order conditions require $\frac{\partial L}{\partial \chi} = 0$ and $\frac{\partial L}{\partial \lambda} = 0$; these conditions result in a stationarity equation for $\chi$ and in the data-fidelity constraint, respectively. The update for $\chi$ can be solved for via fixed-point iteration[6], and it can be expressed as a quasi-Newton problem, which is more robust to round-off error[6]. The resulting linear system at each iteration is solved with the CG method until the residual satisfies $\lVert r \rVert < \epsilon_{\mathrm{CG}}$, where $\epsilon_{\mathrm{CG}}$ is a specified tolerance, such as $10^{-2}$.

Additional L1 terms, such as a downsampled prior term[7], can be added, in which case the cost function is treated with penalty methods and the CG method is paired with Gauss-Newton iterations to address the nonlinear terms.
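As a simplified illustration of where CG enters such reconstructions (this is not the MEDI algorithm itself, which uses an L1 prior handled by fixed-point/Gauss-Newton iterations around CG solves), the Python sketch below solves an L2-regularized surrogate of the data-fidelity-plus-prior problem with CG; the operators D, G, and W, the data b, and the weight lam are arbitrary stand-ins:

```python
import numpy as np

def cg(A_mv, rhs, tol=1e-8, max_iter=500):
    """Conjugate gradient for A x = rhs, with A given only as a matrix-vector product."""
    x = np.zeros_like(rhs)
    r = rhs - A_mv(x)
    d = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs) < tol:
            break
        Ad = A_mv(d)
        alpha = rs / (d @ Ad)
        x = x + alpha * d
        r = r - alpha * Ad
        rs_new = r @ r
        d = r + (rs_new / rs) * d
        rs = rs_new
    return x

# Toy stand-ins for the reconstruction operators (purely illustrative, not the MEDI model):
rng = np.random.default_rng(1)
n = 30
D = rng.standard_normal((2 * n, n))  # stand-in forward (dipole) operator
W = np.eye(2 * n)                    # data-weighting matrix (identity here)
G = np.eye(n, k=1) - np.eye(n)       # simple finite-difference "gradient" operator
b = rng.standard_normal(2 * n)       # synthetic measured field data
lam = 10.0                           # regularization weight

# L2-regularized surrogate of the reconstruction: solve the normal equations
# (D^T W^T W D + lam G^T G) chi = D^T W^T W b with CG.
A_mv = lambda v: D.T @ (W.T @ (W @ (D @ v))) + lam * (G.T @ (G @ v))
rhs = D.T @ (W.T @ (W @ b))
chi = cg(A_mv, rhs)
print(np.linalg.norm(A_mv(chi) - rhs))   # ~ 0: CG solved the regularized system
```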

Facial recognition

Figure 3. The process of decomposing an image. DWT plays a significant role in reducing the dimension of an image and extracting its features by decomposing the image in the frequency domain into sub-bands at different scales. The DWT of an image is created as follows: in the first level of decomposition, the image is split into four sub-bands, namely HH1, HL1, LH1, and LL1, as shown in the figure. The HH1, HL1, and LH1 sub-bands represent the diagonal details, horizontal features, and vertical structures of the image, respectively. The LL1 sub-band is the low-resolution residual consisting of low-frequency components, and it is this sub-band that is further split at higher levels of decomposition.[8][9]

Face recognition can be achieved by implementing conjugate gradient algorithms in combination with other methods. The basic procedure is to decompose images into a set of time-frequency coefficients using the discrete wavelet transform (DWT) (Figure 3)[8] and then to train a neural network (NN) with basic back propagation (BP). To overcome the slow convergence of BP based on steepest gradient descent, conjugate gradient methods are introduced. Generally, there are four types of CG methods for training a feedforward NN, namely the Fletcher-Reeves CG, the Polak-Ribière CG, the Powell-Beale CG, and the scaled CG. All of the CG methods include the steps demonstrated in Alg 3, with their respective modifications[8][10].

Notice that $\beta$ is a scalar whose definition varies among the different versions of the CG method. For the Fletcher-Reeves CG, $\beta$ is defined as follows:

$\beta_k = \frac{g_k^T g_k}{g_{k-1}^T g_{k-1}}$

where $g_{k-1}$ and $g_k$ are the previous and current gradients, respectively. Fletcher and Reeves suggested restarting the CG method after every n or n+1 iterations; this suggestion can be very helpful in practice[10].

The second version of the CG, namely the Polak-Ribière CG, was proposed by Polak and Ribière[11]. In this algorithm, the scalar $\beta$ at each iteration is computed as[12]

$\beta_k = \frac{g_k^T (g_k - g_{k-1})}{g_{k-1}^T g_{k-1}}$

Powell showed that the Polak-Ribière CG method performs better than the Fletcher-Reeves CG method in many numerical experiments[10].

The aforementioned CG methods, the Fletcher-Reeves and Polak-Ribière algorithms, restart periodically by using the steepest descent direction every n or n+1 iterations. A disadvantage of periodic restarting is that the immediate reduction in the objective function is usually smaller than it would be without a restart. Therefore, Powell proposed a new restart criterion based on an earlier version[10][12]: the restart occurs when there is little orthogonality left between the current gradient and the previous gradient, i.e. when the following condition is satisfied[10][12]:

$\lvert g_{k-1}^T g_k \rvert \ge 0.2\, \lVert g_k \rVert^2$

Another version of the CG, the scaled CG, was proposed by Moller. This algorithm uses a step-size scaling mechanism that avoids a time-consuming line search at each learning iteration. Moller[13] showed that the scaled CG method has superlinear convergence for most problems.
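A compact Python sketch of nonlinear CG for a general smooth function is given below, showing the Fletcher-Reeves and Polak-Ribière choices of $\beta$ together with a Powell-style restart test; the backtracking line search, the non-negativity safeguard on the Polak-Ribière $\beta$, and the Rosenbrock test function are illustrative choices, not taken from the cited papers:

```python
import numpy as np

def nonlinear_cg(f, grad, x0, beta_rule="PR", max_iter=2000, tol=1e-6):
    """Nonlinear CG with Fletcher-Reeves ('FR') or Polak-Ribiere ('PR') beta."""
    x = x0.astype(float)
    g = grad(x)
    d = -g                                   # start along the steepest descent direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                       # safeguard: keep d a descent direction
            d = -g
        t = 1.0
        for _ in range(50):                  # simple backtracking (Armijo) line search
            if f(x + t * d) <= f(x) + 1e-4 * t * (g @ d):
                break
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        if beta_rule == "FR":
            beta = (g_new @ g_new) / (g @ g)                # Fletcher-Reeves
        else:
            beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # Polak-Ribiere (non-negative)
        if abs(g_new @ g) >= 0.2 * (g_new @ g_new):
            beta = 0.0                       # Powell-style restart with steepest descent
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Usage: minimize the Rosenbrock function (an illustrative test problem)
f = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                           200 * (x[1] - x[0] ** 2)])
print(nonlinear_cg(f, grad, np.array([-1.2, 1.0])))   # moves toward the minimizer [1, 1]
```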

Conjugate gradient method in deep learning

Figure 4. Activity diagram for the training process with the conjugate gradient.[14]

The conjugate gradient method, applied to parameter optimization in deep learning algorithms, can be regarded as something intermediate between gradient descent and Newton's method: it does not require storing, evaluating, and inverting the Hessian matrix, as Newton's method does. In conjugate gradient training, the search is performed along conjugate directions, which generally produce faster convergence than gradient descent directions. The training directions are conjugate with respect to the Hessian matrix.

Let $d$ denote the training direction vector. Starting with an initial parameter vector $w_0$ and an initial training direction vector $d_0 = -g_0$, where $g$ denotes the gradient of the loss, the conjugate gradient method constructs a sequence of training directions as

$d_{k+1} = -g_{k+1} + \gamma_k d_k$

Here $\gamma$ is called the conjugate parameter. For all conjugate gradient algorithms, the training direction is periodically reset to the negative of the gradient. The parameters are then improved according to the following expression:

$w_{k+1} = w_k + \eta_k d_k$

The training rate $\eta$ is usually found by line minimization. Figure 4 depicts an activity diagram for the training process with the conjugate gradient. To improve the parameters, first compute the conjugate gradient training direction, then search for a suitable training rate in that direction. This method has proved to be more effective than gradient descent in training neural networks. Conjugate gradient also performs well with large neural networks since it does not require the Hessian matrix.
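The training-direction recurrence can be sketched on a toy problem. In the Python example below, the model, the data, the Fletcher-Reeves choice of the conjugate parameter $\gamma$, and the reset period are all illustrative assumptions; the loss is a simple least-squares stand-in so that the training rate $\eta$ can be found by exact line minimization:

```python
import numpy as np

# Toy training problem: a linear least-squares loss L(w) = 0.5 * ||X w - y||^2
# (an illustrative stand-in for a network loss), with gradient g(w) = X^T (X w - y).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
y = rng.standard_normal(50)
H = X.T @ X                            # Hessian of this quadratic loss
grad = lambda w: X.T @ (X @ w - y)

w = np.zeros(3)                        # initial parameter vector w_0
g = grad(w)
d = -g                                 # initial training direction d_0 = -g_0

for k in range(30):
    if np.linalg.norm(g) < 1e-10:
        break
    eta = -(g @ d) / (d @ H @ d)       # training rate by exact line minimization
    w = w + eta * d                    # parameter update w_{k+1} = w_k + eta_k d_k
    g_new = grad(w)
    gamma = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves conjugate parameter
    d = -g_new + gamma * d             # training direction d_{k+1} = -g_{k+1} + gamma_k d_k
    if (k + 1) % 3 == 0:
        d = -g_new                     # periodic reset to the negative gradient
    g = g_new

print(np.linalg.norm(grad(w)))         # ~ 0 once the loss has been minimized
```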

Conclusion

The conjugate gradient method was invented to avoid the high computational cost of Newton's method and to accelerate the convergence rate of steepest descent. As an iterative method, each step requires only matrix-vector multiplications with $A$ and avoids storing or factorizing the full matrix. The successive direction vectors are obtained as a conjugate version of the successive gradients computed as the method progresses, so the method monotonically improves the approximation to the exact solution and, in the absence of round-off error, may reach the required tolerance after a relatively small (compared with the problem size) number of iterations, which makes it widely used for solving large and sparse problems. Because of the high flexibility of the method's framework, variants of the CG algorithm have been proposed and applied in a variety of fields, such as machine learning and deep learning, in order to enhance algorithm performance.

References

  1. R. H. Refsnæs, "A Brief Introduction to the Conjugate Gradient Method," 2009.
  2. W. Stuetzle, "The Conjugate Gradient Method," 2001. [Online]. Available: https://sites.stat.washington.edu/wxs/Stat538-w03/conjugate-gradients.pdf
  3. J. Shewchuk, "An Introduction to the Conjugate Gradient Method Without the Agonizing Pain," 1994.
  4. A. Singh and P. Ravikumar, "Conjugate Gradient Descent," 2012. [Online]. Available: http://www.cs.cmu.edu/~pradeepr/convexopt/Lecture_Slides/conjugate_direction_methods.pdf
  5. R. Fletcher, "Conjugate gradient methods for indefinite systems," in Numerical Analysis, Berlin, Heidelberg, 1976, pp. 73-89. doi: 10.1007/BFb0080116.
  6. T. Liu et al., "Morphology enabled dipole inversion for quantitative susceptibility mapping using structural consistency between the magnitude image and the susceptibility map," NeuroImage, vol. 59, no. 3, pp. 2560-2568, Feb. 2012. doi: 10.1016/j.neuroimage.2011.08.082.
  7. A. Roberts, P. Spincemaille, T. Nguyen, and Y. Wang, "MEDI-d: Downsampled Morphological Priors for Shadow Reduction in Quantitative Susceptibility Mapping," in Proceedings of the International Society for Magnetic Resonance in Medicine, 2021.
  8. H. Azami, M. Malekzadeh, and S. Sanei, "A New Neural Network Approach for Face Recognition Based on Conjugate Gradient Algorithms and Principal Component Analysis," Journal of Mathematics and Computer Science, vol. 6, no. 3, pp. 166-175, 2013. doi: 10.22436/jmcs.06.03.01.
  9. H. Azami, M. R. Mosavi, and S. Sanei, "Classification of GPS satellites using improved back propagation training algorithms," Wireless Personal Communications, Springer-Verlag, 2012. doi: 10.1007/s11277-012-0844-7.
  10. M. H. Shaheed, "Performance analysis of 4 types of conjugate gradient algorithms in the nonlinear dynamic modelling of a TRMS using feedforward neural networks," IEEE Conference on Systems, Man and Cybernetics, 2004, pp. 5985-5990.
  11. S. Sheel, T. Varshney, and R. Varshney, "Accelerated learning in MLP using adaptive learning rate with momentum coefficient," International Conference on Industrial and Information Systems, 2007, pp. 307-310.
  12. Z. Zakaria, N. A. M. Isa, and S. A. Suandi, "A study on neural network training algorithm for multiface detection in static images," International Conference on Computer, Electrical, Systems Science, and Engineering, 2010, pp. 170-173.
  13. M. F. Moller, "A scaled conjugate gradient algorithm for fast supervised learning," Neural Networks, vol. 6, pp. 525-533, 1993.
  14. Quesada, "5 Algorithms to Train a Neural Network," Artelnics. [Online]. Available: https://www.neuraldesigner.com/blog/5_algorithms_to_train_a_neural_network