Conjugate gradient methods
Authors: Alexandra Roberts, Anye Shi, Yue Sun (SYSEN 6800 Fall 2021)
Introduction
The conjugate gradient method (CG) was originally invented to minimize a quadratic function:
$f(x) = \frac{1}{2} x^{T} A x - b^{T} x$,
where $A$ is an $n \times n$ symmetric positive definite matrix, and $x$ and $b$ are $n \times 1$ vectors.
The solution to the minimization problem is equivalent to solving the linear system, i.e. determining $x$ such that $\nabla f(x) = 0$, i.e. $Ax = b$.
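The equivalence follows by differentiating $f$ and using the symmetry of $A$:
$$\nabla f(x) = \frac{1}{2}\left(A + A^{T}\right)x - b = Ax - b, \qquad \nabla f(x) = 0 \;\Longleftrightarrow\; Ax = b.$$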
The conjugate gradient method is usually implemented as an iterative algorithm and can be viewed as lying between Newton's method, a second-order method that uses both the Hessian and the gradient, and the method of steepest descent, a first-order method that uses only the gradient [1]. Newton's method usually reduces the number of iterations needed, but computing the Hessian matrix and its inverse makes each iteration more expensive. Steepest descent takes repeated steps in the direction opposite to the gradient at the current point; it often moves in the same direction as earlier steps, resulting in slow convergence (Figure 1). The conjugate gradient method was developed to avoid the high computational cost of Newton's method while accelerating the convergence of steepest descent.
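As a rough illustration of the iteration, the following is a minimal Python sketch of the linear conjugate gradient method for a symmetric positive definite system $Ax = b$; the function name conjugate_gradient, the tolerance tol, and the small test system are illustrative choices rather than anything specified in the references.

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    # Solve A x = b for symmetric positive definite A by the conjugate gradient method.
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x              # residual, equal to the negative gradient of f at x
    p = r.copy()               # initial search direction
    rs_old = r @ r
    max_iter = n if max_iter is None else max_iter
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # step length from exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:      # stop once the residual norm is small enough
            break
        p = r + (rs_new / rs_old) * p  # new direction, A-conjugate to the previous ones
        rs_old = rs_new
    return x

# Example usage on a small symmetric positive definite system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
print(x)   # approximately [0.0909, 0.6364]
```

In exact arithmetic the loop terminates in at most $n$ iterations, since each new search direction is A-conjugate to all previous ones.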
Theory
The conjugate gradient method
Numerical example
Application
Conclusion
References
“Conjugate gradient method,” Wikipedia. Nov. 25, 2021. Accessed: Nov. 26, 2021. [Online]. Available: https://en.wikipedia.org/w/index.php?title=Conjugate_gradient_method&oldid=1057033318
J. Shewchuk, "An Introduction to the Conjugate Gradient Method Without the Agonizing Pain," 1994.