Conjugate gradient methods


Author: Alexandra Roberts, Anye Shi, Yue Sun (SYSEN6800 Fall 2021)

Introduction

The conjugate gradient method (CG) was originally invented to minimize a quadratic function:
$$F(\mathbf{x})=\frac{1}{2}\mathbf{x}^{T}\mathbf{A}\mathbf{x}-\mathbf{b}^{T}\mathbf{x}$$
where $\mathbf{A}$ is an $n \times n$ symmetric positive definite matrix, and $\mathbf{x}$ and $\mathbf{b}$ are $n \times 1$ vectors.
The solution to the minimization problem is equivalent to solving the linear system, i.e. determining $\mathbf{x}$ for which $\nabla F(\mathbf{x}) = \mathbf{0}$, i.e. $\mathbf{A}\mathbf{x}-\mathbf{b} = \mathbf{0}$.
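
As a quick numerical illustration of this equivalence (a minimal NumPy sketch; the particular matrix and vector below are arbitrary example values, not taken from the text):

```python
import numpy as np

# A small symmetric positive definite example (values are arbitrary,
# chosen only for illustration).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

# Solving the linear system A x = b ...
x_star = np.linalg.solve(A, b)

# ... gives exactly the point where the gradient of
# F(x) = 1/2 x^T A x - b^T x vanishes.
grad_F = A @ x_star - b
print(np.allclose(grad_F, 0.0))  # True
```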

The conjugate gradient method is often implemented as an iterative algorithm and can be considered as lying between Newton's method, a second-order method that incorporates the Hessian and the gradient, and the method of steepest descent, a first-order method that uses only the gradient [1]. Newton's method usually reduces the number of iterations needed, but computing the Hessian matrix and its inverse increases the cost of each iteration. Steepest descent takes repeated steps in the direction opposite to the gradient of the function at the current point; it often takes steps in the same direction as earlier ones, resulting in slow convergence (Figure 1). The conjugate gradient method was developed to avoid the high computational cost of Newton's method while accelerating the convergence rate of steepest descent.

The idea of the CG method is to pick $n$ mutually conjugate search directions first and, in each search direction, take exactly one step, with the step length chosen so that the step lines up exactly with the solution $\mathbf{x}$ in that direction. The solution is reached after at most $n$ steps since, theoretically, the number of iterations needed by the CG method equals the number of distinct eigenvalues of $\mathbf{A}$, which is at most $n$. This makes the method attractive for large, sparse problems. It can be used to solve least-squares problems and can also be generalized to a minimization method for general smooth functions [2].
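
To make the iteration concrete, the following is a minimal NumPy sketch of the standard linear CG recurrence (the example matrix, right-hand side, tolerance, and function name are illustrative assumptions, not taken from the text above):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Solve A x = b for symmetric positive definite A."""
    n = b.size
    x = np.zeros(n)           # initial guess x_0 = 0
    r = b - A @ x             # residual r_0 = b - A x_0
    d = r.copy()              # first search direction is the residual
    for _ in range(n):        # at most n steps in exact arithmetic
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)        # exact step length along d
        x = x + alpha * d
        r_new = r - alpha * Ad            # updated residual
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)  # standard linear CG update
        d = r_new + beta * d              # next A-conjugate direction
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))           # matches np.linalg.solve(A, b)
```

In exact arithmetic the loop terminates in at most $n$ passes; in floating point, the residual test provides a practical stopping rule.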

Theory

The definition of A-conjugate directions

Let $\mathbf{A}$ be a symmetric positive definite matrix. The vectors $\left\{ \mathbf{d}_{0},\mathbf{d}_{1},\ldots,\mathbf{d}_{n-1}\right\}$, $\mathbf{d}_{i} \in \mathbb{R}^{n}$, $\mathbf{d}_{i} \neq \mathbf{0}$, are orthogonal (conjugate) to each other with respect to $\mathbf{A}$ if
$$\mathbf{d}_{i}^{T}\mathbf{A}\mathbf{d}_{j} = 0, \quad \forall i \neq j.$$

Note that if $\mathbf{A} = \mathbf{0}$, any two vectors are conjugate to each other, and if $\mathbf{A} = \mathbf{I}$, conjugacy is equivalent to the conventional notion of orthogonality. If the vectors $\left\{ \mathbf{d}_{0},\mathbf{d}_{1},\ldots,\mathbf{d}_{n-1}\right\}$ are $\mathbf{A}$-conjugate to each other, then they are linearly independent.
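
As a concrete check of these properties (a small sketch; using the eigenvectors of a symmetric example matrix as the conjugate set is an illustrative choice, not the only one):

```python
import numpy as np

# Symmetric positive definite example matrix (illustrative values).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# For a symmetric A the eigenvectors are mutually orthogonal, and
# d_i^T A d_j = lambda_j * d_i^T d_j = 0 for i != j, so they also
# form an A-conjugate set.
_, vecs = np.linalg.eigh(A)
d0, d1 = vecs[:, 0], vecs[:, 1]

print(np.isclose(d0 @ A @ d1, 0.0))      # True: A-conjugacy
print(np.linalg.matrix_rank(vecs) == 2)  # True: linearly independent
```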

The motivation for A-conjugacy

Since $D=\left\{ \mathbf{d}_{0},\mathbf{d}_{1},\ldots,\mathbf{d}_{n-1}\right\}$ is a set of $n$ $\mathbf{A}$-conjugate vectors, $D$ can be used as a basis to express the solution $\mathbf{x}^{*}$ of $\nabla F(\mathbf{x}) = \mathbf{A}\mathbf{x} - \mathbf{b} = \mathbf{0}$:
$$\mathbf{x}^{*} = \sum_{i=0}^{n-1}\alpha_{i} \mathbf{d}_{i}$$
$$\mathbf{A}\mathbf{x}^{*} = \sum_{i=0}^{n-1}\alpha_{i} \mathbf{A}\mathbf{d}_{i}$$
Multiplying both sides by $\mathbf{d}_{k}^{T}$ gives
$$\mathbf{d}_{k}^{T}\mathbf{A}\mathbf{x}^{*} = \sum_{i=0}^{n-1}\alpha_{i} \mathbf{d}_{k}^{T}\mathbf{A}\mathbf{d}_{i}$$
Because $\mathbf{A}\mathbf{x}^{*} = \mathbf{b}$ and by the $\mathbf{A}$-conjugacy of $D$, i.e. $\mathbf{d}_{k}^{T}\mathbf{A}\mathbf{d}_{i} = 0, \forall k \neq i$, the multiplication cancels all terms on the right-hand side except the $k$-th:
$$\mathbf{d}_{k}^{T}\mathbf{b} = \alpha_{k} \mathbf{d}_{k}^{T}\mathbf{A}\mathbf{d}_{k}$$
$$\alpha_{k}=\frac{\mathbf{d}_{k}^{T}\mathbf{b}}{\mathbf{d}_{k}^{T}\mathbf{A}\mathbf{d}_{k}}$$

The solution $\mathbf{x}^{*}$ is then
$$\mathbf{x}^{*}= \sum_{i=0}^{n-1}\alpha_{i}\mathbf{d}_{i}=\sum_{i=0}^{n-1}\frac{\mathbf{d}_{i}^{T}\mathbf{b}}{\mathbf{d}_{i}^{T}\mathbf{A}\mathbf{d}_{i}}\mathbf{d}_{i}$$
Because $\mathbf{A}$ is symmetric and positive definite, the term $\mathbf{d}_{i}^{T}\mathbf{A}\mathbf{d}_{i}$ defines an inner product, so the coefficients can be computed without inverting $\mathbf{A}$.
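
The expansion above can be verified directly (a minimal sketch reusing the eigenvector basis from the earlier example, which is one convenient A-conjugate basis):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

# Any A-conjugate basis works; the eigenvectors of A are one
# convenient choice (see the sketch above).
_, D = np.linalg.eigh(A)

# x* = sum_i (d_i^T b / d_i^T A d_i) d_i -- no inverse of A is formed.
x_star = np.zeros_like(b)
for i in range(D.shape[1]):
    d = D[:, i]
    alpha = (d @ b) / (d @ A @ d)
    x_star = x_star + alpha * d

print(np.allclose(x_star, np.linalg.solve(A, b)))  # True
```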

The conjugate gradient method

Numerical example

Application

Conclusion

References

J. Shewchuk, “An Introduction to the Conjugate Gradient Method Without the Agonizing Pain,” 1994.

  1. “Conjugate gradient method,” Wikipedia. Nov. 25, 2021. Accessed: Nov. 26, 2021. [Online]. Available: https://en.wikipedia.org/w/index.php?title=Conjugate_gradient_method&oldid=1057033318
  2. W. Stuetzle, “The Conjugate Gradient Method.” 2001. [Online]. Available: https://sites.stat.washington.edu/wxs/Stat538-w03/conjugate-gradients.pdf