Conjugate gradient methods

Author: Alexandra Roberts, Anye Shi, Yue Sun (SYSEN6800 Fall 2021)

Introduction

The conjugate gradient method (CG) was originally invented to minimize a quadratic function:

<math>F(\textbf{x})=\frac{1}{2}\textbf{x}^{T}\textbf{A}\textbf{x}-\textbf{b}^{T}\textbf{x}</math>

where A is an n × n symmetric positive definite matrix, and x and b are n × 1 vectors.
The solution to the minimization problem is equivalent to solving the linear system, i.e., determining x such that <math>\nabla F(\textbf{x}) = \textbf{0}</math>:

<math>\textbf{A}\textbf{x}-\textbf{b} = \textbf{0}</math>

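To make this equivalence concrete, the following is a minimal sketch of the conjugate gradient iteration in Python with NumPy. It is not part of the original article; the function name, tolerance, and the small test system are illustrative assumptions. The sketch solves <math>\textbf{A}\textbf{x}=\textbf{b}</math> for a symmetric positive definite A, which is the same as minimizing F(x) above.

<syntaxhighlight lang="python">
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive definite A, i.e. minimize F(x).

    Illustrative sketch only; names and defaults are not from the article.
    """
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    r = b - A @ x                 # residual r = b - A x = -grad F(x)
    p = r.copy()                  # first search direction: steepest descent
    rs_old = r @ r
    if max_iter is None:
        max_iter = n              # at most n steps in exact arithmetic
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)         # exact line search along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:         # residual small enough: stop
            break
        p = r + (rs_new / rs_old) * p     # new direction, A-conjugate to previous ones
        rs_old = rs_new
    return x

# Illustrative 2x2 symmetric positive definite system (not from the article)
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))   # approximately [0.0909, 0.6364], i.e. A^{-1} b
</syntaxhighlight>

Each step performs an exact line search along a direction that is A-conjugate to all previous directions, which is why, in exact arithmetic, the method reaches the minimizer in at most n iterations; the sketch therefore defaults max_iter to n.
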
Theory

The conjugate gradient method

Numerical example

Application

Conclusion

References