Conjugate gradient methods

From Cornell University Computational Optimization Open Textbook - Optimization Wiki
== Introduction ==
The conjugate gradient method (CG) was originally invented to minimize a quadratic function:<br>
<math>F(\textbf{x})=\frac{1}{2}\textbf{x}^{T}\textbf{A}\textbf{x}-\textbf{b}^{T}\textbf{x}</math><br>
where A is an n × n symmetric positive definite matrix, and x and b are n × 1 vectors. Minimizing F(x) is equivalent to solving a linear system: setting the gradient ∇F(x) to zero gives<br>
<math>\nabla F(\textbf{x}) = \textbf{A}\textbf{x}-\textbf{b} = \textbf{0}</math>
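The equivalence above can be made concrete with a short sketch of the classic CG iteration for a symmetric positive definite system; this is a minimal illustration assuming NumPy is available, and the function name <code>conjugate_gradient</code> and the 2 × 2 test system are illustrative choices, not taken from this article:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Minimize F(x) = 0.5 x^T A x - b^T x for symmetric positive
    definite A, i.e. solve A x = b, by the conjugate gradient method."""
    n = len(b)
    if max_iter is None:
        max_iter = n                  # in exact arithmetic CG finishes in n steps
    x = np.zeros(n)
    r = b - A @ x                     # residual r = b - A x = -grad F(x)
    p = r.copy()                      # first search direction: steepest descent
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)     # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:     # converged: residual is (near) zero
            break
        p = r + (rs_new / rs_old) * p # next direction, A-conjugate to the previous ones
        rs_old = rs_new
    return x

# Illustrative 2 x 2 SPD system; its exact solution is (1/11, 7/11)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)          # converges in at most n = 2 iterations
```

Because A is symmetric positive definite, each step decreases F(x), and the search directions are mutually A-conjugate, which is what lets the method terminate in at most n iterations in exact arithmetic.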


== Theory ==

Revision as of 00:50, 28 November 2021

Author: Alexandra Roberts, Anye Shi, Yue Sun (SYSEN6800 Fall 2021)

