Line search methods

From Cornell University Computational Optimization Open Textbook - Optimization Wiki
Revision as of 11:21, 28 November 2021

Authors: Lihe Cao, Zhengyi Sui, Jiaqi Zhang, Yuqing Yan, and Yuhui Gu (6800 Fall 2021).

Introduction

Generic Line Search Method

Basic Algorithm

Search Direction for Line Search

Step Length

Convergence

Exact Search

Steepest Descent Method

Given the intuition that the negative gradient can be an effective search direction, the steepest descent method follows this idea and establishes a systematic method for minimizing the objective function. Setting <math>p_k = -\nabla f(x_k)</math> as the search direction, steepest descent computes the step length <math>\alpha_k</math> by minimizing the single-variable function <math>\phi(\alpha) = f(x_k + \alpha p_k)</math>. More specifically, the steps of the steepest descent method are as follows.

Given a starting point <math>x_0</math>, set <math>k = 0</math> and repeat until <math>\nabla f(x_k) = 0</math> (or a tolerance on the gradient norm is met):

  1. Compute the search direction <math>p_k = -\nabla f(x_k)</math>.
  2. Compute the step length <math>\alpha_k = \arg\min_{\alpha > 0} f(x_k + \alpha p_k)</math>.
  3. Set <math>x_{k+1} = x_k + \alpha_k p_k</math> and <math>k \leftarrow k + 1</math>.

One advantage of the steepest descent method is that it has a nice convergence theory: it converges to a local minimum from any starting point.

Theorem: global convergence of steepest descent[1] Let the gradient of <math>f \in C^1</math> be uniformly Lipschitz continuous on <math>R^n</math>. Then, for the iterates with steepest-descent search directions, one of the following situations occurs:

  • <math>\nabla f(x_k) = 0</math> for some finite <math>k</math>
  • <math>\lim_{k \to \infty} f(x_k) = -\infty</math>
  • <math>\lim_{k \to \infty} \nabla f(x_k) = 0</math>

The steepest descent method is a special case of gradient descent in which the step length is rigorously defined. Generalizations can be made regarding the choice of <math>\alpha_k</math>.
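The steps above can be sketched in Python for a quadratic objective, where the exact line-search step length has a closed form. The quadratic <math>f(x) = \tfrac{1}{2}x^T A x - b^T x</math> and the matrix used below are a hypothetical illustration, not an example from this article; for this <math>f</math> the exact minimizer along <math>-\nabla f(x_k)</math> is <math>\alpha_k = (g^T g)/(g^T A g)</math> with <math>g = A x_k - b</math>.

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-8, max_iter=1000):
    """Steepest descent with exact line search on f(x) = 0.5*x^T A x - b^T x.

    For this quadratic, grad f(x) = A x - b, and the exact step length
    along p = -g has the closed form alpha = (g^T g) / (g^T A g).
    """
    x = x0.astype(float)
    for k in range(max_iter):
        g = A @ x - b                       # gradient at the current iterate
        if np.linalg.norm(g) < tol:         # stopping test on the gradient norm
            break
        alpha = (g @ g) / (g @ (A @ g))     # exact single-variable minimizer
        x = x - alpha * g                   # step along the negative gradient
    return x, k

# Illustrative symmetric positive definite system (hypothetical data)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star, iters = steepest_descent(A, b, np.zeros(2))
# At the minimizer, the gradient A x - b vanishes, so A x_star ≈ b
```

For a general (non-quadratic) objective, the inner step-length computation would be replaced by a one-dimensional minimization of <math>f(x_k + \alpha p_k)</math>, exactly or inexactly, as discussed in the following sections.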

Inexact Search

Backtracking

Zoutendijk’s Theorem

Numeric Example

Reference

  1. Raphael Hauser, Oxford University Computing Laboratory, Line Search Methods for Unconstrained Optimization. https://people.maths.ox.ac.uk/hauser/hauser_lecture2.pdf