Line search methods

From Cornell University Computational Optimization Open Textbook - Optimization Wiki

Authors: Lihe Cao, Zhengyi Sui, Jiaqi Zhang, Yuqing Yan, and Yuhui Gu (6800 Fall 2021).

Introduction

When solving unconstrained optimization problems, the user needs to supply a starting point for all algorithms. Given the initial point $x_0$, optimization algorithms generate a sequence of iterates $\{x_k\}$ that terminates when an approximate solution has been reached or no more progress can be made. Line search is one of the two fundamental strategies (the other being trust region) for locating the new iterate $x_{k+1}$ given the current point $x_k$.
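For reference, each iteration of a generic line search method takes the form below; the notation ($p_k$ for the search direction, $\alpha_k$ for the step length) is standard in this setting and is assumed here rather than quoted from the article:

$$x_{k+1} = x_k + \alpha_k p_k, \qquad k = 0, 1, 2, \ldots,$$

where $p_k$ is a descent direction and the step length $\alpha_k > 0$ is chosen so that $f(x_{k+1}) < f(x_k)$.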

Generic Line Search Method

Basic Algorithm

Search Direction for Line Search

Step Length

Convergence

Exact Search

Steepest Descent Method

Given the intuition that the negative gradient can be an effective search direction, the steepest descent method builds on this idea and establishes a systematic procedure for minimizing the objective function $f$. Setting $p_k = -\nabla f(x_k)$ as the search direction, steepest descent computes the step length $\alpha_k$ by minimizing the single-variable function $\phi(\alpha) = f(x_k + \alpha p_k)$ over $\alpha > 0$. More specifically, the steps of the steepest descent method are as follows.

Choose a starting point $x_0$ and set $k = 0$. Then repeat:

  1. Compute the gradient $\nabla f(x_k)$; if $\|\nabla f(x_k)\|$ is sufficiently small, stop.
  2. Set the search direction $p_k = -\nabla f(x_k)$.
  3. Compute the step length $\alpha_k = \arg\min_{\alpha > 0} f(x_k + \alpha p_k)$.
  4. Set $x_{k+1} = x_k + \alpha_k p_k$ and $k \leftarrow k + 1$; return to step 1.
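As a concrete illustration, below is a minimal Python sketch of these steps. The names steepest_descent, f, and grad, the use of scipy.optimize.minimize_scalar for the one-dimensional minimization, and the parameters tol, max_iter, and the search interval (0, 10) are assumptions made for this sketch, not details from the article:

    import numpy as np
    from scipy.optimize import minimize_scalar

    def steepest_descent(f, grad, x0, tol=1e-6, max_iter=1000):
        # Steepest descent with an exact line search (illustrative sketch).
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:  # gradient nearly zero: stop
                break
            p = -g                       # steepest-descent direction p_k
            # Exact line search: minimize phi(alpha) = f(x + alpha * p)
            alpha = minimize_scalar(lambda a: f(x + a * p),
                                    bounds=(0.0, 10.0), method='bounded').x
            x = x + alpha * p
        return x

    # Example: minimize the convex quadratic f(x) = 0.5 x^T Q x - b^T x,
    # whose unique minimizer solves Q x = b.
    Q = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, -1.0])
    f = lambda x: 0.5 * x @ Q @ x - b @ x
    grad = lambda x: Q @ x - b
    print(steepest_descent(f, grad, x0=[0.0, 0.0]))  # close to np.linalg.solve(Q, b)

For a quadratic like this one, the exact step length is also available in closed form, $\alpha_k = g_k^{T} g_k / (g_k^{T} Q g_k)$ with $g_k = \nabla f(x_k)$, but the generic one-dimensional solver keeps the sketch applicable to non-quadratic objectives.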

One advantage of the steepest descent method is that it has a well-developed convergence theory: the iterates converge to a stationary point of the objective from any starting point.

Theorem (global convergence of steepest descent)[1]: Let the gradient of $f$ be uniformly Lipschitz continuous on $\mathbb{R}^n$. Then, for the iterates generated with steepest-descent search directions, one of the following situations occurs:

  • $\nabla f(x_k) = 0$ for some finite $k$;
  • $\lim_{k \to \infty} f(x_k) = -\infty$;
  • $\lim_{k \to \infty} \nabla f(x_k) = 0$.

The steepest descent method is a special case of gradient descent in which the step length $\alpha_k$ is rigorously defined by exact minimization along the search direction. Generalizations can be made regarding the choice of $\alpha_k$, as illustrated below.
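For instance, the exact rule used by steepest descent and a fixed-step alternative can be written side by side; the constant $\bar{\alpha}$ is notation introduced here for illustration:

$$\alpha_k = \arg\min_{\alpha > 0} f(x_k + \alpha p_k) \quad \text{(exact line search)}, \qquad \alpha_k \equiv \bar{\alpha} \quad \text{(fixed step length)}.$$

Inexact strategies such as backtracking, covered below, sit between these extremes: they accept any $\alpha_k$ that satisfies a sufficient-decrease condition instead of solving the one-dimensional problem exactly.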

Inexact Search

Backtracking

Zoutendijk’s Theorem

Numeric Example

References

  1. Raphael Hauser, Line Search Methods for Unconstrained Optimization, Oxford University Computing Laboratory.