Line search methods

From Cornell University Computational Optimization Open Textbook - Optimization Wiki

Revision as of 12:24, 28 November 2021

Authors: Lihe Cao, Zhengyi Sui, Jiaqi Zhang, Yuqing Yan, and Yuhui Gu (6800 Fall 2021).

Introduction

When solving unconstrained optimization problems, the user needs to supply a starting point for all algorithms. With the initial starting point <math>x_{0}</math>, optimization algorithms generate a sequence of iterates <math>\{x_{k}\}</math> which terminates when an approximate solution has been achieved or no more progress can be made. Line search is one of the two fundamental strategies (the other being the trust-region approach) for locating the new iterate <math>x_{k+1}</math> given the current point <math>x_{k}</math>.

Generic Line Search Method

Basic Algorithm

  • Pick an initial iterate point <math>x_{0}</math>
  • Do the following steps until <math>x_{k}</math> converges:
    • Choose a descent direction <math>p_{k}</math> starting from <math>x_{k}</math>, which is defined as: if <math>\nabla f(x_{k})\neq 0</math>, then <math>p_{k}^{T}\nabla f(x_{k})<0</math>
    • Calculate a suitable step length <math>\alpha_{k}>0</math> so that <math>f(x_{k}+\alpha_{k}p_{k})<f(x_{k})</math>
    • Set <math>x_{k+1}=x_{k}+\alpha_{k}p_{k}</math>
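
A minimal sketch of this generic loop in Python may help fix the notation (assuming NumPy; the names generic_line_search, choose_direction, and choose_step are illustrative placeholders, not a specific library API):

    import numpy as np

    def generic_line_search(f, grad_f, x0, choose_direction, choose_step,
                            tol=1e-6, max_iter=1000):
        # Generic line search loop: x_{k+1} = x_k + alpha_k * p_k.
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad_f(x)
            if np.linalg.norm(g) < tol:           # stop near a stationary point
                break
            p = choose_direction(x, g)            # must satisfy p^T g < 0
            alpha = choose_step(f, grad_f, x, p)  # must give f(x + alpha*p) < f(x)
            x = x + alpha * p
        return x

Any concrete line search method is obtained by fixing the rule that selects the direction and the rule that selects the step length.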

Search Direction for Line Search

The direction of the line search should be chosen so that <math>f</math> decreases in moving from point <math>x_{k}</math> to <math>x_{k+1}</math>. The most obvious direction is the negative gradient <math>-\nabla f(x_{k})</math>, because it is the one along which <math>f</math> decreases most rapidly. We can verify the claim by Taylor's theorem:

<math>f(x_{k}+\alpha p)=f(x_{k})+\alpha p^{T}\nabla f(x_{k})+\frac{1}{2}\alpha^{2}p^{T}\nabla^{2}f(x_{k}+tp)p</math>

where <math>t\in(0,\alpha)</math>.

The rate of change in <math>f</math> along the direction <math>p</math> at <math>x_{k}</math> is the coefficient of <math>\alpha</math>, namely <math>p^{T}\nabla f(x_{k})</math>. Therefore, the unit direction of most rapid decrease is the solution to

<math>\min_{p}\ p^{T}\nabla f(x_{k})</math>, subject to <math>||p||=1</math>.

<math>p=-\nabla f(x_{k})/||\nabla f(x_{k})||</math> is the solution, and this direction is orthogonal to the contours of the function. In the following sections, we will use this as the default direction of the line search.
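
A short supporting derivation: by the Cauchy–Schwarz inequality, for any unit vector <math>p</math>,

<math>p^{T}\nabla f(x_{k})\ \geq\ -||p||\,||\nabla f(x_{k})|| = -||\nabla f(x_{k})||,</math>

and equality holds exactly when <math>p</math> points opposite to the gradient, i.e., <math>p=-\nabla f(x_{k})/||\nabla f(x_{k})||</math>.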

Step Length

The step length <math>\alpha_{k}</math> is a non-negative value such that <math>f(x_{k}+\alpha_{k}p_{k})<f(x_{k})</math>. When choosing the step length <math>\alpha_{k}</math>, we need to trade off between obtaining a substantial reduction of <math>f</math> and not spending too much time finding the value. If <math>\alpha_{k}</math> is too large, the step will overshoot, while if it is too small, progress toward the convergent point is very slow. We have exact line search and inexact line search to find the value of <math>\alpha_{k}</math>, and more detail about these approaches will be introduced in the next section.
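
For a simple illustration of this trade-off, consider <math>f(x)=x^{2}</math> with current iterate <math>x_{k}=1</math> and descent direction <math>p_{k}=-\nabla f(x_{k})=-2</math>, so that <math>f(x_{k}+\alpha p_{k})=(1-2\alpha)^{2}</math>. The choice <math>\alpha=1</math> overshoots to <math>x_{k+1}=-1</math> with no decrease in <math>f</math>; <math>\alpha=0.01</math> produces only a tiny decrease; the exact minimizer along the ray, <math>\alpha=0.5</math>, reaches the minimizer <math>x_{k+1}=0</math> in a single step.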

Convergence

For a line search algorithm to be reliable, it should be globally convergent, i.e., the gradient norms <math>||\nabla f(x_{k})||</math> should converge to zero as the iteration proceeds:

<math>\lim_{k\to\infty} ||\nabla f(x_{k})|| = 0</math>

It can be shown from Zoutendijk's theorem that if the line search algorithm satisfies the (weak) Wolfe conditions (similar results also hold for the strong Wolfe and Goldstein conditions) and has a search direction that makes an angle with the steepest descent direction that is bounded away from 90°, the algorithm is globally convergent. However, the Zoutendijk condition does not guarantee convergence to a local minimum, only to stationary points. Hence, additional conditions on the search direction are necessary, such as finding a direction of negative curvature, to prevent the iteration from converging to a non-minimizing stationary point.

Zoutendijk's theorem states that, given an iteration where <math>p_{k}</math> is the descent direction and <math>\alpha_{k}</math> is the step length that satisfies the (weak) Wolfe conditions, if the objective <math>f</math> is bounded below in <math>\mathbb{R}^{n}</math> and is continuously differentiable in an open set <math>\mathcal{N}</math> containing the level set <math>\mathcal{L}:=\{x\ |\ f(x)\leq f(x_{0})\}</math>, where <math>x_{0}</math> is the starting point of the iteration, and the gradient <math>\nabla f</math> is Lipschitz continuous on <math>\mathcal{N}</math>, then

<math>\sum_{k=0}^{\infty}\cos^{2}\theta_{k}||\nabla f_{k}||^{2} < \infty</math>

where <math>\theta_{k}</math> is the angle between <math>p_{k}</math> and the steepest descent direction <math>-\nabla f(x_{k})</math>.
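
In particular, since the series above converges, its terms must tend to zero, i.e., <math>\cos^{2}\theta_{k}\,||\nabla f_{k}||^{2}\to 0</math>. If the search directions stay bounded away from orthogonality with the steepest descent direction, so that <math>\cos\theta_{k}\geq\delta>0</math> for all <math>k</math>, it follows that <math>||\nabla f(x_{k})||\to 0</math>, which is exactly the global convergence statement above.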

Exact Search

Steepest Descent Method

Given the intuition that the negative gradient can be an effective search direction, the steepest descent method follows this idea and establishes a systematic way of minimizing the objective function. Setting <math>p_{k}=-\nabla f(x_{k})</math> as the direction, steepest descent computes the step length <math>\alpha_{k}</math> by minimizing a single-variable objective function, <math>\alpha_{k}=\arg\min_{\alpha\geq 0} f(x_{k}-\alpha\nabla f(x_{k}))</math>. More specifically, the steps of the Steepest Descent Method are as follows.

Steepest Descent Algorithm

Set a starting point <math>x_{0}</math>

Set a convergence tolerance <math>\epsilon</math>

Set <math>k=0</math>

Set the maximum iteration <math>M</math>

While <math>k\leq M</math>:

    Compute the direction <math>p_{k}=-\nabla f(x_{k})</math>
    Compute the step length <math>\alpha_{k}=\arg\min_{\alpha\geq 0} f(x_{k}+\alpha p_{k})</math>
    Set <math>x_{k+1}=x_{k}+\alpha_{k}p_{k}</math>
    If <math>||\nabla f(x_{k+1})||<\epsilon</math>:
        Break
    EndIf
    Set <math>k=k+1</math>

Return <math>x_{k+1}</math>, <math>f(x_{k+1})</math>
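
As an illustration, here is a minimal Python sketch of this algorithm for the special case of a strictly convex quadratic <math>f(x)=\tfrac{1}{2}x^{T}Ax-b^{T}x</math>, where the exact step length has the closed form <math>\alpha_{k}=\frac{g_{k}^{T}g_{k}}{g_{k}^{T}Ag_{k}}</math> with <math>g_{k}=\nabla f(x_{k})</math> (the function name steepest_descent_quadratic is illustrative):

    import numpy as np

    def steepest_descent_quadratic(A, b, x0, eps=1e-8, max_iter=10000):
        # Steepest descent for f(x) = 0.5 x^T A x - b^T x, with A symmetric
        # positive definite.  The exact line search along p_k = -grad f(x_k)
        # has the closed-form step alpha_k = (g^T g) / (g^T A g).
        x = np.asarray(x0, dtype=float)
        for k in range(max_iter):
            g = A @ x - b                      # gradient of f at x
            if np.linalg.norm(g) < eps:        # convergence test
                break
            alpha = (g @ g) / (g @ (A @ g))    # exact minimizer of f(x - alpha*g)
            x = x - alpha * g                  # step along the steepest descent direction
        return x, 0.5 * x @ (A @ x) - b @ x

    # Example: the minimizer is x* = A^{-1} b.
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, 1.0])
    x_star, f_star = steepest_descent_quadratic(A, b, x0=np.zeros(2))

For a general (non-quadratic) objective, the inner minimization over <math>\alpha</math> has no closed form and is usually solved only approximately, which motivates the inexact searches discussed below.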

One advantage of the steepest descent method is that it has a nice convergence theory: the method is globally convergent from any starting point, in the sense made precise by the following theorem.

Theorem: global convergence of steepest descent[1]

Let the gradient of <math>f</math> be uniformly Lipschitz continuous on <math>\mathbb{R}^{n}</math>. Then, for the iterates with steepest-descent search directions, one of the following situations occurs:

  • <math>\nabla f(x_{k})=0</math> for some finite <math>k</math>
  • <math>\lim_{k\to\infty} f(x_{k})=-\infty</math>
  • <math>\lim_{k\to\infty} \nabla f(x_{k})=0</math>

The steepest descent method is a special case of gradient descent in that the step length <math>\alpha_{k}</math> is rigorously defined by exact minimization along <math>p_{k}</math>. Generalizations can be made regarding the choice of <math>\alpha_{k}</math>.
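
For example, replacing the exact minimization with a fixed step length <math>\alpha_{k}\equiv\alpha</math> (a constant learning rate), or with a diminishing sequence such as <math>\alpha_{k}=1/k</math>, yields an ordinary gradient descent method.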

Inexact Search

Wolfe Conditions

Goldstein Conditions

Backtracking

Numeric Example

Applications

Reference

  1. Dr. Raphael Hauser, Oxford University Computing Laboratory, Line Search Methods for Unconstrained Optimization.