Line search methods
Authors: Lihe Cao, Zhengyi Sui, Jiaqi Zhang, Yuqing Yan, and Yuhui Gu (6800 Fall 2021).
Introduction
When solving unconstrained optimization problems, the user needs to supply a starting point for all algorithms. With the initial starting point <math>x_0</math>, optimization algorithms generate a sequence of iterates <math>\{x_k\}</math> that terminates when an approximate solution has been achieved or no more progress can be made. Line search is one of the two fundamental strategies for locating the new iterate <math>x_{k+1}</math> given the current point <math>x_k</math>.
Generic Line Search Method
Basic Algorithm
- Pick an initial iterate point <math>x_0</math>
- Repeat the following steps until the iterates converge:
- Choose a descent direction <math>p_k</math> starting at <math>x_k</math>, defined so that if <math>\nabla f(x_k) \neq 0</math>, then <math>p_k^\top \nabla f(x_k) < 0</math>
- Calculate a descent step length <math>\alpha_k > 0</math> so that <math>f(x_k + \alpha_k p_k) < f(x_k)</math>
- Set <math>x_{k+1} = x_k + \alpha_k p_k</math>
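The following Python sketch illustrates this generic loop, using the negative gradient as the descent direction and simple step-halving until the objective decreases. The function name and the toy objective are illustrative only, not part of any standard library.

<syntaxhighlight lang="python">
import numpy as np

def generic_line_search(f, grad, x0, tol=1e-6, max_iter=1000):
    """Generic line search loop: choose a descent direction, choose a step
    length that decreases f, update the iterate, and repeat."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:        # stop when the gradient is (nearly) zero
            break
        p = -g                             # steepest-descent direction: p^T grad f(x) < 0
        alpha, fx = 1.0, f(x)
        for _ in range(60):                # shrink alpha until f actually decreases
            if f(x + alpha * p) < fx:
                break
            alpha *= 0.5
        x = x + alpha * p                  # x_{k+1} = x_k + alpha_k * p_k
    return x

# Illustrative use on f(x, y) = (x - 1)^2 + 4 y^2, whose minimizer is (1, 0)
f = lambda z: (z[0] - 1.0) ** 2 + 4.0 * z[1] ** 2
grad = lambda z: np.array([2.0 * (z[0] - 1.0), 8.0 * z[1]])
print(generic_line_search(f, grad, [5.0, 3.0]))   # approximately [1., 0.]
</syntaxhighlight>

Exact and inexact strategies for choosing <math>\alpha_k</math> more carefully are discussed in the sections below.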
Search Direction for Line Search
The direction of the line search should be chosen to make <math>f</math> decrease in moving from point <math>x_k</math> to <math>x_{k+1}</math>. The most obvious direction is the negative gradient <math>-\nabla f(x_k)</math>, because it is the one along which <math>f</math> decreases most rapidly. We can verify the claim by Taylor's theorem:

<math>f(x_k + \alpha p) = f(x_k) + \alpha p^\top \nabla f(x_k) + \tfrac{1}{2} \alpha^2 p^\top \nabla^2 f(x_k + t p)\, p,</math>

where <math>t \in (0, \alpha)</math>.

The rate of change in <math>f</math> along the direction <math>p</math> at <math>x_k</math> is the coefficient of <math>\alpha</math>, namely <math>p^\top \nabla f(x_k)</math>. Therefore, the unit direction of most rapid decrease is the solution to

<math>\min_{p} \; p^\top \nabla f(x_k), \quad \text{subject to } \|p\| = 1</math>.
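This minimization can be solved in closed form using the Cauchy–Schwarz inequality: for any unit vector <math>p</math>,

<math>p^\top \nabla f(x_k) \geq -\|p\| \, \|\nabla f(x_k)\| = -\|\nabla f(x_k)\|,</math>

with equality exactly when <math>p</math> is a negative multiple of <math>\nabla f(x_k)</math>.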
Hence <math>p = -\nabla f(x_k) / \|\nabla f(x_k)\|</math> is the solution, and this direction is orthogonal to the contours of the function. In the following sections, we will use this as the default direction of the line search.
Step Length
The step length <math>\alpha_k</math> is a non-negative value such that <math>f(x_k + \alpha_k p_k) < f(x_k)</math>. When choosing the step length <math>\alpha_k</math>, we need to trade off between giving a substantial reduction of <math>f</math> and not spending too much time finding the solution. If <math>\alpha_k</math> is too large, the step will overshoot, while if it is too small, finding the convergent point becomes time-consuming. We have exact line search and inexact line search to find the value of <math>\alpha_k</math>, and more detail about these approaches is introduced in the following sections.
Convergence
For a line search algorithm to be reliable, it should be globally convergent, that is, the gradient norms <math>\|\nabla f(x_k)\|</math> should converge to zero as the iteration proceeds, i.e., <math>\lim_{k \to \infty} \|\nabla f(x_k)\| = 0</math>.
It can be shown from Zoutendijk's theorem [1] that if the line search algorithm satisfies (weak) Wolfe's conditions (similar results also hold for strong Wolfe and Goldstein conditions) and has a search direction that makes an angle with the steepest descent direction that is bounded away from 90°, the algorithm is globally convergent.
Zoutendijk's theorem states that, given an iteration <math>x_{k+1} = x_k + \alpha_k p_k</math>, where <math>p_k</math> is the descent direction and <math>\alpha_k</math> is the step length that satisfies the (weak) Wolfe conditions, if the objective <math>f</math> is bounded below on <math>\mathbb{R}^n</math> and is continuously differentiable in an open set <math>\mathcal{N}</math> containing the level set <math>\mathcal{L} := \{x : f(x) \leq f(x_0)\}</math>, where <math>x_0</math> is the starting point of the iteration, and the gradient <math>\nabla f</math> is Lipschitz continuous on <math>\mathcal{N}</math>, then

<math>\sum_{k \geq 0} \cos^2\theta_k \, \|\nabla f(x_k)\|^2 < \infty</math>,

where <math>\theta_k</math> is the angle between <math>p_k</math> and the steepest descent direction <math>-\nabla f(x_k)</math>, defined by <math>\cos\theta_k = \dfrac{-\nabla f(x_k)^\top p_k}{\|\nabla f(x_k)\| \, \|p_k\|}</math>.

The Zoutendijk condition above implies that

<math>\cos^2\theta_k \, \|\nabla f(x_k)\|^2 \to 0 \text{ as } k \to \infty</math>,

by the n-th term divergence test. Hence, if the algorithm chooses a search direction that is bounded away from 90° relative to the gradient, i.e., given <math>\delta > 0</math>,

<math>\cos\theta_k \geq \delta > 0 \text{ for all } k</math>,

it follows that

<math>\lim_{k \to \infty} \|\nabla f(x_k)\| = 0</math>.
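For example, with the steepest-descent direction <math>p_k = -\nabla f(x_k)</math> we have <math>\cos\theta_k = 1</math> for every <math>k</math>, so the angle condition holds with <math>\delta = 1</math> and Zoutendijk's result gives <math>\|\nabla f(x_k)\| \to 0</math> directly.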
However, the Zoutendijk condition doesn't guarantee convergence to a local minimum, only to stationary points. Hence, additional conditions on the search direction are necessary, such as finding a direction of negative curvature, to prevent the iteration from converging to a nonminimizing stationary point.
Exact Search
Steepest Descent Method
Given the intuition that the negative gradient <math>-\nabla f(x_k)</math> can be an effective search direction, steepest descent follows the idea and establishes a systematic method for minimizing the objective function. Setting <math>p_k = -\nabla f(x_k)</math> as the direction, steepest descent computes the step length <math>\alpha_k</math> by minimizing a single-variable objective function, <math>\alpha_k = \arg\min_{\alpha \geq 0} f(x_k - \alpha \nabla f(x_k))</math>. More specifically, the steps of the steepest descent method are as follows.
Steepest Descent Algorithm
Set a starting point <math>x_0</math>
Set a convergence criterion <math>\epsilon > 0</math>
Set <math>k = 0</math>
Set the maximum number of iterations <math>k_{\max}</math>
While <math>k \leq k_{\max}</math>:
- Compute the step length <math>\alpha_k = \arg\min_{\alpha \geq 0} f(x_k - \alpha \nabla f(x_k))</math>
- Set <math>x_{k+1} = x_k - \alpha_k \nabla f(x_k)</math> and <math>k = k + 1</math>
- If <math>\|\nabla f(x_k)\| < \epsilon</math>:
- Break
- End
End
Return <math>x_k</math>, <math>f(x_k)</math>
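The following Python sketch implements the algorithm above under the simplifying assumption that <math>f</math> is a strictly convex quadratic, <math>f(x) = \tfrac{1}{2} x^\top Q x - b^\top x</math>, for which the exact line search has the closed-form solution <math>\alpha_k = \dfrac{\nabla f(x_k)^\top \nabla f(x_k)}{\nabla f(x_k)^\top Q \nabla f(x_k)}</math>. The function name and the example data are illustrative.

<syntaxhighlight lang="python">
import numpy as np

def steepest_descent_quadratic(Q, b, x0, eps=1e-8, k_max=10000):
    """Steepest descent with exact line search for the quadratic
    f(x) = 0.5 * x^T Q x - b^T x, with Q symmetric positive definite."""
    x = np.asarray(x0, dtype=float)
    for _ in range(k_max):
        g = Q @ x - b                    # gradient of the quadratic
        if np.linalg.norm(g) < eps:      # convergence criterion
            break
        alpha = (g @ g) / (g @ Q @ g)    # exact minimizer of f(x - alpha * g) over alpha
        x = x - alpha * g                # steepest-descent update
    return x, 0.5 * x @ Q @ x - b @ x

# Example: the minimizer solves Q x = b
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_star, f_star = steepest_descent_quadratic(Q, b, x0=[0.0, 0.0])
print(x_star)   # close to np.linalg.solve(Q, b)
</syntaxhighlight>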
One advantage of the steepest descent method is that it has a nice convergence theory: the iterates converge to a stationary point of <math>f</math> from any starting point.
Theorem: Global Convergence of Steepest Descent [2]
Let the gradient of <math>f</math> be uniformly Lipschitz continuous on <math>\mathbb{R}^n</math>. Then, for the iterates <math>x_k</math> generated with steepest-descent search directions, one of the following situations occurs:
- <math>\nabla f(x_k) = 0</math> for some finite <math>k</math>
- <math>\lim_{k \to \infty} f(x_k) = -\infty</math>
- <math>\lim_{k \to \infty} \nabla f(x_k) = 0</math>
The steepest descent method is a special case of gradient descent in that the step length <math>\alpha_k</math> is rigorously defined by the exact minimization above. Generalizations can be made regarding the choice of <math>\alpha_k</math>.
Inexact Search
When we minimize the objective function using numerical methods, the updated objective at each iteration is <math>\phi(\alpha) = f(x_k + \alpha p_k)</math>, a function of <math>\alpha</math> once we fix the direction <math>p_k</math>. Our goal is to minimize the objective with respect to <math>\alpha</math>. However, solving for the exact minimum in each iteration can be computationally expensive and make the algorithm time-consuming. Therefore, in practice we solve the subproblem

<math>\min_{\alpha > 0} \phi(\alpha) = \min_{\alpha > 0} f(x_k + \alpha p_k)</math>

only approximately and find a suitable step length <math>\alpha</math> that decreases the objective function instead; that is, <math>\alpha</math> satisfies <math>f(x_k + \alpha p_k) < f(x_k)</math>. A problem is that we cannot guarantee convergence to the function's minimum with this requirement alone, so we often apply the following conditions to find an acceptable step length.
Wolfe Conditions
These conditions were proposed by Philip Wolfe in 1969. They provide an efficient way of choosing a step length that decreases the objective function sufficiently. The Wolfe conditions consist of two parts: the Armijo (sufficient decrease) condition and the curvature condition.
<math>(1) \quad \textbf{Armijo condition}</math>

<math>f(x_k + \alpha p_k) \leq f(x_k) + c_1 \alpha \nabla f(x_k)^\top p_k</math>

where <math>c_1</math> is between 0 and 1 and is often chosen to be of a small order of magnitude, around <math>10^{-4}</math>. This condition ensures that the computed step length reduces the objective function <math>f</math> sufficiently. Using this condition alone, we cannot guarantee that the iterates make adequate progress, since the Armijo condition is satisfied by any step length that is small enough. Therefore, we need to pair it with the second condition below in order to keep <math>\alpha_k</math> from being too short.
<math>(2) \quad \textbf{Curvature condition}</math>

<math>-\nabla f(x_k + \alpha p_k)^\top p_k \leq -c_2 \nabla f(x_k)^\top p_k</math>

where <math>c_2</math> is much greater than <math>c_1</math>, with <math>c_1 < c_2 < 1</math>, and is often chosen to be on the order of 0.1. This condition ensures that the slope of <math>\phi(\alpha)</math> at the accepted step has increased sufficiently (it is no longer steeply negative), which rules out steps that are too short.
The Wolfe conditions can result in a value of <math>\alpha_k</math> that is not close to a minimizer of <math>\phi(\alpha)</math>. We can modify the Wolfe conditions by using the following condition, called the strong Wolfe condition, to replace the curvature condition in <math>(2)</math>:

<math>|\nabla f(x_k + \alpha p_k)^\top p_k| \leq c_2 \, |\nabla f(x_k)^\top p_k|</math>

The left-hand side of the strong curvature condition is simply the absolute value of the derivative of <math>\phi(\alpha)</math>, so it ensures that <math>\alpha_k</math> lies close to a critical point of <math>\phi(\alpha)</math>.
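Both forms of the curvature condition, together with the Armijo condition, are straightforward to check numerically. The sketch below is illustrative (the function name and sample objective are not from any standard library), using <math>c_1 = 10^{-4}</math> and <math>c_2 = 0.1</math> as in the discussion above.

<syntaxhighlight lang="python">
import numpy as np

def wolfe_conditions(f, grad, x, p, alpha, c1=1e-4, c2=0.1, strong=False):
    """Check whether the step length alpha satisfies the (strong) Wolfe
    conditions along the direction p at the point x."""
    phi0 = f(x)
    dphi0 = grad(x) @ p                     # phi'(0), negative for a descent direction
    phi_a = f(x + alpha * p)
    dphi_a = grad(x + alpha * p) @ p        # phi'(alpha)
    armijo = phi_a <= phi0 + c1 * alpha * dphi0
    if strong:
        curvature = abs(dphi_a) <= c2 * abs(dphi0)
    else:
        curvature = dphi_a >= c2 * dphi0
    return armijo and curvature

# Illustrative check on f(x, y) = x^2 + 2 y^2 with the steepest-descent direction
f = lambda z: z[0] ** 2 + 2.0 * z[1] ** 2
grad = lambda z: np.array([2.0 * z[0], 4.0 * z[1]])
x = np.array([1.0, 1.0])
p = -grad(x)
print(wolfe_conditions(f, grad, x, p, alpha=0.3))               # True (weak Wolfe)
print(wolfe_conditions(f, grad, x, p, alpha=0.3, strong=True))  # True (strong Wolfe)
</syntaxhighlight>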
Goldstein Conditions
Another set of conditions for finding an appropriate step length is called the Goldstein conditions.
<math>f(x_k) + (1 - c) \alpha_k \nabla f(x_k)^\top p_k \leq f(x_k + \alpha_k p_k) \leq f(x_k) + c \alpha_k \nabla f(x_k)^\top p_k</math>
where <math>0 < c < 1/2</math>. The Goldstein conditions are quite similar to the Wolfe conditions in that the second inequality ensures that the step length decreases the objective function sufficiently, while the first inequality keeps <math>\alpha_k</math> from being too short. In comparison with the Wolfe conditions, one disadvantage of the Goldstein conditions is that the first inequality might exclude all minimizers of <math>\phi(\alpha)</math>. However, this is usually not a fatal problem as long as the objective decreases in the direction of convergence. As a short conclusion, the Goldstein and Wolfe conditions have quite similar convergence theories. Compared to the Wolfe conditions, the Goldstein conditions are often used in Newton-type methods but are not well suited for quasi-Newton methods that maintain a positive definite Hessian approximation.
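A candidate step length can be tested against these two inequalities in the same way as the Wolfe conditions; the sketch below is illustrative, with <math>c = 0.25</math> chosen arbitrarily in <math>(0, 1/2)</math>.

<syntaxhighlight lang="python">
import numpy as np

def goldstein_conditions(f, grad, x, p, alpha, c=0.25):
    """Check the Goldstein conditions for the step length alpha along the
    direction p at the point x, where 0 < c < 1/2."""
    phi0 = f(x)
    dphi0 = grad(x) @ p                        # phi'(0), negative for a descent direction
    phi_a = f(x + alpha * p)
    lower = phi0 + (1.0 - c) * alpha * dphi0   # first inequality: alpha not too short
    upper = phi0 + c * alpha * dphi0           # second inequality: sufficient decrease
    return lower <= phi_a <= upper
</syntaxhighlight>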
Backtracking
The backtracking method is often used to find an appropriate step length and terminate the line search. It starts with a relatively large initial step length (e.g., 1 for Newton's method) and then iteratively shrinks it by a contraction factor until the Armijo (sufficient decrease) condition is satisfied. The advantage of this approach is that the curvature condition need not be considered, and the step length found at each line search iteration is short enough to give sufficient decrease but large enough to still allow the algorithm to make reasonable progress toward convergence.
The backtracking algorithm involves the control parameters <math>\rho \in (0, 1)</math>, the contraction factor, and <math>c \in (0, 1)</math>, the sufficient decrease constant.
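A minimal sketch of backtracking, assuming an initial step length of 1, a contraction factor <math>\rho = 0.5</math>, and <math>c = 10^{-4}</math> (the function name and example objective are illustrative):

<syntaxhighlight lang="python">
import numpy as np

def backtracking_line_search(f, grad, x, p, alpha0=1.0, rho=0.5, c=1e-4):
    """Shrink alpha by the factor rho until the Armijo (sufficient decrease)
    condition f(x + alpha p) <= f(x) + c * alpha * grad(x)^T p holds.
    Assumes p is a descent direction, so the loop terminates."""
    alpha = alpha0
    phi0 = f(x)
    dphi0 = grad(x) @ p            # negative for a descent direction
    while f(x + alpha * p) > phi0 + c * alpha * dphi0:
        alpha *= rho               # contract the step
    return alpha

# Example with f(x, y) = x^2 + 2 y^2 and the steepest-descent direction
f = lambda z: z[0] ** 2 + 2.0 * z[1] ** 2
grad = lambda z: np.array([2.0 * z[0], 4.0 * z[1]])
x = np.array([1.0, 1.0])
print(backtracking_line_search(f, grad, x, -grad(x)))   # 0.5 for this example
</syntaxhighlight>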