Frank-Wolfe

From Cornell University Computational Optimization Open Textbook - Optimization Wiki
Revision as of 01:51, 27 November 2021 by Alt65

Author: Adeline Tse, Jeff Cheung, Tucker Barrett (SYSEN 5800 Fall 2021)

Introduction

The Frank-Wolfe algorithm is an iterative first-order optimization algorithm for constrained convex optimization, first proposed by Marguerite Frank and Philip Wolfe of Princeton University in 1956 [1 (Frank-Wolfe Paper)]. It is also known as the “gradient and interpolation” algorithm or, more commonly, the “conditional gradient method.” At each iteration, it minimizes a linear approximation of the objective function (defined by the gradient at the current iterate) over the feasible region; when the feasible region is a polyhedron, this subproblem is a linear program that can be solved with the simplex method. The algorithm then moves from the current iterate toward the minimizer of the linear subproblem, repeating this process to find or approximate the optimal solution.

Major advantages of the Frank-Wolfe algorithm are that it is simple to implement; it is projection-free (it never requires projecting iterates back onto the constraint set to maintain feasibility, since each iterate is a convex combination of feasible points); and it generates iterates that are sparse with respect to the extreme points of the constraint set. Because each iteration requires only a linear minimization over the feasible region, the per-iteration cost is low. The algorithm is used in artificial intelligence, machine learning, traffic assignment, signal processing, and many other applications.
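The iteration described above (linearize, solve the linear subproblem, step toward its minimizer) can be sketched on a toy problem. The example below is an illustrative sketch, not taken from the article: it minimizes a quadratic over the probability simplex, where the linear subproblem has a closed-form solution (the vertex corresponding to the smallest gradient coordinate), and uses the standard diminishing step size 2/(k+2). The function name `frank_wolfe_simplex` and the target vector `b` are assumptions made for this example.

```python
import numpy as np

def frank_wolfe_simplex(b, iters=200):
    """Illustrative Frank-Wolfe sketch: minimize ||x - b||^2 over the
    probability simplex {x : x >= 0, sum(x) = 1}."""
    n = len(b)
    x = np.ones(n) / n                   # feasible starting point (simplex center)
    for k in range(iters):
        grad = 2.0 * (x - b)             # gradient of the objective at x
        i = int(np.argmin(grad))         # linear subproblem: over the simplex, the
        s = np.zeros(n)                  # minimizer of <grad, s> is the vertex e_i
        s[i] = 1.0                       # with the smallest gradient coordinate
        gamma = 2.0 / (k + 2.0)          # standard diminishing step size
        x = (1.0 - gamma) * x + gamma * s  # move toward the vertex; stays feasible
    return x

# When b itself lies in the simplex, the minimizer is b, so the iterates
# approach it while every iterate remains a convex combination of vertices.
b = np.array([0.1, 0.2, 0.7])
x = frank_wolfe_simplex(b)
```

Note how feasibility is maintained for free: each update is a convex combination of the current iterate and a vertex of the simplex, so no projection step is ever needed. This is the projection-free property mentioned above.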