Duality

From Cornell University Computational Optimization Open Textbook - Optimization Wiki

Author: Claire Gauthier, Trent Melsheimer, Alexa Piper, Nicholas Chung, Michael Kulbacki (SysEn 6800 Fall 2020)

Steward: Fengqi You

Introduction

Every optimization problem may be viewed either from the primal or from the dual; this is the principle of duality. Duality develops the relationships between one optimization problem and another, related optimization problem. If the primal optimization problem is a maximization problem, the dual can be used to find upper bounds on its optimal value. If the primal problem is a minimization problem, the dual can be used to find lower bounds on its optimal value.

According to the American mathematical scientist George Dantzig, duality for linear optimization was conjectured by John von Neumann after Dantzig presented him with a problem in linear optimization. Von Neumann determined that the two-person zero-sum matrix game (from game theory) was equivalent to linear programming. Proofs of the duality theorem were published by the Canadian mathematician Albert W. Tucker and his group in 1948. [1]

Theory, methodology, and/or algorithmic discussions

Definition:

Primal

Maximize $z = c^\top x$

subject to:

$A x \le b, \quad x \ge 0$

Dual

Minimize $v = b^\top y$

subject to:

$A^\top y \ge c, \quad y \ge 0$

Between the primal and the dual, the variables $x$ and $y$ switch places with each other. The objective coefficient vector $c$ of the primal becomes the right-hand side (RHS) of the dual. The RHS of the primal, $b$, becomes the objective coefficient vector of the dual. The less-than-or-equal-to constraints in the primal become greater-than-or-equal-to constraints in the dual problem.

Constructing a Dual:
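The construction can be carried out mechanically from the problem data. The following Python sketch is illustrative only (the function name and the numerical values are placeholders, not part of the original article): given the data (c, A, b) of a maximization primal with less-than-or-equal-to constraints, it returns the data of the corresponding minimization dual.

import numpy as np

def construct_dual(c, A, b):
    # Primal:  max c^T x   s.t.  A x <= b,  x >= 0
    # Dual:    min b^T y   s.t.  A^T y >= c,  y >= 0
    c = np.asarray(c, dtype=float)
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    # The objective coefficients and the RHS switch roles,
    # and the constraint matrix is transposed.
    return b, A.T, c

# Placeholder data: three variables, two constraints.
c = [3.0, 1.0, 2.0]
A = [[1.0, 1.0, 3.0],
     [2.0, 2.0, 5.0]]
b = [30.0, 24.0]

dual_obj, dual_A, dual_rhs = construct_dual(c, A, b)
print("dual objective coefficients:", dual_obj)  # primal RHS b
print("dual constraint matrix:\n", dual_A)       # transpose of A
print("dual RHS:", dual_rhs)                     # primal objective c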

Duality Properties:

Weak Duality

  • Let $x$ be any feasible solution to the primal
  • Let $y$ be any feasible solution to the dual
  • $c^\top x \le b^\top y$ (the z value for x is less than or equal to the v value for y)

The weak duality theorem says that the z value for x in the primal is always less than or equal to the v value of y in the dual.

The difference between $b^\top y$ (the v value for y) and $c^\top x$ (the z value for x) is called the duality gap, and it is always nonnegative; the gap between the optimal dual value and the optimal primal value is called the optimal duality gap. [2]
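In the matrix notation above, weak duality follows from a one-line chain of inequalities:

$c^\top x \le (A^\top y)^\top x = y^\top (A x) \le y^\top b,$

where the first inequality uses dual feasibility ($A^\top y \ge c$) together with $x \ge 0$, and the second uses primal feasibility ($A x \le b$) together with $y \ge 0$.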

Strong Duality Lemma

  • Let $x$ be any feasible solution to the primal
  • Let $y$ be any feasible solution to the dual
  • If $c^\top x = b^\top y$ (the z value for x equals the v value for y), then x is optimal for the primal and y is optimal for the dual

Graphical Explanation

Essentially, as feasible values of x and y are chosen closer and closer to the optimal solutions, the value of z for the primal and the value of v for the dual converge towards the common optimal value. On a number line, the value of z, which is being maximized, approaches the optimum from the left, while the value of v, which is being minimized, approaches it from the right.

  • If the primal is unbounded, then the dual is infeasible
  • If the dual is unbounded, then the primal is infeasible
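A minimal illustration (not part of the original article): the primal maximize $z = x_1$ subject to $-x_1 \le 1$, $x_1 \ge 0$ is unbounded, since $x_1$ can grow without limit; its dual, minimize $v = y_1$ subject to $-y_1 \ge 1$, $y_1 \ge 0$, is infeasible, since no nonnegative $y_1$ satisfies $-y_1 \ge 1$.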

Strong Duality Theorem

If the primal problem has an optimal solution $x^*$, then the dual problem has an optimal solution $y^*$ such that $z = c^\top x^* = b^\top y^* = v$.

Dual problems and their solutions are used in connection with the following optimization topics:

Karush-Kuhn-Tucker (KKT) Variables

  • The optimal solution to the dual problem is a vector of the KKT multipliers. Consider a convex optimization problem, minimize $f(x)$ subject to $g_i(x) \le 0$ for $i = 1, \dots, m$, where $f$ and the $g_i$ are convex differentiable functions. Suppose the pair $(x^*, \lambda^*)$ is a saddle point of the Lagrangian, and that $x^*$ together with the multipliers $\lambda^*$ satisfies the KKT conditions (written out after this list). The optimal solutions of this optimization problem are then $x^*$ and $\lambda^*$, with no duality gap. [3]
  • To have strong duality as described above, the KKT conditions must be satisfied.
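For the convex problem just described, with Lagrangian $L(x, \lambda) = f(x) + \sum_{i=1}^{m} \lambda_i g_i(x)$, the KKT conditions at the pair $(x^*, \lambda^*)$ are the standard ones:

  • Stationarity: $\nabla f(x^*) + \sum_{i=1}^{m} \lambda_i^* \nabla g_i(x^*) = 0$
  • Primal feasibility: $g_i(x^*) \le 0$ for all $i$
  • Dual feasibility: $\lambda_i^* \ge 0$ for all $i$
  • Complementary slackness: $\lambda_i^* \, g_i(x^*) = 0$ for all $i$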

Dual Simplex Method

  • Solving a linear programming problem by the simplex method yields a solution of its dual as a by-product; the primal simplex algorithm can be viewed as working to reduce the infeasibility of the dual problem. Conversely, the dual simplex method can be thought of as a disguised simplex method working on the dual: it maintains dual feasibility by imposing the condition that every variable appear in the objective function with a nonpositive coefficient, and it terminates when the primal feasibility conditions are satisfied. [4] (A solver-level sketch follows this item.)
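The sketch below is illustrative only: it does not implement the dual simplex algorithm itself, but assumes SciPy's linprog exposes the HiGHS dual simplex option ("highs-ds") and uses placeholder problem data.

import numpy as np
from scipy.optimize import linprog

# Placeholder primal data:  max c^T x  s.t.  A x <= b,  x >= 0.
c = np.array([3.0, 1.0, 2.0])
A = np.array([[1.0, 1.0, 3.0],
              [2.0, 2.0, 5.0]])
b = np.array([30.0, 24.0])

# linprog minimizes, so negate c to maximize; "highs-ds" requests
# the HiGHS dual simplex solver (assumed available in recent SciPy).
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * len(c),
              method="highs-ds")
print("primal optimal value:", -res.fun)
print("x* =", res.x)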

Numerical Example

Construct the Dual for the following maximization problem:

maximize

subject to:

For the problem above, form the augmented matrix A. The first two rows represent constraints one and two, respectively. The last row represents the objective function.

Find the transpose of matrix A

From the last row of the transpose of matrix A, we can derive the objective function of the dual. Each of the preceding rows represents a constraint. Note that the original maximization problem had three variables and two constraints. The dual problem has two variables and three constraints.

minimize

subject to:
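Since the numerical data of the example is not reproduced above, the sketch below uses assumed placeholder numbers of the same shape (three variables, two constraints) to show how the primal and dual optimal values can be computed and compared, here with SciPy's linprog:

import numpy as np
from scipy.optimize import linprog

# Assumed placeholder primal:  max c^T x  s.t.  A x <= b,  x >= 0
# (three variables, two constraints; values chosen only for illustration).
c = np.array([3.0, 1.0, 2.0])
A = np.array([[1.0, 1.0, 3.0],
              [2.0, 2.0, 5.0]])
b = np.array([30.0, 24.0])

# Primal: linprog minimizes, so maximize c^T x by minimizing -c^T x.
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 3, method="highs")

# Dual:  min b^T y  s.t.  A^T y >= c,  y >= 0, rewritten as -A^T y <= -c.
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2, method="highs")

print("primal optimal value:", -primal.fun)
print("dual optimal value:  ", dual.fun)  # equal, by strong duality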

Applications

Duality appears in many linear and nonlinear optimization models. In many of these applications the dual can be solved in cases when solving the primal is more difficult. If, for example, there are many more constraints than variables (m >> n), it may be easier to solve the dual. A few of these applications are presented and described in more detail below. [5]

Economics

  • Duality can be used when determining the optimal product mix that yields the highest profit. For instance, the primal could maximize the profit, while the dual reframes the problem as minimizing the cost of the resources consumed. By reframing the problem in terms of pricing the raw materials, one can determine the price the owner should be willing to accept for each raw material. These dual variables are related to the values of the resources available and are often referred to as resource shadow prices. [6] (A hedged numerical sketch follows this item.)
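The sketch below uses placeholder profit and resource data and assumes that the installed SciPy version exposes constraint marginals through the HiGHS interface; the dual values returned by the solver are read as resource shadow prices, i.e. the increase in maximum profit per additional unit of each resource.

import numpy as np
from scipy.optimize import linprog

# Placeholder data: profits c, resource usage A, available resources b.
c = np.array([3.0, 1.0, 2.0])      # profit per unit of each product
A = np.array([[1.0, 1.0, 3.0],
              [2.0, 2.0, 5.0]])    # resource consumed per unit produced
b = np.array([30.0, 24.0])         # stock of each raw material

res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 3, method="highs")

# linprog minimizes -c^T x, so its inequality marginals are d(-z*)/db;
# negating them gives the shadow prices of the profit-maximization problem.
shadow_prices = -res.ineqlin.marginals
print("maximum profit:", -res.fun)
print("resource shadow prices y*:", shadow_prices)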

Structural Design

  • In a structural design model, for example, the tensions on the beams are the primal variables and the displacements on the nodes are the dual variables.

Electrical Networks

  • When modeling electrical networks, the current flows can be modeled as the primal variables and the voltage differences as the dual variables. [7]

Game Theory

  • Duality theory is closely related to game theory. Game theory is an approach used to deal with multi-person decision problems: the game is the decision-making problem, and the players are the decision-makers. Each player chooses a strategy or an action to be taken, and each player receives a payoff once all players have selected their strategies. The zero-sum game that von Neumann recognized as equivalent to linear programming is one in which the gain of one player is exactly the loss of another; this general situation has the same structure as duality. [8] (An LP formulation is sketched after this item.)
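As a minimal sketch of the equivalence (assuming SciPy and a placeholder payoff matrix): the row player's problem of choosing a mixed strategy that maximizes the guaranteed payoff is a linear program, and its dual is the column player's problem.

import numpy as np
from scipy.optimize import linprog

# Placeholder payoff matrix to the row player (rows = row player's actions).
M = np.array([[ 1.0, -1.0],
              [-2.0,  3.0]])
m, n = M.shape

# Row player's LP: maximize v subject to (M^T p)_j >= v for every column j,
# sum(p) = 1, p >= 0.  Variables are [p_1, ..., p_m, v]; linprog minimizes,
# so the objective is -v.
c = np.concatenate([np.zeros(m), [-1.0]])
A_ub = np.hstack([-M.T, np.ones((n, 1))])          # v - (M^T p)_j <= 0
b_ub = np.zeros(n)
A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
p, value = res.x[:m], res.x[-1]
print("row player's mixed strategy:", p)
print("value of the game:", value)
# The dual of this LP is the column player's problem, so by strong duality
# both players guarantee the same value of the game.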

Support Vector Machines

  • Support Vector Machines (SVM) is a popular machine learning algorithm for classification. The concept of SVM can be broken down into the linear SVM and the non-linear SVM, and it involves several other ideas, including hyperplanes, functional and geometric margins, and quadratic programming [9]. In relation to duality, the primal problem is sufficient for solving the linear SVM, but it is not useful for the non-linear SVM; duality is needed, and the dual problem is used to solve the non-linear SVM [10]. (The standard dual formulation is given after this item.)
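For reference, the standard hard-margin SVM primal and dual (textbook formulations, with class labels $y_i \in \{-1, +1\}$; note that $y_i$ here denotes a label, not a dual variable):

Primal: minimize $\tfrac{1}{2}\|w\|^2$ subject to $y_i(w^\top x_i + b) \ge 1$ for all $i$.

Dual: maximize $\sum_i \alpha_i - \tfrac{1}{2}\sum_{i,j} \alpha_i \alpha_j y_i y_j \, x_i^\top x_j$ subject to $\alpha_i \ge 0$ and $\sum_i \alpha_i y_i = 0$.

Because the training data enter the dual only through the inner products $x_i^\top x_j$, these can be replaced by a kernel $K(x_i, x_j)$, which is what makes the dual the natural route to the non-linear SVM.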

Conclusion

The theory of duality has brought another viewpoint to linear and nonlinear programming optimization problems since 1948. [1] The technique can be applied to situations such as analyzing economic constraints, allocating resources, and bounding optimization problems. By developing an understanding of the dual of a linear program, one can gain many important insights into nearly any algorithm or optimization problem.

References

  1. Duality (Optimization). (2020, July 12). In Wikipedia. https://en.wikipedia.org/wiki/Duality_(optimization)#:~:text=In%20mathematical%20optimization%20theory%2C%20duality,the%20primal%20(minimization)%20problem.
  2. Bradley, Hax, and Magnanti. (1977). Applied Mathematical Programming. Addison-Wesley. http://web.mit.edu/15.053/www/AMP-Chapter-04.pdf
  3. KKT Conditions and Duality. (2018, February 18). Dartmouth College. https://math.dartmouth.edu/~m126w18/pdf/part4.pdf
  4. Chvatal, Vasek. (1977). The Dual Simplex Method. W.H. Freeman and Co. http://cgm.cs.mcgill.ca/~avis/courses/567/notes/ch10.pdf
  5. Professor You Lecture Slides (Linear Programming, Duality)
  6. Alaouze, C.M. (1996). Shadow Prices in Linear Programming Problems. New South Wales - School of Economics. https://ideas.repec.org/p/fth/nesowa/96-18.html#:~:text=In%20linear%20programming%20problems%20the,is%20increased%20by%20one%20unit.
  7. Freund, Robert M. (2004, March). Duality Theory of Constrained Optimization. Massachusetts Institute of Technology. https://ocw.mit.edu/courses/sloan-school-of-management/15-084j-nonlinear-programming-spring-2004/lecture-notes/lec18_duality_thy.pdf
  8. Stolee, Derrick. (2013). Game Theory and Duality. University of Illinois at Urbana-Champaign. https://faculty.math.illinois.edu/~stolee/Teaching/13-482/gametheory.pdf
  9. Jana, Abhisek. (2020, April). Support Vector Machines for Beginners - Linear SVM. http://www.adeveloperdiary.com/data-science/machine-learning/support-vector-machines-for-beginners-linear-svm/
  10. Jana, Abhisek. (2020, April). Support Vector Machines for Beginners - Duality Problem. https://www.adeveloperdiary.com/data-science/machine-learning/support-vector-machines-for-beginners-duality-problem/.