Duality

Author: Claire Gauthier, Trent Melsheimer, Alexa Piper, Nicholas Chung, Michael Kulbacki (SysEn 6800 Fall 2020)

Steward: Fengqi You

Introduction

Every optimization problem may be viewed from either the primal or the dual perspective; this is the principle of duality. Duality develops the relationships between one optimization problem and another, related optimization problem. If the primal problem is a maximization problem, its dual can be used to find upper bounds on the primal's optimal value; if the primal is a minimization problem, its dual can be used to find lower bounds.

According to the American mathematician George Dantzig, duality for linear optimization was conjectured by John von Neumann after Dantzig presented the linear programming problem to him. Von Neumann observed that the two-person zero-sum matrix game (from game theory) was equivalent to linear programming. Proofs of the duality theorem were published in 1948 by the Canadian mathematician Albert W. Tucker and his group. [1]

Theory, methodology, and/or algorithmic discussions

Definition:

Primal

Maximize ${\displaystyle z=\textstyle \sum _{j=1}^{n}\displaystyle c_{j}x_{j}}$

subject to:

${\displaystyle \textstyle \sum _{j=1}^{n}\displaystyle a_{i,j}x_{j}\leq b_{i}\qquad (i=1,2,...,m)}$

${\displaystyle x_{j}\geq 0\qquad (j=1,2,...,n)}$

Dual

Minimize ${\displaystyle v=\textstyle \sum _{i=1}^{m}\displaystyle b_{i}y_{i}}$

subject to:

${\displaystyle \textstyle \sum _{i=1}^{m}\displaystyle a_{i,j}y_{i}\geq c_{j}\qquad (j=1,2,...,n)}$

${\displaystyle y_{i}\geq 0\qquad (i=1,2,...,m)}$

Between the primal and the dual, the vectors ${\displaystyle c}$ and ${\displaystyle b}$ switch places. The objective coefficients (${\displaystyle c_{j}}$) of the primal become the right-hand side (RHS) of the dual, and the RHS of the primal (${\displaystyle b_{i}}$) becomes the objective coefficients of the dual. The less-than-or-equal-to constraints of the primal become greater-than-or-equal-to constraints in the dual.

Constructing a Dual:

${\displaystyle {\begin{matrix}\max(c^{T}x)\\\ s.t.Ax\leq b\\x\geq 0\end{matrix}}}$ ${\displaystyle \quad \longrightarrow \quad }$${\displaystyle {\begin{matrix}\\\min(b^{T}y)\\\ s.t.A^{T}y\geq c\\y\geq 0\end{matrix}}}$
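This construction is purely mechanical: the dual of max(c^T x) s.t. Ax ≤ b, x ≥ 0 is min(b^T y) s.t. A^T y ≥ c, y ≥ 0. As a minimal sketch (assuming NumPy is available; the helper name is illustrative, not from the source), the primal data (c, A, b) maps to the dual data (b, A^T, c):

```python
import numpy as np

def construct_dual(c, A, b):
    """Given primal data for  max c^T x  s.t. Ax <= b, x >= 0,
    return the data of the dual  min b^T y  s.t. A^T y >= c, y >= 0.
    Illustrative helper: the dual data is simply (b, A^T, c)."""
    A = np.asarray(A, dtype=float)
    return np.asarray(b, dtype=float), A.T, np.asarray(c, dtype=float)

# Primal data of the worked example later in this article:
# max 6x1 + 14x2 + 13x3  s.t.  0.5x1 + 2x2 + x3 <= 24,  x1 + 2x2 + 4x3 <= 60
c = [6, 14, 13]
A = [[0.5, 2, 1],
     [1, 2, 4]]
b = [24, 60]

dual_c, dual_A, dual_b = construct_dual(c, A, b)
print(dual_c)        # dual objective coefficients = primal RHS b
print(dual_A.shape)  # (3, 2): constraints and variables swap roles
```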

Duality Properties:

Weak Duality

• Let ${\displaystyle x=[x_{1},...,x_{n}]}$ be any feasible solution to the primal
• Let ${\displaystyle y=[y_{1},...,y_{m}]}$ be any feasible solution to the dual
• ${\displaystyle \therefore }$(z value for x) ${\displaystyle \leq }$(v value for y)

The weak duality theorem says that the z value for x in the primal is always less than or equal to the v value of y in the dual.

The difference between (v value for y) and (z value for x) is called the duality gap, which is always nonnegative; the gap between the optimal values of the two problems is called the optimal duality gap.
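Weak duality is easy to check numerically. The small LP below is an illustrative example (not from the source text): any feasible primal point yields a z no larger than the v of any feasible dual point.

```python
# Illustrative primal: max 3x1 + 2x2  s.t. x1 + x2 <= 4,  x1 + 3x2 <= 6,  x >= 0
# Its dual:            min 4y1 + 6y2  s.t. y1 + y2 >= 3,  y1 + 3y2 >= 2,  y >= 0
c, b = [3, 2], [4, 6]

x = [1, 1]   # feasible for the primal: 1 + 1 <= 4 and 1 + 3 <= 6
y = [3, 0]   # feasible for the dual:   3 + 0 >= 3 and 3 + 0 >= 2

z = sum(ci * xi for ci, xi in zip(c, x))   # primal objective value: 5
v = sum(bi * yi for bi, yi in zip(b, y))   # dual objective value: 12
print(z, v)  # weak duality: z <= v for any such feasible pair
```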

Strong Duality Lemma

• Let ${\displaystyle x=[x_{1},...,x_{n}]}$ be any feasible solution to the primal
• Let ${\displaystyle y=[y_{1},...,y_{m}]}$ be any feasible solution to the dual
• If (z value for x) ${\displaystyle =}$ (v value for y), then x is optimal for the primal and y is optimal for the dual

Graphical Explanation

Essentially, as values of x or y are chosen closer to the optimal solution, the value of z for the primal and the value of v for the dual converge toward the optimal value. On a number line, the value of z, which is being maximized, approaches the optimum from the left, while the value of v, which is being minimized, approaches it from the right.

• If the primal is unbounded, then the dual is infeasible
• If the dual is unbounded, then the primal is infeasible

Strong Duality Theorem

If the primal problem has an optimal solution ${\displaystyle x^{*}}$, then the dual problem has an optimal solution ${\displaystyle y^{*}}$ such that

${\displaystyle \textstyle \sum _{j=1}^{n}\displaystyle c_{j}x_{j}^{*}=\textstyle \sum _{i=1}^{m}\displaystyle b_{i}y_{i}^{*}}$
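Strong duality can be observed with an off-the-shelf LP solver. The sketch below (assuming SciPy is available; the LP itself is an illustrative example, not from the source) solves a small primal and its dual and shows the two optimal objective values coincide. Note that `linprog` minimizes, so the maximization objective is negated.

```python
from scipy.optimize import linprog

# Primal: max 3x1 + 2x2  s.t. x1 + x2 <= 4, x1 + 3x2 <= 6, x >= 0
primal = linprog(c=[-3, -2], A_ub=[[1, 1], [1, 3]], b_ub=[4, 6], method="highs")

# Dual: min 4y1 + 6y2  s.t. y1 + y2 >= 3, y1 + 3y2 >= 2, y >= 0
# (>= constraints are rewritten as <= by negating both sides)
dual = linprog(c=[4, 6], A_ub=[[-1, -1], [-1, -3]], b_ub=[-3, -2], method="highs")

z_star = -primal.fun   # undo the sign flip used for maximization
v_star = dual.fun
print(z_star, v_star)  # both equal 12: no duality gap
```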

Dual problems and their solutions are used in connection with the following optimization topics:

Karush-Kuhn-Tucker (KKT) Variables

• The optimal solution to the dual problem is a vector of the KKT multipliers. Consider a convex optimization problem in which ${\displaystyle f(x),g_{1}(x),...,g_{m}(x)}$ are convex differentiable functions. Suppose the pair ${\displaystyle ({\bar {x}},{\bar {u}})}$ is a saddle point of the Lagrangian and that ${\displaystyle {\bar {x}}}$ together with ${\displaystyle {\bar {u}}}$ satisfies the KKT conditions. Then ${\displaystyle {\bar {x}}}$ and ${\displaystyle {\bar {u}}}$ are the optimal solutions of this optimization problem, with no duality gap. [2]
• To have strong duality as described above, the KKT conditions must be satisfied.

Dual Simplex Method

• Solving a linear programming problem by the simplex method gives a solution to its dual as a by-product. The dual simplex method can be thought of as a disguised simplex method working on the dual: at each iteration it works to reduce primal infeasibility while keeping the dual feasible. It maintains dual feasibility by imposing the condition that the objective function include every variable with a nonpositive coefficient, and it terminates when the primal feasibility conditions are satisfied. [3]
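In practice the dual simplex method is available in standard solvers; for instance, SciPy's `linprog` exposes the HiGHS dual simplex implementation via `method="highs-ds"`. A minimal sketch (the LP is an illustrative example, not from the source):

```python
from scipy.optimize import linprog

# Solve  max 3x1 + 2x2  s.t. x1 + x2 <= 4, x1 + 3x2 <= 6, x >= 0
# with the HiGHS dual simplex; linprog minimizes, so c is negated.
res = linprog(c=[-3, -2], A_ub=[[1, 1], [1, 3]], b_ub=[4, 6], method="highs-ds")
print(res.x, -res.fun)  # optimal point and maximal objective value
```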

Numerical Example

Construct the Dual for the following maximization problem:

maximize ${\displaystyle z=6x_{1}+14x_{2}+13x_{3}}$

subject to:

${\displaystyle {\tfrac {1}{2}}x_{1}+2x_{2}+x_{3}\leq 24}$

${\displaystyle x_{1}+2x_{2}+4x_{3}\leq 60}$

For the problem above, form augmented matrix A. The first two rows represent constraints one and two respectively. The last row represents the objective function.

${\displaystyle A={\begin{bmatrix}{\tfrac {1}{2}}&2&1&24\\1&2&4&60\\6&14&13&1\end{bmatrix}}}$

Find the transpose of matrix A

${\displaystyle A^{T}={\begin{bmatrix}{\tfrac {1}{2}}&1&6\\2&2&14\\1&4&13\\24&60&1\end{bmatrix}}}$

From the last row of the transpose of matrix A, we can derive the objective function of the dual. Each of the preceding rows represents a constraint. Note that the original maximization problem had three variables and two constraints. The dual problem has two variables and three constraints.

minimize ${\displaystyle v=24y_{1}+60y_{2}}$

subject to:

${\displaystyle {\tfrac {1}{2}}y_{1}+y_{2}\geq 6}$

${\displaystyle 2y_{1}+2y_{2}\geq 14}$

${\displaystyle y_{1}+4y_{2}\geq 13}$
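Candidate optimal solutions for this pair of problems can be certified with the strong duality lemma stated earlier: if a primal-feasible x and a dual-feasible y give equal objective values, both are optimal. The check below verifies this for ${\displaystyle x^{*}=(36,0,6)}$ and ${\displaystyle y^{*}=(11,0.5)}$ (candidate values obtainable by the simplex method; the code only verifies feasibility and the matching objectives):

```python
# Candidate solutions for the worked example:
x = [36, 0, 6]    # primal candidate
y = [11, 0.5]     # dual candidate

# Primal feasibility: 0.5x1 + 2x2 + x3 <= 24  and  x1 + 2x2 + 4x3 <= 60
assert 0.5 * x[0] + 2 * x[1] + x[2] <= 24
assert x[0] + 2 * x[1] + 4 * x[2] <= 60

# Dual feasibility: 0.5y1 + y2 >= 6,  2y1 + 2y2 >= 14,  y1 + 4y2 >= 13
assert 0.5 * y[0] + y[1] >= 6
assert 2 * y[0] + 2 * y[1] >= 14
assert y[0] + 4 * y[1] >= 13

z = 6 * x[0] + 14 * x[1] + 13 * x[2]   # primal objective
v = 24 * y[0] + 60 * y[1]              # dual objective
print(z, v)  # equal objectives certify optimality of both
```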

Applications

Duality appears in many linear and nonlinear optimization models. In many of these applications we can solve the dual when solving the primal is more difficult. If, for example, there are many more constraints than variables (m >> n), it may be easier to solve the dual. A few of these applications are presented and described in more detail below. [4]

Economics

• Duality can be used when calculating the optimal product mix to yield the highest profit. For instance, the primal may maximize the profit, and taking the dual reframes the problem as minimizing the cost. By transitioning the problem to setting the raw material prices, one can determine the price the owner is willing to accept for the raw material. These dual variables are related to the values of the resources available and are often referred to as resource shadow prices. [9][10]
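The shadow-price interpretation can be illustrated with the worked example from the previous section: the dual variable for a resource tells how much the maximal profit rises if that resource's limit is raised by one unit. A sketch assuming SciPy is available:

```python
from scipy.optimize import linprog

c = [-6, -14, -13]              # negate: linprog minimizes
A = [[0.5, 2, 1], [1, 2, 4]]

base = linprog(c=c, A_ub=A, b_ub=[24, 60], method="highs")
bumped = linprog(c=c, A_ub=A, b_ub=[25, 60], method="highs")  # one more unit of resource 1

increase = -bumped.fun - (-base.fun)
print(increase)  # the profit increase equals y1, the shadow price of resource 1
```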

Structural Design

• An example is a structural design model, in which the tensions on the beams are the primal variables and the displacements on the nodes are the dual variables. [1]

Electrical Networks

• When modeling electrical networks, the current flows can be modeled as the primal variables and the voltage differences as the dual variables. [1]

Game Theory

• Duality theory is closely related to game theory. Game theory is an approach used to deal with multi-person decision problems: the game is the decision-making problem, and the players are the decision-makers. Each player chooses a strategy or an action to be taken, and each player receives a payoff once all players have selected their strategies. The zero-sum game that von Neumann observed to be equivalent to linear programming is one in which the gain of one player is exactly the loss of another. This general situation has characteristics analogous to duality. [7]
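The value of a zero-sum matrix game can in fact be computed as an LP. As a sketch (assuming SciPy; the game, "matching pennies," is an illustrative example not from the source), the row player maximizes a guaranteed value w over mixed strategies:

```python
from scipy.optimize import linprog

# Matching pennies payoff matrix for the row player:
#   M = [[1, -1], [-1, 1]],  M[i][j] = payoff when row plays i, column plays j.
# Row player: choose a mixed strategy (x1, x2) to
#   max w  s.t.  sum_i x_i * M[i][j] >= w for each column j,  x1 + x2 = 1, x >= 0.
# Variables are [x1, x2, w]; linprog minimizes, so the objective is -w.
res = linprog(
    c=[0, 0, -1],
    A_ub=[[-1, 1, 1],    # column 1:  -(x1 - x2) + w <= 0
          [1, -1, 1]],   # column 2:  -(-x1 + x2) + w <= 0
    b_ub=[0, 0],
    A_eq=[[1, 1, 0]], b_eq=[1],
    bounds=[(0, None), (0, None), (None, None)],  # w is free
    method="highs",
)
x1, x2, w = res.x
print(x1, x2, w)  # optimal mixed strategy (0.5, 0.5), game value 0
```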

Support Vector Machines

• Support Vector Machines (SVM) are a popular machine learning algorithm for classification. The concept can be broken down into linear SVM and non-linear SVM, and involves further ideas such as hyperplanes, functional and geometric margins, and quadratic programming [11]. In relation to duality, the primal problem suffices for solving the linear SVM, but it is not useful for the non-linear SVM; there, duality is needed, and the dual problem is solved instead [12].

Conclusion

The theory of duality has brought another viewpoint to every linear and nonlinear programming optimization problem since 1948. [6] The technique can be applied to situations such as economic constraints, resource allocation, and bounding optimization problems. By developing an understanding of the dual of a linear program, one can gain many important insights into nearly any optimization problem.

References

1. Freund, Robert M. (2004, March). Duality Theory of Constrained Optimization. Massachusetts Institute of Technology. https://ocw.mit.edu/courses/sloan-school-of-management/15-084j-nonlinear-programming-spring-2004/lecture-notes/lec18_duality_thy.pdf
2. Bradley, Hax, and Magnanti. (1977). Applied Mathematical Programming. Addison-Wesley. http://web.mit.edu/15.053/www/AMP-Chapter-04.pdf
3. Chvatal, Vasek. (1977). The Dual Simplex Method. W.H. Freeman and Co. http://cgm.cs.mcgill.ca/~avis/courses/567/notes/ch10.pdf
4. Professor You Lecture Slides (Linear Programming, Duality)
5. KKT Conditions and Duality. (2018, February 18). Dartmouth College. https://math.dartmouth.edu/~m126w18/pdf/part4.pdf
6. Duality (Optimization). (2020, July 12). In Wikipedia. https://en.wikipedia.org/wiki/Duality_(optimization)#:~:text=In%20mathematical%20optimization%20theory%2C%20duality,the%20primal%20(minimization)%20problem.
7. Stolee, Derrick. (2013). Game Theory and Duality. University of Illinois at Urbana-Champaign. https://faculty.math.illinois.edu/~stolee/Teaching/13-482/gametheory.pdf
8. Von Neumann, John. (2014). John Von Neumann Biography. Britannica.
9. Alaouze, C.M. (1996). Shadow Prices in Linear Programming Problems. New South Wales - School of Economics. https://ideas.repec.org/p/fth/nesowa/96-18.html#:~:text=In%20linear%20programming%20problems%20the,is%20increased%20by%20one%20unit.
10. Ella, Cathy, McGraph, Nooz, Stan and Tom. (2013, February 26). Sensitivity Analysis and Shadow Prices. Massachusetts Institute of Technology. https://ocw.mit.edu/courses/sloan-school-of-management/15-053-optimization-methods-in-management-science-spring-2013/lecture-notes/MIT15_053S13_lec6.pdf
11. Jana, Abhisek. (2020, April). Support Vector Machines for Beginners - Linear SVM. http://www.adeveloperdiary.com/data-science/machine-learning/support-vector-machines-for-beginners-linear-svm/
12. Jana, Abhisek. (2020, April). Support Vector Machines for Beginners - Duality Problem. https://www.adeveloperdiary.com/data-science/machine-learning/support-vector-machines-for-beginners-duality-problem/