Interior-point method for LP

Authors: Tomas Lopez Lauterio, Rohit Thakur and Sunil Shenoy (SysEn 5800 Fall 2020) <br>
Steward: Dr. Fengqi You and Akshay Ajagekar<br>
 


== Introduction ==
Linear programming problems seek to optimize linear functions given linear constraints. There are several applications of linear programming, including inventory control, production scheduling, transportation optimization and efficient manufacturing processes. The simplex method has been a very popular method to solve these linear programming problems and has served these industries well for a long time. But over the past 40 years there have been significant advances in algorithms that can solve these types of problems more efficiently, especially when the problems become very large in terms of variables and constraints.<ref> "''Practical Optimization - Algorithms and Engineering Applications''" by Andreas Antoniou and Wu-Sheng Lu, ISBN-10: 0-387-71106-6 </ref> <ref> "''Linear Programming - Foundations and Extensions - 3<sup>rd</sup> edition''" by Robert J Vanderbei, ISBN-13: 978-0-387-74387-5. </ref> In the early 1980s, Karmarkar (1984) <ref> N. Karmarkar, "A new polynomial-time algorithm for linear programming", Combinatorica, Vol. 4, No. 4, 1984, pp. 373-395.</ref> published a paper introducing interior point methods to solve linear programming problems. A simple way to look at the difference between the simplex method and an interior point method is that the simplex method moves along the edges of a polytope towards a vertex having a lower value of the cost function, whereas an interior point method begins its iterations inside the polytope and moves towards the lowest-cost vertex without regard for edges. This approach reduces the number of iterations needed to reach that vertex, thereby reducing the computational time needed to solve the problem.<br><br>


=== Lagrange Function ===
Before getting too deep into the description of the interior point method, there are a few concepts that are helpful to understand. The first key concept is the Lagrange function. The Lagrange function incorporates the constraints into a modified objective function in such a way that a constrained minimizer <math> (x^{*}) </math> is connected to an unconstrained minimizer <math> \left \{x^{*},\lambda ^{*} \right \} </math> for the augmented objective function <math> L\left ( x , \lambda  \right ) </math>, where the augmentation is achieved with <math> p </math> Lagrange multipliers. <ref> "''Computational Experience with a Primal-Dual Interior Point Method for Linear Programming''" by Irvin Lustig, Roy Marsten, David Shanno </ref><ref> "''Practical Optimization - Algorithms and Engineering Applications''" by Andreas Antoniou and Wu-Sheng Lu, ISBN-10: 0-387-71106-6 </ref> <br>
To illustrate this point, consider a simple optimization problem:<br>
minimize <math> f\left ( x \right ) </math><br>
subject to: <math> A \cdot x = b </math><br>
where <math> A \, \in \, R^{p\, \times \, n} </math> is assumed to have full row rank.
The Lagrange function can be laid out as:<br>
 
<math>L(x, \lambda ) = f(x) - \sum_{i=1}^{p}\lambda _{i}\cdot a_{i}(x) </math> <br>
where the <math> \lambda </math> introduced in this equation is called the Lagrange multiplier. <br><br>
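For a brief illustration (an example added here, not from the original text): consider minimizing <math> f(x) = x_{1}^{2} + x_{2}^{2} </math> subject to <math> x_{1} + x_{2} - 1 = 0 </math>. The Lagrange function is <math> L(x,\lambda ) = x_{1}^{2} + x_{2}^{2} - \lambda \cdot \left ( x_{1} + x_{2} - 1 \right ) </math>. Setting the partial derivatives with respect to <math> x_{1} </math> and <math> x_{2} </math> to zero gives <math> 2x_{1} = 2x_{2} = \lambda </math>, and the constraint then yields the constrained minimizer <math> x^{*} = \left ( \tfrac{1}{2}, \tfrac{1}{2} \right ) </math> with <math> \lambda ^{*} = 1 </math>. <br><br>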
=== Newton's Method ===
Another key concept is solving linear and non-linear equations using Newton's method.
Assume an unconstrained minimization problem of the form: <br>
minimize <math> g\left ( x \right ) </math>, where <math> g\left ( x \right ) </math> is a real-valued function of <math> n </math> variables. <br>
A local minimum for this problem will satisfy the following system of equations:<br>
<math>\left [ \frac{\partial g(x)}{\partial x_{1}}  ..... \frac{\partial g(x)}{\partial x_{n}}\right ]^{T} = \left [ 0 ... 0 \right ]</math> <br>
The Newton iteration looks like:<br>
<math> x^{k+1} = x^{k} - \left [ H\left ( x^{k} \right ) \right ]^{-1}\cdot \nabla g\left ( x^{k} \right ) </math> <br>
where <math> H\left ( x^{k} \right ) </math> is the Hessian matrix of <math> g </math> at <math> x^{k} </math>.<br>
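As a minimal sketch of this iteration (added here for illustration; the callables <code>grad</code> and <code>hess</code> are assumed to be supplied by the user), in Python:

<syntaxhighlight lang="python">
import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-8, max_iter=50):
    """Drive the gradient of g to zero with Newton steps:
    x_{k+1} = x_k - H(x_k)^{-1} * grad(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:          # stationary point reached
            break
        x = x - np.linalg.solve(hess(x), g)  # Newton step
    return x

# Example: g(x) = (x1 - 3)^2 + (x2 + 1)^2 has its minimum at (3, -1).
grad = lambda x: np.array([2 * (x[0] - 3), 2 * (x[1] + 1)])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 2.0]])
print(newton_minimize(grad, hess, [0.0, 0.0]))  # -> [ 3. -1.]
</syntaxhighlight>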

== Theory and algorithm ==
[[File:Visualization.png|685x685px|Visualization of Central Path method in Interior point|thumb]]


Given a linear programming problem with constraint equations that have inequality terms, the inequality term is typically replaced with an equality term using slack variables. The new formulation can be discontinuous in nature, and to replace the discontinuous function with a smoother function, a logarithmic form of this reformulation is utilized. This nonlinear objective function is called the "''Logarithmic Barrier Function''". The process starts with the formation of a primal-dual pair of linear programs and then uses the "''Lagrangian function''" form on the "''Barrier function''" to convert the constrained problems into unconstrained problems. These unconstrained problems are then solved using Newton's method as shown above.<br>


=== Problem Formulation ===
Consider the primal-dual pair of problems below:<br>
('''Primal Problem formulation''') <br>
→ minimize <math> c^{T}x </math> <br>
Subject to: <math> Ax = b </math> and <math> x \geq 0 </math> <br>
('''Dual Problem formulation''') <br>
→ maximize <math> b^{T}y </math> <br>
Subject to: <math> A^{T}y + \lambda  = c </math> and <math> \lambda \geq 0 </math> <br>
The <math> \lambda </math> vector introduced here represents the slack variables.<br>

The Lagrangian functional form is used to configure two equations using the "''Logarithmic Barrier Function''" for both the primal and dual forms mentioned above:<br>
Lagrangian equation for the Primal using the Logarithmic Barrier Function: <math> L_{p}(x,y) = c^{T}\cdot x - \mu \cdot \sum_{j=1}^{n}\log(x_{j}) - y^{T}\cdot (Ax - b) </math> <br>
Lagrangian equation for the Dual using the Logarithmic Barrier Function: <math> L_{d}(x,y,\lambda ) = b^{T}\cdot y + \mu \cdot \sum_{j=1}^{n}\log(\lambda _{j}) - x^{T}\cdot (A^{T}y +\lambda - c) </math> <br>


Taking the partial derivatives of L<sub>p</sub> and L<sub>d</sub> with respect to the variables <math> x </math>, <math> \lambda </math> and <math> y </math>, and forcing these terms to zero, we get the following equations: <br>
<math> Ax = b </math> and <math> x \geq 0 </math> <br>
<math> A^{T}y + \lambda  = c </math> and <math> \lambda \geq 0 </math> <br>
<math> x_{j}\cdot \lambda _{j} = \mu </math> for ''j'' = 1,2,.......''n'' <br>


where <math> \mu </math> is a strictly positive scalar parameter. For each <math> \mu > 0 </math>, the vectors in the set <math> \left \{ x\left ( \mu  \right ), y\left ( \mu  \right ) , \lambda \left ( \mu  \right )\right \} </math> satisfying the above equations can be viewed as sets of points in <math> R^{n} </math>, <math> R^{p} </math> and <math> R^{n} </math> respectively, such that when <math> \mu </math> varies, the corresponding points form a set of trajectories called the ''"Central Path"''. The central path lies in the ''"interior"'' of the feasible regions. A sample illustration of the ''"Central Path"'' method is shown in the figure to the right. Starting from a positive value of <math> \mu </math>, the optimal point is approached as <math> \mu </math> goes to 0. <br>


Let Diagonal[...] denote a diagonal matrix with the listed elements on its diagonal. Define the following:<br>
'''X''' = Diagonal [<math> x_{1}^{0}, .... x_{n}^{0} </math>]<br>
<math> \lambda </math> = Diagonal [<math> \lambda _{1}^{0}, .... \lambda _{n}^{0} </math>]<br>
'''e<sup>T</sup>''' = (1 .....1), the vector of all 1's.<br>
Using these newly defined terms, the equation above can be written as: <br>
<math> X\cdot \lambda \cdot e  = \mu \cdot e  </math> <br>


=== Iterations using Newton's Method ===
Newton's iterative method is now employed to solve the following equations: <br>
<math> Ax - b = 0 </math> <br>
<math> A^{T}y + \lambda - c = 0 </math> <br>
<math> X\cdot \lambda \cdot e  - \mu \cdot e  = 0</math> <br>
Start by defining a point <math> \left ( x^{0},y^{0},\lambda ^{0} \right ) </math> that lies within the feasible region, such that <math> x^{0}> 0 </math> and <math> \lambda ^{0}> 0 </math>.
Also define two residual vectors, for the primal and dual equations respectively: <br>
<math> \delta _{p} = b - A\cdot x^{0} </math> <br>
<math> \delta _{d} = c - A^{T}\cdot y^{0} - \lambda ^{0} </math> <br>


Applying Newton's method to solve the above equations gives the following system: <br>
<math> \begin{bmatrix}
A & 0 & 0\\ 
0 & A^{T} & I\\ 
\lambda & 0 & X
\end{bmatrix}\cdot \begin{bmatrix}
\delta _{x}\\ 
\delta _{y}\\ 
\delta _{\lambda }
\end{bmatrix} = \begin{bmatrix}
\delta _{p}\\ 
\delta _{d}\\ 
\mu \cdot e - X\cdot \lambda \cdot e
\end{bmatrix}
</math><br>
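For concreteness, here is a small NumPy sketch (added for illustration, assuming small dense data; the function name <code>newton_step</code> is a placeholder) that assembles and solves this full Newton system directly:

<syntaxhighlight lang="python">
import numpy as np

def newton_step(A, x, y, lam, b, c, mu):
    """Solve the full primal-dual Newton system for the search
    directions (delta_x, delta_y, delta_lambda).  A sketch for small
    dense problems; practical codes solve a reduced system instead."""
    p, n = A.shape
    K = np.block([
        [A,                np.zeros((p, p)), np.zeros((p, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)       ],
        [np.diag(lam),     np.zeros((n, p)), np.diag(x)      ],
    ])
    rhs = np.concatenate([
        b - A @ x,                  # primal residual  delta_p
        c - A.T @ y - lam,          # dual residual    delta_d
        mu * np.ones(n) - x * lam,  # centering term   mu*e - X*lambda*e
    ])
    d = np.linalg.solve(K, rhs)
    return d[:n], d[n:n + p], d[n + p:]
</syntaxhighlight>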
So a single iteration of Newton's method involves the following equations. For each iteration, we solve for the next value of <math> x^{k+1}, y^{k+1}, \lambda ^{k+1} </math>: <br>
<math> (A\lambda ^{-1}XA^{T})\delta _{y} = b- \mu A\lambda^{-1}e + A\lambda ^{-1}X\delta _{d} </math> <br>
<math> \delta _{\lambda} = \delta _{d} - A^{T}\delta _{y} </math> <br>
<math> \delta _{x} = \lambda ^{-1}\left [ \mu \cdot e - X\lambda e - X\delta _{\lambda}\right ] </math> <br>
<math> \alpha _{p} = \min\left \{ \frac{-x_{j}}{\delta _{x_{j}}} \right \} </math> for <math> \delta x_{j} < 0 </math> <br>
<math> \alpha _{d} = \min\left \{ \frac{-\lambda_{j}}{\delta _{\lambda_{j}}} \right \} </math> for <math> \delta \lambda_{j} < 0 </math> <br><br>


The values of the following variables for the next iteration <math> (k+1) </math> are determined by: <br>
<math> x^{k+1} = x^{k} + \alpha _{p}\cdot \delta _{x} </math> <br>
<math> y^{k+1} = y^{k} + \alpha _{d}\cdot \delta _{y} </math> <br>
<math> \lambda^{k+1} = \lambda^{k} + \alpha _{d}\cdot \delta _{\lambda} </math> <br>


The quantities <math> \alpha _{p} </math> and <math> \alpha _{d} </math> are positive with <math> 0\leq \alpha _{p},\alpha _{d}\leq 1 </math>. <br>
After each iteration of Newton's method, we assess the duality gap, given by the expression below, and compare it against a small value <big>ε</big>: <br>
<math> \frac{c^{T}x^{k}-b^{T}y^{k}}{1+\left | b^{T}y^{k} \right |} \leq \varepsilon </math> <br>
The value of <big>ε</big> can be chosen to be a small number, for example 10<sup>-6</sup>, which is essentially the permissible duality gap for the problem. <br>
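Putting the pieces together, the sketch below (an illustration added to this article, not the original authors' code; the fixed shrink factor for <math> \mu </math> and the 0.99 step damping are common heuristics assumed here) runs the reduced Newton step, the ratio tests, the variable updates and the duality-gap check:

<syntaxhighlight lang="python">
import numpy as np

def pdip_lp(A, b, c, x, y, lam, eps=1e-6, max_iter=100):
    """Primal-dual interior point iteration for min c'x s.t. Ax = b, x >= 0.
    (x, y, lam) must start strictly interior: x > 0 and lam > 0."""
    n = len(x)
    e = np.ones(n)
    mu = (x @ lam) / n                        # initial centering parameter
    for _ in range(max_iter):
        gap = (c @ x - b @ y) / (1 + abs(b @ y))
        if gap <= eps:                        # permissible duality gap reached
            break
        d_d = c - A.T @ y - lam               # dual residual delta_d
        Li = 1.0 / lam                        # diagonal of Lambda^{-1}
        M = (A * (Li * x)) @ A.T              # A Lambda^{-1} X A^T
        rhs = b - mu * (A @ Li) + A @ (Li * x * d_d)
        d_y = np.linalg.solve(M, rhs)         # solve for delta_y
        d_lam = d_d - A.T @ d_y               # delta_lambda
        d_x = Li * (mu * e - x * lam - x * d_lam)  # delta_x
        # Ratio tests, damped so x and lam stay strictly positive.
        neg = d_x < 0
        a_p = min(1.0, 0.99 * np.min(-x[neg] / d_x[neg])) if neg.any() else 1.0
        neg = d_lam < 0
        a_d = min(1.0, 0.99 * np.min(-lam[neg] / d_lam[neg])) if neg.any() else 1.0
        x, y, lam = x + a_p * d_x, y + a_d * d_y, lam + a_d * d_lam
        mu *= 0.1                             # drive mu toward zero
    return x, y, lam
</syntaxhighlight>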
 
 
 
 


== Numerical Example ==


Maximize<br>
<math> 3X_{1} + 3X_{2} </math><br>

such that<br>
<math> X_{1} + X_{2} \leq 4, </math><br>
<math> X_{1} \geq 0, </math><br>
<math> X_{2} \geq 0 </math><br>


The barrier form of the above primal problem is written below:




<math> P(X,\mu) = 3X_{1} + 3X_{2} + \mu \cdot \log(4-X_{1} - X_{2}) + \mu \cdot \log(X_{1}) + \mu \cdot \log(X_{2})</math><br>




The barrier function is always concave and, since the problem is a maximization problem, there will be one and only one solution. In order to find the maximum point on the concave function, we take its derivative and set it to zero.


Taking the partial derivatives and setting them to zero, we get the equations below:


<math> \frac{\partial P(X,\mu)}{\partial X_{1}} = 3 - \frac{\mu}{(4-X_{1}-X_{2})} + \frac{\mu}{X_{1}} = 0</math> <br>


<math> \frac{\partial P(X,\mu)}{\partial X_{2}} = 3 - \frac{\mu}{(4-X_{1}-X_{2})} + \frac{\mu}{X_{2}} = 0</math> <br>


Subtracting the second equation from the first gives <math> \frac{\mu}{X_{1}} = \frac{\mu}{X_{2}} </math>, from which the following can be derived: <math> X_{1} = X_{2}</math> <br>


Substituting <math> X_{1} = X_{2} </math> into the first equation, the following can be concluded:


<math> 3 - \frac{\mu}{(4-2X_{1})} + \frac{\mu}{X_{1}} = 0 </math><br>
The above equation can be converted into a quadratic equation as below:

<math> 6X_{1}^{2} + \left ( 3\mu - 12 \right )X_{1} - 4\mu = 0 </math><br>

The solution to the above quadratic equation can be written as below:

<math> X_{1} = \frac{\left ( 12 - 3\mu \right ) \pm \sqrt{\left ( 12 - 3\mu \right )^{2} + 96\mu}}{12} </math><br>




Taking only the positive value of <math> X_{1} </math> and <math> X_{2} </math> from the above equation, since <math> X_{1} \geq 0 </math> and <math> X_{2} \geq 0</math>, we can solve <math>X_{1}</math> and <math>X_{2}</math> for different values of <math>\mu</math>. The outcome of such iterations is listed in the table below.


{| class="wikitable"
{| class="wikitable"
|+ Objective and Barrier Function w.r.t  X<sub>1</sub>, X<sub>2</sub> and μ
|+ Objective & Barrier Function w.r.t  <math>X_{1}</math>, <math>X_{2}</math> and <math>\mu</math>
|-
|-
! μ !! X<sub>1</sub> !! X<sub>2</sub> !! P(X, μ) !! f(x)
! <math>\mu</math> !! <math>X_{1}</math> !! <math>X_{2}</math> !! <math>P(X, \mu)</math> !! <math>f(x)</math>
|-
|-
| 0 || 2 || 2 || 12 || 12
| 0 || 2 || 2 || 12 || 12
|-
|-
| 0.01 || 1.998 || 1.998 || 11.947 || 11.990
| 0.01 || 1.998 || 1.998 || 11.947 || 11.990  
|-
|-
| 0.1 || 1.984 || 1.984 || 11.697 || 11.902
| 0.1 || 1.984 || 1.984 || 11.697 || 11.902  
|-
|-
| 1 || 1.859 || 1.859 || 11.128 || 11.152
| 1 || 1.859 || 1.859 || 11.128 || 11.152  
|-
|-
| 10|| 1.486 || 1.486 || 17.114 || 8.916
| 10 || 1.486 || 1.486 || 17.114 || 8.916  
|-
|-
| 100|| 1.351 || 1.351 || 94.357 || 8.105
| 100 || 1.351 || 1.351 || 94.357 || 8.105  
|-
|-
| 1000|| 1.335 || 1.335|| 871.052 || 8.011
| 1000 || 1.335 || 1.335 || 871.052 || 8.011  
|}
|}
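The table entries can be reproduced with a short script (added for illustration; it simply evaluates the positive root derived above and the two objective values):

<syntaxhighlight lang="python">
import numpy as np

# Positive root of 6*X1^2 + (3*mu - 12)*X1 - 4*mu = 0, then the
# objective f(x) = 3*X1 + 3*X2 and the barrier function P(X, mu).
for mu in [0, 0.01, 0.1, 1, 10, 100, 1000]:
    x1 = ((12 - 3 * mu) + np.sqrt((12 - 3 * mu) ** 2 + 96 * mu)) / 12
    x2 = x1                               # since X1 = X2
    f = 3 * x1 + 3 * x2
    if mu == 0:
        P = f                             # barrier term vanishes at mu = 0
    else:
        P = f + mu * (np.log(4 - x1 - x2) + np.log(x1) + np.log(x2))
    print(f"mu={mu:<6}  X1=X2={x1:.3f}  P={P:.3f}  f={f:.3f}")
</syntaxhighlight>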


From the above table it can be seen that:
# as <math>\mu</math> gets close to zero, the barrier function becomes tight and approaches the original objective function.
# at <math>\mu=0</math>, the optimal solution is achieved.




Summary:
Maximum value of the objective function <math>=12</math> <br>
Optimal points <math>X_{1} = 2 </math> and <math>X_{2} = 2</math>
 
Newton's method can also be applied to solve linear programming problems, as indicated in the "Theory and algorithm" section above. For a linear programming problem like the one in this "Numerical Example" section, the resulting equations are quadratic, as obtained above, and the method will converge in one iteration.


== Applications ==
Primal-dual interior-point (PDIP) methods are commonly used in optimal power flow (OPF), where the goal is to maximize user utility and minimize operational cost while satisfying operational and physical constraints. The solution to the OPF needs to be available to grid operators within minutes or seconds due to changes and fluctuations in loads during power generation. Newton-based primal-dual interior point methods can achieve fast convergence in this OPF optimization problem. <ref> A. Minot, Y. M. Lu and N. Li, "A parallel primal-dual interior-point method for DC optimal power flow," ''2016 Power Systems Computation Conference (PSCC)'', Genoa, 2016, pp. 1-7, doi: 10.1109/PSCC.2016.7540826. </ref>
 


Another application of PDIP methods is the minimization of losses and costs in generation and transmission in hydroelectric power systems. <ref> L. M. Ramos Carvalho and A. R. Leite Oliveira, "Primal-Dual Interior Point Method Applied to the Short Term Hydroelectric Scheduling Including a Perturbing Parameter," in ''IEEE Latin America Transactions'', vol. 7, no. 5, pp. 533-538, Sept. 2009, doi: 10.1109/TLA.2009.5361190. </ref>


PDIP methods are commonly used in image processing. One of these applications is image deblurring; in this case the constrained deblurring problem is formulated as a primal-dual problem, which is then solved using a semi-smooth Newton's method. <ref> D. Krishnan, P. Lin and A. M. Yip, "A Primal-Dual Active-Set Method for Non-Negativity Constrained Total Variation Deblurring Problems," in ''IEEE Transactions on Image Processing'', vol. 16, no. 11, pp. 2766-2777, Nov. 2007, doi: 10.1109/TIP.2007.908079. </ref>


PDIP methods can also be utilized to obtain a general formula for the shape derivative of the potential energy describing the energy release rate for curvilinear cracks. Problems on cracks and their evolution have important applications in engineering and the mechanical sciences. <ref> V. A. Kovtunenko, "Primal-dual methods of shape sensitivity analysis for curvilinear cracks with nonpenetration," ''IMA Journal of Applied Mathematics'', Volume 71, Issue 5, October 2006, pp. 635-657. </ref>


== Conclusion ==
The primal-dual interior point method is a good alternative to the simplex method for solving linear programming problems. The primal-dual method shows superior performance and convergence on many large and complex problems. Simplex codes are faster on small to medium problems, while primal-dual interior point methods are much faster on large problems.

== References ==
<references />
* "''Practical Optimization - Algorithms and Engineering Applications''" by Andreas Antoniou and Wu-Sheng Lu, ISBN-10: 0-387-71106-6
* "''Linear Programming - Foundations and Extensions - 3<sup>rd</sup> edition''" by Robert J Vanderbei, ISBN-113: 978-0-387-74387-5
* "''Computational Experience with Primal-Dual Interior Point Method for Linear Programming''" by Irvin Lustig, Roy Marsten, David Shanno
* 1) A. Minot, Y. M. Lu and N. Li, "A parallel primal-dual interior-point method for DC optimal power flow," ''2016 Power Systems Computation Conference (PSCC)'', Genoa, 2016, pp. 1-7, doi: 10.1109/PSCC.2016.7540826.
* 2) L. M. Ramos Carvalho and A. R. Leite Oliveira, "Primal-Dual Interior Point Method Applied to the Short Term Hydroelectric Scheduling Including a Perturbing Parameter," in ''IEEE Latin America Transactions'', vol. 7, no. 5, pp. 533-538, Sept. 2009, doi: 10.1109/TLA.2009.5361190.
* 3) D. Krishnan, P. Lin and A. M. Yip, "A Primal-Dual Active-Set Method for Non-Negativity Constrained Total Variation Deblurring Problems," in IEEE Transactions on Image Processing, vol. 16, no. 11, pp. 2766-2777, Nov. 2007, doi: 10.1109/TIP.2007.908079.
* 4) V. A. Kovtunenko, Primal–dual methods of shape sensitivity analysis for curvilinear cracks with nonpenetration, ''IMA Journal of Applied Mathematics'', Volume 71, Issue 5, October 2006, Pages 635–657,
