Convex generalized disjunctive programming (GDP)

From Cornell University Computational Optimization Open Textbook - Optimization Wiki
Revision as of 16:55, 21 November 2020

Edited By: Nicholas Schafhauser, Blerand Qeriqi, Ryan Cuppernull

Introduction

Theory

Methodology

The two most common ways of reformulating a GDP problem into an MINLP are the Big-M (BM) method and the Hull Reformulation (HR). BM is the simpler of the two, while HR yields a tighter continuous relaxation (a smaller relaxed feasible region), which typically leads to faster solution times. (https://kilthub.cmu.edu/articles/A_hierarchy_of_relaxations_for_nonlinear_convex_generalized_disjunctive_programming/6466535)
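To illustrate the difference in relaxation tightness, consider a hypothetical one-dimensional disjunction (not taken from this article): <math>[x \leq 2] \vee [x \geq 8]</math> with <math>x \in [0,10]</math>. The sketch below computes the interval of <math>x</math> values each relaxation permits when the binary variable is relaxed to a fractional value; the hull interval lies strictly inside the Big-M interval.

```python
# Hypothetical 1-D disjunction [x <= 2] OR [x >= 8], with x in [0, 10].
# Compare the x-intervals permitted by the Big-M and hull relaxations
# when the binary y1 (weight on the first disjunct) is relaxed to 0.5.

M = 10.0          # a valid Big-M constant given the bounds on x
y1 = 0.5          # fractional value; y2 = 1 - y1 (exactly one disjunct holds)
y2 = 1.0 - y1

# Big-M relaxation:  x <= 2 + M*(1 - y1)   and   x >= 8 - M*(1 - y2)
bigm_lo = 8.0 - M * (1.0 - y2)
bigm_hi = 2.0 + M * (1.0 - y1)

# Hull relaxation: disaggregate x = x1 + x2 with
#   0*y1 <= x1 <= 2*y1   and   8*y2 <= x2 <= 10*y2
hull_lo = 0.0 * y1 + 8.0 * y2
hull_hi = 2.0 * y1 + 10.0 * y2

print(f"Big-M interval at y1=0.5: [{bigm_lo}, {bigm_hi}]")  # [3.0, 7.0]
print(f"Hull  interval at y1=0.5: [{hull_lo}, {hull_hi}]")  # [4.0, 6.0]
```

At every fractional value of <math>y_1</math> the hull interval is contained in the Big-M interval, which is what "tighter relaxation" means in practice: the branch-and-bound tree explores less fractional space.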

Below, the GDP problem from the Theory section is reformulated into an MINLP using the Big-M method.

<math>\begin{align} \min \quad & z = f(x)\\ \text{s.t.} \quad & g(x) \leq 0\\ & r_{ki}(x) \leq M_{ki}(1 - y_{ki})\\ & m_i \geq 0, \quad \forall i \in I\\ & y_j \in \{0,1\}, \quad \forall j \in J \end{align}</math>
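The constraint <math>r_{ki}(x) \leq M_{ki}(1 - y_{ki})</math> is the heart of the reformulation: when <math>y_{ki} = 1</math> the right-hand side is zero and the disjunct's constraint is enforced, and when <math>y_{ki} = 0</math> the right-hand side is <math>M_{ki}</math>, large enough that the constraint is redundant. The sketch below checks this behavior on a hypothetical linear disjunct (the function <math>r(x) = x - 4</math> and the bound <math>M = 6</math> are illustrative, not from the article).

```python
# Behavior of a single Big-M constraint r(x) <= M*(1 - y) for a
# hypothetical linear disjunct r(x) = x - 4, with x in [0, 10].
# A valid M must satisfy M >= max r(x) over the box, so M = 6 works.

def bigm_satisfied(x, y, M=6.0):
    """True when the Big-M form of r(x) <= 0 holds at the point (x, y)."""
    r = x - 4.0
    return r <= M * (1.0 - y)

# y = 1 activates the disjunct: r(x) <= 0, i.e. x <= 4.
print(bigm_satisfied(3.0, 1))   # True  (3 <= 4)
print(bigm_satisfied(5.0, 1))   # False (5 > 4)

# y = 0 relaxes it: r(x) <= M holds for every x in [0, 10].
print(bigm_satisfied(9.0, 0))   # True
```

Choosing <math>M_{ki}</math> as small as possible while remaining valid is what makes the Big-M relaxation as tight as this method allows; an unnecessarily large <math>M_{ki}</math> weakens the relaxation.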

Numerical Example

Applications

Conclusion

References