Outer-approximation (OA)
Author: Yousef Aloufi (CHEME 6800 Fall 2021)
Introduction
Mixed-integer nonlinear programming (MINLP) combines the modeling capabilities of mixed-integer linear programming (MILP) and nonlinear programming (NLP) into a powerful modeling framework. The integer variables make it possible to incorporate discrete decisions and logical relations into an optimization problem, and the combination of linear and nonlinear functions makes it possible to describe a wide variety of phenomena accurately. [1]
Although every MINLP is a non-convex optimization problem because of the discreteness of some variables, the term convex MINLP denotes problems in which the objective function and the feasible region of the continuous relaxation are convex. [2] Convex MINLPs are an important class of problems, since convexity can be exploited to derive efficient decomposition algorithms that solve a sequence of subproblems derived from the original MINLP. Such decomposition methods include Branch & Bound, Generalized Benders Decomposition, Outer Approximation, Partial Surrogate Cuts, Extended Cutting Plane, Feasibility Pump, and the center-cut method. [3]
MINLP problems can be solved with the branch and bound method. An important drawback of branch and bound for MINLP, however, is that the NLP subproblems can be expensive to solve, since they cannot be readily updated from node to node as in the MILP case. To reduce the computational expense of solving many NLP subproblems, one can resort to another method: Outer-Approximation.[4]
Theory
The Outer-Approximation (OA) algorithm was proposed by Duran and Grossmann in 1986 to solve MINLP problems.[5] The basic idea of the OA method is to solve an alternating sequence of nonlinear programming subproblems and relaxed mixed-integer linear master problems.[5] In a minimization problem, for example, each NLP subproblem arises by fixing the binary variables and optimizing over the continuous variables; its solution provides an upper bound to the original MINLP. The MILP master problem provides a global linear approximation to the MINLP in which the objective function is underestimated and the nonlinear feasible region is overestimated; the linear approximations to the nonlinear equations are relaxed as inequalities. The master problem accumulates the linear approximations of previous iterations so as to produce an increasingly better approximation of the original MINLP. At each iteration the master problem predicts new values of the binary variables and a lower bound to the objective function. The search terminates when no lower bound can be found below the current best upper bound, at which point the MILP master problem becomes infeasible. [6]
Algorithm
Consider the following MINLP formulation: [6]

    min   Z = c^T y + f(x)
    s.t.  h(x) = 0
          g(x) ≤ 0
          Ax = a
          By + Cx ≤ d
          Ey ≤ e
          x ∈ X = {x | x ∈ R^n, x^L ≤ x ≤ x^U}
          y ∈ {0,1}^t
The specific steps of the OA algorithm, assuming feasible solutions for the NLP subproblems, are as follows:[6]
Step 1: Select an initial value of the binary variables y^1. Set the iteration counter K = 1. Initialize the lower bound Z_L = -∞ and the upper bound Z_U = +∞.
Step 2: Solve the following NLP subproblem for the fixed value y^K, to obtain the solution x^K and the multipliers λ^K for the equations h(x) = 0:

    Z(y^K) = min  c^T y^K + f(x)
    s.t.  h(x) = 0
          g(x) ≤ 0
          Ax = a
          Cx ≤ d - By^K
          x ∈ X
Step 3: Update the bounds and prepare the information for the master problem:
a. Update the current upper bound: if Z(y^K) < Z_U, set Z_U = Z(y^K), y* = y^K, x* = x^K.
b. Derive the integer cut, IC^K, to exclude the binary choice y^K from subsequent iterations:

    IC^K = { y : Σ_{i∈B^K} y_i - Σ_{i∈N^K} y_i ≤ |B^K| - 1 },  where B^K = {i | y_i^K = 1}, N^K = {i | y_i^K = 0}
c. Define the diagonal direction matrix T^K for relaxing the equations h(x) = 0 into inequalities, based on the sign of the multipliers λ^K. The diagonal elements are given by:

    t_jj^K = -1 if λ_j^K < 0,  +1 if λ_j^K > 0,  0 if λ_j^K = 0,   j = 1, 2, ..., m
d. Obtain the following linear outer-approximations of the nonlinear terms f(x), h(x), g(x) by performing first-order linearizations at the point x^K:

    (w^K)^T x - w_c^K = f(x^K) + ∇f(x^K)^T (x - x^K)
    R^K x - r^K = h(x^K) + ∇h(x^K)^T (x - x^K)
    S^K x - s^K = g(x^K) + ∇g(x^K)^T (x - x^K)
Step 4: a. Solve the following MILP master problem:

    Z_L^K = min  c^T y + μ
    s.t.  (w^k)^T x - μ ≤ w_c^k
          T^k R^k x ≤ T^k r^k          k = 1, 2, ..., K
          S^k x ≤ s^k
          y ∈ IC^k
          By + Cx ≤ d
          Ax = a
          Ey ≤ e
          Z_L^{K-1} ≤ c^T y + μ ≤ Z_U
          y ∈ {0,1}^t,  x ∈ X,  μ ∈ R^1
b. If the MILP master problem has no feasible solution, stop: the optimal solution is x*, y*, with objective value Z_U.
c. If the MILP master problem has a feasible solution, a new binary value y^{K+1} is obtained. Set K = K + 1 and return to Step 2.
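Putting Steps 1-4 together, the overall flow of the algorithm can be sketched in code. The Python skeleton below is illustrative only: solve_nlp_subproblem and solve_milp_master are hypothetical placeholders for an NLP solver (Step 2) and an MILP master solver built from the accumulated linearizations and integer cuts (Step 4); neither refers to a specific library.

import math

def outer_approximation(y_init, solve_nlp_subproblem, solve_milp_master, max_iter=50):
    """Skeleton of the OA loop in Steps 1-4, assuming feasible NLP subproblems.

    solve_nlp_subproblem(y) -> (Z_y, x): Step 2, NLP with the binaries fixed.
    solve_milp_master(points, Z_L, Z_U) -> (feasible, Z_L_new, y_new): Step 4.
    Both callables are hypothetical placeholders for actual NLP/MILP solvers.
    """
    y = y_init
    Z_L, Z_U = -math.inf, math.inf            # Step 1: initialize the bounds
    x_best, y_best = None, None
    points = []                               # linearization points accumulated so far

    for K in range(1, max_iter + 1):
        Z_y, x = solve_nlp_subproblem(y)      # Step 2: solve the fixed-binary NLP
        if Z_y < Z_U:                         # Step 3a: update the incumbent upper bound
            Z_U, x_best, y_best = Z_y, x, y
        points.append((x, y))                 # Steps 3b-3d: store (x^K, y^K) for the
                                              # integer cut and the linearizations
        feasible, Z_L, y_new = solve_milp_master(points, Z_L, Z_U)   # Step 4a
        if not feasible:                      # Step 4b: infeasible master, stop
            return x_best, y_best, Z_U
        y = y_new                             # Step 4c: new binary assignment
    return x_best, y_best, Z_U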
Numerical Example
The following is a step-by-step solution for an MINLP optimization problem using the Outer-Approximation method:[7]
Problem

    min   f = y_1 + y_2 + x_1^2 + x_2^2
    s.t.  (x_1 - 2)^2 - x_2 ≤ 0
          x_1 - 2y_1 ≥ 0
          x_1 - x_2 - 3(1 - y_1) ≥ 0
          x_1 + y_1 - 1 ≥ 0
          x_2 - y_2 ≥ 0
          x_1 + x_2 ≥ 3y_1
          y_1 + y_2 ≥ 1
          0 ≤ x_1 ≤ 4
          0 ≤ x_2 ≤ 4
          y_1, y_2 ∈ {0,1}
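Before walking through the iterations by hand, note that a problem of this size can also be handed to an off-the-shelf OA implementation. A minimal sketch, assuming Pyomo with its MindtPy solver and working ipopt (NLP) and glpk (MILP) installations; the solver names and options reflect that assumed environment rather than the original lecture material:

import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.x1 = pyo.Var(bounds=(0, 4))
m.x2 = pyo.Var(bounds=(0, 4))
m.y1 = pyo.Var(domain=pyo.Binary)
m.y2 = pyo.Var(domain=pyo.Binary)

m.obj = pyo.Objective(expr=m.y1 + m.y2 + m.x1**2 + m.x2**2, sense=pyo.minimize)

m.g1 = pyo.Constraint(expr=(m.x1 - 2)**2 - m.x2 <= 0)
m.g2 = pyo.Constraint(expr=m.x1 - 2*m.y1 >= 0)
m.g3 = pyo.Constraint(expr=m.x1 - m.x2 - 3*(1 - m.y1) >= 0)
m.g4 = pyo.Constraint(expr=m.x1 + m.y1 - 1 >= 0)
m.g5 = pyo.Constraint(expr=m.x2 - m.y2 >= 0)
m.g6 = pyo.Constraint(expr=m.x1 + m.x2 >= 3*m.y1)
m.g7 = pyo.Constraint(expr=m.y1 + m.y2 >= 1)

# MindtPy's outer-approximation strategy; requires ipopt and glpk on the PATH.
pyo.SolverFactory('mindtpy').solve(m, strategy='OA',
                                   nlp_solver='ipopt', mip_solver='glpk')
print(pyo.value(m.x1), pyo.value(m.x2), pyo.value(m.y1), pyo.value(m.y2))
# expected to match the worked solution below: x1=2, x2=1, y1=1, y2=0, objective 6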
Solution
Step 1: Select an initial value of the binary variables, y_1 = y_2 = 1. Set the iteration counter K = 1. Initialize the lower bound Z_L = -∞ and the upper bound Z_U = +∞.
Step 2: Solve the NLP subproblem for the fixed value y^K to obtain the solution x^K. (This example has no equality constraints h(x) = 0, so no multipliers λ^K are needed.)
Iteration 1:

    min   f = 2 + x_1^2 + x_2^2
    s.t.  (x_1 - 2)^2 - x_2 ≤ 0
          x_1 - 2 ≥ 0
          x_1 - x_2 ≥ 0
          x_1 ≥ 0
          x_2 - 1 ≥ 0
          x_1 + x_2 ≥ 3
          0 ≤ x_1 ≤ 4
          0 ≤ x_2 ≤ 4
Solution: x_1 = 2, x_2 = 1, Upper Bound Z_U = 7
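The fixed-binary subproblem above is a small smooth NLP, so its solution can be reproduced with any local NLP solver. A minimal cross-check, assuming SciPy is available (SLSQP is just one convenient choice of method):

from scipy.optimize import minimize

# Iteration-1 NLP subproblem with the binaries fixed at y1 = y2 = 1.
obj = lambda x: 2 + x[0]**2 + x[1]**2
cons = [
    {"type": "ineq", "fun": lambda x: x[1] - (x[0] - 2)**2},  # (x1-2)^2 - x2 <= 0
    {"type": "ineq", "fun": lambda x: x[0] - 2},              # x1 - 2*y1 >= 0
    {"type": "ineq", "fun": lambda x: x[0] - x[1]},           # x1 - x2 - 3(1-y1) >= 0
    {"type": "ineq", "fun": lambda x: x[0]},                  # x1 + y1 - 1 >= 0
    {"type": "ineq", "fun": lambda x: x[1] - 1},              # x2 - y2 >= 0
    {"type": "ineq", "fun": lambda x: x[0] + x[1] - 3},       # x1 + x2 >= 3*y1
]
res = minimize(obj, x0=[3.0, 2.0], method="SLSQP",
               bounds=[(0, 4), (0, 4)], constraints=cons)
print(res.x, res.fun)   # expected: x close to [2, 1], objective close to 7 (the upper bound)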
Step 3: Update the bounds and prepare the information for the master problem:
Set Z_U = 7, y_1 = y_2 = 1, x* = [2, 1].
Obtain the following linear outer-approximations of the nonlinear terms f(x) and g(x) by performing first-order linearizations at the point x* = [2, 1]:

    f(x) = x_1^2 + x_2^2,  ∇f(x) = [2x_1, 2x_2]^T,  so at x* = [2, 1]^T:
    f(x*) + ∇f(x*)^T (x - x*) = 5 + 4(x_1 - 2) + 2(x_2 - 1)

    g(x) = (x_1 - 2)^2 - x_2,  ∇g(x) = [2(x_1 - 2), -1]^T,  so at x* = [2, 1]^T:
    g(x*) + ∇g(x*)^T (x - x*) = -1 + 0·(x_1 - 2) - 1·(x_2 - 1) = -x_2
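These gradients and linearized expressions are easy to verify numerically. A short check with NumPy (the helper linearize is introduced only for this illustration):

import numpy as np

# First-order linearizations of f and g at x* = [2, 1] (Step 3d for this example).
x_star = np.array([2.0, 1.0])

f      = lambda x: x[0]**2 + x[1]**2              # nonlinear part of the objective
grad_f = lambda x: np.array([2*x[0], 2*x[1]])
g      = lambda x: (x[0] - 2)**2 - x[1]           # nonlinear inequality constraint
grad_g = lambda x: np.array([2*(x[0] - 2), -1.0])

def linearize(fun, grad, x0):
    """Return the linearization fun(x0) + grad(x0)^T (x - x0) as a callable."""
    f0, g0 = fun(x0), grad(x0)
    return lambda x: f0 + g0 @ (np.asarray(x, dtype=float) - x0)

f_lin = linearize(f, grad_f, x_star)   # equals 5 + 4*(x1 - 2) + 2*(x2 - 1)
g_lin = linearize(g, grad_g, x_star)   # equals -x2
print(f_lin([2, 1]), g_lin([2, 1]))    # 5.0 -1.0, matching f and g at x*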
Step 4: Solve the MILP master problem with OA for x* = [2, 1]:

    min   α
    s.t.  α ≥ y_1 + y_2 + 5 + 4(x_1 - 2) + 2(x_2 - 1)
          -x_2 ≤ 0
          x_1 - 2y_1 ≥ 0
          x_1 - x_2 - 3(1 - y_1) ≥ 0
          x_1 + y_1 - 1 ≥ 0
          x_2 - y_2 ≥ 0
          x_1 + x_2 ≥ 3y_1
          y_1 + y_2 ≥ 1
          0 ≤ x_1 ≤ 4
          0 ≤ x_2 ≤ 4
          y_1, y_2 ∈ {0,1}
MILP Solution: x_1 = 2, x_2 = 1, y_1 = 1, y_2 = 0, Lower Bound Z_L = 6
Lower Bound < Upper Bound, Integer cut: y_1 - y_2 ≤ 0
Set iteration K = K + 1 = 2, return to Step 2.
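For reference, the iteration-1 master problem above is itself a small MILP and can be checked independently of the OA bookkeeping. A sketch assuming SciPy 1.9 or later (which provides scipy.optimize.milp); the variable ordering [x1, x2, y1, y2, α] is an arbitrary choice made for this illustration:

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Variables ordered as v = [x1, x2, y1, y2, alpha]; minimize alpha.
c = np.array([0.0, 0.0, 0.0, 0.0, 1.0])

# All constraints written as lb <= A @ v <= ub; -x2 <= 0 is covered by the bounds.
A = np.array([
    [-4.0, -2.0, -1.0, -1.0, 1.0],  # alpha >= y1 + y2 + 5 + 4(x1-2) + 2(x2-1)
    [ 1.0,  0.0, -2.0,  0.0, 0.0],  # x1 - 2*y1 >= 0
    [ 1.0, -1.0,  3.0,  0.0, 0.0],  # x1 - x2 - 3(1 - y1) >= 0
    [ 1.0,  0.0,  1.0,  0.0, 0.0],  # x1 + y1 >= 1
    [ 0.0,  1.0,  0.0, -1.0, 0.0],  # x2 - y2 >= 0
    [ 1.0,  1.0, -3.0,  0.0, 0.0],  # x1 + x2 - 3*y1 >= 0
    [ 0.0,  0.0,  1.0,  1.0, 0.0],  # y1 + y2 >= 1
])
lb = np.array([-5.0, 0.0, 3.0, 1.0, 0.0, 0.0, 1.0])
ub = np.full(7, np.inf)

res = milp(c,
           constraints=LinearConstraint(A, lb, ub),
           integrality=np.array([0, 0, 1, 1, 0]),            # y1, y2 integer
           bounds=Bounds([0, 0, 0, 0, -np.inf], [4, 4, 1, 1, np.inf]))
print(res.x, res.fun)   # expected: x=[2, 1], y=[1, 0], alpha = 6 (the lower bound)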
Iteration 2:
Step 2: Start from y = [1, 0] and solve the NLP below:

    min   f = 1 + x_1^2 + x_2^2
    s.t.  (x_1 - 2)^2 - x_2 ≤ 0
          x_1 - 2 ≥ 0
          x_1 - x_2 ≥ 0
          x_1 ≥ 0
          x_2 ≥ 0
          x_1 + x_2 ≥ 3
          0 ≤ x_1 ≤ 4
          0 ≤ x_2 ≤ 4
Solution: x_1 = 2, x_2 = 1, Upper Bound Z_U = 6
Upper Bound = 6 = Lower Bound, Optimum!
Optimal Solution for the MINLP: x_1 = 2, x_2 = 1, y_1 = 1, y_2 = 0
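As a final sanity check, the reported point can be substituted back into the original MINLP to confirm feasibility and the objective value of 6 (a small verification snippet, not part of the original solution):

# Verify the reported optimum of the example MINLP: x = (2, 1), y = (1, 0).
x1, x2, y1, y2 = 2.0, 1.0, 1, 0

objective = y1 + y2 + x1**2 + x2**2          # expected: 6
constraints_ok = all([
    (x1 - 2)**2 - x2 <= 0,
    x1 - 2*y1 >= 0,
    x1 - x2 - 3*(1 - y1) >= 0,
    x1 + y1 - 1 >= 0,
    x2 - y2 >= 0,
    x1 + x2 >= 3*y1,
    y1 + y2 >= 1,
    0 <= x1 <= 4,
    0 <= x2 <= 4,
])
print(objective, constraints_ok)             # 6.0 True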
Applications
The ability to accurately model real-world problems has made MINLP an active research area, and there is a vast number of applications in fields such as engineering, computational chemistry, and finance.[8] The following are some MINLP applications that can be solved by the Outer-Approximation method:
Process Synthesis
Process synthesis problems involving the selection of process units and their interconnection as well as the evaluation of the design and operating variables can be conceptually posed as large-scale MINLP problems, where 0-1 binary variables are used to denote the existence (or not) of process units, while continuous variables represent process design and operating values. For the case when multiple choices are possible, one can in fact develop valid linear outer-approximations that properly bound the nonconvex solution space in the MILP master problem.[9]
Biological kinetic model
Consider a dynamic kinetic model describing the mechanism of a set of biochemical reactions. The goal is to determine the appropriate values of the model coefficients (e.g., rate constants, initial conditions, etc.), so as to minimize the sum-of-squares of the residuals between the simulated data provided by the model and the experimental observations. The problem solution strategy relies on reformulating the nonlinear dynamic optimization problem as a finite dimensional MINLP by applying a complete discretization using orthogonal collocation on finite elements. This MINLP is next solved using the outer approximation algorithm. [10]
Material selection
A thermal insulation system uses a series of heat intercepts and surrounding insulators to minimize the power required to maintain the heat intercepts at certain temperatures. The designer chooses the maximum number of intercepts, the thickness and area of each intercept, and the material of each intercept from a discrete set of materials. The choice of material affects the thermal conductivity and total mass of the insulation system. Nonlinear functions are required to accurately model system characteristics such as heat flow between the intercepts, thermal expansion, and stress constraints, while integer variables model the discrete choice of material in each layer. This problem can be modeled as an MINLP and solved by outer approximation. [11]
Conclusion
The outer approximation method presented here solves MINLP problems of a particular class: linearity of the integer (or discrete) variables and convexity of the nonlinear functions of the continuous variables are the main features of the underlying mathematical structure. Based on principles of decomposition, outer-approximation, and relaxation, the algorithm exploits this structure by solving an alternating finite sequence of nonlinear programming subproblems and relaxed versions of an MILP master problem.[5] Although the outer approximation algorithm relies on convexity assumptions, it can also be applied to nonconvex problems.[12] Many of the real-world problems that can be solved through outer approximation involve process synthesis, where binary decisions represent the existence of process units and continuous variables represent operating values. The examples illustrated here also include a dynamic biological kinetic model and material selection for thermal insulation.
References
1. Kronqvist, J., Bernal, D. E., Lundell, A., & Westerlund, T. (2019). "A center-cut algorithm for quickly obtaining feasible solutions and solving convex MINLP problems". Computers & Chemical Engineering, 122, 105-113.
2. Kronqvist, J., Bernal, D. E., Lundell, A., & Grossmann, I. E. (2019). "A review and comparison of solvers for convex MINLP". Optimization and Engineering, 20(2), 397-455.
3. Bernal, D. E., Peng, Z., Kronqvist, J., & Grossmann, I. E. (2021). "Alternative Regularizations for Outer-Approximation Algorithms for Convex MINLP".
4. Grossmann, I. E. (2002). "Review of nonlinear mixed-integer and disjunctive programming techniques". Optimization and Engineering, 3(3), 227-252.
5. Duran, M. A., & Grossmann, I. E. (1986). "An outer-approximation algorithm for a class of mixed-integer nonlinear programs". Mathematical Programming, 36(3), 307-339.
6. Biegler, L. T., Grossmann, I. E., & Westerberg, A. W. (1997). "Systematic Methods of Chemical Process Design".
7. You, F. (2021). Lecture on Mixed Integer Non-Linear Programming (MINLP). Archives for CHEME 6800 Computational Optimization (2021FA), Cornell University, Ithaca, NY.
8. Hijazi, H., Bonami, P., & Ouorou, A. (2010). "An outer-inner approximation for separable MINLPs". LIF.
9. Grossmann, I. E. (1989). "MINLP optimization strategies and algorithms for process synthesis".
10. Miró, A., Pozo, C., Guillén-Gosálbez, G., Egea, J. A., & Jiménez, L. (2012). "Deterministic global optimization algorithm based on outer approximation for the parameter estimation of nonlinear dynamic biological systems". BMC Bioinformatics, 13(1), 1-12.
11. Leyffer, S., Linderoth, J., Luedtke, J., Miller, A., & Munson, T. (2009). "Applications and algorithms for mixed integer nonlinear programming". Journal of Physics: Conference Series, 180(1), 012014. IOP Publishing.
12. Viswanathan, J., & Grossmann, I. E. (1990). "A combined penalty function and outer-approximation method for MINLP optimization". Computers & Chemical Engineering, 14(7), 769-782.