# Outer-approximation (OA)

Author: Yousef Aloufi (CHEME 6800 Fall 2021)

## Introduction

Mixed-integer nonlinear programming (MINLP) combines the modeling capabilities of mixed-integer linear programming (MILP) and nonlinear programming (NLP) into a single powerful framework. The integer variables make it possible to incorporate discrete decisions and logical relations into optimization problems, and the combination of linear and nonlinear functions makes it possible to accurately describe a wide variety of phenomena.
Although MINLPs are non-convex optimization problems because of the discreteness of some variables, the term convex MINLP denotes problems in which the objective function and the feasible region described by the constraints of the continuous relaxation are convex. Convex MINLP problems are an important class, as their convexity can be exploited to derive efficient decomposition algorithms, which rely on the solution of different subproblems derived from the original MINLP. These decomposition algorithms include Branch & Bound, Generalized Benders Decomposition, Outer Approximation, Partial Surrogate Cuts, the Extended Cutting Plane method, the Feasibility Pump, and the center-cut method.
MINLP problems can be solved with the branch and bound method. An important drawback of branch and bound for MINLP, however, is that the solution of the NLP subproblems can be expensive, since they cannot be readily updated as in the MILP case. Therefore, in order to reduce the computational expense involved in solving many NLP subproblems, we can resort to another method: Outer-Approximation.

## Theory

The Outer-Approximation (OA) algorithm was proposed by Duran and Grossmann in 1986 to solve MINLP problems. The basic idea of the OA method is to solve an alternating sequence of nonlinear programming subproblems and relaxed mixed-integer linear master problems successively. In a minimization problem, for example, each NLP subproblem fixes the binary variables and optimizes over the continuous variables, yielding an upper bound on the original MINLP. The MILP master problem provides a global linear approximation to the MINLP in which the objective function is underestimated and the nonlinear feasible region is overestimated; in addition, the linear approximations to the nonlinear equations are relaxed as inequalities. The master problem accumulates the linear approximations of all previous iterations, producing an increasingly tight approximation of the original MINLP. At each iteration the master problem predicts new values of the binary variables and a lower bound on the objective function. The search terminates when no lower bound can be found below the current best upper bound, at which point the MILP master problem becomes infeasible.
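The alternation of bounds described above can be summarized as a skeleton, with the two solver calls left as placeholders (hypothetical helpers of our own, not a real API):

```python
import math

def outer_approximation(solve_nlp, solve_master, y0, tol=1e-6, max_iter=50):
    """Skeleton of the OA bound alternation; solve_nlp and solve_master
    are caller-supplied placeholders for the two subproblem solvers."""
    y, best = y0, None
    Z_L, Z_U = -math.inf, math.inf
    cuts = []                               # accumulated linearizations
    for _ in range(max_iter):
        # NLP subproblem: binaries fixed at y -> upper bound + new cuts
        x, obj, new_cuts = solve_nlp(y)
        if obj < Z_U:
            Z_U, best = obj, (x, y)
        cuts += new_cuts                    # master keeps every past cut
        # MILP master over all cuts -> lower bound + next binary point
        master = solve_master(cuts, Z_L, Z_U)
        if master is None:                  # infeasible master: terminate
            break
        Z_L, y = master
        if Z_L >= Z_U - tol:                # bounds have met
            break
    return best, Z_U
```

The concrete forms of the subproblems and the master problem are given in the next section.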

## Algorithm

Consider the following MINLP formulation:

$$
\begin{aligned}
\min \quad & Z = c^{T}y + f(x) \\
\text{s.t.} \quad & h(x) = 0 \\
& g(x) \leq 0 \\
& Ax = a \\
& By + Cx \leq d \\
& Ey \leq e \\
& x \in X = \{\, x \mid x \in \mathbb{R}^{n},\ x^{L} \leq x \leq x^{U} \,\} \\
& y \in \{0,1\}^{t}
\end{aligned}
$$

The specific steps of the OA algorithm, assuming feasible solutions for the NLP subproblems, are as follows:
Step 1: Select an initial value of the binary variables ${\textstyle y^{1}}$ . Set the iteration counter ${\textstyle K=1}$ . Initialize the lower bound ${\textstyle Z_{L}=-\infty }$ , and the upper bound ${\textstyle Z_{U}=+\infty }$ .
Step 2: Solve the NLP subproblem for the fixed value ${\textstyle y^{k}}$ to obtain the solution ${\textstyle x^{k}}$ and the multipliers ${\textstyle \lambda ^{k}}$ of the equations ${\textstyle h(x)=0}$:

$$
\begin{aligned}
Z(y^{k}) = \min \quad & c^{T}y^{k} + f(x) \\
\text{s.t.} \quad & h(x) = 0 \\
& g(x) \leq 0 \\
& Ax = a \\
& Cx \leq d - By^{k} \\
& x \in X
\end{aligned}
$$

Step 3: Update the bounds and prepare the information for the master problem:
a. Update the current upper bound: if ${\textstyle Z(y^{k})<Z_{U}}$, set ${\textstyle Z_{U}=Z(y^{k}),~~y^{*}=y^{k},~~x^{*}=x^{k}}$.

b. Derive the integer cut ${\textstyle IC^{K}}$ that makes the choice of the binary ${\textstyle y^{K}}$ infeasible in subsequent iterations:

$$
IC^{K} = \Big\{\, y \;\Big|\; \sum_{i \in B^{K}} y_{i} - \sum_{i \in N^{K}} y_{i} \leq |B^{K}| - 1 \,\Big\},
\qquad B^{K} = \{\, i \mid y_{i}^{K} = 1 \,\}, \quad N^{K} = \{\, i \mid y_{i}^{K} = 0 \,\}
$$

c. Define the diagonal direction matrix ${\textstyle T^{K}}$ for relaxing the equations into inequalities, based on the signs of the multipliers ${\textstyle \lambda ^{K}}$. The diagonal elements are given by:

$$
t_{jj}^{K} =
\begin{cases}
-1 & \text{if } \lambda _{j}^{K} < 0 \\
+1 & \text{if } \lambda _{j}^{K} > 0 \\
0 & \text{if } \lambda _{j}^{K} = 0
\end{cases}
\qquad j = 1, 2, \ldots, m
$$

d. Obtain the following linear outer-approximations of the nonlinear terms ${\textstyle f(x),~h(x),~g(x)}$ by performing first-order linearizations at the point ${\textstyle x^{K}}$:

$$
\begin{aligned}
(w^{K})^{T}x - w_{c}^{K} &= f(x^{K}) + \nabla f(x^{K})^{T}(x - x^{K}) \\
R^{K}x - r^{K} &= h(x^{K}) + \nabla h(x^{K})^{T}(x - x^{K}) \\
S^{K}x - s^{K} &= g(x^{K}) + \nabla g(x^{K})^{T}(x - x^{K})
\end{aligned}
$$

Step 4:

a. Solve the following MILP master problem:

$$
\begin{aligned}
Z_{L}^{K} = \min \quad & c^{T}y + \mu \\
\text{s.t.} \quad & (w^{k})^{T}x - \mu \leq w_{c}^{k} \\
& T^{k}R^{k}x \leq T^{k}r^{k} \qquad k = 1, 2, \ldots, K \\
& S^{k}x \leq s^{k} \\
& y \in IC^{k} \\
& By + Cx \leq d \\
& Ax = a \\
& Ey \leq e \\
& Z_{L}^{K-1} \leq c^{T}y + \mu \leq Z_{U} \\
& y \in \{0,1\}^{t}, \quad x \in X, \quad \mu \in \mathbb{R}^{1}
\end{aligned}
$$

b. If the MILP master problem has no feasible solution, stop: the optimal solution is ${\textstyle x^{*},~~y^{*},~~Z_{U}}$.
c. If the MILP master problem has a feasible solution, the new binary value ${\textstyle y^{K+1}}$ is obtained. Set ${\textstyle K=K+1}$ and return to step 2.
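The bookkeeping in Steps 3b-3d can be sketched in a few lines of Python; the helper names below are ours, not from any library:

```python
import numpy as np

# Hypothetical helpers for Steps 3b-3d of the OA algorithm.

def integer_cut(y_k):
    """Coefficients (a, b) of a cut a^T y <= b excluding the 0/1 point y_k:
    sum_{i in B} y_i - sum_{i in N} y_i <= |B| - 1."""
    a = [1 if yi == 1 else -1 for yi in y_k]
    return a, sum(y_k) - 1

def direction_matrix(lam):
    """Diagonal T^K with t_jj = sign(lambda_j), used to relax the
    linearized equations h(x) = 0 into inequalities."""
    return np.diag(np.sign(np.asarray(lam, dtype=float)))

def linearize(fun, grad, x_k):
    """First-order outer approximation x -> fun(x_k) + grad(x_k)^T (x - x_k)."""
    f0, g0 = fun(x_k), np.asarray(grad(x_k), dtype=float)
    return lambda x: f0 + g0 @ (np.asarray(x, dtype=float) - x_k)

a, b = integer_cut([1, 0, 1])                     # a = [1, -1, 1], b = 1
T = direction_matrix([-2.0, 0.5, 0.0])            # diag(-1, 1, 0)
f_lin = linearize(lambda x: x @ x, lambda x: 2 * x, np.array([2.0, 1.0]))
# By convexity the linearization underestimates a convex f,
# e.g. f_lin([3, 2]) = 11 while f([3, 2]) = 13.
```

The same three pieces (integer cut, sign matrix, linearizations) are exactly what each iteration appends to the MILP master problem.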

## Numerical Example

The following is a step-by-step solution of an MINLP optimization problem using the Outer-Approximation method:

**Problem**

$$
\begin{aligned}
\min \quad & f(x) = y_{1} + y_{2} + x_{1}^{2} + x_{2}^{2} \\
\text{s.t.} \quad & (x_{1}-2)^{2} - x_{2} \leq 0 \\
& x_{1} - 2y_{1} \geq 0 \\
& x_{1} - x_{2} - 3(1 - y_{1}) \geq 0 \\
& x_{1} + y_{1} - 1 \geq 0 \\
& x_{2} - y_{2} \geq 0 \\
& x_{1} + x_{2} \geq 3y_{1} \\
& y_{1} + y_{2} \geq 1 \\
& 0 \leq x_{1} \leq 4, \quad 0 \leq x_{2} \leq 4 \\
& y_{1}, y_{2} \in \{0,1\}
\end{aligned}
$$

**Solution**

Step 1: Select an initial value of the binary variables ${\textstyle y_{1}=y_{2}=1}$ . Set the iteration counter ${\textstyle K=1}$ . Initialize the lower bound ${\textstyle Z_{L}=-\infty }$ , and the upper bound ${\textstyle Z_{U}=+\infty }$ .

Step 2: Solve the NLP subproblem for the fixed value ${\textstyle y^{k}}$ to obtain the solution ${\textstyle x^{k}}$.

Iteration 1:

$$
\begin{aligned}
\min \quad & f = 2 + x_{1}^{2} + x_{2}^{2} \\
\text{s.t.} \quad & (x_{1}-2)^{2} - x_{2} \leq 0 \\
& x_{1} - 2 \geq 0 \\
& x_{1} - x_{2} \geq 0 \\
& x_{1} \geq 0 \\
& x_{2} - 1 \geq 0 \\
& x_{1} + x_{2} \geq 3 \\
& 0 \leq x_{1} \leq 4, \quad 0 \leq x_{2} \leq 4
\end{aligned}
$$

Solution: ${\textstyle x_{1}=2,~x_{2}=1}$, Upper Bound ${\textstyle Z_{U}=7}$.

Step 3: Update the bounds and prepare the information for the master problem:
Set ${\textstyle Z_{U}=7,~~y_{1}=y_{2}=1,~~x^{*}=[2,1]}$. Obtain the following linear outer-approximations of the nonlinear terms ${\textstyle f(x),~g(x)}$ by performing first-order linearizations at the point ${\textstyle x^{*}=[2,1]}$:

$$
f(x) = x_{1}^{2} + x_{2}^{2}, \qquad \nabla f(x) = [\,2x_{1}~~~2x_{2}\,]^{T} \quad \text{for } x^{*} = [\,2~~~1\,]^{T}
$$

$$
f(x^{*}) + \nabla f(x^{*})^{T}(x - x^{*}) = 5 + [\,4~~~2\,]\begin{bmatrix}x_{1}-2\\x_{2}-1\end{bmatrix} = 5 + 4(x_{1}-2) + 2(x_{2}-1)
$$

$$
g(x) = (x_{1}-2)^{2} - x_{2}, \qquad \nabla g(x) = [\,2x_{1}-4~~~-1\,]^{T} \quad \text{for } x^{*} = [\,2~~~1\,]^{T}
$$

$$
g(x^{*}) + \nabla g(x^{*})^{T}(x - x^{*}) = -1 + [\,0~~~-1\,]\begin{bmatrix}x_{1}-2\\x_{2}-1\end{bmatrix} = -x_{2}
$$

Step 4: Solve the MILP master problem with the OA cuts at ${\textstyle x^{*}=[2,1]}$:

$$
\begin{aligned}
\min \quad & \alpha \\
\text{s.t.} \quad & \alpha \geq y_{1} + y_{2} + 5 + 4(x_{1}-2) + 2(x_{2}-1) \\
& -x_{2} \leq 0 \\
& x_{1} - 2y_{1} \geq 0 \\
& x_{1} - x_{2} - 3(1 - y_{1}) \geq 0 \\
& x_{1} + y_{1} - 1 \geq 0 \\
& x_{2} - y_{2} \geq 0 \\
& x_{1} + x_{2} \geq 3y_{1} \\
& y_{1} + y_{2} \geq 1 \\
& 0 \leq x_{1} \leq 4, \quad 0 \leq x_{2} \leq 4 \\
& y_{1}, y_{2} \in \{0,1\}
\end{aligned}
$$

MILP solution: ${\textstyle x_{1}=2,~x_{2}=1,~y_{1}=1,~y_{2}=0}$, Lower Bound ${\textstyle Z_{L}=6}$.

Since the lower bound (6) is below the upper bound (7), add the integer cut ${\textstyle y_{1}-y_{2}\leq 0}$, set ${\textstyle K=K+1=2}$, and return to step 2.
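The iteration-1 master problem can be checked with an off-the-shelf MILP solver. Below is a minimal sketch using `scipy.optimize.milp` (SciPy ≥ 1.9); the variable ordering `z = [x1, x2, y1, y2, alpha]` is our own choice:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Iteration-1 master problem as a standalone MILP.
# The cut  alpha >= y1 + y2 + 5 + 4(x1-2) + 2(x2-1)  becomes
# 4*x1 + 2*x2 + y1 + y2 - alpha <= 5, and the linearized constraint
# -x2 <= 0 is already implied by the bound x2 >= 0.
A = [[4, 2, 1, 1, -1],        # OA objective cut (upper bounded by 5)
     [1, 0, -2, 0, 0],        # x1 - 2*y1 >= 0
     [1, -1, 3, 0, 0],        # x1 - x2 - 3*(1 - y1) >= 0
     [1, 0, 1, 0, 0],         # x1 + y1 >= 1
     [0, 1, 0, -1, 0],        # x2 - y2 >= 0
     [1, 1, -3, 0, 0],        # x1 + x2 >= 3*y1
     [0, 0, 1, 1, 0]]         # y1 + y2 >= 1
lb = [-np.inf, 0, 3, 1, 0, 0, 1]
ub = [5] + [np.inf] * 6
res = milp(c=[0, 0, 0, 0, 1],                     # minimize alpha
           constraints=LinearConstraint(A, lb, ub),
           integrality=[0, 0, 1, 1, 0],           # y1, y2 integer
           bounds=Bounds([0, 0, 0, 0, -np.inf], [4, 4, 1, 1, np.inf]))
# Expected solution: x1 = 2, x2 = 1, y1 = 1, y2 = 0, Z_L = res.fun = 6
```

The solver confirms the hand-derived master solution and lower bound.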

Iteration 2:

Step 2: Start from ${\textstyle y=[1,0]}$ and solve the NLP below:

$$
\begin{aligned}
\min \quad & f = 1 + x_{1}^{2} + x_{2}^{2} \\
\text{s.t.} \quad & (x_{1}-2)^{2} - x_{2} \leq 0 \\
& x_{1} - 2 \geq 0 \\
& x_{1} - x_{2} \geq 0 \\
& x_{1} \geq 0 \\
& x_{2} \geq 0 \\
& x_{1} + x_{2} \geq 3 \\
& 0 \leq x_{1} \leq 4, \quad 0 \leq x_{2} \leq 4
\end{aligned}
$$

Solution: ${\textstyle x_{1}=2,~x_{2}=1}$, Upper Bound ${\textstyle Z_{U}=6}$.

Since the upper bound now equals the lower bound (${\textstyle Z_{U}=Z_{L}=6}$), the optimum has been found.
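The full two-iteration run above can be reproduced with a short script. This is a sketch using SciPy's `minimize` (SLSQP) for the NLP subproblems and `milp` for the masters; the helper and variable names are ours:

```python
import numpy as np
from scipy.optimize import minimize, milp, LinearConstraint, Bounds

# Sketch of the full OA loop for this example (SciPy >= 1.9).
# Master variables: z = [x1, x2, y1, y2, alpha].

def solve_nlp(y1, y2):
    """NLP subproblem with the binaries fixed ('ineq' means fun(x) >= 0)."""
    cons = [
        {'type': 'ineq', 'fun': lambda x: x[1] - (x[0] - 2)**2},  # g(x) <= 0
        {'type': 'ineq', 'fun': lambda x: x[0] - 2*y1},
        {'type': 'ineq', 'fun': lambda x: x[0] - x[1] - 3*(1 - y1)},
        {'type': 'ineq', 'fun': lambda x: x[0] + y1 - 1},
        {'type': 'ineq', 'fun': lambda x: x[1] - y2},
        {'type': 'ineq', 'fun': lambda x: x[0] + x[1] - 3*y1},
    ]
    res = minimize(lambda x: y1 + y2 + x[0]**2 + x[1]**2, [2.0, 2.0],
                   bounds=[(0, 4), (0, 4)], constraints=cons, method='SLSQP')
    return res.x, res.fun

# Linear constraints of the original problem, written as lb <= A z <= ub.
A  = [[1, 0, -2, 0, 0], [1, -1, 3, 0, 0], [1, 0, 1, 0, 0],
      [0, 1, 0, -1, 0], [1, 1, -3, 0, 0], [0, 0, 1, 1, 0]]
lb = [0, 3, 1, 0, 0, 1]
ub = [np.inf] * 6

Z_L, Z_U, y, best = -np.inf, np.inf, (1, 1), None
for _ in range(10):
    x, obj = solve_nlp(*y)
    if obj < Z_U:                                   # new incumbent upper bound
        Z_U, best = obj, (x.copy(), y)
    gf = 2 * x                                      # gradient of x1^2 + x2^2
    A.append([-gf[0], -gf[1], -1, -1, 1])           # alpha >= y1 + y2 + f_lin(x)
    lb.append(x @ x - gf @ x); ub.append(np.inf)
    gg = np.array([2*x[0] - 4, -1.0])               # gradient of g
    A.append([gg[0], gg[1], 0, 0, 0])               # linearized g(x) <= 0
    lb.append(-np.inf); ub.append(gg @ x - ((x[0] - 2)**2 - x[1]))
    A.append([0, 0, 1 if y[0] else -1, 1 if y[1] else -1, 0])  # integer cut
    lb.append(-np.inf); ub.append(sum(y) - 1)

    res = milp(c=[0, 0, 0, 0, 1],                   # minimize alpha
               constraints=LinearConstraint(A, lb, ub),
               integrality=[0, 0, 1, 1, 0],
               bounds=Bounds([0, 0, 0, 0, -np.inf], [4, 4, 1, 1, np.inf]))
    if res.status != 0 or res.fun >= Z_U - 1e-6:    # infeasible or bounds met
        break
    Z_L, y = res.fun, (round(res.x[2]), round(res.x[3]))

x_opt, y_opt = best   # expected: y = (1, 0), x close to (2, 1), Z_U = 6
```

Duplicate cuts from repeated linearization points are harmless here; a production implementation would deduplicate them and handle infeasible NLP subproblems with feasibility cuts.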

Optimal solution for the MINLP: ${\textstyle x_{1}=2,~x_{2}=1,~y_{1}=1,~y_{2}=0}$, with objective value ${\textstyle Z=6}$.

## Applications

The ability to accurately model real-world problems has made MINLP an active research area, and there is a vast number of applications in fields such as engineering, computational chemistry, and finance. The following are some MINLP applications that can be solved by the Outer-Approximation method:

#### Process Synthesis

Process synthesis problems involving the selection of process units and their interconnection as well as the evaluation of the design and operating variables can be conceptually posed as large-scale MINLP problems, where 0-1 binary variables are used to denote the existence (or not) of process units, while continuous variables represent process design and operating values. For the case when multiple choices are possible, one can in fact develop valid linear outer-approximations that properly bound the nonconvex solution space in the MILP master problem.

#### Biological kinetic model

Consider a dynamic kinetic model describing the mechanism of a set of biochemical reactions. The goal is to determine the appropriate values of the model coefficients (e.g., rate constants, initial conditions, etc.), so as to minimize the sum-of-squares of the residuals between the simulated data provided by the model and the experimental observations. The problem solution strategy relies on reformulating the nonlinear dynamic optimization problem as a finite dimensional MINLP by applying a complete discretization using orthogonal collocation on finite elements. This MINLP is next solved using the outer approximation algorithm. 

#### Material selection

A thermal insulation system uses a series of heat intercepts and surrounding insulators to minimize the power required to maintain the heat intercepts at certain temperatures. The designer chooses the maximum number of intercepts, the thickness and area of each intercept, and the material of each intercept from a discrete set of materials. The choice of material affects the thermal conductivity and total mass of the insulation system. Nonlinear functions in the model are required to accurately capture system characteristics such as heat flow between the intercepts, thermal expansion, and stress constraints. Integer variables are used to model the discrete choice of material in each layer. This problem can be modeled as an MINLP and solved by outer approximation.

## Conclusion

The outer approximation method is presented for solving MINLP problems of a particular class. Linearity of the integer (or discrete) variables, and convexity of the nonlinear functions involving continuous variables are the main features in the underlying mathematical structure. Based on principles of decomposition, outer-approximation and relaxation, the outer approximation algorithm effectively exploits the structure of the problems and consists of solving an alternating finite sequence of nonlinear programming subproblems and relaxed versions of a MILP master problem. It should be noted that although the outer approximation algorithm relies on convexity assumptions in MINLP, it can also be applied to nonconvex problems. Many of the real-world problems that can be solved through outer approximation involve process synthesis where binary decisions represent the existence of process units and continuous variables represent operating values. Examples of real-world problems illustrated here also include a dynamic biological kinetic model and material selection for thermal insulation.