Exponential transformation

Author: Daphne Duvivier, Daniela Gil, Jacqueline Jackson, Sinclaire Mills, Vanessa Nobre (SYSEN 5800, Fall 2021)

== Introduction ==


An '''exponential transformation''' is a simple algebraic transformation of a monomial function through variable substitution with an exponential variable. In computational optimization, exponential transformations are used for convexification of geometric programming constraints in nonconvex optimization problems.


Exponential transformation creates a convex function without changing the decision space of the problem. <ref name =":0"> D. Li and M. P. Biswal, [https://doi.org/10.1023/A:1021708412776 "Exponential Transformation in Convexifying a Noninferior Frontier and Exponential Generating Method]," ''Journal of Optimization Theory and Applications'', vol. 99, pp. 183–199, 1998.</ref> This is done through a simple substitution of continuous variables with a natural exponent and simplification of binary variables through removal of the exponent. The transformation is verified to be convex if the Hessian, denoted by <math>H(x)</math>, is proven to be positive-definite.


By using exponential transformations, not only is the overall time to solve a Nonlinear Programming (NLP) or a Mixed Integer Nonlinear Programming (MINLP) problem reduced, but we can also simplify the solution space enough to utilize conventional NLP/MINLP solvers. Various real world optimization problems apply this transformation to simplify the solution space consisting of extensive quantities of constraints and variables.  


== Theory, Methodology, and Algorithmic Discussions ==
=== Theory ===
Exponential transformations are most commonly applied to geometric programs. A '''geometric program''' is a mathematical optimization problem where the objective function is a posynomial being minimized. '''Posynomial''' functions are defined as positive polynomials. <ref name =":1"> S. Boyd, S. J. Kim, and L. Vandenberghe ''et al.'', [https://doi.org/10.1007/s11081-007-9001-7 "A tutorial on geometric programming]," ''Optimization and Engineering'', vol. 8, article 67, 2007.</ref> 


The standard form of a geometric program is represented by:


<math>\begin{align}
\min & \quad f_0(x) \\
s.t. & \quad f_i(x) \leq 1 \quad  i = 1,\ldots,m  \\
& \quad g_i(x) = 1  \quad i = 1,\ldots,p
\end{align}</math>


Where <math> f_0(x) </math> and <math> f_i(x) </math> are posynomial functions and <math> g_i(x) </math> is a monomial function.


To verify a geometric program is represented in its standard form, the following conditions must be true:
# Objective function <math>f_0(x)</math> is a posynomial.
# Inequality constraints <math>f_i(x)</math> must be posynomials less than or equal to 1.
# Equality constraints <math>g_i(x)</math> must be monomials equal to 1.


In this definition, monomials differ from the usual algebraic definition where the exponents must be nonnegative integers. For this application, exponents can be any positive number inclusive of fractions and negative exponents. <ref name =":1"></ref>
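
To illustrate these conditions, the short sketch below sets up a small geometric program in standard form with the CVXPY modeling package in Python. The choice of CVXPY and the specific objective and constraint data are assumptions made purely for illustration; <code>solve(gp=True)</code> asks CVXPY to apply a logarithmic change of variables internally, which is the same idea as the exponential transformation described in this article.

<syntaxhighlight lang="python">
# A minimal sketch of a geometric program in standard form (illustrative data only).
import cvxpy as cp

x1 = cp.Variable(pos=True)   # GP variables must be strictly positive
x2 = cp.Variable(pos=True)

objective = cp.Minimize(x1**2 * x2 + 3 * x1 * x2**-1)   # posynomial objective
constraints = [
    0.5 * x1**-1 * x2**0.5 <= 1,   # posynomial (here a single monomial) <= 1
    x1 * x2 == 1,                  # monomial == 1
]

prob = cp.Problem(objective, constraints)
prob.solve(gp=True)   # disciplined geometric programming mode
print(prob.value, x1.value, x2.value)
</syntaxhighlight>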


=== Methodology ===


Exponential transformation begins with a nonconvex posynomial function of the form depicted below. <ref name =":1"></ref> A posynomial is defined over the variables <math> x_1,...,x_n </math>, where each <math> x_n </math> is a real, strictly positive variable.


<math> f(x) = \sum_{k=1}^N c_k{x_1}^{a_{1k}}{x_2}^{a_{2k}}\cdots{x_n}^{a_{nk}} </math>


Where <math> c_k \geq 0 </math> and <math> x_n > 0 </math>.


A transformation is applied in which each <math> x_n </math> is replaced with the natural exponential <math> e^{u_n} </math>, where <math> u_n = \ln x_n </math>. <ref> I. E. Grossmann, [https://doi.org/10.1023/A:1021039126272 "Review of Nonlinear Mixed-Integer and Disjunctive Programming Techniques]," ''Optimization and Engineering'', vol. 3, pp. 227–252, 2002. </ref> The transformed function after substitution is presented as:


<math> f(u) = \sum_{k=1}^N c_k{{e}^{{u_1}{a_{1k}}}{e}^{{u_2}{a_{2k}}}\cdots{e}^{{u_n}{a_{nk}}}} </math>


Properties of exponents can be used to further simplify the transformation above, combining each product of exponentials into a single exponential whose exponent is the sum of the individual terms:


<math> f(u) = \sum_{k=1}^N c_k{{e}^{{{u_1}{a_{1k}}}+{{u_2}{a_{2k}}}+\cdots+{{u_n}{a_{nk}}}}} </math>
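
The substitution can be made concrete with a few lines of Python. The sketch below (illustrative only, with made-up coefficients <math>c_k</math> and exponents <math>a_{nk}</math>) evaluates a posynomial in its original variables and in the transformed variables <math>u_n = \ln x_n</math>, confirming that the two forms agree.

<syntaxhighlight lang="python">
# Sketch: evaluate f(x) = sum_k c_k * prod_n x_n^{a_nk} and its exponential transform.
import numpy as np

def posynomial(x, c, A):
    """f(x) = sum_k c_k * prod_n x_n^{A[n, k]} for x > 0."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(c * np.prod(x[:, None] ** A, axis=0)))

def transformed(u, c, A):
    """f(u) = sum_k c_k * exp(sum_n A[n, k] * u_n), i.e. f after x_n = e^{u_n}."""
    u = np.asarray(u, dtype=float)
    return float(np.sum(c * np.exp(A.T @ u)))

# Hypothetical data: f(x) = 2*x1^3*x2^-4 + x1^2
c = np.array([2.0, 1.0])
A = np.array([[3.0, 2.0],      # exponents of x1 in each term
              [-4.0, 0.0]])    # exponents of x2 in each term

x = np.array([1.5, 0.8])
u = np.log(x)                  # u_n = ln(x_n)
print(posynomial(x, c, A), transformed(u, c, A))   # the two values agree
</syntaxhighlight>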


In order to prove the convexity of the final transformed function, the positive-definite test of the Hessian is used as defined in ''Optimization of Chemical Processes'' <ref> T. F. Edgar, D. M. Himmelblau, and L. S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.</ref>. The Hessian is defined as the following:


<math> H(x) = H = \nabla^2f(x) =
\begin{bmatrix} 
{\partial^2 f \over \partial {x_1^2}} & {\partial^2 f \over \partial x_1\partial x_2} \\
{\partial^2 f \over \partial x_2\partial x_1} & {\partial^2 f \over \partial {x_2^2}}
\end{bmatrix}
</math>


to test that


<math> Q(x) > 0 </math> where <math> Q(x) = x^THx </math> for all <math> x \neq 0</math>.
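
In practice, this test is often carried out numerically at candidate points. The sketch below is a generic check in Python, assuming NumPy is available: a symmetric matrix is positive-definite exactly when all of its eigenvalues are strictly positive. The sample entries are illustrative values only.

<syntaxhighlight lang="python">
# Sketch of a numerical positive-definite test for a Hessian evaluated at a point.
import numpy as np

def is_positive_definite(H, tol=1e-10):
    """True if x^T H x > 0 for all x != 0, i.e. all eigenvalues of H are positive."""
    H = np.asarray(H, dtype=float)
    H = 0.5 * (H + H.T)                       # symmetrize against round-off
    return bool(np.all(np.linalg.eigvalsh(H) > tol))

H_sample = np.array([[22.0,  84.0],           # illustrative Hessian values
                     [84.0, 337.0]])
print(is_positive_definite(H_sample))         # True
</syntaxhighlight>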


===Exponential Transformation in Computational Optimization===


Exponential transformation can be used for convexification of any MINLP that meets the criteria for a geometric program. Using the exponential substitution detailed above, all continuous variables in the function are transformed while binary variables are not transformed. This transformation can also be applied to constraints to ensure convexification throughout the entire problem.


In a special case, binary exponential transformation can also be applied, in which binary variables are linearized. Because a binary variable <math>y</math> only takes the values 0 or 1, <math> {y^n} = y </math> for any positive exponent <math>n</math>, so <math> {y^n} </math> is simply substituted by <math>y</math>.


Additionally, all points on the transformed function are feasible in the original function, and all objective values in the transformed function are the same or less than the original function. <ref>P. Shen and K. Zhang, [https://doi.org/10.1016/S0096-3003(03)00200-5 "Global optimization of signomial geometric programming using linear relaxation]," ''Applied Mathematics and Computation'', vol. 150, issue 1, pp. 99-114, 2004. </ref> This creates a convex underestimator of the original problem. Note that the bounds of the problem are not altered through exponential transformation. <ref name=":0"></ref>


== Numerical Example ==
To provide an example, we begin with a simple nonconvex constraint:


<math> {\frac{x_1^3}{x_2^4}} + {x_1^2} + {\sqrt[3]{x_2^2}} \leq 4</math>


'''Step 1:''' Convert the problem into standard form by reformulating radicals, fractions, etc. into exponents.


<math> {x_1^3}*{x_2^{-4}} + {x_1^2} + {x_2^{\frac{2}{3}}} \leq 4 </math>


'''Step 2:''' Substitute all instances of <math>x_n</math> with <math>e^{u_n}</math>.


<math>{e^{{3}{u_1}}}*{e^{{-4}{u_2}}} + {e^{{2}{u_1}}} + {{e^{{\frac{2}{3}}u_2}}} \leq 4 </math>


'''Step 3:''' Simplify by applying exponent properties.


<math> {e^{3{u_1} - 4{u_2}}} + {e^{{2}{u_1}}} + {{e^{{\frac{2}{3}}{u_2}}}} \leq 4 </math>
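
These three steps can be reproduced symbolically. The sketch below assumes the SymPy library is available and performs the same substitution and simplification on the left-hand side of the example constraint.

<syntaxhighlight lang="python">
# Sketch: reproduce Steps 1-3 of the numerical example with SymPy.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)
u1, u2 = sp.symbols('u1 u2', real=True)

lhs = x1**3 / x2**4 + x1**2 + x2**sp.Rational(2, 3)        # original left-hand side
transformed = lhs.subs({x1: sp.exp(u1), x2: sp.exp(u2)})    # Step 2: x_n -> e^{u_n}
simplified = sp.powsimp(transformed, force=True)            # Step 3: combine exponents

print(simplified)   # exp(3*u1 - 4*u2) + exp(2*u1) + exp(2*u2/3), up to term order
</syntaxhighlight>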


===Example of Convexification in MINLP ===


The following MINLP problem can take a convexification approach using exponential transformation.


<math>\begin{align}
\min & \quad Z = 5{x_1^2}{x_2^8} + 2{x_1}{x_2^2} + {x_2^3} + 5{y_1} + 2 {y_2^2} \\
s.t. & \quad {x_1} \leq 7{x_2^{0.2}} \\
& \quad 2{x_1^3} - y_1^2 \leq 1 \\
& \quad x_1 \geq 0 \\
& \quad x_2 \leq 4 \\
& \quad y_1 \isin \left \{ 0,1 \right \} \quad y_2 \isin \left \{ 0,1 \right \}
\end{align} </math>


'''Step 1:''' Apply the exponential transformation to continuous variables <math>x_1</math> and <math>x_2</math> by substituting <math>x_1 = e^{u_1}</math> and <math>x_2 = e^{u_2} </math>.


<math>\begin{align}
\min & \quad Z = 5{e^{2{u_1}}}{e^{8{u_2}}} + 2{e^{u_1}}{e^{2u_2}} + {e^{{3}{u_2}}} + 5{y_1} + 2 {y_2^2} \\
s.t. & \quad {e^{u_1}} \leq 7{e^{0.2{u_2}}} \\
& \quad 2{e^{3{u_1}}} - y_1^2 \leq 1 \\
& \quad e^{u_1} \geq 0 \\
& \quad e^{u_2} \leq 4 \\
& \quad y_1 \isin \left \{ 0,1 \right \} \quad y_2 \isin \left \{ 0,1 \right \}
\end{align}</math>


'''Step 2:''' Simplify using the properties of exponents (combining each product of exponential terms into a single exponential of the summed exponents) and take the natural logarithm of both sides of the monomial constraints.


<math>\begin{align}
\min & \quad Z = 5{e^{2{u_1}+8{u_2}}} + 2{e^{{u_1}+2{u_2}}} + {e^{3{u_2}}} + 5{y_1} + 2 {y_2^2}  \\
s.t. & \quad u_1 \leq \ln 7 + 0.2{u_2} \\
& \quad 2{e^{3{u_1}}} - y_1^2 \leq 1 \\
& \quad {u_2} \leq \ln 4 \\
& \quad y_1 \isin \left \{ 0,1 \right \} \quad y_2 \isin \left \{ 0,1 \right \}
\end{align}</math>


Note that <math> u_1 </math> is unbounded below: the original constraint <math> x_1 \geq 0 </math> is satisfied by <math> e^{u_1} > 0 </math> for every <math> u_1 </math>, and <math> \ln 0 </math> is undefined.


'''Step 3:''' Simplify binary variables by substituting <math>{y_1}^2</math> with <math>{y_1}</math> and <math>{y_2}^2</math> with <math>{y_2}</math>, since each binary variable is either 0 or 1 and raising it to a power does not change the solution space.


<math>\begin{align}
\min & \quad Z = 5{e^{2{u_1}+8{u_2}}} + 2{e^{{u_1}+2{u_2}}} + {e^{3{u_2}}} + 5{y_1} + 2 {y_2}  \\
s.t. & \quad u_1 \leq \ln 7 + 0.2{u_2} \\
& \quad 2{e^{3{u_1}}} - y_1 \leq 1 \\
& \quad {u_2} \leq \ln 4 \\
& \quad y_1 \isin \left \{ 0,1 \right \} \quad y_2 \isin \left \{ 0,1 \right \}
\end{align}</math>
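
At this point the problem is a convex MINLP, so a conventional solver can be applied directly. The sketch below models the transformed problem with Pyomo in Python; both the use of Pyomo and the solver name "bonmin" are assumptions for illustration, and any locally available convex MINLP solver could be substituted.

<syntaxhighlight lang="python">
# Sketch: hand the convexified MINLP from Step 3 to an MINLP solver via Pyomo.
import math
from pyomo.environ import (ConcreteModel, Var, Binary, Reals, Objective,
                           Constraint, SolverFactory, exp, minimize)

m = ConcreteModel()
m.u1 = Var(domain=Reals)                              # u1 = ln(x1), unbounded below
m.u2 = Var(domain=Reals, bounds=(None, math.log(4)))  # u2 = ln(x2) <= ln 4
m.y1 = Var(domain=Binary)
m.y2 = Var(domain=Binary)

# Transformed convex objective from Step 3.
m.obj = Objective(
    expr=5 * exp(2 * m.u1 + 8 * m.u2) + 2 * exp(m.u1 + 2 * m.u2)
         + exp(3 * m.u2) + 5 * m.y1 + 2 * m.y2,
    sense=minimize,
)
m.c1 = Constraint(expr=m.u1 <= math.log(7) + 0.2 * m.u2)   # u1 <= ln 7 + 0.2 u2
m.c2 = Constraint(expr=2 * exp(3 * m.u1) - m.y1 <= 1)

SolverFactory("bonmin").solve(m)                      # assumed solver choice
print(m.u1.value, m.u2.value, m.y1.value, m.y2.value)
</syntaxhighlight>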


==== Convexity Check ====
For the example above, the Hessian of the transformed objective function is as follows: <ref> M. Chiang, [https://www.princeton.edu/~chiangm/gp.pdf "Geometric Programming for Communication Systems]," 2005. </ref>


<math>\begin{bmatrix}
{\partial^2 Z(u) \over \partial u_1^2} &  {\partial^2 Z(u) \over \partial u_1\partial u_2} \\
{\partial^2 Z(u) \over \partial u_2\partial u_1}  & {\partial^2 Z(u) \over \partial u_2^2} \\
\end{bmatrix}</math>




'''Step 1:''' Solve for each partial derivative.


<math>\begin{align}
{\partial Z(u) \over \partial u_1} = 10{e^{2{u_1}}}{e^{8{u_2}}} + 2{e^{u_1}}{e^{2u_2}} \qquad & {\partial Z(u) \over \partial u_2} = 40{e^{2{u_1}}}{e^{8{u_2}}} + 4{e^{u_1}}{e^{2u_2}} + 3{e^{3u_2}} \\
{\partial^2 Z(u) \over \partial u_1^2} = 20{e^{2{u_1}}}{e^{8{u_2}}} + 2{e^{u_1}}{e^{2u_2}} \qquad & {\partial^2 Z(u) \over \partial u_2^2} = 320{e^{2{u_1}}}{e^{8{u_2}}} + 8{e^{u_1}}{e^{2u_2}} + 9{e^{3u_2}} \\
{\partial^2 Z(u) \over \partial u_1\partial u_2} = 80{e^{2{u_1}}}{e^{8{u_2}}} + 4{e^{u_1}}{e^{2u_2}} \qquad & {\partial^2 Z(u) \over \partial u_2\partial u_1} = 80{e^{2{u_1}}}{e^{8{u_2}}} + 4{e^{u_1}}{e^{2u_2}}
\end{align}</math>




'''Step 2:''' Construct Hessian matrix, <math>H(x)</math>, from second derivatives.


<math>H(x) =
\begin{bmatrix}
20{e^{2{u_1}}}{e^{8{u_2}}} + 2{e^{u_1}}{e^{2u_2}} \qquad & 80{e^{2{u_1}}}{e^{8{u_2}}} + 4{e^{u_1}}{e^{2u_2}} \\
80{e^{2{u_1}}}{e^{8{u_2}}} + 4{e^{u_1}}{e^{2u_2}} \qquad & 320{e^{2{u_1}}}{e^{8{u_2}}} + 8{e^{u_1}}{e^{2u_2}} + 9{e^{3u_2}}
\end{bmatrix}</math>




The diagonal entries of <math>H(x)</math> are sums of positive exponential terms, and its determinant, <math>160e^{3{u_1}+10{u_2}} + 180e^{2{u_1}+11{u_2}} + 18e^{{u_1}+5{u_2}}</math>, is strictly positive for every <math>(u_1, u_2)</math>. Both leading principal minors are therefore positive, so <math>H(x)</math> is positive-definite and the transformed objective function is strictly convex.
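
This check can also be confirmed symbolically. The sketch below, again assuming SymPy is available, builds the transformed objective with the linear binary terms dropped (they do not affect the Hessian), computes the Hessian, and prints the two leading principal minors, both of which are sums of positive exponential terms.

<syntaxhighlight lang="python">
# Sketch: symbolic convexity check of the transformed objective.
import sympy as sp

u1, u2 = sp.symbols('u1 u2', real=True)
Z = 5 * sp.exp(2*u1 + 8*u2) + 2 * sp.exp(u1 + 2*u2) + sp.exp(3*u2)

H = sp.hessian(Z, (u1, u2))
print(sp.expand(H[0, 0]))    # 20*exp(2*u1 + 8*u2) + 2*exp(u1 + 2*u2)
print(sp.expand(H.det()))    # 160*exp(3*u1 + 10*u2) + 180*exp(2*u1 + 11*u2) + 18*exp(u1 + 5*u2)
</syntaxhighlight>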


==Applications==


Currently, various applications of exponential transformation can be seen in several published journal articles and industry practices. Many of these applications use exponential transformation to convexify their problem space. Due to the similarities between exponential and logarithmic transformations, a combination of both approaches is typically used in practical solutions.   


=== Mechanical Engineering Applications ===
A global optimization approach is explored for the synthesis of heat exchanger networks. As seen in equations (34) and (35) of the paper by Björk and Westerlund, they employ an exponential transformation to convexify their optimization problem. <ref>K. J. Björk and T. Westerlund, [https://doi.org/10.1016/S0098-1354(02)00129-1 "Global optimization of heat exchanger network synthesis problems with and without the isothermal mixing assumption]," ''Computers & Chemical Engineering'', vol. 26, issue 11, pp. 1581-1593, 2002.</ref>
=== Electrical Engineering Application ===
In the optimization of VLSI circuit performance, a special geometric program defined as a '''unary geometric program''' is presented. The unary geometric program is a posynomial as defined in the [[#Theory, Methodology, and Algorithmic Discussions|Theory, Methodology, and Algorithmic Discussions]] section. The unary geometric program is derived through a greedy algorithm which implements a logarithmic transformation within lemma 5. <ref>C. Chu and D. F. Wong, [http://home.eng.iastate.edu/~cnchu/pubs/j08.pdf "VLSI Circuit Performance Optimization by Geometric Programming]," ''Annals of Operations Research'', vol. 105, pp. 37-60, 2001.</ref> While this is not a specific exponential transformation example, logarithmic transformations are within the same family and can also be used to convexify geometric programs.  
=== Machining Economics ===
Applications in economics can be seen through geometric programming approaches. Examples and applications include analyzing the life of cutting tools in machining. In this approach, exponential transformations are used to convexify the problem. <ref>T. R. Jefferson and C. H. Scott, [https://link.springer.com/article/10.1007%2FBF02591746 "Quadratic geometric programming with application to machining economics]," ''Mathematical Programming'', vol. 31, pp. 137-152, 1985.</ref>


Overall, exponential transformations can be applied anywhere a geometric programming approach is taken to optimize the solution space. Some applications may perform a logarithmic transformation instead of an exponential transformation.  


==Conclusion==
Exponential transformation is a useful method to convexify geometric MINLP and obtain a global solution to the problem. Exponential transformation does not alter the bounds of the problem and allows for a convex objective function and constraints given that the prerequisite conditions described are satisfied. Geometric programming transformation can be further explored through logarithmic transformation to address convexification.


==References==
<references />
