Logarithmic transformation



Author: Hassan Ali (ChE 345 Spring 2015)

Steward: Dajun Yue, Fengqi You

Logarithmic transformation is a method used to change geometric programs into their convex forms. A geometric program, or GP, is a type of global optimization problem in which a posynomial objective is minimized subject to posynomial and monomial constraint functions, a formulation that allows one to solve certain non-linear programming problems. All geometric programs contain functions called posynomials that are inherently non-convex. Because of this, solving geometric programs can be computationally intensive and finding a globally optimal solution is not guaranteed. However, by applying a logarithmic transformation to a problem, one can solve for the globally optimal solution more quickly and easily. A logarithmic transformation is not the only transformation that accomplishes this; an exponential transformation can be used to obtain the same result. A logarithmic transformation can also be used on signomial programs, which are an extension of geometric programs.[1][2][3]

Background

Geometric Programming

Geometric programming was first discussed in the early 1960s as a class of optimization problems that could be solved with geometric inequalities. Duffin and his colleagues went on to formulate the basic theory of geometric programming and its applications in their seminal 1967 textbook Geometric Programming Theory and Application.[4] One of their insights was that problems with highly non-linear constraints could be stated equivalently as a dual program. In other words, a global minimizing solution could be found by solving the corresponding dual maximization problem, because the dual constraints are linear. All one would need to do is change the geometric programming problem from its standard posynomial form into this "dual form" and solve using the now-linear constraints.[3] The answer could only be guaranteed locally optimal, however, as one was still dealing with non-linear programming (NLP) methods.

A standard GP problem takes the following form:

\begin{aligned}
\text{minimize} \quad & f_0(x) \\
\text{subject to} \quad & f_i(x) \le 1, \quad i = 1, \ldots, m \\
& g_j(x) = 1, \quad j = 1, \ldots, p
\end{aligned}

where $f_0, \ldots, f_m$ are posynomials, $g_1, \ldots, g_p$ are monomials, and the variables $x = (x_1, \ldots, x_n)$ are all positive.


Posynomials

Posynomials are non-linear functions consisting of a sum of terms, each of which is a positive constant multiplied by a product of variables raised to real powers. More specifically, they are functions of the form:

f(x) = \sum_{k=1}^{K} c_k \, x_1^{a_{1k}} x_2^{a_{2k}} \cdots x_n^{a_{nk}}

where the variables $x_i$ and the coefficients $c_k$ are positive real numbers, and all of the exponents $a_{ik}$ are real numbers.[4]

Monomials are the special case of a posynomial in which there is no sum of terms but rather a single term and its constant. A posynomial is then simply a sum of monomials. An example of a monomial is

2.3 \, x^{2} y^{-0.5}

and an example of a posynomial is

2.3 \, x^{2} y^{-0.5} + 4 \, x y^{3}

A posynomial is, however, not a general affine function such as $2x - 3y + 4$, since every coefficient in a posynomial must be positive.
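To make the definitions concrete, the following is a minimal NumPy sketch (the coefficients and exponents are illustrative, not taken from any source problem) that represents a posynomial by a vector of positive coefficients and a matrix of real exponents, one row per monomial term, and evaluates it at a positive point.

```python
import numpy as np

def posynomial(x, c, A):
    """Evaluate f(x) = sum_k c[k] * prod_i x[i]**A[k, i] at a positive point x.

    c : positive coefficients, shape (K,); K = 1 gives a monomial.
    A : real exponents, shape (K, n), one row per monomial term.
    """
    x = np.asarray(x, dtype=float)
    return float(np.sum(c * np.prod(x ** A, axis=1)))

# Monomial 2.3 * x**2 * y**-0.5 (a single term)
c_mono = np.array([2.3])
A_mono = np.array([[2.0, -0.5]])

# Posynomial 2.3 * x**2 * y**-0.5 + 4 * x * y**3 (a sum of two monomials)
c_posy = np.array([2.3, 4.0])
A_posy = np.array([[2.0, -0.5],
                   [1.0, 3.0]])

point = [1.5, 0.7]
print(posynomial(point, c_mono, A_mono))
print(posynomial(point, c_posy, A_posy))
```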


Geometric Programming Derivation

GP problems are not always given in standard form and sometimes must be derived.[5] Take the following example of an NLP problem:

\begin{aligned}
\text{maximize} \quad & x / y \\
\text{subject to} \quad & 2 \le x \le 3 \\
& x^{2} + 3y/z \le \sqrt{y} \\
& x / y = z^{2}
\end{aligned}

where the variables $x$, $y$, and $z$ are positive. The standard GP form of this problem would be

\begin{aligned}
\text{minimize} \quad & x^{-1} y \\
\text{subject to} \quad & 2 x^{-1} \le 1, \quad (1/3)\, x \le 1 \\
& x^{2} y^{-1/2} + 3\, y^{1/2} z^{-1} \le 1 \\
& x y^{-1} z^{-2} = 1
\end{aligned}

A standard GP problem then minimizes a posynomial objective while constraining it with posynomial inequality constraints and monomial equality constraints. Posynomials are positive functions and become convex under a logarithmic change of variables. This is an important property, as it is what allows standard GP problems to undergo logarithmic transformations.[1]

Logarithmic Transformation

The Duffin "dual-program" method for solving GP problems is still in use, but as logarithmic and exponential transformation methods became better understood it became easier to simply use them to change the standard GP problem into a convex GP problem and solve it with an interior-point method. Interior-point methods can solve GP problems quickly and robustly, as they require essentially no tuning of parameters. Most importantly, the final solution obtained through this method is guaranteed to be the global optimum. Solving a standard GP problem directly, on the other hand, is like solving an NLP problem, the only difference being that a GP is more restricted in its use of posynomials and monomials in its objective function and constraints. This makes standard GP problems relatively efficient to solve, but as with NLP problems there is no guarantee the solution is globally optimal; in addition, an initial guess must be provided and solver parameters must be carefully chosen.[5]

Logarithmic transformations are therefore very helpful, as they transform the standard GP form into its convex form through a logarithmic change of variables together with a logarithmic transformation of the objective and constraint functions. Instead of minimizing the objective $f_0(x)$, the logarithm $\log f_0(x)$ is minimized. Each variable $x_i$ is replaced with its logarithm $y_i = \log x_i$, so that $x_i = e^{y_i}$. The inequality constraints become $\log f_i(e^{y}) \le 0$ instead of $f_i(x) \le 1$, and the equality constraints become $\log g_j(e^{y}) = 0$ instead of $g_j(x) = 1$.[5]

Therefore the convex form of a GP problem is

\begin{aligned}
\text{minimize} \quad & \log f_0(e^{y}) \\
\text{subject to} \quad & \log f_i(e^{y}) \le 0, \quad i = 1, \ldots, m \\
& \log g_j(e^{y}) = 0, \quad j = 1, \ldots, p
\end{aligned}

where $f_i$ are posynomials, $g_j$ are monomials, and $e^{y}$ denotes the componentwise exponential $(e^{y_1}, \ldots, e^{y_n})$. This reformulation is possible as long as the coefficients in the functions are positive.


Methods

Figure: the log-sum-exp function.

To see the transformation more clearly, the steps can be shown in detail. If the objective function is a monomial of the form

f(x) = c \, x_1^{a_1} x_2^{a_2} \cdots x_n^{a_n}

then the new objective function will be a function of a new variable $y$, where $y_i = \log x_i$ and $x_i = e^{y_i}$. It will also be the logarithm of the original objective function, so

\tilde f(y) = \log f(e^{y_1}, \ldots, e^{y_n}) = a^{T} y + b

where

a = (a_1, \ldots, a_n), \qquad b = \log c

which is an affine function of $y$.
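As a small concrete instance (the particular monomial is chosen here only for illustration):

\[
f(x) = 5 \, x_1^{2} x_2^{-1}
\quad\Longrightarrow\quad
\tilde f(y) = \log 5 + 2 y_1 - y_2
\]

which is affine in $y$ with $a = (2, -1)$ and $b = \log 5$.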

If the above objective function was instead a monomial equality constraint, such that

c \, x_1^{a_1} x_2^{a_2} \cdots x_n^{a_n} = 1

then the new equality constraint would be equal to zero and simplifies to a linear equation of the form

a^{T} y + b = 0

Any GP with only monomials reduces to a linear program after a logarithmic transformation. Checking for linearity in the all-monomial case will confirm whether the transformation was done correctly.[5]


If the GP has posynomials, the problem becomes more complex: the logarithm of a sum of terms cannot be simplified beyond its log-sum form. So if the objective function is the posynomial

f(x) = \sum_{k=1}^{N} c_k \, x_1^{a_{1k}} x_2^{a_{2k}} \cdots x_n^{a_{nk}}

where each $c_k > 0$, then the new objective function will be

\tilde f(y) = \log \sum_{k=1}^{N} e^{a_k^{T} y + b_k}

where

a_k = (a_{1k}, \ldots, a_{nk}), \qquad b_k = \log c_k

for $k = 1, \ldots, N$. The above can be written in the simpler form

\tilde f(y) = \operatorname{lse}(A y + b)

where $A$ is an $N \times n$ matrix with rows $a_k^{T}$, $b = (b_1, \ldots, b_N)$, and lse is the log-sum-exp function, which is convex.[6]
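A minimal NumPy/SciPy sketch of this step (the coefficients and exponents are again illustrative): it assembles $A$ and $b$ for a small posynomial and checks numerically that $\operatorname{lse}(Ay + b)$ agrees with $\log f(x)$ at $y = \log x$.

```python
import numpy as np
from scipy.special import logsumexp

# Posynomial f(x) = sum_k c[k] * prod_i x[i]**A[k, i], with illustrative data
c = np.array([2.3, 4.0, 0.5])        # positive coefficients c_k
A = np.array([[2.0, -0.5],           # N-by-n exponent matrix,
              [1.0, 3.0],            # one row a_k per monomial term
              [-1.0, 1.5]])
b = np.log(c)                        # b_k = log c_k

x = np.array([1.5, 0.7])             # a positive test point
y = np.log(x)                        # change of variables y = log x

f_x = np.sum(c * np.prod(x ** A, axis=1))   # original posynomial value f(x)
lse_y = logsumexp(A @ y + b)                # transformed objective lse(Ay + b)

print(np.log(f_x), lse_y)            # the two values agree
```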


The reason the logarithmic transformation is preferred, even though an exponential transformation can be done in fewer steps, is that the exponential function can produce very large values. This may create numerical problems as well as complications in optimization software. Taking a logarithm keeps the working values small, which gives the logarithmic transformation a distinct advantage.[6]
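As a small numerical illustration of this point (the numbers are picked only to overflow double precision), exponentiating large intermediate values overflows, while the same quantity evaluated on the log scale stays moderate:

```python
import numpy as np
from scipy.special import logsumexp

z = np.array([800.0, 801.0])       # large values of a_k^T y + b_k

naive = np.log(np.sum(np.exp(z)))  # exp(800) overflows to inf, so this is inf
stable = logsumexp(z)              # stays on the log scale: about 801.31

print(naive, stable)
```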

Examples

A few objective functions and constraints are provided in this section as illustrative examples.

EXAMPLE 1


After a logarithmic transformation, this GP becomes:


EXAMPLE 2

After a logarithmic transformation, this GP becomes:


EXAMPLE 3

After a logarithmic transformation, this GP becomes:


To prove that the answers to the above examples are indeed convex, one could verify their convexity through a positive-definiteness test of the Hessian. The reformulations can also be tested in GAMS.
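As an alternative to GAMS, a GP can also be posed and solved directly in Python with CVXPY, which applies this logarithmic (log-log) transformation internally when called with gp=True. The small problem below is purely illustrative and is not one of the examples above.

```python
import cvxpy as cp

# Positive variables, as required for a geometric program
x = cp.Variable(pos=True)
y = cp.Variable(pos=True)
z = cp.Variable(pos=True)

# Posynomial objective with a posynomial inequality and a monomial lower bound
objective = cp.Minimize(x * y + 2 * x * z)
constraints = [
    x + 2 * y + 3 * z <= 10,   # posynomial <= constant
    x * y * z >= 2,            # monomial bounded below
]

problem = cp.Problem(objective, constraints)
problem.solve(gp=True)         # CVXPY performs the logarithmic transformation
print(problem.value, x.value, y.value, z.value)
```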

Feasibility Analysis

A standard GP problem can be infeasible, i.e. the constraints are too "tight" and do not allow for a solution. Because the constraints of a standard GP problem carry over when it is changed to its convex form via logarithmic transformation, an infeasible GP problem remains infeasible regardless of convexity. This is why it is important to check for infeasibility prior to carrying out the logarithmic transformation. This is done by performing a feasibility analysis and setting up the GP as follows:

\begin{aligned}
\text{minimize} \quad & s \\
\text{subject to} \quad & f_i(x) \le s, \quad i = 1, \ldots, m \\
& g_j(x) = 1, \quad j = 1, \ldots, p \\
& s \ge 1
\end{aligned}

As $s$ nears a value of 1, the original problem nears feasibility. The goal is to find a solution with $s = 1$, which confirms that the problem is feasible.[1]
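A sketch of this feasibility phase, again in CVXPY with gp=True (the two constraints are illustrative and deliberately inconsistent): each posynomial inequality $f_i(x) \le 1$ is relaxed to $f_i(x) \le s$ with $s \ge 1$, and $s$ is minimized; the original constraints are feasible exactly when the optimal value is $s = 1$.

```python
import cvxpy as cp

x = cp.Variable(pos=True)
y = cp.Variable(pos=True)
s = cp.Variable(pos=True)    # slack measuring how far the constraints are from holding

# Relax f_i(x) <= 1 to f_i(x) <= s and minimize s, keeping s >= 1
constraints = [
    x + y <= s,              # originally x + y <= 1
    2 * x**-1 * y**-1 <= s,  # originally 2/(x*y) <= 1
    s >= 1,
]
problem = cp.Problem(cp.Minimize(s), constraints)
problem.solve(gp=True)
print(problem.value)         # about 2 here, so the original constraints are infeasible
```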

Applications

Applications of geometric programming include:

  1. Electrical engineering[5]
    • Power control
    • Wire sizing
    • Routing
    • Optimal doping profile in semiconductor device engineering
    • Digital circuit gate sizing
  2. Chemical engineering[5]
    • Optimal reactor design
    • Mass transfer optimization
    • Kinetics
    • Maximizing reliability of reactors
  3. Other
    • Transportation[6]
    • Economic models
    • Inventory models


A good example among these varied applications concerns optimal reactor design. The chemical systems within a reactor follow non-convex kinetics equations. If one wanted to optimize a reactor for a given system with given reaction rates, one could do so easily with a logarithmic transformation. Take a reaction A + B → C: applying a logarithmic change of variables to the rate expression yields a convex relation, as sketched below, and with this new formulation the reactor design can be optimized for the given system.
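For instance, assuming elementary mass-action kinetics for this reaction (an assumption made here purely for illustration), the rate is a monomial in the concentrations, so the logarithmic change of variables makes it affine:

\[
r = k \, C_A C_B
\quad\Longrightarrow\quad
\log r = \log k + \log C_A + \log C_B
\]

which is affine in the transformed variables $\log C_A$ and $\log C_B$.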

Conclusion

Geometric programming can be applied to a wide range of difficult optimization problems, but due to the nature of their complexity these problems cannot be solved easily. Solving geometric programs can be computationally intensive, and finding a globally optimal solution is not guaranteed; this is because GPs involve posynomials and monomials, which are by nature non-linear. However, by applying a logarithmic transformation to a problem, one can solve for the global optimum more quickly and easily because the GP is changed to its convex form. Logarithmic transformations offer an advantage over other transformations in that they allow one to work with smaller values, which are less problematic numerically. Logarithmic transformations can be used wherever geometric programming is used, which includes applications in electrical engineering, chemical engineering, and economics.

References

  1. Chiang, M. (2005). Geometric Programming for Communication Systems, Now Publishers Inc., ISBN 1-933019-09-3.
  2. Duffin, R.J. (1970). "Linearizing Geometric Programs", SIAM Review 12 (2).
  3. Biswal, K.K., Ojha, A.K. (2010). "Posynomial Geometric Programming Problems with Multiple Parameters", Journal of Computing 2 (1).
  4. Duffin, R.J., Peterson, E.L., Zener, C.M. (1967). Geometric Programming Theory and Application, John Wiley & Sons Inc., ISBN 978-0471223702.
  5. Boyd, S., Hassibi, A., Kim, S.J., Vandenberghe, L. (2007). "A Tutorial on Geometric Programming", Optim Eng 8: 67-127.
  6. Calafiore, G.C., El Ghaoui, L. (2014). Optimization Models, Cambridge University Press, ISBN 978-1107050877.