Optimization with absolute values

From Cornell University Computational Optimization Open Textbook - Optimization Wiki
Revision as of 23:30, 13 December 2020

Authors: Matthew Chan (mdc297), Yilian Yin (yy896), Brian Amado (ba392), Peter Williams (pmw99), Dewei Xiao (dx58) - SYSEN 5800 Fall 2020

Steward: Fengqi You

Introduction

Absolute values can make it difficult to determine the optimal solution when handled without first converting the problem to standard form. Converting the objective function is a good first step: each absolute value term is replaced with a new variable, after which the problem can be solved using linear programming techniques. Without this conversion, the absolute value terms make the problem nonlinear. Additional constraints must be added to bind each new variable to the term it replaces.

Method

Defining Absolute Values

An absolute value of a real number can be described as its distance away from zero, or the non-negative magnitude of the number. [1] Thus,

$|x| = \begin{cases} x, & x \ge 0 \\ -x, & x < 0 \end{cases}$

Absolute values can exist in linear optimization problems in two primary instances: in constraints and in the objective function. [2]

Absolute Values in Constraints

Within constraints, absolute value relations can be transformed into one of the following forms:

$|X| \le C$ or $|X| \ge C$

where $X$ is a linear combination ($X = a_1x_1 + a_2x_2 + \dots + a_nx_n$, where $a_i$ are constants) and $C$ is a constant $\ge 0$.

Form $|X| \le C$ when $C = 0$

In this form, the only possible solution is $X = 0$, simplifying the constraint to the linear equation $X = 0$. Note that this solution also occurs if the constraint is in the form $|X| = 0$, due to the same conclusion that the only possible solution is $X = 0$.

Form $|X| \le C$ when $C > 0$

The second form a linear constraint can exist in is $|X| \le C$ with $C > 0$. In this case, an equivalent feasible solution can be described by splitting the constraint into two:

$X \le C$ and $X \ge -C$

The solution can be understood visually, since $X$ must lie between $-C$ and $C$, as shown below:

[Figure: number line showing the feasible interval $-C \le X \le C$]

Form $|X| \ge C$ when $C > 0$

Visually, the solution space for the last form, $|X| \ge C$ with $C > 0$, is the complement of the second solution above, resulting in the following representation:

[Figure: number line showing the feasible region $X \le -C$ or $X \ge C$]

In expression form, the solutions can be written as:

$X \le -C$ or $X \ge C$

As seen visually, the feasible region has a gap and is thus non-convex. The expressions also make it impossible for both to hold true simultaneously. This means that it is not possible to transform constraints of this form into linear equations. [3]

An approach to reach a solution for this particular case exists in the form of Mixed-Integer Linear Programming, where only one of the equations above is “active”.

The inequality can be reformulated into the following pair of constraints:

$X \le -C + My$

$X \ge C - M(1 - y)$

With this new set of constraints, a large constant $M$ is introduced, along with a binary variable $y$. So long as $M$ is sufficiently larger than the upper bound of $|X|$, the large constant multiplied with the binary variable ensures that one of the constraints must be satisfied. For instance, if $y = 0$, the new constraints resolve to:

$X \le -C$ and $X \ge C - M$

Since $M$ is sufficiently large, the latter constraint will always be satisfied, leaving only one relation active: $X \le -C$. Functionally, this allows for the XOR logical operation of $X \le -C$ and $X \ge C$.
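As a sanity check, a minimal sketch in Python (with assumed illustrative values $C = 2$ and $M = 100$) can verify that the big-M pair of constraints, taken over both values of the binary variable, reproduces exactly the non-convex constraint $|X| \ge C$:

```python
# Big-M reformulation of the non-convex constraint |X| >= C:
#   X <= -C + M*y   and   X >= C - M*(1 - y),   y binary.
# C = 2 and M = 100 are assumed values for illustration.
def big_m_feasible(x, C=2.0, M=100.0):
    # Feasible iff at least one choice of the binary y satisfies both.
    return any(x <= -C + M * y and x >= C - M * (1 - y) for y in (0, 1))

def abs_feasible(x, C=2.0):
    return abs(x) >= C

# Both feasibility tests agree on a grid of sample points.
samples = [i / 10.0 for i in range(-50, 51)]
assert all(big_m_feasible(x) == abs_feasible(x) for x in samples)
```

Choosing $y = 0$ activates $X \le -C$, while $y = 1$ activates $X \ge C$; the solver is free to pick whichever branch admits a feasible point.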

Absolute Values in Objective Functions

In objective functions, absolute value terms can be transformed only if all constraints are linear.

Similar to the case of absolute values in constraints, there are different approaches to reformulating the objective function, depending on whether the sign constraints are satisfied. The sign constraints are satisfied when the coefficients of the absolute value terms are all either:

  • positive for a minimization problem
  • negative for a maximization problem

Sign Constraints are Satisfied

At a high level, the transformation works similarly to the second case of absolute values in constraints, aiming to bound the solution space for the absolute value term with a new variable, $X'$.

If $|X|$ is the absolute value term in our objective function, two additional constraints are added to the linear program:

$X \le X'$ and $-X \le X'$

The term $|X|$ in the objective function is then replaced by $X'$, relaxing the original function into a collection of linear constraints.
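This substitution can be checked numerically. A minimal sketch (using an assumed one-variable objective, minimizing $|x - 2|$ over a grid) shows that minimizing a bounding variable $t$ subject to $t \ge x - 2$ and $t \ge -(x - 2)$ recovers the same optimum as minimizing the absolute value directly:

```python
# Minimize |x - 2| over -5 <= x <= 5, directly and via the epigraph
# substitution t >= x - 2, t >= -(x - 2), minimizing t.
xs = [i / 100.0 for i in range(-500, 501)]
direct = min(abs(x - 2) for x in xs)
# For any fixed x, the smallest feasible t is max(x - 2, -(x - 2)) = |x - 2|,
# so minimizing t over the reformulated feasible set is exact.
reform = min(max(x - 2, -(x - 2)) for x in xs)
assert direct == reform == 0.0
```

Because minimization pushes $t$ down onto the larger of its two lower bounds, the relaxation is tight at the optimum.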

Sign Constraints are Not Satisfied

In order to transform problems where the coefficient signs of the absolute value terms do not fulfill the conditions above, a conclusion similar to the last case for absolute values in constraints is reached: integer variables are needed to arrive at an LP format.

The following constraints need to be added to the problem:

$X' \le X + My$

$X' \le -X + M(1 - y)$

$X' \ge X$

$X' \ge -X$

Again, $M$ is a large constant, $X'$ is a replacement variable for $|X|$ in the objective function, and $y$ is a binary variable. The first two constraints ensure that one and only one of them is active while the other is automatically satisfied, following the same logic as above. The third and fourth constraints ensure that $X'$ must be equal to either $X$ or $-X$, whichever is non-negative. For instance, for the case of $y = 0$, the new constraints resolve to:

$X' \le X$, $X' \le -X + M$, $X' \ge X$, $X' \ge -X$

As $M$ is sufficiently large ($M$ must be at least $2|X|$ for this approach), the second constraint must be satisfied. Since $X$ is non-negative in this case, the fourth constraint must also be satisfied. The remaining constraints, $X' \le X$ and $X' \ge X$, can only be satisfied when $X' = X$ and $X$ is of non-negative signage. Together, these constraints allow for the selection of the largest $X'$ for maximization problems (or smallest for minimization problems).
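A minimal sketch (pure enumeration over a grid, with an assumed big-M value of 100) can confirm that these four constraints, taken over both values of the binary variable, leave $X' = |X|$ as the only feasible replacement value:

```python
# Constraints: X' <= X + M*y,  X' <= -X + M*(1 - y),  X' >= X,  X' >= -X,
# with y binary. M = 100 is an assumed, sufficiently large constant.
def feasible_xprime(x, M=100.0):
    # Enumerate candidate X' values on a grid; collect those feasible
    # for at least one choice of the binary variable y.
    grid = [i / 10.0 for i in range(-100, 101)]
    out = set()
    for y in (0, 1):
        for xp in grid:
            if (xp <= x + M * y and xp <= -x + M * (1 - y)
                    and xp >= x and xp >= -x):
                out.add(xp)
    return out

# The only feasible replacement value is the absolute value itself.
assert feasible_xprime(3.0) == {3.0}
assert feasible_xprime(-2.5) == {2.5}
```

With $y = 0$ the first and third constraints pin $X'$ to $X$ (requiring $X \ge 0$); with $y = 1$ the second and fourth pin it to $-X$ (requiring $X \le 0$).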

Absolute Values in Nonlinear Optimization Problems

An objective function containing absolute value quantities forms a nonlinear optimization problem, so the problem must be reformatted before proceeding. Replacing each absolute value quantity with a new variable, together with additional constraints to account for the added variable, restores a form that linear programming can handle.

Numerical Example

Example when All Sign Constraints are Satisfied

The absolute value quantities will be replaced with single variables:

We must introduce additional constraints to ensure we do not lose any information by doing this substitution:

The problem has now been reformulated as a linear programming problem that can be solved normally:

The optimum value for the objective function is , which occurs when and and .

Example when Sign Constraints are not Satisfied

The absolute value quantities will be replaced with single variables:

We must introduce additional constraints to ensure we do not lose any information by doing this substitution:

The problem has now been reformulated as a linear programming problem that can be solved normally: [4]

The optimum value for the objective function is , which occurs when and and .

Applications

Consider the problem of optimizing $\sum_j c_j |x_j|$ subject to linear constraints. This problem cannot, in general, be solved with the simplex method. The problem has a simplex method solution (with unrestricted basis entry) only if the coefficients $c_j$ are nonpositive (non-negative for minimizing problems).

The primary application of absolute-value functionals in linear programming has been absolute-value or $L_1$-metric regression analysis. Such an application is always a minimization problem with all $c_j$ equal to 1, so that the conditions required for valid use of the simplex method are met.
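As an illustration of the $L_1$-metric idea, a minimal sketch (with hypothetical data and a single location parameter) shows that minimizing a sum of absolute deviations, the objective this reformulation makes linear, is attained at the median of the data:

```python
# L1 (least-absolute-deviations) estimation: minimizing sum_i |a_i - x|
# is the classic absolute-value objective; its optimum is the median.
import statistics

data = [2.0, 3.5, 1.0, 9.0, 4.0]  # hypothetical observations
xs = [i / 100.0 for i in range(0, 1001)]  # grid search over [0, 10]
best = min(xs, key=lambda x: sum(abs(a - x) for a in data))
assert best == statistics.median(data)
```

This robustness to outliers (the 9.0 barely moves the optimum) is a major reason $L_1$ regression is formulated and solved as a linear program.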

By reformulating the original problem into a Mixed-Integer Linear Program (MILP), we can utilize known programs to solve for the optimal solution(s).

Application in Finance: Portfolio Selection

Here, the same technique used in the Numerical Example section, reduction to a linear programming problem, is applied again to reformulate the portfolio selection problem into a solvable form. An example is given below.


A portfolio is determined by what fraction of one's assets to put into each investment. [5] It can be denoted as a collection of nonnegative numbers $x_j$, $j = 1, 2, \dots, n$, where $\sum_j x_j = 1$. Because each $x_j$ stands for a portion of the assets, they sum to one. In order to get the highest reward through finding the right mix of assets, let $\mu$, a positive parameter, denote the importance of risk relative to return, and let $R_j$ denote the return in the next time period on investment $j$. The total return one would obtain from the investment is $R = \sum_j x_j R_j$. The expected return is $\mathbb{E}R = \sum_j x_j \mathbb{E}R_j$. And the mean absolute deviation from the mean (MAD) is $\mathbb{E}|R - \mathbb{E}R|$.


maximize $\mathbb{E}R - \mu\,\mathbb{E}|R - \mathbb{E}R|$

subject to $\sum_j x_j = 1$,

$x_j \ge 0$, $j = 1, 2, \dots, n$

where $R = \sum_j x_j R_j$.


Clearly, this problem is not a linear programming problem yet. Similar to the numerical example shown above, the right thing to do is to replace each absolute value with a new variable and impose inequality constraints that ensure the new variable equals the appropriate absolute value once an optimal value is obtained. To simplify the program, an average of the historical returns can be taken in order to estimate the mean expected return: $\mathbb{E}R_j \approx \bar{R}_j = \frac{1}{T}\sum_{t=1}^{T} R_j(t)$, where $R_j(t)$ is the return on investment $j$ in period $t$. Thus the objective function is turned into:

maximize $\sum_j \bar{R}_j x_j - \frac{\mu}{T} \sum_{t=1}^{T} \left| \sum_j \left( R_j(t) - \bar{R}_j \right) x_j \right|$

Now, replace each absolute value term with a new variable $y_t$, and the problem can be rewritten as:

maximize $\sum_j \bar{R}_j x_j - \frac{\mu}{T} \sum_{t=1}^{T} y_t$

subject to $-y_t \le \sum_j \left( R_j(t) - \bar{R}_j \right) x_j \le y_t$, $t = 1, 2, \dots, T$

where

$x_j \ge 0$, $j = 1, 2, \dots, n$

$y_t \ge 0$, $t = 1, 2, \dots, T$


Finally, after these simplifications and substitutions, the original problem has been converted into a linear program, which is easier to solve.
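A minimal sketch (hypothetical returns for two assets over three periods, with an assumed risk parameter $\mu = 2$) verifies that when each new variable $y_t$ takes its optimal value, the absolute deviation it bounds, the reformulated objective matches the original MAD objective:

```python
# Hypothetical historical returns R[t][j] for n = 2 assets, T = 3 periods.
R = [[1.10, 0.98], [0.95, 1.07], [1.05, 1.01]]
T, n = len(R), len(R[0])
mean = [sum(R[t][j] for t in range(T)) / T for j in range(n)]
mu = 2.0  # assumed risk-aversion parameter

def objective(x):
    # Original (nonlinear) objective: expected return minus mu * MAD.
    reward = sum(mean[j] * x[j] for j in range(n))
    mad = sum(abs(sum((R[t][j] - mean[j]) * x[j] for j in range(n)))
              for t in range(T)) / T
    return reward - mu * mad

# LP reformulation: each absolute deviation is replaced by y_t with
# -y_t <= sum_j (R[t][j] - mean[j]) x_j <= y_t; at the optimum y_t
# equals the absolute deviation, so the objectives coincide.
x = (0.4, 0.6)  # an example portfolio, fractions summing to one
y = [abs(sum((R[t][j] - mean[j]) * x[j] for j in range(n)))
     for t in range(T)]
reform = sum(mean[j] * x[j] for j in range(n)) - mu * sum(y) / T
assert abs(reform - objective(x)) < 1e-12
```

In an actual solve, an LP solver would choose both $x$ and $y$ jointly; maximization drives each $y_t$ down onto the absolute deviation, which is what makes the substitution exact.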


Data Transfer Rate

Another application of optimization with absolute values is improving data transfer rates. Faster-than-Nyquist signaling (FTNS) is a framework for transmitting signals beyond the Nyquist rate. The reference for this section proposes a 24.7% faster symbol rate by utilizing sum-of-absolute-values optimization. [6]

The initial model is defined as follows:

where t ∈ R denotes the continuous time index, N ∈ N is the number of transmitted symbols in each transmission period, T > 0 is the interval of one period, ∈ {+1, −1} are independent and identically distributed (i.i.d.) binary symbols [i.e., binary phase shift keying (BPSK)], and are the modulation pulses.

Reformulated as a convex optimization problem, and by repeatedly applying Newton's method to the absolute value terms, approximate solutions can be achieved.

Conclusion

The presence of an absolute value within the objective function prevents the use of certain optimization methods. Solving these problems requires that the function be manipulated before continuing with linear programming techniques such as the simplex method. The applications of optimization with absolute values range from the financial sector, where portfolio returns can be improved, to the digital world, where data transfer rates can be increased. The formulation of these problems must take absolute values into account in order to model them correctly. Absolute values inherently make these problems nonlinear, so optimal solutions can be determined only after reformulating them into linear programs.

References

  1. Mendelson, Elliott, Schaum's Outline of Beginning Calculus, McGraw-Hill Professional, 2008. https://books.google.com/books?id=A8hAm38zsCMC&pg=PA2#v=onepage&q&f=false
  2. "Absolute Values." lp_solve, http://lpsolve.sourceforge.net/. Accessed 20 Nov. 2020.
  3. Optimization Methods in Management Science / Operations Research. Massachusetts Institute of Technology, Spring 2013, https://ocw.mit.edu/courses/sloan-school-of-management/15-053-optimization-methods-in-management-science-spring-2013/tutorials/MIT15_053S13_tut04.pdf. Accessed 20 Nov. 2020.
  4. Shanno, David F., and Roman L. Weil. “'Linear' Programming with Absolute-Value Functionals.” Operations Research, vol. 19, no. 1, 1971, pp. 120–124. Accessed 13 Dec. 2020. JSTOR, www.jstor.org/stable/168871.
  5. Vanderbei R.J. (2008) Financial Applications. In: Linear Programming. International Series in Operations Research & Management Science, vol 114. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-74388-2_13 https://link.springer.com/chapter/10.1007/978-0-387-74388-2_13
  6. Sasahara, Hampei & Hayashi, Kazunori & Nagahara, Masaaki. (2016). Symbol Detection for Faster-Than-Nyquist Signaling by Sum-of-Absolute-Values Optimization. IEEE Signal Processing Letters. PP. 1-1. 10.1109/LSP.2016.2625839. https://www.researchgate.net/publication/309745511_Symbol_Detection_for_Faster-Than-Nyquist_Signaling_by_Sum-of-Absolute-Values_Optimization