Optimization with absolute values (revised 2020-12-14) - Cornell University Computational Optimization Open Textbook - Optimization Wiki
<hr />
<div>Authors: Matthew Chan (mdc297), Yilian Yin (yy896), Brian Amado (ba392), Peter Williams (pmw99), Dewei Xiao (dx58) - SYSEN 5800 Fall 2020<br />
<br />
Steward: Fengqi You<br />
<br />
== Introduction ==<br />
Absolute values can make it difficult to determine the optimal solution when handled without first converting the problem to standard form. Converting the objective function is a good first step in solving optimization problems with absolute values; one can then solve the problem using linear programming techniques. Because an absolute value term makes the objective function nonlinear, a new variable (e.g. <math>\textstyle Z</math>) is introduced, and additional constraints must be added to find the optimal solution.<br />
<br />
== Method ==<br />
<br />
=== Defining Absolute Values ===<br />
An absolute value of a real number can be described as its distance away from zero, or the non-negative magnitude of the number. <ref> Mendelson, Elliott, Schaum's Outline of Beginning Calculus, McGraw-Hill Professional, 2008. https://books.google.com/books?id=A8hAm38zsCMC&pg=PA2#v=onepage&q&f=false </ref> Thus,<br />
<br />
<math>\displaystyle |x|={\begin{cases}-x,&{\text{if }}x<0\\x,&{\text{if }}x\geq 0\end{cases}}</math><br />
<br />
Absolute values can exist in linear optimization problems in two primary instances: in constraints and in the objective function. <ref> "Absolute Values." ''lp_solve'', http://lpsolve.sourceforge.net/. Accessed 20 Nov. 2020. </ref><br />
<br />
=== Absolute Values in Constraints ===<br />
Within constraints, absolute value relations can be transformed into one of the following forms:<br />
<br />
<math> \begin{align}<br />
|X| &= 0 \\<br />
|X| &\le C \\<br />
|X| &\ge C<br />
\end{align} </math><br />
<br />
Where <math>\textstyle X</math> is a linear combination (<math>\textstyle ax_1 + bx_2 + ...</math> where <math>\textstyle a, b</math> are constants) and <math>\textstyle C</math> is a constant <math>\textstyle > 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| = 0</math> ====<br />
In this form, the only possible solution is <math>\displaystyle X = 0</math>, which simplifies the constraint. Note that the same conclusion holds if the constraint is in the form <math>\displaystyle |X| \le 0</math>, since the only possible solution is again <math>\textstyle X = 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| \le C</math> ====<br />
The second form a linear constraint can take is <math>\displaystyle |X|\leq C</math>. In this case, an equivalent feasible region can be described by splitting the constraint into two:<br />
<br />
<math> \begin{align}<br />
X &\leq C \\<br />
-X &\leq C<br />
\end{align} </math><br />
<br />
The solution can be understood visually since <math>\textstyle X</math> must lie between <math>\textstyle -C</math> and <math>\textstyle C</math>, as shown below:<br />
<br />
[[File:Number Line X Less Than C.png|none|thumb]]<br />
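This equivalence can be verified numerically. The following minimal sketch (values chosen purely for illustration) checks that <math>\textstyle |X| \le C</math> holds exactly when both split constraints hold:<br />

```python
# Illustrative check: |X| <= C is equivalent to the pair X <= C and -X <= C.

def abs_constraint(x, c):
    return abs(x) <= c

def split_constraint(x, c):
    return x <= c and -x <= c

C = 4.0
for x in [-6.0, -4.0, -1.5, 0.0, 2.0, 4.0, 5.5]:
    assert abs_constraint(x, C) == split_constraint(x, C)
```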
<br />
==== Form when <math>\displaystyle |X| \ge C</math> ====<br />
Visually, the solution space for the last form is the complement of the second solution above, resulting in the following representation:[[File:Number Line for X Greater Than C.png|none|thumb]]In expression form, the solution requires that one (but not both) of the following holds:<br />
<br />
<math> \begin{align}<br />
X &\geq C \\<br />
-X &\geq C<br />
\end{align} </math><br />
<br />
As seen visually, the feasible region has a gap and is thus non-convex. The two expressions also cannot both hold simultaneously. This means that it is not possible to transform constraints of this form into an equivalent set of linear constraints. <ref> ''Optimization Methods in Management Science / Operations Research.'' Massachusetts Institute of Technology, Spring 2013, https://ocw.mit.edu/courses/sloan-school-of-management/15-053-optimization-methods-in-management-science-spring-2013/tutorials/MIT15_053S13_tut04.pdf. Accessed 20 Nov. 2020. </ref> <br />
<br />
An approach for this particular case exists in the form of Mixed-Integer Linear Programming (MILP), in which only one of the inequalities above is “active”.<br />
<br />
The inequality can be reformulated into the following:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge C \\<br />
-&X + N*(1-Y) \ge C \\<br />
&Y = 0, 1 <br />
\end{align} </math><br />
<br />
With this new set of constraints, a large constant <math>\textstyle N</math> is introduced, along with a binary variable <math>\textstyle Y</math>. So long as <math>\textstyle N</math> is larger than the upper bound of <math>\textstyle X + C</math>, the large constant multiplied by the binary variable ensures that one of the two constraints is automatically satisfied. For instance, if <math>\textstyle Y = 0</math>, the new constraints resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge C \\<br />
-&X + N \ge C<br />
\end{align} </math><br />
<br />
Since <math>\textstyle N</math> is sufficiently large, the latter constraint will always be satisfied, leaving only one relation active: <math>\textstyle X \ge C</math>. Functionally, this allows for the XOR logical operation of <math>\textstyle X \geq C</math> and <math>\textstyle -X \geq C</math>.<br />
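The big-M reformulation can be sketched in a few lines. The following illustrative check (with assumed values <math>\textstyle C = 3</math> and <math>\textstyle N = 100</math>, chosen much larger than any <math>\textstyle |X| + C</math> of interest) confirms that some choice of the binary <math>\textstyle Y</math> satisfies both constraints exactly when <math>\textstyle |X| \ge C</math>:<br />

```python
# Illustrative sketch: the big-M pair
#   X + N*Y >= C  and  -X + N*(1 - Y) >= C,  Y in {0, 1}
# is feasible for some Y exactly when |X| >= C.

def big_m_feasible(x, c, n):
    # Feasible if either branch (Y = 0 or Y = 1) satisfies both constraints.
    return any(x + n * y >= c and -x + n * (1 - y) >= c for y in (0, 1))

C, N = 3.0, 100.0   # assumed values; N must exceed the bound on |x| + C
for x in [-10, -3, -1, 0, 1, 3, 10]:
    assert big_m_feasible(x, C, N) == (abs(x) >= C)
```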
<br />
=== Absolute Values in Objective Functions ===<br />
To leverage these transformations for absolute values in the objective function, all constraints must be linear.<br />
<br />
Similar to the case of absolute values in constraints, there are different approaches to reformulating the objective function, depending on whether the sign constraints are satisfied. The sign constraints are satisfied when the coefficients of the absolute value terms are all either:<br />
<br />
* Positive for a minimization problem<br />
* Negative for a maximization problem<br />
<br />
==== Sign Constraints are Satisfied ====<br />
At a high level, the transformation works similarly to the second case of absolute value in constraints – aiming to bound the solution space for the absolute value term with a new variable, <math>\textstyle Z</math>.<br />
<br />
If <math>\textstyle |X|</math> is the absolute value term in our objective function, two additional constraints are added to the linear program:<br />
<br />
<math> \begin{align}<br />
&X\leq Z \\<br />
-&X\leq Z <br />
\end{align} </math><br />
<br />
The <math>\textstyle |X|</math> term in the objective function is then replaced by <math>\textstyle Z</math>, relaxing the original function into a collection of linear constraints.<br />
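The substitution is exact because, when the sign constraints are satisfied, optimization pushes <math>\textstyle Z</math> down to its smallest feasible value, which is <math>\textstyle |X|</math>. A minimal numerical sketch:<br />

```python
# Sketch: for a fixed X, the smallest Z satisfying X <= Z and -X <= Z
# is max(X, -X), i.e. exactly |X|, so replacing |X| by Z is lossless.

def smallest_valid_z(x):
    # Z must be at least x and at least -x; the tightest choice is max(x, -x).
    return max(x, -x)

for x in [-7.5, -1.0, 0.0, 2.0, 9.25]:
    assert smallest_valid_z(x) == abs(x)
```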
<br />
==== Sign Constraints are Not Satisfied ====<br />
In order to transform problems where the coefficient signs of the absolute terms do not fulfill the conditions above, a similar conclusion is reached to that of the last case for absolute values in constraints – the use of integer variables is needed to reach an LP format. <br />
<br />
The following constraints need to be added to the problem:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge Z \\<br />
-&X + N*(1-Y) \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z \\<br />
&Y = 0, 1<br />
\end{align} </math><br />
<br />
Again, <math>\textstyle N</math> is a large constant, <math>\textstyle Z</math> is a replacement variable for <math>\textstyle |X|</math> in the objective function, and <math>\textstyle Y</math> is a binary variable. The first two constraints ensure that one and only one of them is active while the other is automatically satisfied, following the same logic as above. The third and fourth constraints ensure that <math>\textstyle Z \ge |X|</math>; combined with the active constraint, they force <math>\textstyle Z = |X|</math> whether <math>\textstyle X</math> is positive or negative. For instance, for the case of <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge Z \\<br />
-&X + N \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z<br />
\end{align} </math><br />
<br />
As <math>\textstyle N</math> is sufficiently large (<math>\textstyle N</math> must be at least <math>\textstyle 2|X|</math> for this approach), the second constraint must be satisfied. Since <math>\textstyle Z</math> is non-negative, the fourth constraint must also be satisfied. The remaining constraints, <math>\textstyle X \ge Z</math> and <math>\textstyle X \le Z</math>, can only be satisfied when <math>\textstyle Z = X</math> with <math>\textstyle X</math> non-negative; the case <math>\textstyle Y = 1</math> symmetrically yields <math>\textstyle Z = -X</math> for non-positive <math>\textstyle X</math>. Together, these constraints allow for the selection of the largest <math>\textstyle |X|</math> for maximization problems (or smallest for minimization problems).<br />
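The pinning effect of the four constraints can be checked directly. In the sketch below (with an assumed <math>\textstyle N = 100</math> and a trial grid of candidate <math>\textstyle Z</math> values), the only <math>\textstyle Z</math> feasible for some choice of <math>\textstyle Y</math> is exactly <math>\textstyle |X|</math>:<br />

```python
# Sketch: with the big-M constraints, the set of feasible Z values for a
# given X (over both choices of Y) collapses to exactly {|X|}.

def feasible_z_values(x, n, candidates):
    out = set()
    for y in (0, 1):
        for z in candidates:
            if (x + n * y >= z and -x + n * (1 - y) >= z
                    and x <= z and -x <= z):
                out.add(z)
    return out

N = 100.0
candidates = [v * 0.5 for v in range(0, 21)]   # trial Z values 0.0 .. 10.0
for x in [-4.0, -1.5, 0.0, 2.5, 4.0]:
    assert feasible_z_values(x, N, candidates) == {abs(x)}
```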
<br />
=== Absolute Values in Nonlinear Optimization Problems ===<br />
An objective function containing absolute value quantities is nonlinear in its given form, so the problem must be reformulated before proceeding. The addition of a new variable (such as <math>\textstyle Z</math> above) requires that additional constraints be added to account for it.<br />
<br />
==Numerical Example==<br />
'''Example when All Sign Constraints are Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| + |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> -U_3 \le x_3 \le U_3 </math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 + U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
-&U_3 \le x_3 \le U_3 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
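The stated optimum can be checked by brute force. The following sketch (a coarse half-unit grid, chosen purely for illustration) enumerates <math>x_1, x_2</math>, solves the equality constraint for <math>x_3</math>, and keeps feasible points:<br />

```python
# Brute-force check of the first example on a half-unit grid.
best = None
steps = [i * 0.5 for i in range(-16, 17)]   # x1, x2 in -8.0 .. 8.0
for x1 in steps:
    for x2 in steps:
        x3 = (14 - 2 * x1 + x2) / 4.0       # from 2*x1 - x2 + 4*x3 = 14
        if x1 + 2 * x2 - 3 * x3 <= 8:       # inequality constraint
            obj = 2 * abs(x1) + 3 * abs(x2) + abs(x3)
            if best is None or obj < best:
                best = obj
assert abs(best - 3.5) < 1e-9               # matches the stated optimum
```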
<br />
'''Example when Sign Constraints are not Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| - |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> \begin{align}<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1<br />
\end{align}</math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<ref> Shanno, David F., and Roman L. Weil. “'Linear' Programming with Absolute-Value Functionals.” Operations Research, vol. 19, no. 1, 1971, pp. 120–124. Accessed 13 Dec. 2020. JSTOR, www.jstor.org/stable/168871. </ref><br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 - U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>-3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
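The same brute-force check (half-unit grid, illustration only) applies to this example, now evaluating the original objective with the negative coefficient on <math>|x_3|</math>:<br />

```python
# Brute-force check of the second example on a half-unit grid.
best = None
steps = [i * 0.5 for i in range(-16, 17)]   # x1, x2 in -8.0 .. 8.0
for x1 in steps:
    for x2 in steps:
        x3 = (14 - 2 * x1 + x2) / 4.0       # from 2*x1 - x2 + 4*x3 = 14
        if x1 + 2 * x2 - 3 * x3 <= 8:       # inequality constraint
            obj = 2 * abs(x1) + 3 * abs(x2) - abs(x3)
            if best is None or obj < best:
                best = obj
assert abs(best - (-3.5)) < 1e-9            # matches the stated optimum
```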
<br />
== Applications ==<br />
<br />
Consider the problem <math>\max \; z = \sum_j c_j |x_j| \quad \text{subject to} \quad Ax = b</math>. This problem cannot, in general, be solved with the simplex method. It has a simplex method solution (with unrestricted basis entry) only if the <math>c_j</math> are nonpositive (non-negative for minimization problems).<br />
<br />
The primary application of absolute-value functionals in linear programming has been absolute-value, or <math>L_1</math>-metric, regression analysis. Such an application is always a minimization problem with all <math>c_j</math> equal to 1, so that the required conditions for valid use of the simplex method are met. <br />
<br />
By reformulating the original problem into a Mixed-Integer Linear Program (MILP), we can utilize known programs to solve for the optimal solution(s). <br />
<br />
=== Application in Finance: Portfolio Selection ===<br />
In this application, the same reduction to a linear programming problem performed in the Numerical Example section is applied again to reformulate the problem so that it can be solved. An example is given below. <br />
<br />
A portfolio is determined by what fraction of one's assets to put into each investment. <ref> Vanderbei R.J. (2008) Financial Applications. In: Linear Programming. International Series in Operations Research & Management Science, vol 114. Springer, Boston, MA. <nowiki>https://doi.org/10.1007/978-0-387-74388-2_13</nowiki> https://link.springer.com/chapter/10.1007/978-0-387-74388-2_13 </ref> It can be denoted as a collection of nonnegative numbers <math>\textstyle x_j</math>, where <math> j = 1, 2,...,n </math>. Because each <math>\textstyle x_j</math> stands for a portion of the assets, they sum to one. To find the mix of assets with the highest reward, let the positive parameter <math>\mu</math> denote the importance of risk relative to the return, and let <math>\textstyle R_j</math> denote the return in the next time period on investment <math>j, j = 1, 2,..., n</math>. The total return of the portfolio is <math>R = \sum_{j}\!x_j\!R_j </math>, and the expected return is <math>\mathbb{E}\!R = \sum_{j}\!x_j\mathbb{E}\!R_j </math>. The Mean Absolute Deviation from the Mean (MAD), used here as the measure of risk, is <math>\mathbb{E}\left\vert \!R - \mathbb{E}\!R \right\vert = \mathbb{E}\left\vert \sum_{j}\!x_j\tilde{R}_j \right\vert </math>. <br />
<br />
<br />
maximize <math display="inline">\mu\sum_j\!x_j\mathbb{E}\!R_j - \mathbb{E}\left\vert \sum_j \!x_j\tilde{R}_j \right\vert </math> <br />
<br />
subject to <math>\sum_j\!x_j = 1</math> <br />
<br />
<math>x_j \geq 0</math> , <math> j = 1, 2,...,n</math> <br />
<br />
where <math>\tilde{R}_j = \!R_j - \mathbb{E}\!R_j </math><br />
<br />
This problem is clearly not yet a linear programming problem. As in the numerical example above, each absolute value is replaced with a new variable, and inequality constraints are imposed to ensure that the new variable equals the appropriate absolute value at the optimum. To simplify the program, the mean expected return can be estimated by an average of the historical returns: <math>r_j = \mathbb{E}\!R_j = \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!R_j(t)</math>. The objective function then becomes: <math>\mu\sum_{j}\!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert</math><br />
<br />
Now, replacing <math>\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert</math> with a new variable <math>y_t</math>, the problem can be rewritten as:<br />
<br />
<br />
maximize <math>\mu \sum_j \!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!y_t</math> <br />
<br />
subject to <math>-\!y_t \leq \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \leq y_t</math> , <math> t = 1, 2,...,T</math> <br />
<br />
<math>\sum_j \!x_j = 1</math> <br />
<br />
<math>x_j\geq 0</math> , <math> j = 1, 2,...,n</math> <br />
<br />
<math>y_t \geq 0</math> , <math> t = 1, 2,...,T</math> <br />
<br />
<br />
Finally, after these simplifications and substitutions, the original problem is converted into a linear program, which is much easier to solve.<br />
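The equivalence of the two objectives can be illustrated numerically. In the sketch below, the returns <math>R_j(t)</math>, the weight <math>\mu</math>, and the portfolio <math>x</math> are all made-up values for illustration; the smallest feasible <math>y_t</math> reproduces each absolute deviation, so the reformulated objective matches the original:<br />

```python
mu = 2.0
# Hypothetical historical returns R[t][j] (made-up data for illustration)
R = [[1.05, 0.98], [0.97, 1.04], [1.10, 1.00], [0.96, 1.02]]
T, n = len(R), len(R[0])
r = [sum(R[t][j] for t in range(T)) / T for j in range(n)]  # mean returns r_j

x = [0.3, 0.7]                      # a feasible portfolio: weights sum to one
assert abs(sum(x) - 1.0) < 1e-12

# Deviations d_t = sum_j x_j (R_j(t) - r_j); the smallest y_t satisfying
# -y_t <= d_t <= y_t is |d_t|, so the LP recovers the true MAD.
d = [sum(x[j] * (R[t][j] - r[j]) for j in range(n)) for t in range(T)]
y = [abs(dt) for dt in d]
assert all(-yt <= dt <= yt for dt, yt in zip(d, y))

original = mu * sum(x[j] * r[j] for j in range(n)) - sum(abs(dt) for dt in d) / T
reformulated = mu * sum(x[j] * r[j] for j in range(n)) - sum(y) / T
assert abs(original - reformulated) < 1e-12
```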
<br />
<br />
===Data Transfer Rate===<br />
Another application of optimization with absolute values is in data transfer rates. Faster-than-Nyquist (FTN) signaling is a framework for transmitting signals beyond the Nyquist rate. The reference for this section proposed a 24.7% faster symbol rate by utilizing sum-of-absolute-values (SOAV) optimization. <ref>Sasahara, Hampei & Hayashi, Kazunori & Nagahara, Masaaki. (2016). Symbol Detection for Faster-Than-Nyquist Signaling by Sum-of-Absolute-Values Optimization. IEEE Signal Processing Letters. PP. 1-1. 10.1109/LSP.2016.2625839. https://www.researchgate.net/publication/309745511_Symbol_Detection_for_Faster-Than-Nyquist_Signaling_by_Sum-of-Absolute-Values_Optimization </ref><br />
<br />
The initial model is defined as follows:<br />
<math>\displaystyle x_0 (t) = \sum^N_{n=1} x_{n,0} h_n (t), t \in [0,T] </math><br />
<br />
where <math>t \in \mathbb{R}</math> denotes the continuous time index, <math>N \in \mathbb{N}</math> is the number of transmitted symbols in each transmission period, <math>T > 0</math> is the interval of one period, <math>x_{n,0} \in \{+1, -1\}</math> are independent and identically distributed (i.i.d.) binary symbols [i.e., binary phase shift keying (BPSK)], and <math>h_n (t) \ (n = 1,...,N) </math> are the modulation pulses.<br />
<br />
By reformulating symbol detection as the following convex optimization problem and repeatedly applying Newton’s method to handle the absolute values, approximate solutions can be obtained:<br />
<math>\displaystyle \min_{z \in R^N} (\lambda \Vert y - Hz \Vert^2_2 + \frac{1}{2} \Vert z - 1_N \Vert_1 + \frac{1}{2} \Vert z + 1_N \Vert_1 ) </math><br />
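The sum-of-absolute-values penalty term <math>\textstyle \frac{1}{2} \Vert z - \mathbf{1}_N \Vert_1 + \frac{1}{2} \Vert z + \mathbf{1}_N \Vert_1</math> from the problem above can be sketched in a few lines: per component it is constant on <math>[-1, 1]</math> and grows linearly outside, which is how it relaxes the binary constraint <math>x_{n,0} \in \{+1, -1\}</math>:<br />

```python
def soav_penalty(z):
    # Elementwise (1/2)|z_i - 1| + (1/2)|z_i + 1|, summed over the vector.
    return sum(0.5 * abs(zi - 1) + 0.5 * abs(zi + 1) for zi in z)

assert soav_penalty([1, -1, 1]) == 3          # binary vectors sit at the floor
assert soav_penalty([0.5, -0.2, 0.0]) == 3    # constant on the box [-1, 1]
assert soav_penalty([2, 0, 0]) == 4           # grows once |z_i| exceeds 1
```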
<br />
== Conclusion ==<br />
The presence of an absolute value within the objective function prevents the direct use of certain optimization methods. Solving these problems requires that the function be manipulated in order to continue with linear programming techniques such as the simplex method. The applications of optimization with absolute values range from the financial sector, where portfolio returns can be improved, to the digital world, where data transfer rates can be increased. The way these problems are formulated must take absolute values into account in order to model the problem correctly. The absolute values inherently make these problems nonlinear, so determining the optimal solutions is only achievable after reformulating them into linear programs.<br />
<br />
== References ==<br />
<references /></div>Yilian Yinhttps://optimization.cbe.cornell.edu/index.php?title=Optimization_with_absolute_values&diff=2613Optimization with absolute values2020-12-14T03:27:06Z<p>Yilian Yin: </p>
<hr />
<div>Authors: Matthew Chan (mdc297), Yilian Yin (yy896), Brian Amado (ba392), Peter Williams (pmw99), Dewei Xiao (dx58) - SYSEN 5800 Fall 2020<br />
<br />
Steward: Fengqi You<br />
<br />
== Introduction ==<br />
Absolute values can make it relatively difficult to determine the optimal solution when handled without first converting to standard form. This conversion of the objective function is a good first step in solving optimization problems with absolute values. As a result, one can go on to solve the problem using linear programing techniques. With the addition of a new variable (ex: <math>\textstyle X^a </math>) in the objective function the problem is considered nonlinear. Additional constraints must be added to find the optimal solution.<br />
<br />
== Method ==<br />
<br />
=== Defining Absolute Values ===<br />
An absolute value of a real number can be described as its distance away from zero, or the non-negative magnitude of the number. <ref> Mendelson, Elliott, Schaum's Outline of Beginning Calculus, McGraw-Hill Professional, 2008. https://books.google.com/books?id=A8hAm38zsCMC&pg=PA2#v=onepage&q&f=false </ref> Thus,<br />
<br />
<math>\displaystyle |x|={\begin{cases}-x,&{\text{if }}x<0\\x,&{\text{if }}x\geq 0\end{cases}}</math><br />
<br />
Absolute values can exist in linear optimization problems in two primary instances: in constraints and in the objective function. <ref> "Absolute Values." ''lp_solve'', http://lpsolve.sourceforge.net/. Accessed 20 Nov. 2020. </ref><br />
<br />
=== Absolute Values in Constraints ===<br />
Within constraints, absolute value relations can be transformed into one of the following forms:<br />
<br />
<math> \begin{align}<br />
|X| &= 0 \\<br />
|X| &\le C \\<br />
|X| &\ge C<br />
\end{align} </math><br />
<br />
Where <math>\textstyle X</math> is a linear combination (<math>\textstyle ax_1 + bx_2 + ...</math> where <math>\textstyle a, b</math> are constants) and <math>\textstyle C</math> is a constant <math>\textstyle > 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| = 0</math> ====<br />
In this form, the only possible solution is if <math>\displaystyle X = 0</math> simplifying the constraint. Note that this solution also occurs if the constraint is in the form <math>\displaystyle |X| \le 0</math> due to the same conclusion that the only possible solution is <math>\textstyle X = 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| \le C</math> ====<br />
The second form a linear constraint can exist in is <math>\displaystyle |X|\leq C</math>. In this case, an equivalent feasible solution can be described by splitting the constraint into two:<br />
<br />
<math> \begin{align}<br />
X &\leq C \\<br />
-X &\leq C<br />
\end{align} </math><br />
<br />
The solution can be understood visually since <math>\textstyle X</math> must lie between <math>\textstyle -C</math> and <math>\textstyle C</math>, as shown below:<br />
<br />
[[File:Number Line X Less Than C.png|none|thumb]]<br />
<br />
==== Form when <math>\displaystyle |X| \ge C</math> ====<br />
Visually, the solution space for the last form is the complement of the second solution above, resulting in the following representation:[[File:Number Line for X Greater Than C.png|none|thumb]]In expression form, the solutions can be written as:<br />
<br />
<math> \begin{align}<br />
X &\geq C \\<br />
-X &\geq C<br />
\end{align} </math><br />
<br />
As seen visually, the feasible region has a gap and thus non-convex. The expressions also make it impossible for both to simultaneously hold true. This means that it is not possible to transform constraints in this form to linear equations. <ref> ''Optimization Methods in Management Science / Operations Research.'' Massachusetts Institute of Technology, Spring 2013, https://ocw.mit.edu/courses/sloan-school-of-management/15-053-optimization-methods-in-management-science-spring-2013/tutorials/MIT15_053S13_tut04.pdf. Accessed 20 Nov. 2020. </ref> <br />
<br />
An approach to reach a solution for this particular case exists in the form of Mixed-Integer Linear Programming, where only one of the equations above is “active”.<br />
<br />
The inequality can be reformulated into the following:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge C \\<br />
-&X + N*(1-Y) \ge C \\<br />
&Y = 0, 1 <br />
\end{align} </math><br />
<br />
With this new set of constraints, a large constant <math>\textstyle N</math> is introduced, along with a binary variable <math>\textstyle Y</math>. So long as <math>\textstyle N</math> is sufficiently larger than the upper bound of <math>\textstyle X + C</math>, the large constant multiplied with the binary variable ensures that one of the constraints must be satisfied. For instance, if <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge C \\<br />
-&X + N \ge C<br />
\end{align} </math><br />
<br />
Since <math>\textstyle N</math> is sufficiently large, the latter constraint will always be satisfied, leaving only one relation active: <math>\textstyle X \ge C</math>. Functionally, this allows for the XOR logical operation of <math>\textstyle X \geq C</math> and <math>\textstyle -X \geq C</math>.<br />
<br />
=== Absolute Values in Objective Functions ===<br />
In objective functions, to leverage transformations of absolute functions, all constraints must be linear.<br />
<br />
Similar to the case of absolute values in constraints, there are different approaches to the reformation of the objective function, depending on the satisfaction of sign constraints. The satisfaction of sign constraints is when the coefficient signs of the absolute terms must all be either:<br />
<br />
* Positive for a minimization problem<br />
* Negative for a maximization problem<br />
<br />
==== Sign Constraints are Satisfied ====<br />
At a high level, the transformation works similarly to the second case of absolute value in constraints – aiming to bound the solution space for the absolute value term with a new variable, <math>\textstyle Z</math>.<br />
<br />
If <math>\textstyle |X|</math> is the absolute value term in our objective function, two additional constraints are added to the linear program:<br />
<br />
<math> \begin{align}<br />
&X\leq Z \\<br />
-&X\leq Z <br />
\end{align} </math><br />
<br />
The <math>\textstyle |X|</math> term in the objective function is then replaced by <math>\textstyle Z</math>, relaxing the original function into a collection of linear constraints.<br />
<br />
==== Sign Constraints are Not Satisfied ====<br />
In order to transform problems where the coefficient signs of the absolute terms do not fulfill the conditions above, a similar conclusion is reached to that of the last case for absolute values in constraints – the use of integer variables is needed to reach an LP format. <br />
<br />
The following constraints need to be added to the problem:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge Z \\<br />
-&X + N*(1-Y) \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z \\<br />
&Y = 0, 1<br />
\end{align} </math><br />
<br />
Again, <math>\textstyle N</math> is a large constant, <math>\textstyle Z</math> is a replacement variable for <math>\textstyle |X|</math> in the objective function, and <math>\textstyle Y</math> is a binary variable. The first two constraints ensure that one and only one constraint is active while the other will be automatically satisfied, following the same logic as above. The third and fourth constraints ensure that <math>\textstyle Z</math> must be equal to <math>\textstyle |X|</math> and has either a positive or negative value. For instance, for the case of <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge Z \\<br />
-&X + N \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z<br />
\end{align} </math><br />
<br />
As <math>\textstyle N</math> is sufficiently large (<math>\textstyle N</math> must be at least <math>\textstyle 2|X|</math> for this approach), the second constraint must be satisfied. Since <math>\textstyle Z</math> is non-negative, the fourth constraint must also be satisfied. The remaining constraints, <math>\textstyle X \ge Z</math> and <math>\textstyle X \le Z</math> can only be satisfied when <math>\textstyle Z = X</math> and is of non-negative signage. Together, these constraints will allow for the selection of the largest <math>\textstyle |X|</math> for maximization problems (or smallest for minimization problems).<br />
<br />
=== Absolute Values in Nonlinear Optimization Problems ===<br />
The addition of a new variable <math> (X_a) </math> to an objective function with absolute value quantities forms a nonlinear optimization problem. The absolute value quantities would require that the problem be reformatted before proceeding. Additional constraints must be added to account for the added variable.<br />
<br />
==Numerical Example==<br />
'''Example when All Sign Constraints are Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| + |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> -U_3 \le x_3 \le U_3 </math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 + U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
-&U_3 \le x_3 \le U_3 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
<br />
'''Example when Sign Constraints are not Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| - |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> \begin{align}<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1<br />
\end{align}</math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 - U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>-3.5</math>, which occur when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
<br />
== Applications ==<br />
<br />
<br />
Consider the problem <math>Ax=b; \quad max \quad z= x c,jx,i</math>. This problem cannot, in general, be solved with the simplex method. The problem has a simplex method solution (with unrestricted basis entry) only if c, are nonpositive (non-negative for minimizing problems).<br />
<br />
The primary application of absolute-value functionals in linear programming has been for absolute-value or L(i)-metric regression analysis. Such application is always a minimization problem with all C(j) equal to 1 so that the required conditions for valid use of the simplex method are met. <br />
<br />
By reformulating the original problem into a Mixed-Integer Linear Program (MILP), we can utilize known programs to solve for the optimal solution(s). <br />
<br />
=== Application in Financial: Portfolio Selection===<br />
Under this topic, the same tricks played in the Numerical Example section to perform '''Reduction to a Linear Programming Problem''' will be applied here again, to reform the problem into a MILP in order to solve the problem. An example is given as below. <br />
<br />
<br />
<br />
A portfolio is determined by the fraction of one's assets to put into each investment. <ref> Vanderbei R.J. (2008) Financial Applications. In: Linear Programming. International Series in Operations Research & Management Science, vol 114. Springer, Boston, MA. <nowiki>https://doi.org/10.1007/978-0-387-74388-2_13</nowiki> https://link.springer.com/chapter/10.1007/978-0-387-74388-2_13 </ref> It can be denoted as a collection of nonnegative numbers <math>\textstyle x_j</math>, <math> j = 1, 2, \ldots, n </math>. Because each <math>\textstyle x_j</math> stands for a portion of the assets, the fractions sum to one. To find the mix of assets with the highest reward, let the positive parameter <math>\mu</math> denote the relative importance of reward versus risk, and let <math>\textstyle R_j</math> denote the return in the next time period on investment <math>j</math>, <math>j = 1, 2, \ldots, n</math>. The total return on the portfolio is <math>R = \sum_{j}\!x_j\!R_j </math>, and the expected return is <math>\mathbb{E}\!R = \sum_{j}\!x_j\mathbb{E}\!R_j </math>. The mean absolute deviation from the mean (MAD), used here as the measure of risk, is <math>\mathbb{E}\left\vert \!R - \mathbb{E}\!R \right\vert = \mathbb{E}\left\vert \sum_{j}\!x_j\tilde{R}_j \right\vert </math>. <br />
<br />
<br />
maximize <math display="inline">\mu\sum_j\!x_j\mathbb{E}\!R_j - \mathbb{E}\left\vert \sum_j \!x_j\tilde{R}_j \right\vert </math> <br />
<br />
subject to <math>\sum_j\!x_j = 1</math> <br />
<br />
<math>x_j \geq 0 , \quad j = 1, 2, \ldots, n</math> <br />
<br />
where <math>\tilde{R}_j = \!R_j - \mathbb{E}\!R_j </math><br />
<br />
<br />
<br />
This problem is not yet a linear program. As in the numerical example above, each absolute value is replaced with a new variable, and inequality constraints are imposed to ensure that the new variable equals the appropriate absolute value at any optimal solution. To simplify the program, the mean expected return of each asset can be estimated by averaging its historical returns: <math>r_j = \mathbb{E}\!R_j = \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!R_j(t)</math>. The objective function then becomes: <math>\mu\sum_{j}\!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert</math><br />
<br />
Now, replacing <math>\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert</math> with a new variable <math>y_t</math>, the problem can be rewritten as:<br />
<br />
<br />
maximize <math>\mu \sum_j \!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!y_t</math><br />
<br />
subject to <math>-\!y_t \leq \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \leq y_t , \quad t = 1, 2, \ldots, T</math><br />
<br />
<math>\sum_j \!x_j = 1</math><br />
<br />
<math>x_j\geq 0 , \quad j = 1, 2, \ldots, n</math><br />
<br />
<math>y_t \geq 0 , \quad t = 1, 2, \ldots, T</math><br />
<br />
<br />
After these substitutions, the original problem has been converted into a linear program, which can be solved with standard methods.<br />
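The resulting LP can be sketched numerically with SciPy's <code>linprog</code>. The return history <code>R</code> and the parameter <math>\mu</math> below are hypothetical values chosen for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical return history R[t, j]: T = 4 periods, n = 3 assets
R = np.array([[1.05, 1.10, 0.98],
              [1.02, 0.95, 1.01],
              [1.01, 1.12, 1.00],
              [1.04, 0.99, 1.03]])
T, n = R.shape
mu = 2.0                      # relative importance of reward versus risk
r = R.mean(axis=0)            # mean return r_j of each asset
D = R - r                     # deviations R_j(t) - r_j

# Variables: [x (n entries), y (T entries)]
# maximize mu*r@x - (1/T)*sum(y)  ->  minimize -mu*r@x + (1/T)*sum(y)
c = np.concatenate([-mu * r, np.full(T, 1.0 / T)])
A_ub = np.block([[ D, -np.eye(T)],    #  D x - y <= 0
                 [-D, -np.eye(T)]])   # -D x - y <= 0, i.e. |D x| <= y
b_ub = np.zeros(2 * T)
A_eq = np.concatenate([np.ones(n), np.zeros(T)]).reshape(1, -1)
b_eq = [1.0]                          # the asset fractions sum to one
bounds = [(0, None)] * (n + T)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, y = res.x[:n], res.x[n:]
print(x)   # optimal asset fractions
```

At the optimum, each <code>y[t]</code> equals the absolute deviation of the portfolio return in period <math>t</math>, so the second objective term is exactly the sample MAD.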
<br />
<br />
===Data Transfer Rate===<br />
Another application of optimization with absolute values is improving data transfer rates. Faster-than-Nyquist (FTN) signaling is a framework for transmitting signals beyond the Nyquist rate. The reference for this section proposes a 24.7% faster symbol rate by utilizing sum-of-absolute-values (SOAV) optimization. <ref>Sasahara, Hampei & Hayashi, Kazunori & Nagahara, Masaaki. (2016). Symbol Detection for Faster-Than-Nyquist Signaling by Sum-of-Absolute-Values Optimization. IEEE Signal Processing Letters. PP. 1-1. 10.1109/LSP.2016.2625839. https://www.researchgate.net/publication/309745511_Symbol_Detection_for_Faster-Than-Nyquist_Signaling_by_Sum-of-Absolute-Values_Optimization </ref><br />
<br />
The initial model is defined as follows:<br />
<math>\displaystyle x_0 (t) = \sum^N_{n=1} x_{n,0} h_n (t), t \in [0,T] </math><br />
<br />
where <math>t \in \mathbb{R}</math> denotes the continuous time index, <math>N \in \mathbb{N}</math> is the number of transmitted symbols in each transmission period, <math>T > 0</math> is the interval of one period, <math>x_{n,0} \in \{+1, -1\}</math> are independent and identically distributed (i.i.d.) binary symbols [i.e., binary phase shift keying (BPSK)], and <math>h_n (t)\ (n = 1, \ldots, N) </math> are the modulation pulses.<br />
<br />
By reformulating detection as a convex optimization problem and solving it iteratively, approximate solutions can be obtained:<br />
<math>\displaystyle \min_{z \in R^N} (\lambda \Vert y - Hz \Vert^2_2 + \frac{1}{2} \Vert z - 1_N \Vert_1 + \frac{1}{2} \Vert z + 1_N \Vert_1 ) </math><br />
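Since the per-component penalty <math>\tfrac{1}{2}|z-1| + \tfrac{1}{2}|z+1|</math> equals <math>\max(|z|, 1)</math>, its proximal operator has a simple closed form, and the problem can be approximated by proximal gradient descent. The following is an illustrative sketch on synthetic data, not the detection algorithm from the cited paper:

```python
import numpy as np

def prox_g(v, gamma):
    """Prox of gamma * max(|z|, 1), applied componentwise.
    Identity inside [-1, 1], clips to +/-1 just outside, soft-thresholds beyond."""
    out = np.where(np.abs(v) <= 1.0, v, np.sign(v))
    far = np.abs(v) > 1.0 + gamma
    return np.where(far, np.sign(v) * (np.abs(v) - gamma), out)

def soav_detect(y, H, lam=5.0, iters=500):
    """Minimize lam*||y - H z||^2 + sum_i max(|z_i|, 1) by proximal gradient."""
    L = 2.0 * lam * np.linalg.norm(H, 2) ** 2   # Lipschitz constant of the gradient
    gamma = 1.0 / L
    z = np.zeros(H.shape[1])
    for _ in range(iters):
        grad = 2.0 * lam * H.T @ (H @ z - y)    # gradient of the quadratic term
        z = prox_g(z - gamma * grad, gamma)
    return z

# Noiseless toy channel: with H = I the BPSK symbols are recovered exactly
s = np.array([1.0, -1.0, -1.0, 1.0, -1.0, 1.0])
z = soav_detect(s, np.eye(6))
print(np.sign(z))   # matches s
```

The transmitted symbols are then estimated componentwise as <code>np.sign(z)</code>.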
<br />
== Conclusion ==<br />
The presence of an absolute value within the objective function prevents the direct use of certain optimization methods. Solving these problems requires manipulating the function so that linear programming techniques such as the simplex method can be applied. The applications of optimization with absolute values range from the financial sector, where portfolio returns can be improved, to the digital world, where data transfer rates can be increased. These problems must be formulated with the absolute values taken into account in order to model the situation correctly. The absolute values make these problems inherently nonlinear, so optimal solutions are attainable only after reformulating them into linear (or mixed-integer linear) programs.<br />
<br />
== References ==<br />
<references /><br />
#Shanno, David F., and Roman L. Weil. “'Linear' Programming with Absolute-Value Functionals.” ''Operations Research'', vol. 19, no. 1, 1971, pp. 120–124. JSTOR, www.jstor.org/stable/168871. Accessed 13 Dec. 2020.</div>
<hr />
<div>Authors: Matthew Chan (mdc297), Yilian Yin (yy896), Brian Amado (ba392), Peter Williams (pmw99), Dewei Xiao (dx58) - SYSEN 5800 Fall 2020<br />
<br />
Steward: Fengqi You<br />
<br />
== Introduction ==<br />
Absolute values can make it relatively difficult to determine the optimal solution when handled without first converting to standard form. This conversion of the objective function is a good first step in solving optimization problems with absolute values. As a result, one can go on to solve the problem using linear programing techniques. With the addition of a new variable (ex: <math>\textstyle X^a </math>) in the objective function the problem is considered nonlinear. Additional constraints must be added to find the optimal solution.<br />
<br />
== Method ==<br />
<br />
=== Defining Absolute Values ===<br />
An absolute value of a real number can be described as its distance away from zero, or the non-negative magnitude of the number. <ref> Mendelson, Elliott, Schaum's Outline of Beginning Calculus, McGraw-Hill Professional, 2008. https://books.google.com/books?id=A8hAm38zsCMC&pg=PA2#v=onepage&q&f=false </ref> Thus,<br />
<br />
<math>\displaystyle |x|={\begin{cases}-x,&{\text{if }}x<0\\x,&{\text{if }}x\geq 0\end{cases}}</math><br />
<br />
Absolute values can exist in linear optimization problems in two primary instances: in constraints and in the objective function. <ref> "Absolute Values." ''lp_solve'', http://lpsolve.sourceforge.net/. Accessed 20 Nov. 2020. </ref><br />
<br />
=== Absolute Values in Constraints ===<br />
Within constraints, absolute value relations can be transformed into one of the following forms:<br />
<br />
<math> \begin{align}<br />
|X| &= 0 \\<br />
|X| &\le C \\<br />
|X| &\ge C<br />
\end{align} </math><br />
<br />
Where <math>\textstyle X</math> is a linear combination (<math>\textstyle ax_1 + bx_2 + ...</math> where <math>\textstyle a, b</math> are constants) and <math>\textstyle C</math> is a constant <math>\textstyle > 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| = 0</math> ====<br />
In this form, the only possible solution is if <math>\displaystyle X = 0</math> simplifying the constraint. Note that this solution also occurs if the constraint is in the form <math>\displaystyle |X| \le 0</math> due to the same conclusion that the only possible solution is <math>\textstyle X = 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| \le C</math> ====<br />
The second form a linear constraint can exist in is <math>\displaystyle |X|\leq C</math>. In this case, an equivalent feasible solution can be described by splitting the constraint into two:<br />
<br />
<math> \begin{align}<br />
X &\leq C \\<br />
-X &\leq C<br />
\end{align} </math><br />
<br />
The solution can be understood visually since <math>\textstyle X</math> must lie between <math>\textstyle -C</math> and <math>\textstyle C</math>, as shown below:<br />
<br />
[[File:Number Line X Less Than C.png|none|thumb]]<br />
<br />
==== Form when <math>\displaystyle |X| \ge C</math> ====<br />
Visually, the solution space for the last form is the complement of the second solution above, resulting in the following representation:[[File:Number Line for X Greater Than C.png|none|thumb]]In expression form, the solutions can be written as:<br />
<br />
<math> \begin{align}<br />
X &\geq C \\<br />
-X &\geq C<br />
\end{align} </math><br />
<br />
As seen visually, the feasible region has a gap and thus non-convex. The expressions also make it impossible for both to simultaneously hold true. This means that it is not possible to transform constraints in this form to linear equations. <ref> ''Optimization Methods in Management Science / Operations Research.'' Massachusetts Institute of Technology, Spring 2013, https://ocw.mit.edu/courses/sloan-school-of-management/15-053-optimization-methods-in-management-science-spring-2013/tutorials/MIT15_053S13_tut04.pdf. Accessed 20 Nov. 2020. </ref> <br />
<br />
An approach to reach a solution for this particular case exists in the form of Mixed-Integer Linear Programming, where only one of the equations above is “active”.<br />
<br />
The inequality can be reformulated into the following:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge C \\<br />
-&X + N*(1-Y) \ge C \\<br />
&Y = 0, 1 <br />
\end{align} </math><br />
<br />
With this new set of constraints, a large constant <math>\textstyle N</math> is introduced, along with a binary variable <math>\textstyle Y</math>. So long as <math>\textstyle N</math> is sufficiently larger than the upper bound of <math>\textstyle X + C</math>, the large constant multiplied with the binary variable ensures that one of the constraints must be satisfied. For instance, if <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge C \\<br />
-&X + N \ge C<br />
\end{align} </math><br />
<br />
Since <math>\textstyle N</math> is sufficiently large, the latter constraint will always be satisfied, leaving only one relation active: <math>\textstyle X \ge C</math>. Functionally, this allows for the XOR logical operation of <math>\textstyle X \geq C</math> and <math>\textstyle -X \geq C</math>.<br />
<br />
=== Absolute Values in Objective Functions ===<br />
In objective functions, to leverage transformations of absolute functions, all constraints must be linear.<br />
<br />
Similar to the case of absolute values in constraints, there are different approaches to the reformation of the objective function, depending on the satisfaction of sign constraints. The satisfaction of sign constraints is when the coefficient signs of the absolute terms must all be either:<br />
<br />
* Positive for a minimization problem<br />
* Negative for a maximization problem<br />
<br />
==== Sign Constraints are Satisfied ====<br />
At a high level, the transformation works similarly to the second case of absolute value in constraints – aiming to bound the solution space for the absolute value term with a new variable, <math>\textstyle Z</math>.<br />
<br />
If <math>\textstyle |X|</math> is the absolute value term in our objective function, two additional constraints are added to the linear program:<br />
<br />
<math> \begin{align}<br />
&X\leq Z \\<br />
-&X\leq Z <br />
\end{align} </math><br />
<br />
The <math>\textstyle |X|</math> term in the objective function is then replaced by <math>\textstyle Z</math>, relaxing the original function into a collection of linear constraints.<br />
<br />
==== Sign Constraints are Not Satisfied ====<br />
In order to transform problems where the coefficient signs of the absolute terms do not fulfill the conditions above, a similar conclusion is reached to that of the last case for absolute values in constraints – the use of integer variables is needed to reach an LP format. <br />
<br />
The following constraints need to be added to the problem:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge Z \\<br />
-&X + N*(1-Y) \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z \\<br />
&Y = 0, 1<br />
\end{align} </math><br />
<br />
Again, <math>\textstyle N</math> is a large constant, <math>\textstyle Z</math> is a replacement variable for <math>\textstyle |X|</math> in the objective function, and <math>\textstyle Y</math> is a binary variable. The first two constraints ensure that one and only one constraint is active while the other will be automatically satisfied, following the same logic as above. The third and fourth constraints ensure that <math>\textstyle Z</math> must be equal to <math>\textstyle |X|</math> and has either a positive or negative value. For instance, for the case of <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge Z \\<br />
-&X + N \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z<br />
\end{align} </math><br />
<br />
As <math>\textstyle N</math> is sufficiently large (<math>\textstyle N</math> must be at least <math>\textstyle 2|X|</math> for this approach), the second constraint must be satisfied. Since <math>\textstyle Z</math> is non-negative, the fourth constraint must also be satisfied. The remaining constraints, <math>\textstyle X \ge Z</math> and <math>\textstyle X \le Z</math>, can only be satisfied when <math>\textstyle Z = X</math> and <math>\textstyle X</math> is non-negative. Together, these constraints allow the selection of the largest <math>\textstyle |X|</math> for maximization problems (or the smallest for minimization problems).<br />
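The logic above can be checked numerically. In the sketch below (our own illustration, with <math>\textstyle N = 100</math> chosen as the large constant), for a fixed value of <math>\textstyle X</math> the only <math>\textstyle Z</math> satisfying the four constraints for either value of <math>\textstyle Y</math> is <math>\textstyle Z = |X|</math>:<br />

```python
# Numeric check of the big-M constraints (hypothetical values, N = 100):
# scan candidate Z values on a 0.5 grid and keep the feasible ones.

def feasible_z(x, N=100.0):
    """Return every Z in {0, 0.5, ..., 20} feasible for some binary Y."""
    feasible = []
    for k in range(41):
        z = 0.5 * k
        for y in (0, 1):
            if (x + N * y >= z and -x + N * (1 - y) >= z
                    and x <= z and -x <= z):
                feasible.append(z)
                break
    return feasible

print(feasible_z(3.0), feasible_z(-4.5))
```

For <math>\textstyle x = 3</math> only <math>\textstyle Z = 3</math> survives (via <math>\textstyle Y = 0</math>), and for <math>\textstyle x = -4.5</math> only <math>\textstyle Z = 4.5</math> (via <math>\textstyle Y = 1</math>).<br />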
<br />
=== Absolute Values in Nonlinear Optimization Problems ===<br />
Introducing a new variable <math> (X_a) </math> into an objective function that contains absolute value quantities produces a nonlinear optimization problem. The absolute value quantities require that the problem be reformulated before proceeding, and additional constraints must be added to account for the added variable.<br />
<br />
==Numerical Example==<br />
'''Example when All Sign Constraints are Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| + |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> -U_3 \le x_3 \le U_3 </math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 + U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
-&U_3 \le x_3 \le U_3 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
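The reformulated LP can be verified numerically, for example with SciPy's <code>linprog</code> (a sketch assuming SciPy is installed; the variable ordering is our own choice):<br />

```python
# Verify the reformulated LP with SciPy's linprog.
from scipy.optimize import linprog

# Variable order: x1, x2, x3, U1, U2, U3
c = [0, 0, 0, 2, 3, 1]                     # minimize 2*U1 + 3*U2 + U3
A_ub = [
    [1, 2, -3, 0, 0, 0],                   # x1 + 2*x2 - 3*x3 <= 8
    [1, 0, 0, -1, 0, 0],                   # x1 - U1 <= 0   (x1 <= U1)
    [-1, 0, 0, -1, 0, 0],                  # -x1 - U1 <= 0  (-U1 <= x1)
    [0, 1, 0, 0, -1, 0],                   # same pattern for x2, U2
    [0, -1, 0, 0, -1, 0],
    [0, 0, 1, 0, 0, -1],                   # and for x3, U3
    [0, 0, -1, 0, 0, -1],
]
b_ub = [8, 0, 0, 0, 0, 0, 0]
A_eq = [[2, -1, 4, 0, 0, 0]]               # 2*x1 - x2 + 4*x3 = 14
b_eq = [14]
bounds = [(None, None)] * 3 + [(0, None)] * 3   # x free, U >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(round(res.fun, 4))   # expected optimum: 3.5
```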
<br />
'''Example when Sign Constraints are not Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| - |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution. Because the coefficient of <math>|x_3|</math> is negative in this minimization problem, the sign conditions are not satisfied for that term, so a binary variable <math>Y</math> and a sufficiently large constant <math>M</math> are also required:<br />
<br />
<math> \begin{align}<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1<br />
\end{align}</math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 - U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>-3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
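As a check (assuming SciPy is available), the mixed-integer program can be solved by enumerating the two values of the binary variable <math>Y</math> and solving an LP for each branch; the variable names below are our own:<br />

```python
# Enumerate Y instead of calling an MILP solver. For Y = 0 the big-M rows
# force U3 = x3 with x3 >= 0; for Y = 1 they force U3 = -x3 with x3 <= 0.
from scipy.optimize import linprog

best = None
for sign in (1, -1):                        # sign = +1 ~ Y = 0, -1 ~ Y = 1
    # Variable order: x1, x2, x3, U1, U2; U3 is eliminated as sign * x3.
    c = [0, 0, -sign, 2, 3]                 # 2*U1 + 3*U2 - U3
    A_ub = [
        [1, 2, -3, 0, 0],                   # x1 + 2*x2 - 3*x3 <= 8
        [1, 0, 0, -1, 0], [-1, 0, 0, -1, 0],   # -U1 <= x1 <= U1
        [0, 1, 0, 0, -1], [0, -1, 0, 0, -1],   # -U2 <= x2 <= U2
        [0, 0, -sign, 0, 0],                # sign * x3 >= 0
    ]
    b_ub = [8, 0, 0, 0, 0, 0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=[[2, -1, 4, 0, 0]],
                  b_eq=[14], bounds=[(None, None)] * 3 + [(0, None)] * 2)
    if res.status == 0 and (best is None or res.fun < best):
        best = res.fun
print(round(best, 4))   # expected: -3.5
```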
<br />
== Applications ==<br />
<br />
<br />
Consider the problem <math>\max \quad z = \textstyle\sum_j c_j |x_j| \quad \text{s.t.} \quad Ax = b</math>. This problem cannot, in general, be solved with the simplex method. The problem has a simplex method solution (with unrestricted basis entry) only if the <math>c_j</math> are nonpositive (non-negative for minimization problems).<br />
<br />
The primary application of absolute-value functionals in linear programming has been absolute-value, or <math>L_1</math>-metric, regression analysis. Such an application is always a minimization problem with all <math>c_j</math> equal to 1, so the required conditions for valid use of the simplex method are met. <br />
<br />
By reformulating the original problem into a Mixed-Integer Linear Program (MILP), we can utilize known programs to solve for the optimal solution(s). <br />
<br />
=== Application in Finance: Portfolio Selection ===<br />
Here, the same technique used in the Numerical Example section to perform a '''reduction to a linear programming problem''' is applied again. Since the absolute value term enters a maximization objective with a negative coefficient, the sign conditions are satisfied and the problem reduces to a pure linear program. An example is given below. <br />
<br />
<br />
<br />
A portfolio is determined by what fraction of one's assets to put into each investment. <ref> Vanderbei R.J. (2008) Financial Applications. In: Linear Programming. International Series in Operations Research & Management Science, vol 114. Springer, Boston, MA. <nowiki>https://doi.org/10.1007/978-0-387-74388-2_13</nowiki> https://link.springer.com/chapter/10.1007/978-0-387-74388-2_13 </ref> It can be denoted as a collection of nonnegative numbers <math>\textstyle x_j</math>, where <math> j = 1, 2,...,n </math>. Because each <math> \textstyle x_j </math> stands for a portion of the assets, the fractions sum to one. To find the mix of assets that gives the highest reward, let the positive parameter <math>\mu</math> denote the importance of reward relative to risk, and let <math>\textstyle R_j</math> denote the return in the next time period on investment <math>j, j = 1, 2,..., n</math>. The total return one would obtain from the investment is <math>R = \sum_{j}\!x_j\!R_j </math>. The expected return is <math>\mathbb{E}\!R = \sum_{j}\!x_j\mathbb{E}\!R_j </math>. And the mean absolute deviation from the mean (MAD), used here as the measure of risk, is <math>\mathbb{E}\left\vert \!R - \mathbb{E}\!R \right\vert = \mathbb{E}\left\vert \sum_{j}\!x_j\tilde{R}_j \right\vert </math>. <br />
<br />
<br />
maximize <math display="inline">\mu\sum_j\!x_j\mathbb{E}\!R_j - \mathbb{E}\left\vert \sum_j \!x_j\tilde{R}_j \right\vert </math> <br />
<br />
subject to <math>\sum_j\!x_j = 1</math> <br />
<br />
<math>x_j \geq 0</math>, <math> j = 1,2,...,n</math> <br />
<br />
where <math>\tilde{R}_j = \!R_j - \mathbb{E}\!R_j </math><br />
<br />
<br />
<br />
This problem is not yet a linear program. As in the numerical example above, the remedy is to replace each absolute value with a new variable and impose inequality constraints that ensure the new variable equals the appropriate absolute value at the optimum. To simplify the program, the mean expected return can be estimated by averaging the historical returns: <math>r_j = \mathbb{E}\!R_j = \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!R_j(t)</math>. The objective function then becomes: <math>\mu\sum_{j}\!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert</math><br />
<br />
Now, replace <math>\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert</math> with a new variable <math>y_t</math>, and the problem can be rewritten as:<br />
<br />
<br />
maximize <math>\mu \sum_j \!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!y_t</math> <br />
<br />
subject to <math>-\!y_t \leq \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \leq y_t, \quad t = 1, 2,...,T</math> <br />
<br />
<math>\sum_j \!x_j = 1</math> <br />
<br />
<math>x_j\geq 0, \quad j = 1, 2,...,n</math> <br />
<br />
<math>y_t \geq 0, \quad t = 1, 2,...,T</math> <br />
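As an illustration, this MAD portfolio LP can be assembled and solved with SciPy's <code>linprog</code>. The returns matrix <code>R</code>, the value of <code>mu</code>, and all variable names below are made-up for this sketch:<br />

```python
# Sketch of the MAD portfolio LP (illustration data, not from the source).
import numpy as np
from scipy.optimize import linprog

R = np.array([[1.10, 1.00],       # R[t, j]: return of asset j in period t
              [0.90, 1.02],
              [1.20, 1.01]])
T, n = R.shape
mu = 2.0                          # reward-vs-risk parameter
r = R.mean(axis=0)                # mean historical return per asset
D = R - r                         # deviations R_j(t) - r_j

# Variable order: x_1..x_n, y_1..y_T; minimize the negated objective.
c = np.concatenate([-mu * r, np.full(T, 1.0 / T)])
# sum_j x_j*D[t, j] - y_t <= 0  and  -sum_j x_j*D[t, j] - y_t <= 0
A_ub = np.block([[D, -np.eye(T)], [-D, -np.eye(T)]])
b_ub = np.zeros(2 * T)
A_eq = np.concatenate([np.ones(n), np.zeros(T)])[None, :]   # sum_j x_j = 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])
x, y = res.x[:n], res.x[n:]       # default bounds give x >= 0, y >= 0
```

At the optimum each <math>y_t</math> equals the absolute deviation of the portfolio return in period <math>t</math>.<br />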
<br />
<br />
Finally, after these substitutions and simplifications, the original problem is converted into a linear program, which is much easier to solve.<br />
<br />
<br />
===Data Transfer Rate===<br />
Another application of optimization with absolute values is improving data transfer rates. Faster-than-Nyquist signaling (FTNS) is a framework for transmitting signals beyond the Nyquist rate. The reference for this section proposes a 24.7% faster symbol rate obtained by utilizing Sum-of-Absolute-Values (SOAV) optimization. <ref>Sasahara, Hampei & Hayashi, Kazunori & Nagahara, Masaaki. (2016). Symbol Detection for Faster-Than-Nyquist Signaling by Sum-of-Absolute-Values Optimization. IEEE Signal Processing Letters. PP. 1-1. 10.1109/LSP.2016.2625839. https://www.researchgate.net/publication/309745511_Symbol_Detection_for_Faster-Than-Nyquist_Signaling_by_Sum-of-Absolute-Values_Optimization </ref><br />
<br />
The initial model is defined as follows:<br />
<math>\displaystyle x_0 (t) = \sum^N_{n=1} x_{n,0} h_n (t), t \in [0,T] </math><br />
<br />
where <math>t \in \mathbb{R}</math> denotes the continuous time index, <math>N \in \mathbb{N}</math> is the number of transmitted symbols in each transmission period, <math>T > 0</math> is the interval of one period, <math>x_{n,0} \in \{+1, -1\}</math> are independent and identically distributed (i.i.d.) binary symbols [i.e., binary phase shift keying (BPSK)], and <math>h_n (t), n = 1,...,N, </math> are the modulation pulses.<br />
<br />
Reformulating symbol detection as the following convex optimization problem and iterating a Newton-type method on it, approximate solutions can be obtained:<br />
<math>\displaystyle \min_{z \in R^N} (\lambda \Vert y - Hz \Vert^2_2 + \frac{1}{2} \Vert z - 1_N \Vert_1 + \frac{1}{2} \Vert z + 1_N \Vert_1 ) </math><br />
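A small numeric sketch of this objective (with made-up <code>H</code>, <code>y</code>, <code>z</code>; <code>soav_objective</code> is our own helper name): per coordinate, the two <math>\ell_1</math> terms sum to <math>\max(1, |z_i|)</math>, a penalty that is flat on <math>[-1, 1]</math> and grows outside it, biasing minimizers toward the BPSK symbols <math>\pm 1</math>.<br />

```python
# Evaluate the SOAV objective: lam*||y - Hz||_2^2
#                              + 0.5*||z - 1||_1 + 0.5*||z + 1||_1
import numpy as np

def soav_objective(z, H, y, lam):
    residual = y - H @ z
    penalty = 0.5 * np.abs(z - 1).sum() + 0.5 * np.abs(z + 1).sum()
    return lam * residual @ residual + penalty

# Per coordinate the penalty equals max(1, |z_i|).
z = np.array([0.0, 2.0])
assert np.isclose(0.5 * np.abs(z - 1).sum() + 0.5 * np.abs(z + 1).sum(),
                  np.maximum(1.0, np.abs(z)).sum())
```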
<br />
== Conclusion ==<br />
The presence of an absolute value within the objective function prevents the use of certain optimization methods. Solving these problems requires that the function be manipulated in order to continue with linear programming techniques like the simplex method. The applications of optimization with absolute values range from the financial sector, where portfolio returns can be improved, to the digital world, where data transfer rates can be increased. These problems must be formulated with the absolute values taken into account in order to model them correctly. The absolute values inherently make these problems nonlinear, so optimal solutions can only be determined after reformulating them into linear programs.<br />
<br />
== References ==<br />
<references /><br />
<br />
<br />
<br />
<br />
#Shanno, David F., and Roman L. Weil. “'Linear' Programming with Absolute-Value Functionals.” Operations Research, vol. 19, no. 1, 1971, pp. 120–124. JSTOR, www.jstor.org/stable/168871. Accessed 13 Dec. 2020.</div>
https://optimization.cbe.cornell.edu/index.php?title=Optimization_with_absolute_values&diff=2609Optimization with absolute values2020-12-14T03:19:13Z<p>Yilian Yin: </p>
<hr />
<div>Authors: Matthew Chan (mdc297), Yilian Yin (yy896), Brian Amado (ba392), Peter Williams (pmw99), Dewei Xiao (dx58) - SYSEN 5800 Fall 2020<br />
<br />
Steward: Fengqi You<br />
<br />
== Introduction ==<br />
Absolute values can make it relatively difficult to determine the optimal solution when handled without first converting to standard form. This conversion of the objective function is a good first step in solving optimization problems with absolute values. As a result, one can go on to solve the problem using linear programing techniques. With the addition of a new variable (ex: <math>\textstyle X^a </math>) in the objective function the problem is considered nonlinear. Additional constraints must be added to find the optimal solution.<br />
<br />
== Method ==<br />
<br />
=== Defining Absolute Values ===<br />
An absolute value of a real number can be described as its distance away from zero, or the non-negative magnitude of the number. <ref> Mendelson, Elliott, Schaum's Outline of Beginning Calculus, McGraw-Hill Professional, 2008. https://books.google.com/books?id=A8hAm38zsCMC&pg=PA2#v=onepage&q&f=false </ref> Thus,<br />
<br />
<math>\displaystyle |x|={\begin{cases}-x,&{\text{if }}x<0\\x,&{\text{if }}x\geq 0\end{cases}}</math><br />
<br />
Absolute values can exist in linear optimization problems in two primary instances: in constraints and in the objective function. <ref> "Absolute Values." ''lp_solve'', http://lpsolve.sourceforge.net/. Accessed 20 Nov. 2020. </ref><br />
<br />
=== Absolute Values in Constraints ===<br />
Within constraints, absolute value relations can be transformed into one of the following forms:<br />
<br />
<math> \begin{align}<br />
|X| &= 0 \\<br />
|X| &\le C \\<br />
|X| &\ge C<br />
\end{align} </math><br />
<br />
Where <math>\textstyle X</math> is a linear combination (<math>\textstyle ax_1 + bx_2 + ...</math> where <math>\textstyle a, b</math> are constants) and <math>\textstyle C</math> is a constant <math>\textstyle > 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| = 0</math> ====<br />
In this form, the constraint is satisfied only when <math>\displaystyle X = 0</math>, which simplifies the constraint to a linear equation. Note that the same simplification applies if the constraint is written as <math>\displaystyle |X| \le 0</math>, since the only possible solution remains <math>\textstyle X = 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| \le C</math> ====<br />
The second form a linear constraint can exist in is <math>\displaystyle |X|\leq C</math>. In this case, an equivalent feasible solution can be described by splitting the constraint into two:<br />
<br />
<math> \begin{align}<br />
X &\leq C \\<br />
-X &\leq C<br />
\end{align} </math><br />
<br />
The solution can be understood visually since <math>\textstyle X</math> must lie between <math>\textstyle -C</math> and <math>\textstyle C</math>, as shown below:<br />
<br />
[[File:Number Line X Less Than C.png|none|thumb]]<br />
<br />
==== Form when <math>\displaystyle |X| \ge C</math> ====<br />
Visually, the solution space for the last form is the complement of the second solution above, resulting in the following representation:[[File:Number Line for X Greater Than C.png|none|thumb]]In expression form, a feasible point must satisfy at least one of the following:<br />
<br />
<math> \begin{align}<br />
X &\geq C \\<br />
-X &\geq C<br />
\end{align} </math><br />
<br />
As seen visually, the feasible region has a gap and is thus non-convex. The two expressions also cannot hold simultaneously, since <math>\textstyle C > 0</math>. This means that it is not possible to transform constraints in this form into a set of simultaneous linear constraints. <ref> ''Optimization Methods in Management Science / Operations Research.'' Massachusetts Institute of Technology, Spring 2013, https://ocw.mit.edu/courses/sloan-school-of-management/15-053-optimization-methods-in-management-science-spring-2013/tutorials/MIT15_053S13_tut04.pdf. Accessed 20 Nov. 2020. </ref> <br />
<br />
An approach to reach a solution for this particular case exists in the form of Mixed-Integer Linear Programming, where only one of the equations above is “active”.<br />
<br />
The inequality can be reformulated into the following:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge C \\<br />
-&X + N*(1-Y) \ge C \\<br />
&Y = 0, 1 <br />
\end{align} </math><br />
<br />
With this new set of constraints, a large constant <math>\textstyle N</math> is introduced, along with a binary variable <math>\textstyle Y</math>. So long as <math>\textstyle N</math> is sufficiently larger than the upper bound of <math>\textstyle X + C</math>, the large constant multiplied with the binary variable ensures that one of the constraints must be satisfied. For instance, if <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge C \\<br />
-&X + N \ge C<br />
\end{align} </math><br />
<br />
Since <math>\textstyle N</math> is sufficiently large, the latter constraint will always be satisfied, leaving only one relation active: <math>\textstyle X \ge C</math>. Functionally, this allows for the XOR logical operation of <math>\textstyle X \geq C</math> and <math>\textstyle -X \geq C</math>.<br />
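The big-M logic can be verified numerically. The sketch below (plain Python, using the illustrative values <math>\textstyle C = 2</math> and <math>\textstyle N = 100</math>, which are not from the original text) enumerates both settings of the binary variable and confirms that the feasible values are exactly those with <math>\textstyle |X| \ge C</math>:<br />

```python
# Check that the big-M reformulation of |X| >= C admits exactly the
# points with X >= C or X <= -C (illustrative values C = 2, N = 100).
C, N = 2.0, 100.0

def feasible(X, Y):
    # The two big-M constraints for a fixed binary choice Y.
    return (X + N * Y >= C) and (-X + N * (1 - Y) >= C)

for X in [x / 2 for x in range(-20, 21)]:          # grid from -10 to 10
    milp_ok = feasible(X, 0) or feasible(X, 1)     # some binary choice works
    assert milp_ok == (abs(X) >= C)                # matches |X| >= C exactly
```

Either setting of <math>\textstyle Y</math> enforces one of the two inequalities while relaxing the other, which is exactly the disjunction described above.<br />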
<br />
=== Absolute Values in Objective Functions ===<br />
In objective functions, to leverage transformations of absolute functions, all constraints must be linear.<br />
<br />
Similar to the case of absolute values in constraints, there are different approaches to the reformulation of the objective function, depending on whether sign constraints are satisfied. Sign constraints are satisfied when the coefficients of the absolute value terms are all either:<br />
<br />
* Positive for a minimization problem<br />
* Negative for a maximization problem<br />
<br />
==== Sign Constraints are Satisfied ====<br />
At a high level, the transformation works similarly to the second case of absolute value in constraints – aiming to bound the solution space for the absolute value term with a new variable, <math>\textstyle Z</math>.<br />
<br />
If <math>\textstyle |X|</math> is the absolute value term in our objective function, two additional constraints are added to the linear program:<br />
<br />
<math> \begin{align}<br />
&X\leq Z \\<br />
-&X\leq Z <br />
\end{align} </math><br />
<br />
The <math>\textstyle |X|</math> term in the objective function is then replaced by <math>\textstyle Z</math>, relaxing the original function into a collection of linear constraints.<br />
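As a minimal sketch of this substitution (the problem data here are illustrative, not from the original text), consider minimizing <math>\textstyle |x|</math> subject to <math>\textstyle x + y = 4</math> and <math>\textstyle y \le 1</math>. The bound variable <math>\textstyle Z</math> replaces <math>\textstyle |x|</math> in the objective, assuming <code>scipy</code> is available:<br />

```python
from scipy.optimize import linprog

# Minimize |x| subject to x + y = 4, y <= 1.
# Variables: [x, y, Z], where Z replaces |x| in the objective.
c = [0, 0, 1]                      # objective: minimize Z
A_ub = [[0, 1, 0],                 # y <= 1
        [1, 0, -1],                # x - Z <= 0   (x <= Z)
        [-1, 0, -1]]               # -x - Z <= 0  (-x <= Z)
b_ub = [1, 0, 0]
A_eq = [[1, 1, 0]]                 # x + y = 4
b_eq = [4]
bounds = [(None, None), (None, None), (0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print(res.fun)  # optimal |x| = 3 (at x = 3, y = 1)
```

Since <math>\textstyle y \le 1</math> forces <math>\textstyle x \ge 3</math>, the smallest attainable <math>\textstyle |x|</math> is 3, which the solver recovers through <math>\textstyle Z</math>.<br />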
<br />
==== Sign Constraints are Not Satisfied ====<br />
In order to transform problems where the coefficient signs of the absolute terms do not fulfill the conditions above, a similar conclusion is reached to that of the last case for absolute values in constraints – the use of integer variables is needed to reach an LP format. <br />
<br />
The following constraints need to be added to the problem:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge Z \\<br />
-&X + N*(1-Y) \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z \\<br />
&Y = 0, 1<br />
\end{align} </math><br />
<br />
Again, <math>\textstyle N</math> is a large constant, <math>\textstyle Z</math> is a replacement variable for <math>\textstyle |X|</math> in the objective function, and <math>\textstyle Y</math> is a binary variable. The first two constraints ensure that exactly one of them is active while the other is automatically satisfied, following the same logic as above. The third and fourth constraints ensure that <math>\textstyle Z \ge |X|</math> regardless of whether <math>\textstyle X</math> is positive or negative; together, the four constraints force <math>\textstyle Z = |X|</math>. For instance, for the case of <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge Z \\<br />
-&X + N \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z<br />
\end{align} </math><br />
<br />
As <math>\textstyle N</math> is sufficiently large (<math>\textstyle N</math> must be at least <math>\textstyle 2|X|</math> for this approach), the second constraint is always satisfied. Since <math>\textstyle Z</math> is non-negative, the fourth constraint is also satisfied. The remaining constraints, <math>\textstyle X \ge Z</math> and <math>\textstyle X \le Z</math>, can only be satisfied when <math>\textstyle Z = X</math> and <math>\textstyle X</math> is non-negative. Together, these constraints allow for the selection of the largest <math>\textstyle |X|</math> for maximization problems (or smallest for minimization problems).<br />
<br />
=== Absolute Values in Nonlinear Optimization Problems ===<br />
An objective function containing absolute value quantities is a nonlinear optimization problem, so it must be reformatted before proceeding: a new variable <math>\textstyle (X_a)</math> is introduced, and additional constraints must be added to account for the added variable.<br />
<br />
==Numerical Example==<br />
'''Example when All Sign Constraints are Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| + |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> -U_3 \le x_3 \le U_3 </math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 + U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
-&U_3 \le x_3 \le U_3 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
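The reformulated linear program above can be checked with an off-the-shelf solver. The following sketch (assuming <code>scipy</code> is available) uses <code>scipy.optimize.linprog</code> with the variable vector <math>(x_1, x_2, x_3, U_1, U_2, U_3)</math>:<br />

```python
from scipy.optimize import linprog

# Variables: [x1, x2, x3, U1, U2, U3]; minimize 2*U1 + 3*U2 + U3.
c = [0, 0, 0, 2, 3, 1]
A_ub = [
    [1, 2, -3, 0, 0, 0],    # x1 + 2*x2 - 3*x3 <= 8
    [1, 0, 0, -1, 0, 0],    # x1 - U1 <= 0
    [-1, 0, 0, -1, 0, 0],   # -x1 - U1 <= 0
    [0, 1, 0, 0, -1, 0],    # x2 - U2 <= 0
    [0, -1, 0, 0, -1, 0],   # -x2 - U2 <= 0
    [0, 0, 1, 0, 0, -1],    # x3 - U3 <= 0
    [0, 0, -1, 0, 0, -1],   # -x3 - U3 <= 0
]
b_ub = [8, 0, 0, 0, 0, 0, 0]
A_eq = [[2, -1, 4, 0, 0, 0]]  # 2*x1 - x2 + 4*x3 = 14
b_eq = [14]
bounds = [(None, None)] * 3 + [(0, None)] * 3  # x free, U >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print(round(res.fun, 4))  # 3.5, attained at x3 = 3.5
```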
<br />
'''Example when Sign Constraints are not Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| - |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> \begin{align}<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1<br />
\end{align}</math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 - U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>-3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
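Because the constraints involve the binary variable <math>\textstyle Y</math>, the reformulated problem is a mixed-integer program. One way to check the result without a MILP solver is to enumerate both values of <math>\textstyle Y</math>, solve the resulting linear program for each with <code>scipy.optimize.linprog</code>, and take the better optimum. This is a sketch assuming <code>scipy</code> is available; <math>M = 100</math> is an illustrative choice:<br />

```python
from scipy.optimize import linprog

M = 100.0  # big-M constant; any value comfortably above 2*|x3| works here

def solve_fixed_Y(Y):
    # Variables: [x1, x2, x3, U1, U2, U3]; minimize 2*U1 + 3*U2 - U3.
    c = [0, 0, 0, 2, 3, -1]
    A_ub = [
        [1, 2, -3, 0, 0, 0],   # x1 + 2*x2 - 3*x3 <= 8
        [1, 0, 0, -1, 0, 0],   # x1 - U1 <= 0
        [-1, 0, 0, -1, 0, 0],  # -x1 - U1 <= 0
        [0, 1, 0, 0, -1, 0],   # x2 - U2 <= 0
        [0, -1, 0, 0, -1, 0],  # -x2 - U2 <= 0
        [0, 0, -1, 0, 0, 1],   # U3 - x3 <= M*Y
        [0, 0, 1, 0, 0, 1],    # U3 + x3 <= M*(1-Y)
        [0, 0, 1, 0, 0, -1],   # x3 - U3 <= 0
        [0, 0, -1, 0, 0, -1],  # -x3 - U3 <= 0
    ]
    b_ub = [8, 0, 0, 0, 0, M * Y, M * (1 - Y), 0, 0]
    A_eq = [[2, -1, 4, 0, 0, 0]]  # 2*x1 - x2 + 4*x3 = 14
    b_eq = [14]
    bounds = [(None, None)] * 3 + [(0, None)] * 3
    return linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                   bounds=bounds, method="highs")

best = min((solve_fixed_Y(Y) for Y in (0, 1)), key=lambda r: r.fun)
print(round(best.fun, 4))  # -3.5
```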
<br />
== Applications ==<br />
<br />
<br />
Consider the problem <math>\max \; z = \sum_j c_j |x_j| \quad \text{subject to} \quad Ax=b</math>. This problem cannot, in general, be solved with the simplex method. It has a simplex method solution (with unrestricted basis entry) only if the <math>c_j</math> are nonpositive (nonnegative for minimization problems).<br />
<br />
The primary application of absolute-value functionals in linear programming has been absolute-value, or <math>L_1</math>-metric, regression analysis. Such an application is always a minimization problem with all <math>c_j</math> equal to 1, so the required conditions for valid use of the simplex method are met. <br />
<br />
By reformulating the original problem into a Mixed-Integer Linear Program (MILP), we can utilize known programs to solve for the optimal solution(s). <br />
<br />
=== Application in Finance: Portfolio Selection===<br />
In this application, the same reduction to a linear programming problem performed in the Numerical Example section is applied again to reformulate the problem so that it can be solved. An example is given below. <br />
<br />
<br />
<br />
A portfolio is determined by what fraction of one's assets to put into each investment. <ref> Vanderbei R.J. (2008) Financial Applications. In: Linear Programming. International Series in Operations Research & Management Science, vol 114. Springer, Boston, MA. <nowiki>https://doi.org/10.1007/978-0-387-74388-2_13</nowiki> https://link.springer.com/chapter/10.1007/978-0-387-74388-2_13 </ref> It can be denoted as a collection of nonnegative numbers <math>\textstyle x_j</math>, where <math> j = 1, 2,...,n </math>. Because each <math> \textstyle x_j </math> stands for a portion of the assets, they sum to one. To obtain the highest reward by finding the right mix of assets, let the positive parameter <math>\mu</math> denote the importance of the expected return relative to the risk, and let <math>\textstyle R_j</math> denote the return in the next time period on investment <math>j, j = 1, 2,..., n</math>. The total return one would obtain from the investment is <math>R = \sum_{j}x_j R_j </math>. The expected return is <math>\mathbb{E}R = \sum_{j}x_j\mathbb{E}R_j </math>, and the mean absolute deviation from the mean (MAD) is <math>\mathbb{E}\left\vert R - \mathbb{E}R \right\vert = \mathbb{E}\left\vert \sum_{j}x_j\tilde{R}_j \right\vert </math>. The problem is:<br />
<br />
<math> \begin{align}<br />
\text{maximize} \quad &\mu\sum_j x_j\mathbb{E}R_j - \mathbb{E}\left\vert \sum_j x_j\tilde{R}_j \right\vert \\<br />
\text{subject to} \quad &\sum_j x_j = 1 \\<br />
&x_j \geq 0, \quad j = 1,2,\ldots,n \\<br />
\text{where} \quad &\tilde{R}_j = R_j - \mathbb{E}R_j<br />
\end{align} </math><br />
<br />
<br />
This is clearly not yet a linear programming problem. As in the numerical example shown above, the approach is to replace each absolute value with a new variable and impose inequality constraints that ensure the new variable equals the appropriate absolute value at an optimal solution. To simplify the program, the mean expected return can be estimated by the average of the historical returns: <math>r_j = \mathbb{E}R_j = \frac{1}{T} \sum_{t=1}^T R_j(t)</math>. The objective function then becomes: <math>\mu\sum_{j}x_j r_j - \frac{1}{T} \sum_{t=1}^T\left\vert \sum_{j} x_j \bigl(R_j (t) - r_j\bigr) \right\vert</math><br />
<br />
Now, replacing <math>\left\vert \sum_{j} x_j \bigl(R_j (t) - r_j\bigr) \right\vert</math> with a new variable <math>y_t</math>, the problem can be rewritten as:<br />
<br />
<br />
<math> \begin{align}<br />
\text{maximize} \quad &\mu \sum_j x_j r_j - \frac{1}{T} \sum_{t=1}^T y_t \\<br />
\text{subject to} \quad &-y_t \leq \sum_{j} x_j \bigl(R_j (t) - r_j\bigr) \leq y_t, \quad t = 1, 2,\ldots,T \\<br />
&\sum_j x_j = 1 \\<br />
&x_j \geq 0, \quad j = 1, 2,\ldots,n \\<br />
&y_t \geq 0, \quad t = 1, 2,\ldots,T<br />
\end{align} </math><br />
<br />
<br />
Finally, after these simplifications and substitutions, the original problem has been converted into a linear program, which can be solved by standard methods.<br />
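A sketch of this portfolio linear program with <code>scipy.optimize.linprog</code> follows; the historical returns, the number of assets and periods, and the value of <math>\mu</math> below are synthetic, illustrative assumptions, not data from the original text:<br />

```python
import numpy as np
from scipy.optimize import linprog

# Synthetic historical returns R[t, j] for T periods and n assets
# (illustrative data, not from the original text).
rng = np.random.default_rng(0)
T, n, mu = 12, 3, 2.0
R = 1.0 + 0.05 * rng.standard_normal((T, n)) + 0.01 * np.arange(n)
r = R.mean(axis=0)                   # mean historical return per asset
D = R - r                            # deviations R_j(t) - r_j

# Variables: [x_1..x_n, y_1..y_T].
# linprog minimizes, so negate the objective mu*sum(x_j r_j) - (1/T)*sum(y_t).
c = np.concatenate([-mu * r, np.full(T, 1.0 / T)])
A_ub = np.block([[D, -np.eye(T)],    #  sum_j x_j D[t,j] - y_t <= 0
                 [-D, -np.eye(T)]])  # -sum_j x_j D[t,j] - y_t <= 0
b_ub = np.zeros(2 * T)
A_eq = np.concatenate([np.ones(n), np.zeros(T)]).reshape(1, -1)
b_eq = [1.0]                         # asset fractions sum to one
bounds = [(0, None)] * (n + T)       # x >= 0, y >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
x = res.x[:n]
print(x.round(3), -res.fun)          # chosen portfolio and objective value
```

At an optimum, each <math>y_t</math> settles at the absolute deviation for period <math>t</math>, exactly as the paired inequality constraints intend.<br />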
<br />
<br />
===Data Transfer Rate===<br />
Another application of optimization with absolute values is in improving data transfer rates. Faster-than-Nyquist signaling (FTNS) is a framework for transmitting signals beyond the Nyquist rate. The reference for this section proposed a 24.7% faster symbol rate by utilizing sum-of-absolute-values optimization. <ref>Sasahara, Hampei & Hayashi, Kazunori & Nagahara, Masaaki. (2016). Symbol Detection for Faster-Than-Nyquist Signaling by Sum-of-Absolute-Values Optimization. IEEE Signal Processing Letters. PP. 1-1. 10.1109/LSP.2016.2625839. https://www.researchgate.net/publication/309745511_Symbol_Detection_for_Faster-Than-Nyquist_Signaling_by_Sum-of-Absolute-Values_Optimization </ref><br />
<br />
The initial model is defined as follows:<br />
<math>\displaystyle x_0 (t) = \sum^N_{n=1} x_{n,0} h_n (t), t \in [0,T] </math><br />
<br />
where t ∈ R denotes the continuous time index, N ∈ N is the number of transmitted symbols in each transmission period, T > 0 is the interval of one period, <math>x_{n,0}</math> ∈ {+1, −1} are independent and identically distributed (i.i.d.) binary symbols [i.e., binary phase shift keying (BPSK)], and <math>h_n (t) (n = 1,...,N) </math> are the modulation pulses.<br />
<br />
By reformulating detection as a convex optimization problem and repeatedly applying Newton's method to handle the absolute values, approximate solutions can be obtained:<br />
<math>\displaystyle \min_{z \in R^N} (\lambda \Vert y - Hz \Vert^2_2 + \frac{1}{2} \Vert z - 1_N \Vert_1 + \frac{1}{2} \Vert z + 1_N \Vert_1 ) </math><br />
<br />
== Conclusion ==<br />
The presence of an absolute value within the objective function prevents the use of certain optimization methods. Solving these problems requires that the function first be manipulated so that linear programming techniques such as the simplex method can be applied. The applications of optimization with absolute values range from the financial sector, where portfolio returns can be improved, to the digital world, where data transfer rates can be increased. The formulation of these problems must take absolute values into account in order to model them correctly. Because absolute values inherently make these problems nonlinear, determining optimal solutions is only achievable after reformulating them into linear programs.<br />
<br />
== References ==<br />
<references /><br />
<br />
<br />
<br />
<br />
#Shanno, David F., and Roman L. Weil. “'Linear' Programming with Absolute-Value Functionals.” ''Operations Research'', vol. 19, no. 1, 1971, pp. 120–124. JSTOR, www.jstor.org/stable/168871. Accessed 13 Dec. 2020.</div>
<hr />
<div>Authors: Matthew Chan (mdc297), Yilian Yin (yy896), Brian Amado (ba392), Peter (pmw99), Dewei Xiao (dx58) - SYSEN 5800 Fall 2020<br />
<br />
Steward: Fengqi You<br />
<br />
== Introduction ==<br />
Absolute values can make it relatively difficult to determine the optimal solution when handled without first converting to standard form. This conversion of the objective function is a good first step in solving optimization problems with absolute values. As a result, one can go on to solve the problem using linear programing techniques. With the addition of a new variable (ex: <math>\textstyle X^a </math>) in the objective function the problem is considered nonlinear. Additional constraints must be added to find the optimal solution.<br />
<br />
== Method ==<br />
<br />
=== Defining Absolute Values ===<br />
An absolute value of a real number can be described as its distance away from zero, or the non-negative magnitude of the number. <ref> Mendelson, Elliott, Schaum's Outline of Beginning Calculus, McGraw-Hill Professional, 2008. https://books.google.com/books?id=A8hAm38zsCMC&pg=PA2#v=onepage&q&f=false </ref> Thus,<br />
<br />
<math>\displaystyle |x|={\begin{cases}-x,&{\text{if }}x<0\\x,&{\text{if }}x\geq 0\end{cases}}</math><br />
<br />
Absolute values can exist in linear optimization problems in two primary instances: in constraints and in the objective function. <ref> "Absolute Values." ''lp_solve'', http://lpsolve.sourceforge.net/. Accessed 20 Nov. 2020. </ref><br />
<br />
=== Absolute Values in Constraints ===<br />
Within constraints, absolute value relations can be transformed into one of the following forms:<br />
<br />
<math> \begin{align}<br />
|X| &= 0 \\<br />
|X| &\le C \\<br />
|X| &\ge C<br />
\end{align} </math><br />
<br />
Where <math>\textstyle X</math> is a linear combination (<math>\textstyle ax_1 + bx_2 + ...</math> where <math>\textstyle a, b</math> are constants) and <math>\textstyle C</math> is a constant <math>\textstyle > 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| = 0</math> ====<br />
In this form, the only possible solution is if <math>\displaystyle X = 0</math> simplifying the constraint. Note that this solution also occurs if the constraint is in the form <math>\displaystyle |X| \le 0</math> due to the same conclusion that the only possible solution is <math>\textstyle X = 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| \le C</math> ====<br />
The second form a linear constraint can exist in is <math>\displaystyle |X|\leq C</math>. In this case, an equivalent feasible solution can be described by splitting the constraint into two:<br />
<br />
<math> \begin{align}<br />
X &\leq C \\<br />
-X &\leq C<br />
\end{align} </math><br />
<br />
The solution can be understood visually since <math>\textstyle X</math> must lie between <math>\textstyle -C</math> and <math>\textstyle C</math>, as shown below:<br />
<br />
[[File:Number Line X Less Than C.png|none|thumb]]<br />
<br />
==== Form when <math>\displaystyle |X| \ge C</math> ====<br />
Visually, the solution space for the last form is the complement of the second solution above, resulting in the following representation:[[File:Number Line for X Greater Than C.png|none|thumb]]In expression form, the solutions can be written as:<br />
<br />
<math> \begin{align}<br />
X &\geq C \\<br />
-X &\geq C<br />
\end{align} </math><br />
<br />
As seen visually, the feasible region has a gap and thus non-convex. The expressions also make it impossible for both to simultaneously hold true. This means that it is not possible to transform constraints in this form to linear equations. <ref> ''Optimization Methods in Management Science / Operations Research.'' Massachusetts Institute of Technology, Spring 2013, https://ocw.mit.edu/courses/sloan-school-of-management/15-053-optimization-methods-in-management-science-spring-2013/tutorials/MIT15_053S13_tut04.pdf. Accessed 20 Nov. 2020. </ref> <br />
<br />
An approach to reach a solution for this particular case exists in the form of Mixed-Integer Linear Programming, where only one of the equations above is “active”.<br />
<br />
The inequality can be reformulated into the following:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge C \\<br />
-&X + N*(1-Y) \ge C \\<br />
&Y = 0, 1 <br />
\end{align} </math><br />
<br />
With this new set of constraints, a large constant <math>\textstyle N</math> is introduced, along with a binary variable <math>\textstyle Y</math>. So long as <math>\textstyle N</math> is sufficiently larger than the upper bound of <math>\textstyle X + C</math>, the large constant multiplied with the binary variable ensures that one of the constraints must be satisfied. For instance, if <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge C \\<br />
-&X + N \ge C<br />
\end{align} </math><br />
<br />
Since <math>\textstyle N</math> is sufficiently large, the latter constraint will always be satisfied, leaving only one relation active: <math>\textstyle X \ge C</math>. Functionally, this allows for the XOR logical operation of <math>\textstyle X \geq C</math> and <math>\textstyle -X \geq C</math>.<br />
<br />
=== Absolute Values in Objective Functions ===<br />
In objective functions, to leverage transformations of absolute functions, all constraints must be linear.<br />
<br />
Similar to the case of absolute values in constraints, there are different approaches to the reformation of the objective function, depending on the satisfaction of sign constraints. The satisfaction of sign constraints is when the coefficient signs of the absolute terms must all be either:<br />
<br />
* Positive for a minimization problem<br />
* Negative for a maximization problem<br />
<br />
==== Sign Constraints are Satisfied ====<br />
At a high level, the transformation works similarly to the second case of absolute value in constraints – aiming to bound the solution space for the absolute value term with a new variable, <math>\textstyle Z</math>.<br />
<br />
If <math>\textstyle |X|</math> is the absolute value term in our objective function, two additional constraints are added to the linear program:<br />
<br />
<math> \begin{align}<br />
&X\leq Z \\<br />
-&X\leq Z <br />
\end{align} </math><br />
<br />
The <math>\textstyle |X|</math> term in the objective function is then replaced by <math>\textstyle Z</math>, relaxing the original function into a collection of linear constraints.<br />
<br />
==== Sign Constraints are Not Satisfied ====<br />
In order to transform problems where the coefficient signs of the absolute terms do not fulfill the conditions above, a similar conclusion is reached to that of the last case for absolute values in constraints – the use of integer variables is needed to reach an LP format. <br />
<br />
The following constraints need to be added to the problem:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge Z \\<br />
-&X + N*(1-Y) \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z \\<br />
&Y = 0, 1<br />
\end{align} </math><br />
<br />
Again, <math>\textstyle N</math> is a large constant, <math>\textstyle Z</math> is a replacement variable for <math>\textstyle |X|</math> in the objective function, and <math>\textstyle Y</math> is a binary variable. The first two constraints ensure that one and only one constraint is active while the other will be automatically satisfied, following the same logic as above. The third and fourth constraints ensure that <math>\textstyle Z</math> must be equal to <math>\textstyle |X|</math> and has either a positive or negative value. For instance, for the case of <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge Z \\<br />
-&X + N \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z<br />
\end{align} </math><br />
<br />
As <math>\textstyle N</math> is sufficiently large (<math>\textstyle N</math> must be at least <math>\textstyle 2|X|</math> for this approach), the second constraint must be satisfied. Since <math>\textstyle Z</math> is non-negative, the fourth constraint must also be satisfied. The remaining constraints, <math>\textstyle X \ge Z</math> and <math>\textstyle X \le Z</math> can only be satisfied when <math>\textstyle Z = X</math> and is of non-negative signage. Together, these constraints will allow for the selection of the largest <math>\textstyle |X|</math> for maximization problems (or smallest for minimization problems).<br />
<br />
=== Absolute Values in Nonlinear Optimization Problems ===<br />
The addition of a new variable <math> (X_a) </math> to an objective function with absolute value quantities forms a nonlinear optimization problem. The absolute value quantities would require that the problem be reformatted before proceeding. Additional constraints must be added to account for the added variable.<br />
<br />
==Numerical Example==<br />
'''Example when All Sign Constraints are Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| + |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> -U_3 \le x_3 \le U_3 </math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 + U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
-&U_3 \le x_3 \le U_3 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
<br />
'''Example when Sign Constraints are not Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| - |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> \begin{align}<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1<br />
\end{align}</math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 - U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>-3.5</math>, which occur when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
<br />
== Applications ==<br />
<br />
<br />
Consider the problem <math>Ax=b; \quad max \quad z= x c,jx,i</math>. This problem cannot, in general, be solved with the simplex method. The problem has a simplex method solution (with unrestricted basis entry) only if c, are nonpositive (non-negative for minimizing problems).<br />
<br />
The primary application of absolute-value functionals in linear programming has been for absolute-value or L(i)-metric regression analysis. Such application is always a minimization problem with all C(j) equal to 1 so that the required conditions for valid use of the simplex method are met. <br />
<br />
By reformulating the original problem into a Mixed-Integer Linear Program (MILP), we can utilize known programs to solve for the optimal solution(s). <br />
<br />
=== Application in Financial: Portfolio Selection===<br />
Under this topic, the same tricks played in the Numerical Example section to perform '''Reduction to a Linear Programming Problem''' will be applied here again, to reform the problem into a MILP in order to solve the problem. An example is given as below. <br />
<br />
<br />
<br />
A portfolio is determined by what fraction of one's assets to put into each investment. <ref> Vanderbei R.J. (2008) Financial Applications. In: Linear Programming. International Series in Operations Research & Management Science, vol 114. Springer, Boston, MA. <nowiki>https://doi.org/10.1007/978-0-387-74388-2_13</nowiki> https://link.springer.com/chapter/10.1007/978-0-387-74388-2_13 </ref> It can be denoted as a collection of nonnegative numbers <math>\textstyle x_j</math>, where <math> j = 1, 2,...,n </math>. Because each <math> \textstyle x_j </math>stands for a portion of the assets, it sums to one. In order to get a highest reward through finding a right mix of assets, let <math>\mu</math>, the positive parameter, denote the importance of risk relative to the return, and <math>/textstyle Rj</math> denote the return in the next time period on investment <math>j, j = 1, 2,..., n</math>. The total return one would obtain from the investment is <math>R = \sum_{j}\!x_j\!R_j </math>. The expected return is <math>\mathbb{E}\!R = \sum_{j}\!x_j\mathbb{E}\!R_j </math>. And the Mean Absolute Deviation from the Mean (MAD) is <math>\mathbb{E}\left\vert \!R - \mathbb{E}\!R \right\vert = \mathbb{E}\left\vert \sum_{j}\!x_j\tilde{R}_j \right\vert </math>. <br />
<br />
maximize <math display="inline">\mu\sum_j\!x_j\mathbb{E}\!R_j - \mathbb{E}\left\vert \sum_j \!x_j\tilde{R}_j \right\vert </math><br />
<br />
subject to <math>\sum_j\!x_j = 1</math><br />
<br />
<math>x_j \geq 0</math> <math> j = 1,2,..n.</math><br />
<br />
where <math>\tilde{R}_j = \!R_j - \mathbb{E}\!R_j </math><br />
<br />
<br />
<br />
Very obviously, this problem is not a linear programming problem yet. Similar to the numerical example showed above, the right thing to do is to replace each absolute value with a new variable and impose inequality constraints to ensure that the new variable is the appropriate absolute value once an optimal value is obtained. To simplify the program, an average of the historical returns can be taken in order to get the mean expected return: <math>r_j = \mathbb{E}\!R_j = \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!R_j(t)<br />
</math>. Thus the objective function is turned into: <math>\mu\sum_{j}\!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert <br />
</math><br />
<br />
Now, replacing <math>\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert</math> with a new variable <math>y_t</math>, the problem can be rewritten as:<br />
<br />
<br />
maximize <math>\mu \sum_j \!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!y_t</math><br />
<br />
subject to <math>-\!y_t \leq \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \leq y_t,</math> <math> t = 1, 2,...,T</math><br />
<br />
<math>\sum_j \!x_j = 1</math><br />
<br />
<math>x_j\geq 0,</math> <math> j = 1, 2,...,n</math><br />
<br />
<math>y_t \geq 0,</math> <math> t = 1, 2,...,T</math><br />
<br />
<br />
With these substitutions, the original problem is converted into a linear program that can be solved with standard techniques.<br />
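The effect of the reformulation can be checked numerically. The sketch below uses hypothetical two-asset data and an assumed trade-off parameter mu = 2; it compares the original absolute-value objective against the LP objective with each <math>y_t</math> at its tightest feasible value. The two agree for every choice of weights, so the reformulated LP selects the same portfolio as the original problem.

```python
# Numerical check of the LP reformulation (hypothetical two-asset data; the
# trade-off parameter mu = 2.0 is an assumed value, not from the text).
R = [[1.10, 0.95, 1.05, 1.02],   # historical returns R_j(t), asset 1
     [1.02, 1.01, 1.00, 1.03]]  # asset 2
T = len(R[0])
n = len(R)
r = [sum(Rj) / T for Rj in R]    # mean returns r_j
mu = 2.0

def deviations(x):
    # sum_j x_j * (R_j(t) - r_j) for each period t
    return [sum(x[j] * (R[j][t] - r[j]) for j in range(n)) for t in range(T)]

def original_objective(x):
    # mu * sum_j x_j r_j - (1/T) * sum_t |deviation_t|
    return mu * sum(x[j] * r[j] for j in range(n)) - sum(abs(d) for d in deviations(x)) / T

def lp_objective(x):
    # At the LP optimum each y_t sits at its tightest feasible value,
    # y_t = |deviation_t|, so the LP objective equals the original one.
    y = [abs(d) for d in deviations(x)]
    return mu * sum(x[j] * r[j] for j in range(n)) - sum(y) / T

weights = [w / 100 for w in range(101)]          # x = (w, 1 - w) on the simplex
best_w = max(weights, key=lambda w: lp_objective([w, 1 - w]))
print(best_w)
```

The grid search here only stands in for an LP solver to keep the sketch self-contained; in practice the reformulated problem would be handed to any standard LP package.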
<br />
<br />
===Data Transfer Rate===<br />
Another application of optimization with absolute values is increasing data transfer rates. Faster-than-Nyquist signaling (FTNS) is a framework for transmitting signals beyond the Nyquist rate. The work cited in this section reports a 24.7% faster symbol rate, achieved by formulating symbol detection as a Sum-of-Absolute-Values (SAV) optimization. <ref>Sasahara, Hampei & Hayashi, Kazunori & Nagahara, Masaaki. (2016). Symbol Detection for Faster-Than-Nyquist Signaling by Sum-of-Absolute-Values Optimization. IEEE Signal Processing Letters. PP. 1-1. 10.1109/LSP.2016.2625839. https://www.researchgate.net/publication/309745511_Symbol_Detection_for_Faster-Than-Nyquist_Signaling_by_Sum-of-Absolute-Values_Optimization </ref><br />
<br />
The initial model is defined as follows:<br />
<math>\displaystyle x_0 (t) = \sum^N_{n=1} x_{n,0} h_n (t), t \in [0,T] </math><br />
<br />
where <math>t \in \mathbb{R}</math> denotes the continuous-time index, <math>N \in \mathbb{N}</math> is the number of transmitted symbols in each transmission period, <math>T > 0</math> is the length of one period, <math>x_{n,0} \in \{+1, -1\}</math> are independent and identically distributed (i.i.d.) binary symbols [i.e., binary phase-shift keying (BPSK)], and <math>h_n (t), \; n = 1,...,N,</math> are the modulation pulses.<br />
<br />
Reformulating symbol detection as the following convex optimization problem and solving it iteratively yields approximate solutions:<br />
<math>\displaystyle \min_{z \in R^N} (\lambda \Vert y - Hz \Vert^2_2 + \frac{1}{2} \Vert z - 1_N \Vert_1 + \frac{1}{2} \Vert z + 1_N \Vert_1 ) </math><br />
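To make the role of the two L1 penalty terms concrete, the toy sketch below evaluates the SAV objective for a tiny 3-symbol channel. Everything here (the channel matrix H, the symbols, and lambda) is invented for illustration, and the enumeration over binary vectors stands in for the paper's convex relaxation: for <math>z_n \in \{+1,-1\}</math> the two penalty terms contribute a constant, so the least-squares data term picks out the transmitted symbols.

```python
# Toy evaluation of the SAV objective (all data here -- the channel matrix H,
# the symbols, and lam -- are invented for illustration; the paper solves the
# relaxed convex problem rather than enumerating symbols as done below).
from itertools import product

lam = 10.0
H = [[1.0, 0.3, 0.0],            # assumed inter-symbol interference matrix
     [0.3, 1.0, 0.3],
     [0.0, 0.3, 1.0]]
x_true = [1.0, -1.0, 1.0]        # transmitted BPSK symbols
# Noiseless received samples y = H x_true
y = [sum(H[i][j] * x_true[j] for j in range(3)) for i in range(3)]

def sav_objective(z):
    # lam * ||y - H z||_2^2 + 0.5 * ||z - 1||_1 + 0.5 * ||z + 1||_1
    resid = [y[i] - sum(H[i][j] * z[j] for j in range(3)) for i in range(3)]
    data = lam * sum(ri * ri for ri in resid)
    penalty = 0.5 * sum(abs(zn - 1.0) for zn in z) + 0.5 * sum(abs(zn + 1.0) for zn in z)
    return data + penalty

# For z_n in {+1, -1} the penalty is constant (each symbol contributes exactly 1),
# so minimizing over the binary alphabet reduces to least-squares detection.
z_hat = min(product([-1.0, 1.0], repeat=3), key=sav_objective)
print(list(z_hat))
```

With noiseless data the minimizer over the binary alphabet recovers the transmitted symbols exactly; the convex relaxation in the paper targets the same objective without the exponential enumeration.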
<br />
== Conclusion ==<br />
The presence of an absolute value in the objective function or constraints prevents the direct use of linear optimization methods such as the simplex method. Solving these problems requires reformulating the model, typically by introducing auxiliary variables and constraints, so that standard linear programming techniques can be applied.<br />
<br />
== References ==<br />
<references /><br />
<br />
<br />
<br />
<br />
#Shanno, David F., and Roman L. Weil. “'Linear' Programming with Absolute-Value Functionals.” Operations Research, vol. 19, no. 1, 1971, pp. 120–124. JSTOR, www.jstor.org/stable/168871. Accessed 13 Dec. 2020.</div>Yilian Yin
<hr />
<div>Authors: Matthew Chan (mdc297), Yilian Yin (yy896), Brian Amado (ba392), Peter (pmw99), Dewei Xiao (dx58) - SYSEN 5800 Fall 2020<br />
<br />
Steward: Fengqi You<br />
<br />
== Introduction ==<br />
Absolute values can make it relatively difficult to determine the optimal solution when handled without first converting to standard form. This conversion of the objective function is a good first step in solving optimization problems with absolute values. As a result, one can go on to solve the problem using linear programing techniques. With the addition of a new variable (ex: <math>\textstyle X^a </math>) in the objective function the problem is considered nonlinear. Additional constraints must be added to find the optimal solution.<br />
<br />
== Method ==<br />
<br />
=== Defining Absolute Values ===<br />
An absolute value of a real number can be described as its distance away from zero, or the non-negative magnitude of the number. <ref> Mendelson, Elliott, Schaum's Outline of Beginning Calculus, McGraw-Hill Professional, 2008. https://books.google.com/books?id=A8hAm38zsCMC&pg=PA2#v=onepage&q&f=false </ref> Thus,<br />
<br />
<math>\displaystyle |x|={\begin{cases}-x,&{\text{if }}x<0\\x,&{\text{if }}x\geq 0\end{cases}}</math><br />
<br />
Absolute values can exist in linear optimization problems in two primary instances: in constraints and in the objective function. <ref> "Absolute Values." ''lp_solve'', http://lpsolve.sourceforge.net/. Accessed 20 Nov. 2020. </ref><br />
<br />
=== Absolute Values in Constraints ===<br />
Within constraints, absolute value relations can be transformed into one of the following forms:<br />
<br />
<math> \begin{align}<br />
|X| &= 0 \\<br />
|X| &\le C \\<br />
|X| &\ge C<br />
\end{align} </math><br />
<br />
Where <math>\textstyle X</math> is a linear combination (<math>\textstyle ax_1 + bx_2 + ...</math> where <math>\textstyle a, b</math> are constants) and <math>\textstyle C</math> is a constant <math>\textstyle > 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| = 0</math> ====<br />
In this form, the only possible solution is if <math>\displaystyle X = 0</math> simplifying the constraint. Note that this solution also occurs if the constraint is in the form <math>\displaystyle |X| \le 0</math> due to the same conclusion that the only possible solution is <math>\textstyle X = 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| \le C</math> ====<br />
The second form a linear constraint can exist in is <math>\displaystyle |X|\leq C</math>. In this case, an equivalent feasible solution can be described by splitting the constraint into two:<br />
<br />
<math> \begin{align}<br />
X &\leq C \\<br />
-X &\leq C<br />
\end{align} </math><br />
<br />
The solution can be understood visually since <math>\textstyle X</math> must lie between <math>\textstyle -C</math> and <math>\textstyle C</math>, as shown below:<br />
<br />
[[File:Number Line X Less Than C.png|none|thumb]]<br />
<br />
==== Form when <math>\displaystyle |X| \ge C</math> ====<br />
Visually, the solution space for the last form is the complement of the second solution above, resulting in the following representation:[[File:Number Line for X Greater Than C.png|none|thumb]]In expression form, the solutions can be written as:<br />
<br />
<math> \begin{align}<br />
X &\geq C \\<br />
-X &\geq C<br />
\end{align} </math><br />
<br />
As seen visually, the feasible region has a gap and thus non-convex. The expressions also make it impossible for both to simultaneously hold true. This means that it is not possible to transform constraints in this form to linear equations. <ref> ''Optimization Methods in Management Science / Operations Research.'' Massachusetts Institute of Technology, Spring 2013, https://ocw.mit.edu/courses/sloan-school-of-management/15-053-optimization-methods-in-management-science-spring-2013/tutorials/MIT15_053S13_tut04.pdf. Accessed 20 Nov. 2020. </ref> <br />
<br />
An approach to reach a solution for this particular case exists in the form of Mixed-Integer Linear Programming, where only one of the equations above is “active”.<br />
<br />
The inequality can be reformulated into the following:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge C \\<br />
-&X + N*(1-Y) \ge C \\<br />
&Y = 0, 1 <br />
\end{align} </math><br />
<br />
With this new set of constraints, a large constant <math>\textstyle N</math> is introduced, along with a binary variable <math>\textstyle Y</math>. So long as <math>\textstyle N</math> is sufficiently larger than the upper bound of <math>\textstyle X + C</math>, the large constant multiplied with the binary variable ensures that one of the constraints must be satisfied. For instance, if <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge C \\<br />
-&X + N \ge C<br />
\end{align} </math><br />
<br />
Since <math>\textstyle N</math> is sufficiently large, the latter constraint will always be satisfied, leaving only one relation active: <math>\textstyle X \ge C</math>. Functionally, this allows for the XOR logical operation of <math>\textstyle X \geq C</math> and <math>\textstyle -X \geq C</math>.<br />
<br />
=== Absolute Values in Objective Functions ===<br />
In objective functions, to leverage transformations of absolute functions, all constraints must be linear.<br />
<br />
Similar to the case of absolute values in constraints, there are different approaches to the reformation of the objective function, depending on the satisfaction of sign constraints. The satisfaction of sign constraints is when the coefficient signs of the absolute terms must all be either:<br />
<br />
* Positive for a minimization problem<br />
* Negative for a maximization problem<br />
<br />
==== Sign Constraints are Satisfied ====<br />
At a high level, the transformation works similarly to the second case of absolute value in constraints – aiming to bound the solution space for the absolute value term with a new variable, <math>\textstyle Z</math>.<br />
<br />
If <math>\textstyle |X|</math> is the absolute value term in our objective function, two additional constraints are added to the linear program:<br />
<br />
<math> \begin{align}<br />
&X\leq Z \\<br />
-&X\leq Z <br />
\end{align} </math><br />
<br />
The <math>\textstyle |X|</math> term in the objective function is then replaced by <math>\textstyle Z</math>, relaxing the original function into a collection of linear constraints.<br />
<br />
==== Sign Constraints are Not Satisfied ====<br />
In order to transform problems where the coefficient signs of the absolute terms do not fulfill the conditions above, a similar conclusion is reached to that of the last case for absolute values in constraints – the use of integer variables is needed to reach an LP format. <br />
<br />
The following constraints need to be added to the problem:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge Z \\<br />
-&X + N*(1-Y) \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z \\<br />
&Y = 0, 1<br />
\end{align} </math><br />
<br />
Again, <math>\textstyle N</math> is a large constant, <math>\textstyle Z</math> is a replacement variable for <math>\textstyle |X|</math> in the objective function, and <math>\textstyle Y</math> is a binary variable. The first two constraints ensure that one and only one constraint is active while the other will be automatically satisfied, following the same logic as above. The third and fourth constraints ensure that <math>\textstyle Z</math> must be equal to <math>\textstyle |X|</math> and has either a positive or negative value. For instance, for the case of <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge Z \\<br />
-&X + N \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z<br />
\end{align} </math><br />
<br />
As <math>\textstyle N</math> is sufficiently large (<math>\textstyle N</math> must be at least <math>\textstyle 2|X|</math> for this approach), the second constraint must be satisfied. Since <math>\textstyle Z</math> is non-negative, the fourth constraint must also be satisfied. The remaining constraints, <math>\textstyle X \ge Z</math> and <math>\textstyle X \le Z</math> can only be satisfied when <math>\textstyle Z = X</math> and is of non-negative signage. Together, these constraints will allow for the selection of the largest <math>\textstyle |X|</math> for maximization problems (or smallest for minimization problems).<br />
<br />
=== Absolute Values in Nonlinear Optimization Problems ===<br />
The addition of a new variable <math> (X_a) </math> to an objective function with absolute value quantities forms a nonlinear optimization problem. The absolute value quantities would require that the problem be reformatted before proceeding. Additional constraints must be added to account for the added variable.<br />
<br />
==Numerical Example==<br />
'''Example when All Sign Constraints are Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| + |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> -U_3 \le x_3 \le U_3 </math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 + U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
-&U_3 \le x_3 \le U_3 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
<br />
'''Example when Sign Constraints are not Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| - |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> \begin{align}<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1<br />
\end{align}</math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 - U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>-3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
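Since <math>Y</math> is the only binary variable, this small mixed-integer program can be sketched (again assuming SciPy is available, with an assumed big-M constant of <math>M = 100</math>) by solving one linear program per value of <math>Y</math> and keeping the best feasible result:<br />

```python
from scipy.optimize import linprog

M = 100.0   # assumed big-M constant, larger than any |x3| of interest
best = None
for Y in (0, 1):
    # Variable order: [x1, x2, x3, U1, U2, U3]; x free, U >= 0.
    c = [0, 0, 0, 2, 3, -1]                 # minimize 2*U1 + 3*U2 - U3
    A_ub = [
        [ 1,  2, -3,  0,  0,  0],           # x1 + 2*x2 - 3*x3 <= 8
        [ 1,  0,  0, -1,  0,  0],           # -U1 <= x1 <= U1
        [-1,  0,  0, -1,  0,  0],
        [ 0,  1,  0,  0, -1,  0],           # -U2 <= x2 <= U2
        [ 0, -1,  0,  0, -1,  0],
        [ 0,  0, -1,  0,  0,  1],           # U3 - x3 <= M*Y
        [ 0,  0,  1,  0,  0,  1],           # U3 + x3 <= M*(1 - Y)
        [ 0,  0,  1,  0,  0, -1],           # x3 <= U3
        [ 0,  0, -1,  0,  0, -1],           # -x3 <= U3
    ]
    b_ub = [8, 0, 0, 0, 0, M * Y, M * (1 - Y), 0, 0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  A_eq=[[2, -1, 4, 0, 0, 0]], b_eq=[14],
                  bounds=[(None, None)] * 3 + [(0, None)] * 3)
    if res.status == 0 and (best is None or res.fun < best.fun):
        best = res

print(best.fun, best.x[:3])   # objective -3.5 at x = (0, 0, 3.5)
```

Enumerating <math>Y</math> by hand works only because there is a single binary variable; a real MILP solver would branch over such variables automatically.<br />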
<br />
== Applications ==<br />
<br />
<br />
Consider the problem <math>\max \quad z = \sum_j c_j |x_j| \quad s.t. \quad Ax=b</math>. This problem cannot, in general, be solved with the simplex method. It has a simplex method solution (with unrestricted basis entry) only if the <math>c_j</math> are nonpositive (nonnegative for minimization problems).<br />
<br />
The primary application of absolute-value functionals in linear programming has been absolute-value (<math>L_1</math>-metric) regression analysis. Such an application is always a minimization problem with all <math>c_j</math> equal to 1, so that the required conditions for valid use of the simplex method are met. <br />
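As an illustration of such <math>L_1</math>-metric regression, a least-absolute-deviations line fit <math>\min_{a,b} \sum_t |y_t - (a + b t)|</math> becomes a linear program by bounding each residual with a new variable <math>e_t</math>, exactly as in the method above. The sketch below (assuming SciPy is available, on made-up data with one outlier) shows that the <math>L_1</math> fit is robust to the outlier:<br />

```python
from scipy.optimize import linprog

# Hypothetical data: points on the line y = t, plus one large outlier.
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 1.0, 2.0, 3.0, 100.0]
n = len(ts)

# Variables: [a, b, e_1, ..., e_n]; minimize the sum of absolute residuals.
c = [0.0, 0.0] + [1.0] * n
A_ub, b_ub = [], []
for i, (t, y) in enumerate(zip(ts, ys)):
    e = [0.0] * n
    e[i] = -1.0
    A_ub.append([ 1.0,  t] + e)   # (a + b*t) - y <= e_i
    b_ub.append(y)
    A_ub.append([-1.0, -t] + e)   # y - (a + b*t) <= e_i
    b_ub.append(-y)
bounds = [(None, None), (None, None)] + [(0, None)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
a, b = res.x[0], res.x[1]
print(a, b, res.fun)   # fit y = t; total absolute error 96
```

The <math>L_1</math> fit recovers the line <math>y = t</math> through the four collinear points, assigning all of the error (96) to the outlier, whereas a least-squares fit would be pulled strongly toward it.<br />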
<br />
By reformulating the original problem into a Mixed-Integer Linear Program (MILP), we can utilize known programs to solve for the optimal solution(s). <br />
<br />
=== Application in Finance: Portfolio Selection===<br />
Under this topic, the same technique used in the Numerical Example section to perform '''Reduction to a Linear Programming Problem''' is applied again, to reformulate the portfolio selection problem as a linear program. An example is given below. <br />
<br />
<br />
<br />
A portfolio is determined by what fraction of one's assets to put into each investment. <ref> Vanderbei R.J. (2008) Financial Applications. In: Linear Programming. International Series in Operations Research & Management Science, vol 114. Springer, Boston, MA. <nowiki>https://doi.org/10.1007/978-0-387-74388-2_13</nowiki> https://link.springer.com/chapter/10.1007/978-0-387-74388-2_13 </ref> It can be denoted as a collection of nonnegative numbers <math>\textstyle x_j</math>, where <math> j = 1, 2,...,n </math>. Because each <math> \textstyle x_j </math> stands for a portion of the assets, they sum to one. To find the mix of assets with the highest reward, let the positive parameter <math>\mu</math> denote the importance of the expected return relative to the risk, and let <math>\textstyle R_j</math> denote the return in the next time period on investment <math>j, j = 1, 2,..., n</math>. The total return one would obtain from the investment is <math>R = \sum_{j}\!x_j\!R_j </math>. The expected return is <math>\mathbb{E}\!R = \sum_{j}\!x_j\mathbb{E}\!R_j </math>, and the Mean Absolute Deviation from the Mean (MAD) is <math>\mathbb{E}\left\vert \!R - \mathbb{E}\!R \right\vert = \mathbb{E}\left\vert \sum_{j}\!x_j\tilde{R}_j \right\vert </math>. <br />
<br />
maximize <math display="inline">\mu\sum_j\!x_j\mathbb{E}\!R_j - \mathbb{E}\left\vert \sum_j \!x_j\tilde{R}_j \right\vert </math><br />
<br />
subject to <math>\sum_j\!x_j = 1</math><br />
<br />
<math>x_j \geq 0,</math> <math> j = 1,2,...,n.</math><br />
<br />
where <math>\tilde{R}_j = \!R_j - \mathbb{E}\!R_j </math><br />
<br />
<br />
<br />
This problem is not yet a linear programming problem. As in the numerical example shown above, each absolute value is replaced with a new variable, and inequality constraints are imposed to ensure that the new variable equals the appropriate absolute value once an optimal value is obtained. To simplify the program, an average of the historical returns can be taken in order to get the mean expected return: <math>r_j = \mathbb{E}\!R_j = \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!R_j(t)</math>. The objective function then becomes: <math>\mu\sum_{j}\!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert</math><br />
<br />
Now, replacing <math>\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert</math> with a new variable <math>y_t</math>, the problem can be rewritten as:<br />
<br />
<br />
maximize <math>\mu \sum_j \!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!y_t</math><br />
<br />
subject to <math>-\!y_t \leq \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \leq y_t, \quad t = 1, 2,...,T</math><br />
<br />
<math>\sum_j \!x_j = 1</math><br />
<br />
<math>x_j\geq 0, \quad j = 1, 2,...,n</math><br />
<br />
<math>y_t \geq 0, \quad t = 1, 2,...,T</math><br />
<br />
<br />
Finally, after these substitutions, the original problem is converted into a linear program, which can be solved with standard techniques.<br />
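A minimal numerical sketch of this portfolio model (assuming SciPy is available; the two assets, their made-up return histories, and the weight <math>\mu = 2</math> are all hypothetical) can be written as:<br />

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical historical returns R_j(t): rows = periods t, columns = assets j.
R = np.array([[1.10, 1.05],
              [1.00, 1.05],
              [1.20, 1.05]])
T, n = R.shape
mu = 2.0                      # assumed weight on expected return vs. risk
r = R.mean(axis=0)            # mean expected returns r_j
D = R - r                     # deviations R_j(t) - r_j

# Variables: [x_1..x_n, y_1..y_T]; maximize mu*sum(x*r) - (1/T)*sum(y),
# i.e. minimize the negative of that expression.
c = np.concatenate([-mu * r, np.full(T, 1.0 / T)])
# -y_t <= sum_j x_j * D[t, j] <= y_t   (two inequality rows per period)
A_ub = np.vstack([np.hstack([ D, -np.eye(T)]),
                  np.hstack([-D, -np.eye(T)])])
b_ub = np.zeros(2 * T)
A_eq = np.hstack([np.ones((1, n)), np.zeros((1, T))])   # sum_j x_j = 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (n + T))
x = res.x[:n]
print(x, -res.fun)   # all weight on the riskier, higher-return asset
```

With this <math>\mu</math>, the extra expected return of the first asset outweighs its mean absolute deviation, so the optimum puts the whole portfolio into it; a smaller <math>\mu</math> would shift weight to the riskless second asset.<br />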
<br />
<br />
===Data Transfer Rate===<br />
Faster-than-Nyquist (FTN) signaling is a framework for transmitting signals beyond the Nyquist rate. The reference for this section proposes a 24.7% faster symbol rate by utilizing Sum-of-Absolute-Values optimization. <ref>Sasahara, Hampei & Hayashi, Kazunori & Nagahara, Masaaki. (2016). Symbol Detection for Faster-Than-Nyquist Signaling by Sum-of-Absolute-Values Optimization. IEEE Signal Processing Letters. PP. 1-1. 10.1109/LSP.2016.2625839. https://www.researchgate.net/publication/309745511_Symbol_Detection_for_Faster-Than-Nyquist_Signaling_by_Sum-of-Absolute-Values_Optimization </ref><br />
<br />
The initial model is defined as follows:<br />
<math>\displaystyle x_0 (t) = \sum^N_{n=1} x_{n,0} h_n (t), t \in [0,T] </math><br />
<br />
where t ∈ R denotes the continuous time index, N ∈ N is the number of transmitted symbols in each transmission period, T > 0 is the interval of one period, <math>x_{n,0}</math> ∈ {+1, −1} are independent and identically distributed (i.i.d.) binary symbols [i.e., binary phase shift keying (BPSK)], and <math>h_n (t) (n = 1,...,N) </math> are the modulation pulses.<br />
<br />
Reformulated as a convex optimization problem and solved iteratively, an approximate solution can be obtained:<br />
<math>\displaystyle \min_{z \in R^N} (\lambda \Vert y - Hz \Vert^2_2 + \frac{1}{2} \Vert z - 1_N \Vert_1 + \frac{1}{2} \Vert z + 1_N \Vert_1 ) </math><br />
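One way to minimize an objective of this form is a plain proximal-gradient iteration; the sketch below uses a small made-up channel matrix <math>H</math> and is not the algorithm of the reference. It relies on the identity <math>\tfrac{1}{2}|z-1| + \tfrac{1}{2}|z+1| = \max(|z|, 1)</math>, whose proximal operator has a closed form per coordinate:<br />

```python
import numpy as np

def prox_soav(v, t):
    """Per-coordinate prox of g(z) = 0.5*|z-1| + 0.5*|z+1| (= max(|z|, 1))."""
    out = v.copy()
    big = np.abs(v) > 1.0
    # Outside [-1, 1]: shrink toward the interval, stopping at +/-1.
    out[big] = np.sign(v[big]) * np.maximum(np.abs(v[big]) - t, 1.0)
    return out

# Hypothetical small instance: binary symbols z0, noiseless observations y.
H = np.array([[1.0, 0.2, 0.1],
              [0.1, 1.0, 0.2],
              [0.0, 0.1, 1.0]])
z0 = np.array([1.0, -1.0, 1.0])
y = H @ z0
lam = 5.0

# Proximal gradient on the smooth part f(z) = lam * ||y - H z||_2^2,
# with step 1/L, L = Lipschitz constant of the gradient of f.
step = 1.0 / (2.0 * lam * np.linalg.norm(H, 2) ** 2)
z = np.zeros(3)
for _ in range(2000):
    grad = 2.0 * lam * H.T @ (H @ z - y)
    z = prox_soav(z - step * grad, step)

print(z)   # converges to the transmitted symbols z0 = (1, -1, 1)
```

With noiseless observations the iteration recovers the binary symbols exactly; in practice a final rounding <math>\operatorname{sgn}(z)</math> gives the detected symbols.<br />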
<br />
== Conclusion ==<br />
The presence of an absolute value within the objective function prevents the use of certain optimization methods. Solving these problems requires that the function be manipulated in order to continue with linear programming techniques like the simplex method.<br />
<br />
== References ==<br />
<references /><br />
<br />
<br />
<br />
<br />
#Shanno, David F., and Roman L. Weil. “'Linear' Programming with Absolute-Value Functionals.” Operations Research, vol. 19, no. 1, 1971, pp. 120–124. JSTOR, www.jstor.org/stable/168871. Accessed 13 Dec. 2020.</div>
<hr />
<div>Authors: Matthew Chan (mdc297), Yilian Yin (yy896), Brian Amado (ba392), Peter (pmw99), Dewei Xiao (dx58) - SYSEN 5800 Fall 2020<br />
<br />
Steward: Fengqi You<br />
<br />
== Introduction ==<br />
Absolute values can make it relatively difficult to determine the optimal solution when handled without first converting to standard form. This conversion of the objective function is a good first step in solving optimization problems with absolute values. As a result, one can go on to solve the problem using linear programing techniques. With the addition of a new variable (ex: <math>\textstyle X^a </math>) in the objective function the problem is considered nonlinear. Additional constraints must be added to find the optimal solution.<br />
<br />
== Method ==<br />
<br />
=== Defining Absolute Values ===<br />
An absolute value of a real number can be described as its distance away from zero, or the non-negative magnitude of the number. <ref> Mendelson, Elliott, Schaum's Outline of Beginning Calculus, McGraw-Hill Professional, 2008. https://books.google.com/books?id=A8hAm38zsCMC&pg=PA2#v=onepage&q&f=false </ref> Thus,<br />
<br />
<math>\displaystyle |x|={\begin{cases}-x,&{\text{if }}x<0\\x,&{\text{if }}x\geq 0\end{cases}}</math><br />
<br />
Absolute values can exist in linear optimization problems in two primary instances: in constraints and in the objective function. <ref> "Absolute Values." ''lp_solve'', http://lpsolve.sourceforge.net/. Accessed 20 Nov. 2020. </ref><br />
<br />
=== Absolute Values in Constraints ===<br />
Within constraints, absolute value relations can be transformed into one of the following forms:<br />
<br />
<math> \begin{align}<br />
|X| &= 0 \\<br />
|X| &\le C \\<br />
|X| &\ge C<br />
\end{align} </math><br />
<br />
Where <math>\textstyle X</math> is a linear combination (<math>\textstyle ax_1 + bx_2 + ...</math> where <math>\textstyle a, b</math> are constants) and <math>\textstyle C</math> is a constant <math>\textstyle > 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| = 0</math> ====<br />
In this form, the only possible solution is if <math>\displaystyle X = 0</math> simplifying the constraint. Note that this solution also occurs if the constraint is in the form <math>\displaystyle |X| \le 0</math> due to the same conclusion that the only possible solution is <math>\textstyle X = 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| \le C</math> ====<br />
The second form a linear constraint can exist in is <math>\displaystyle |X|\leq C</math>. In this case, an equivalent feasible solution can be described by splitting the constraint into two:<br />
<br />
<math> \begin{align}<br />
X &\leq C \\<br />
-X &\leq C<br />
\end{align} </math><br />
<br />
The solution can be understood visually since <math>\textstyle X</math> must lie between <math>\textstyle -C</math> and <math>\textstyle C</math>, as shown below:<br />
<br />
[[File:Number Line X Less Than C.png|none|thumb]]<br />
<br />
==== Form when <math>\displaystyle |X| \ge C</math> ====<br />
Visually, the solution space for the last form is the complement of the second solution above, resulting in the following representation:[[File:Number Line for X Greater Than C.png|none|thumb]]In expression form, the solutions can be written as:<br />
<br />
<math> \begin{align}<br />
X &\geq C \\<br />
-X &\geq C<br />
\end{align} </math><br />
<br />
As seen visually, the feasible region has a gap and thus non-convex. The expressions also make it impossible for both to simultaneously hold true. This means that it is not possible to transform constraints in this form to linear equations. <ref> ''Optimization Methods in Management Science / Operations Research.'' Massachusetts Institute of Technology, Spring 2013, https://ocw.mit.edu/courses/sloan-school-of-management/15-053-optimization-methods-in-management-science-spring-2013/tutorials/MIT15_053S13_tut04.pdf. Accessed 20 Nov. 2020. </ref> <br />
<br />
An approach to reach a solution for this particular case exists in the form of Mixed-Integer Linear Programming, where only one of the equations above is “active”.<br />
<br />
The inequality can be reformulated into the following:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge C \\<br />
-&X + N*(1-Y) \ge C \\<br />
&Y = 0, 1 <br />
\end{align} </math><br />
<br />
With this new set of constraints, a large constant <math>\textstyle N</math> is introduced, along with a binary variable <math>\textstyle Y</math>. So long as <math>\textstyle N</math> is sufficiently larger than the upper bound of <math>\textstyle X + C</math>, the large constant multiplied with the binary variable ensures that one of the constraints must be satisfied. For instance, if <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge C \\<br />
-&X + N \ge C<br />
\end{align} </math><br />
<br />
Since <math>\textstyle N</math> is sufficiently large, the latter constraint will always be satisfied, leaving only one relation active: <math>\textstyle X \ge C</math>. Functionally, this allows for the XOR logical operation of <math>\textstyle X \geq C</math> and <math>\textstyle -X \geq C</math>.<br />
<br />
=== Absolute Values in Objective Functions ===<br />
In objective functions, to leverage transformations of absolute functions, all constraints must be linear.<br />
<br />
Similar to the case of absolute values in constraints, there are different approaches to the reformation of the objective function, depending on the satisfaction of sign constraints. The satisfaction of sign constraints is when the coefficient signs of the absolute terms must all be either:<br />
<br />
* Positive for a minimization problem<br />
* Negative for a maximization problem<br />
<br />
==== Sign Constraints are Satisfied ====<br />
At a high level, the transformation works similarly to the second case of absolute value in constraints – aiming to bound the solution space for the absolute value term with a new variable, <math>\textstyle Z</math>.<br />
<br />
If <math>\textstyle |X|</math> is the absolute value term in our objective function, two additional constraints are added to the linear program:<br />
<br />
<math> \begin{align}<br />
&X\leq Z \\<br />
-&X\leq Z <br />
\end{align} </math><br />
<br />
The <math>\textstyle |X|</math> term in the objective function is then replaced by <math>\textstyle Z</math>, relaxing the original function into a collection of linear constraints.<br />
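Because the objective minimizes <math>\textstyle Z</math> (its coefficient is positive), the solver pushes <math>\textstyle Z</math> down to the smallest value satisfying both constraints, which is exactly <math>\textstyle |X|</math>. A tiny pure-Python sketch of that observation:<br />

```python
# The smallest Z with X <= Z and -X <= Z is max(X, -X), i.e. |X|;
# minimizing Z therefore recovers the absolute value exactly.
def smallest_feasible_z(x):
    return max(x, -x)

for x in (-7.5, -1, 0, 2, 9.25):
    assert smallest_feasible_z(x) == abs(x)
```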
<br />
==== Sign Constraints are Not Satisfied ====<br />
In order to transform problems where the coefficient signs of the absolute terms do not fulfill the conditions above, a similar conclusion is reached to that of the last case for absolute values in constraints – the use of integer variables is needed to reach an LP format. <br />
<br />
The following constraints need to be added to the problem:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge Z \\<br />
-&X + N*(1-Y) \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z \\<br />
&Y = 0, 1<br />
\end{align} </math><br />
<br />
Again, <math>\textstyle N</math> is a large constant, <math>\textstyle Z</math> is a replacement variable for <math>\textstyle |X|</math> in the objective function, and <math>\textstyle Y</math> is a binary variable. The first two constraints ensure that exactly one of them is active while the other is automatically satisfied, following the same logic as above. The third and fourth constraints ensure that <math>\textstyle Z \ge |X|</math> regardless of the sign of <math>\textstyle X</math>; combined with the active big-M constraint, they force <math>\textstyle Z = |X|</math>. For instance, for the case of <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge Z \\<br />
-&X + N \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z<br />
\end{align} </math><br />
<br />
Since <math>\textstyle N</math> is sufficiently large (<math>\textstyle N</math> must be at least <math>\textstyle 2|X|</math> for this approach), the second constraint is always satisfied. The remaining constraints, <math>\textstyle X \ge Z</math>, <math>\textstyle X \le Z</math>, and <math>\textstyle -X \le Z</math>, can only hold simultaneously when <math>\textstyle Z = X</math> and <math>\textstyle X</math> is non-negative; the case <math>\textstyle Y = 1</math> covers non-positive <math>\textstyle X</math> symmetrically. Together, these constraints pin <math>\textstyle Z</math> to <math>\textstyle |X|</math>, allowing the selection of the largest <math>\textstyle |X|</math> for maximization problems (or smallest for minimization problems).<br />
<br />
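The effect of the four constraints can be verified by enumeration. The sketch below (illustrative values; <math>\textstyle N</math> chosen to satisfy <math>\textstyle N \ge 2|X|</math>) confirms that for each fixed <math>\textstyle X</math>, the set of feasible integer <math>\textstyle Z</math> values collapses to exactly <math>\textstyle \{|X|\}</math>:<br />

```python
# Enumerate integer Z to confirm the four constraints pin Z to |X| exactly.
# N is an illustrative big-M constant (it must be at least 2|X|).
N = 100

def feasible_z_values(x):
    zs = set()
    for z in range(0, 51):
        for y in (0, 1):
            if (x + N * y >= z and -x + N * (1 - y) >= z
                    and x <= z and -x <= z):
                zs.add(z)
    return zs

for x in (-20, -3, 0, 7, 15):
    assert feasible_z_values(x) == {abs(x)}
```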
=== Absolute Values in Nonlinear Optimization Problems ===<br />
An objective function containing absolute value quantities is nonlinear (indeed, nonsmooth), so the problem must be reformulated before linear programming techniques can proceed. As shown above, the reformulation introduces a new variable (e.g., <math> X_a </math>) together with additional constraints to account for it.<br />
<br />
==Numerical Example==<br />
'''Example when All Sign Constraints are Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| + |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> -U_3 \le x_3 \le U_3 </math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 + U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
-&U_3 \le x_3 \le U_3 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
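This optimum can be sanity-checked without an LP solver: the equality constraint determines <math>x_3</math> from <math>x_1</math> and <math>x_2</math>, so a coarse grid search over <math>(x_1, x_2)</math> evaluating the original absolute-value objective suffices (a verification sketch, not a solution method):<br />

```python
# Direct check of the first numerical example: the equality constraint
# 2*x1 - x2 + 4*x3 = 14 fixes x3 = (14 - 2*x1 + x2) / 4, so we can grid
# over (x1, x2) and evaluate the original absolute-value objective.
def objective(x1, x2):
    x3 = (14 - 2 * x1 + x2) / 4
    if x1 + 2 * x2 - 3 * x3 > 8:          # inequality constraint
        return float("inf")
    return 2 * abs(x1) + 3 * abs(x2) + abs(x3)

grid = [i / 4 for i in range(-40, 41)]     # -10 to 10 in steps of 0.25
best = min(objective(a, b) for a in grid for b in grid)
assert abs(best - 3.5) < 1e-9              # optimum at x1 = x2 = 0, x3 = 3.5
```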
<br />
'''Example when Sign Constraints are not Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| - |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> \begin{align}<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1<br />
\end{align}</math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 - U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>-3.5</math>, which occurs when <math>x_1 = 0 </math>, <math>x_2 = 0 </math>, and <math>x_3 = 3.5 </math>.<br />
<br />
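The same substitution-and-grid check works for this example; only the sign of the <math>|x_3|</math> term in the objective changes (again a verification sketch, not a solution method):<br />

```python
# Direct check of the second numerical example: substitute
# x3 = (14 - 2*x1 + x2) / 4 from the equality constraint and evaluate
# the original objective, which now rewards large |x3|.
def objective(x1, x2):
    x3 = (14 - 2 * x1 + x2) / 4
    if x1 + 2 * x2 - 3 * x3 > 8:           # inequality constraint
        return float("inf")
    return 2 * abs(x1) + 3 * abs(x2) - abs(x3)

grid = [i / 4 for i in range(-40, 41)]      # -10 to 10 in steps of 0.25
best = min(objective(a, b) for a in grid for b in grid)
assert abs(best - (-3.5)) < 1e-9            # optimum at x1 = x2 = 0, x3 = 3.5
```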
== Applications ==<br />
<br />
<br />
Consider the problem <math>\max \quad z = \textstyle\sum_j c_j \left| x_j \right| \quad \text{subject to} \quad Ax=b</math>. This problem cannot, in general, be solved with the simplex method. It has a simplex method solution (with unrestricted basis entry) only if the <math>c_j</math> are nonpositive (nonnegative for minimization problems).<br />
<br />
The primary application of absolute-value functionals in linear programming has been absolute-value (<math>L_1</math>-metric) regression analysis. Such an application is always a minimization problem with all <math>c_j</math> equal to 1, so the required conditions for valid use of the simplex method are met. <br />
<br />
By reformulating the original problem into a Mixed-Integer Linear Program (MILP), standard solvers can be used to find the optimal solution(s). <br />
<br />
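As a concrete (hypothetical) illustration of <math>L_1</math>-metric regression, the sketch below fits a line <math>y = a + bx</math> by minimizing the sum of absolute residuals over a coarse grid; the data, including the deliberate outlier, are made up for this example:<br />

```python
# Hypothetical illustration of L1 (least-absolute-deviations) regression:
# fit y ~ a + b*x by minimizing the sum of absolute residuals. The data and
# grid are made up; the last point is an outlier the L1 fit should resist.
points = [(0, 1), (1, 3), (2, 5), (3, 7), (4, 100)]

def sad(a, b):
    """Sum of absolute deviations for the line y = a + b*x."""
    return sum(abs(y - (a + b * x)) for x, y in points)

grid = [i / 2 for i in range(-4, 9)]       # -2.0 .. 4.0 in steps of 0.5
a_best, b_best = min(((a, b) for a in grid for b in grid),
                     key=lambda ab: sad(*ab))
assert (a_best, b_best) == (1.0, 2.0)      # the four inliers lie on y = 1 + 2x
```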
=== Application in Finance: Portfolio Selection ===<br />
The same transformations used in the Numerical Example section to reduce a problem with absolute values to a linear program can be applied to portfolio selection. An example is given below. <br />
<br />
<br />
<br />
A portfolio is determined by the fraction of one's assets put into each investment. <ref> Vanderbei R.J. (2008) Financial Applications. In: Linear Programming. International Series in Operations Research & Management Science, vol 114. Springer, Boston, MA. <nowiki>https://doi.org/10.1007/978-0-387-74388-2_13</nowiki> https://link.springer.com/chapter/10.1007/978-0-387-74388-2_13 </ref> It can be denoted as a collection of nonnegative numbers <math>\textstyle x_j</math>, <math> j = 1, 2,\dots,n </math>; because each <math> \textstyle x_j </math> represents a fraction of the assets, the <math> \textstyle x_j </math> sum to one. To find the mix of assets with the highest reward, let the positive parameter <math>\mu</math> set the relative importance of expected return versus risk, and let <math>\textstyle R_j</math> denote the return in the next time period on investment <math>j</math>, <math>j = 1, 2,\dots, n</math>. The total return on the portfolio is <math>R = \sum_{j}x_j R_j </math>, and the expected return is <math>\mathbb{E}R = \sum_{j}x_j\mathbb{E}R_j </math>. The Mean Absolute Deviation from the Mean (MAD) is <math>\mathbb{E}\left\vert R - \mathbb{E}R \right\vert = \mathbb{E}\left\vert \sum_{j}x_j\tilde{R}_j \right\vert </math>. The model is: <br />
<br />
maximize <math display="inline">\mu\sum_j\!x_j\mathbb{E}\!R_j - \mathbb{E}\left\vert \sum_j \!x_j\tilde{R}_j \right\vert </math><br />
<br />
subject to <math>\sum_j\!x_j = 1</math><br />
<br />
<math>x_j \geq 0, \quad j = 1, 2,\dots,n</math><br />
<br />
where <math>\tilde{R}_j = \!R_j - \mathbb{E}\!R_j </math><br />
<br />
<br />
<br />
This is not yet a linear programming problem. As in the numerical examples above, each absolute value is replaced with a new variable, and inequality constraints are imposed to ensure that the new variable equals the appropriate absolute value at an optimum. To simplify the program, the mean expected return can be estimated by averaging historical returns: <math>r_j = \mathbb{E}R_j = \left ( \frac{1}{T} \right ) \sum_{t=1}^T R_j(t)</math>. The objective function then becomes: <math>\mu\sum_{j}x_j r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T\left\vert \sum_{j} x_j \bigl(R_j (t) - r_j\bigr) \right\vert</math><br />
<br />
Now, replacing <math>\left\vert \sum_{j} x_j \bigl(R_j (t) - r_j\bigr) \right\vert</math> with a new variable <math>y_t</math>, the problem can be rewritten as:<br />
<br />
<br />
maximize <math>\mu \sum_j x_j r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T y_t</math><br />
<br />
subject to <math>-y_t \leq \sum_{j} x_j \bigl(R_j (t) - r_j\bigr) \leq y_t, \quad t = 1, 2,\dots,T</math><br />
<br />
<math>\sum_j x_j = 1</math><br />
<br />
<math>x_j\geq 0, \quad j = 1, 2,\dots,n</math><br />
<br />
<math>y_t \geq 0, \quad t = 1, 2,\dots,T</math><br />
<br />
<br />
After these substitutions, the original problem is converted into a linear program, which is much easier to solve.<br />
<br />
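The following tiny two-asset sketch evaluates the MAD objective directly on made-up historical returns (the data, <math>\mu</math>, and grid are hypothetical); it simply illustrates how the objective trades expected reward against mean absolute deviation:<br />

```python
# Tiny two-asset illustration with made-up historical returns (hypothetical
# data, not from the text). x is the fraction in asset A; 1 - x in asset B.
R_A = [1.10, 0.95, 1.05, 1.02]   # asset A: higher mean, larger deviations
R_B = [1.02, 1.01, 1.00, 1.01]   # asset B: lower mean, very stable
mu = 3.0                          # weight on expected reward vs. MAD risk
T = len(R_A)
r_A = sum(R_A) / T
r_B = sum(R_B) / T

def objective(x):
    reward = x * r_A + (1 - x) * r_B
    mad = sum(abs(x * (R_A[t] - r_A) + (1 - x) * (R_B[t] - r_B))
              for t in range(T)) / T
    return mu * reward - mad

# With this mu, the extra reward of asset A outweighs its extra risk.
best_x = max((i / 20 for i in range(21)), key=objective)
```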
<br />
===Data Transfer Rate===<br />
Faster-than-Nyquist (FTN) signaling is a framework for transmitting signals beyond the Nyquist rate. The reference for this section proposes a 24.7% faster symbol rate achieved by utilizing Sum-of-Absolute-Values (SOAV) optimization.<br />
<br />
The initial model is defined as follows:<br />
<math>\displaystyle x_0 (t) = \sum^N_{n=1} x_{n,0} h_n (t), t \in [0,T] </math><br />
<br />
where t ∈ R denotes the continuous time index, N ∈ N is the number of transmitted symbols in each transmission period, T > 0 is the interval of one period, <math>x_{n,0}</math> ∈ {+1, −1} are independent and identically distributed (i.i.d.) binary symbols [i.e., binary phase shift keying (BPSK)], and <math>h_n (t) (n = 1,...,N) </math> are the modulation pulses.<br />
<br />
By reformulating symbol detection as the following convex optimization problem and iteratively applying Newton's method adapted to the absolute-value terms, approximate solutions can be obtained:<br />
<math>\displaystyle \min_{z \in R^N} (\lambda \Vert y - Hz \Vert^2_2 + \frac{1}{2} \Vert z - 1_N \Vert_1 + \frac{1}{2} \Vert z + 1_N \Vert_1 ) </math><br />
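A useful way to read the two <math>\ell_1</math> penalty terms: per component, <math>\tfrac{1}{2}|z - 1| + \tfrac{1}{2}|z + 1| = \max(|z|, 1)</math>, so the penalty is flat on the BPSK interval <math>[-1, 1]</math> and grows linearly outside it, steering recovered symbols toward <math>\{+1, -1\}</math>. A quick numeric check of this identity:<br />

```python
# Per-component SOAV penalty used above: 0.5*|z - 1| + 0.5*|z + 1|.
# Numerically confirm it equals max(|z|, 1): flat on [-1, 1], linear outside,
# which is what steers recovered symbols toward the BPSK alphabet {+1, -1}.
def soav_penalty(z):
    return 0.5 * abs(z - 1) + 0.5 * abs(z + 1)

for i in range(-300, 301):
    z = i / 100
    assert abs(soav_penalty(z) - max(abs(z), 1.0)) < 1e-12
```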
<br />
== Conclusion ==<br />
The presence of an absolute value within the objective function prevents the use of certain optimization methods. Solving these problems requires that the function be manipulated in order to continue with linear programming techniques like the simplex method.<br />
<br />
== References ==<br />
<references /><br />
<br />
<br />
<br />
<br />
#Shanno, David F., and Roman L. Weil. “'Linear' Programming with Absolute-Value Functionals.” Operations Research, vol. 19, no. 1, 1971, pp. 120–124. JSTOR, www.jstor.org/stable/168871. Accessed 13 Dec. 2020.<br />
#Sasahara, Hampei & Hayashi, Kazunori & Nagahara, Masaaki. (2016). Symbol Detection for Faster-Than-Nyquist Signaling by Sum-of-Absolute-Values Optimization. IEEE Signal Processing Letters. PP. 1-1. 10.1109/LSP.2016.2625839. https://www.researchgate.net/publication/309745511_Symbol_Detection_for_Faster-Than-Nyquist_Signaling_by_Sum-of-Absolute-Values_Optimization</div>
<hr />
<div>Authors: Matthew Chan (mdc297), Yilian Yin (yy896), Brian Amado (ba392), Peter (pmw99), Dewei Xiao (dx58) - SYSEN 5800 Fall 2020<br />
<br />
Steward: Fengqi You<br />
<br />
== Introduction ==<br />
Absolute values can make it relatively difficult to determine the optimal solution when handled without first converting to standard form. This conversion of the objective function is a good first step in solving optimization problems with absolute values. As a result, one can go on to solve the problem using linear programing techniques. With the addition of a new variable (ex: <math>\textstyle X^a </math>) in the objective function the problem is considered nonlinear. Additional constraints must be added to find the optimal solution.<br />
<br />
== Method ==<br />
<br />
=== Defining Absolute Values ===<br />
An absolute value of a real number can be described as its distance away from zero, or the non-negative magnitude of the number. <ref> Mendelson, Elliott, Schaum's Outline of Beginning Calculus, McGraw-Hill Professional, 2008. https://books.google.com/books?id=A8hAm38zsCMC&pg=PA2#v=onepage&q&f=false </ref> Thus,<br />
<br />
<math>\displaystyle |x|={\begin{cases}-x,&{\text{if }}x<0\\x,&{\text{if }}x\geq 0\end{cases}}</math><br />
<br />
Absolute values can exist in linear optimization problems in two primary instances: in constraints and in the objective function. <ref> "Absolute Values." ''lp_solve'', http://lpsolve.sourceforge.net/. Accessed 20 Nov. 2020. </ref><br />
<br />
=== Absolute Values in Constraints ===<br />
Within constraints, absolute value relations can be transformed into one of the following forms:<br />
<br />
<math> \begin{align}<br />
|X| &= 0 \\<br />
|X| &\le C \\<br />
|X| &\ge C<br />
\end{align} </math><br />
<br />
Where <math>\textstyle X</math> is a linear combination (<math>\textstyle ax_1 + bx_2 + ...</math> where <math>\textstyle a, b</math> are constants) and <math>\textstyle C</math> is a constant <math>\textstyle > 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| = 0</math> ====<br />
In this form, the only possible solution is if <math>\displaystyle X = 0</math> simplifying the constraint. Note that this solution also occurs if the constraint is in the form <math>\displaystyle |X| \le 0</math> due to the same conclusion that the only possible solution is <math>\textstyle X = 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| \le C</math> ====<br />
The second form a linear constraint can exist in is <math>\displaystyle |X|\leq C</math>. In this case, an equivalent feasible solution can be described by splitting the constraint into two:<br />
<br />
<math> \begin{align}<br />
X &\leq C \\<br />
-X &\leq C<br />
\end{align} </math><br />
<br />
The solution can be understood visually since <math>\textstyle X</math> must lie between <math>\textstyle -C</math> and <math>\textstyle C</math>, as shown below:<br />
<br />
[[File:Number Line X Less Than C.png|none|thumb]]<br />
<br />
==== Form when <math>\displaystyle |X| \ge C</math> ====<br />
Visually, the solution space for the last form is the complement of the second solution above, resulting in the following representation:[[File:Number Line for X Greater Than C.png|none|thumb]]In expression form, the solutions can be written as:<br />
<br />
<math> \begin{align}<br />
X &\geq C \\<br />
-X &\geq C<br />
\end{align} </math><br />
<br />
As seen visually, the feasible region has a gap and thus non-convex. The expressions also make it impossible for both to simultaneously hold true. This means that it is not possible to transform constraints in this form to linear equations. <ref> ''Optimization Methods in Management Science / Operations Research.'' Massachusetts Institute of Technology, Spring 2013, https://ocw.mit.edu/courses/sloan-school-of-management/15-053-optimization-methods-in-management-science-spring-2013/tutorials/MIT15_053S13_tut04.pdf. Accessed 20 Nov. 2020. </ref> <br />
<br />
An approach to reach a solution for this particular case exists in the form of Mixed-Integer Linear Programming, where only one of the equations above is “active”.<br />
<br />
The inequality can be reformulated into the following:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge C \\<br />
-&X + N*(1-Y) \ge C \\<br />
&Y = 0, 1 <br />
\end{align} </math><br />
<br />
With this new set of constraints, a large constant <math>\textstyle N</math> is introduced, along with a binary variable <math>\textstyle Y</math>. So long as <math>\textstyle N</math> is sufficiently larger than the upper bound of <math>\textstyle X + C</math>, the large constant multiplied with the binary variable ensures that one of the constraints must be satisfied. For instance, if <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge C \\<br />
-&X + N \ge C<br />
\end{align} </math><br />
<br />
Since <math>\textstyle N</math> is sufficiently large, the latter constraint will always be satisfied, leaving only one relation active: <math>\textstyle X \ge C</math>. Functionally, this allows for the XOR logical operation of <math>\textstyle X \geq C</math> and <math>\textstyle -X \geq C</math>.<br />
<br />
=== Absolute Values in Objective Functions ===<br />
In objective functions, to leverage transformations of absolute functions, all constraints must be linear.<br />
<br />
Similar to the case of absolute values in constraints, there are different approaches to the reformation of the objective function, depending on the satisfaction of sign constraints. The satisfaction of sign constraints is when the coefficient signs of the absolute terms must all be either:<br />
<br />
* Positive for a minimization problem<br />
* Negative for a maximization problem<br />
<br />
==== Sign Constraints are Satisfied ====<br />
At a high level, the transformation works similarly to the second case of absolute value in constraints – aiming to bound the solution space for the absolute value term with a new variable, <math>\textstyle Z</math>.<br />
<br />
If <math>\textstyle |X|</math> is the absolute value term in our objective function, two additional constraints are added to the linear program:<br />
<br />
<math> \begin{align}<br />
&X\leq Z \\<br />
-&X\leq Z <br />
\end{align} </math><br />
<br />
The <math>\textstyle |X|</math> term in the objective function is then replaced by <math>\textstyle Z</math>, relaxing the original function into a collection of linear constraints.<br />
<br />
==== Sign Constraints are Not Satisfied ====<br />
In order to transform problems where the coefficient signs of the absolute terms do not fulfill the conditions above, a similar conclusion is reached to that of the last case for absolute values in constraints – the use of integer variables is needed to reach an LP format. <br />
<br />
The following constraints need to be added to the problem:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge Z \\<br />
-&X + N*(1-Y) \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z \\<br />
&Y = 0, 1<br />
\end{align} </math><br />
<br />
Again, <math>\textstyle N</math> is a large constant, <math>\textstyle Z</math> is a replacement variable for <math>\textstyle |X|</math> in the objective function, and <math>\textstyle Y</math> is a binary variable. The first two constraints ensure that one and only one constraint is active while the other will be automatically satisfied, following the same logic as above. The third and fourth constraints ensure that <math>\textstyle Z</math> must be equal to <math>\textstyle |X|</math> and has either a positive or negative value. For instance, for the case of <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge Z \\<br />
-&X + N \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z<br />
\end{align} </math><br />
<br />
As <math>\textstyle N</math> is sufficiently large (<math>\textstyle N</math> must be at least <math>\textstyle 2|X|</math> for this approach), the second constraint must be satisfied. Since <math>\textstyle Z</math> is non-negative, the fourth constraint must also be satisfied. The remaining constraints, <math>\textstyle X \ge Z</math> and <math>\textstyle X \le Z</math> can only be satisfied when <math>\textstyle Z = X</math> and is of non-negative signage. Together, these constraints will allow for the selection of the largest <math>\textstyle |X|</math> for maximization problems (or smallest for minimization problems).<br />
<br />
=== Absolute Values in Nonlinear Optimization Problems ===<br />
The addition of a new variable <math> (X_a) </math> to an objective function with absolute value quantities forms a nonlinear optimization problem. The absolute value quantities would require that the problem be reformatted before proceeding. Additional constraints must be added to account for the added variable.<br />
<br />
==Numerical Example==<br />
'''Example when All Sign Constraints are Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| + |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> -U_3 \le x_3 \le U_3 </math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 + U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
-&U_3 \le x_3 \le U_3 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
<br />
'''Example when Sign Constraints are not Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| - |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> \begin{align}<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1<br />
\end{align}</math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 - U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>-3.5</math>, which occur when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
<br />
== Applications ==<br />
<br />
<br />
Consider the problem <math>Ax=b; \quad max \quad z= x c,jx,i</math>. This problem cannot, in general, be solved with the simplex method. The problem has a simplex method solution (with unrestricted basis entry) only if c, are nonpositive (non-negative for minimizing problems).<br />
<br />
The primary application of absolute-value functionals in linear programming has been for absolute-value or L(i)-metric regression analysis. Such application is always a minimization problem with all C(j) equal to 1 so that the required conditions for valid use of the simplex method are met. <br />
<br />
By reformulating the original problem into a Mixed-Integer Linear Program (MILP), we can utilize known programs to solve for the optimal solution(s). <br />
<br />
=== Application in Financial: Portfolio Selection===<br />
Under this topic, the same tricks played in the Numerical Example section to perform '''Reduction to a Linear Programming Problem''' will be applied here again, to reform the problem into a MILP in order to solve the problem. An example is given as below. <br />
<br />
<br />
<br />
A portfolio is determined by what fraction of one's assets to put into each investment. <ref> Vanderbei R.J. (2008) Financial Applications. In: Linear Programming. International Series in Operations Research & Management Science, vol 114. Springer, Boston, MA. <nowiki>https://doi.org/10.1007/978-0-387-74388-2_13</nowiki> https://link.springer.com/chapter/10.1007/978-0-387-74388-2_13 </ref> It can be denoted as a collection of nonnegative numbers <math>\textstyle x_j</math>, where <math> j = 1, 2,...,n </math>. Because each <math> \textstyle x_j </math>stands for a portion of the assets, it sums to one. In order to get a highest reward through finding a right mix of assets, let <math>\mu</math>, the positive parameter, denote the importance of risk relative to the return, and <math>/textstyle Rj</math> denote the return in the next time period on investment <math>j, j = 1, 2,..., n</math>. The total return one would obtain from the investment is <math>R = \sum_{j}\!x_j\!R_j </math>. The expected return is <math>\mathbb{E}\!R = \sum_{j}\!x_j\mathbb{E}\!R_j </math>. And the Mean Absolute Deviation from the Mean (MAD) is <math>\mathbb{E}\left\vert \!R - \mathbb{E}\!R \right\vert = \mathbb{E}\left\vert \sum_{j}\!x_j\tilde{R}_j \right\vert </math>. <br />
<br />
maximize <math display="inline">\mu\sum_j\!x_j\mathbb{E}\!R_j - \mathbb{E}\left\vert \sum_j \!x_j\tilde{R}_j \right\vert </math><br />
<br />
subject to <math>\sum_j\!x_j = 1</math><br />
<br />
<math>x_j \geq 0</math> <math> j = 1,2,..n.</math><br />
<br />
where <math>\tilde{R}_j = \!R_j - \mathbb{E}\!R_j </math><br />
<br />
<br />
<br />
Very obviously, this problem is not a linear programming problem yet. Similar to the numerical example showed above, the right thing to do is to replace each absolute value with a new variable and impose inequality constraints to ensure that the new variable is the appropriate absolute value once an optimal value is obtained. To simplify the program, an average of the historical returns can be taken in order to get the mean expected return: <math>r_j = \mathbb{E}\!R_j = \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!R_j(t)<br />
</math>. Thus the objective function is turned into: <math>\mu\sum_{j}\!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert <br />
</math><br />
<br />
Now, replace <math>\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert <br />
</math> with a new variable <math>y_t<br />
</math>and thus the problem can be rewrote as:<br />
<br />
<br />
maximize <math>\mu \sum_j \!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!y_t<br />
<br />
</math><br />
<br />
subject to <math>-\!y_t \leq \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \leq y_t <br />
</math>. t = 1, 2,...,T<br />
<br />
where <math>\sum_j \!x_j = 1<br />
<br />
</math><br />
<br />
<math>x_j\geq 0<br />
<br />
</math>. j = 1, 2,...,n<br />
<br />
<math>y_t \geq 0<br />
<br />
</math>. t = 1, 2,...,T<br />
<br />
<br />
So finally, after some simplifications methods and some tricks applied, the original problem is converted into a linear programming which is easier to be solved further.<br />
<br />
<br />
===Data Transfer Rate===<br />
Faster-than-nyquist, or FTNS, is a framework to transmit signals beyond the Nyquist rate. The refence to this section proposed a 24.7% faster symbol rate by utilizing Sum-of-Absolute-Values optimization.<br />
<br />
The initial model is defined as follows:<br />
<math>\displaystyle x_0 (t) = \sum^N_{n=1} x_{n,0} h_n (t), t \in [0,T] </math><br />
<br />
where t ∈ R denotes the continuous time index, N ∈ N is the number of transmitted symbols in each transmission period, T > 0 is the interval of one period, <math>x_n,0</math> ∈ {+1, −1} are independent and identically distributed (i.i.d.) binary symbols [i.e., binary phase shift keying (BPSK)], and <math>h_n (t) (n = 1,...,N) </math> are the modulation pulses.<br />
<br />
Reformulated as a convex optimization problem and repeating Newton’s method with absolute values, the solution approximates can be achieved.<br />
<math>\displaystyle \min_{z \in R^N} (\lambda \Vert y - Hz \Vert^2_2 + \frac{1}{2} \Vert z - 1_N \Vert_1 + \frac{1}{2} \Vert z + 1_N \Vert_1 ) </math><br />
<br />
== Conclusion ==<br />
The presence of an absolute value within the objective function prevents the use of certain optimization methods. Solving these problems requires that the function be manipulated in order to continue with linear programming techniques like the simplex method.<br />
<br />
== References ==<br />
<references /><br />
<br />
<br />
<br />
<br />
#Shanno, David F., and Roman L. Weil. “'Linear' Programming with Absolute-Value Functionals.” Operations Research, vol. 19, no. 1, 1971, pp. 120–124. JSTOR, www.jstor.org/stable/168871. Accessed 13 Dec. 2020.<br />
#Sasahara, Hampei & Hayashi, Kazunori & Nagahara, Masaaki. (2016). Symbol Detection for Faster-Than-Nyquist Signaling by Sum-of-Absolute-Values Optimization. IEEE Signal Processing Letters. PP. 1-1. 10.1109/LSP.2016.2625839. https://www.researchgate.net/publication/309745511_Symbol_Detection_for_Faster-Than-Nyquist_Signaling_by_Sum-of-Absolute-Values_Optimization</div>Yilian Yinhttps://optimization.cbe.cornell.edu/index.php?title=Optimization_with_absolute_values&diff=2498Optimization with absolute values2020-12-13T17:36:27Z<p>Yilian Yin: </p>
<hr />
<div>Authors: Matthew Chan (mdc297), Yilian Yin (yy896), Brian Amado (ba392), Peter (pmw99), Dewei Xiao (dx58) - SYSEN 5800 Fall 2020<br />
<br />
Steward: Fengqi You<br />
<br />
== Introduction ==<br />
Absolute values can make it relatively difficult to determine the optimal solution when handled without first converting to standard form. This conversion of the objective function is a good first step in solving optimization problems with absolute values. As a result, one can go on to solve the problem using linear programing techniques. With the addition of a new variable (ex: <math>\textstyle X^a </math>) in the objective function the problem is considered nonlinear. Additional constraints must be added to find the optimal solution.<br />
<br />
== Method ==<br />
<br />
=== Defining Absolute Values ===<br />
An absolute value of a real number can be described as its distance away from zero, or the non-negative magnitude of the number. <ref> Mendelson, Elliott, Schaum's Outline of Beginning Calculus, McGraw-Hill Professional, 2008. https://books.google.com/books?id=A8hAm38zsCMC&pg=PA2#v=onepage&q&f=false </ref> Thus,<br />
<br />
<math>\displaystyle |x|={\begin{cases}-x,&{\text{if }}x<0\\x,&{\text{if }}x\geq 0\end{cases}}</math><br />
<br />
Absolute values can exist in linear optimization problems in two primary instances: in constraints and in the objective function. <ref> "Absolute Values." ''lp_solve'', http://lpsolve.sourceforge.net/. Accessed 20 Nov. 2020. </ref><br />
<br />
=== Absolute Values in Constraints ===<br />
Within constraints, absolute value relations can be transformed into one of the following forms:<br />
<br />
<math> \begin{align}<br />
|X| &= 0 \\<br />
|X| &\le C \\<br />
|X| &\ge C<br />
\end{align} </math><br />
<br />
Where <math>\textstyle X</math> is a linear combination (<math>\textstyle ax_1 + bx_2 + ...</math> where <math>\textstyle a, b</math> are constants) and <math>\textstyle C</math> is a constant <math>\textstyle > 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| = 0</math> ====<br />
In this form, the only possible solution is <math>\displaystyle X = 0</math>, simplifying the constraint. Note that the same conclusion holds if the constraint is in the form <math>\displaystyle |X| \le 0</math>, since the only possible solution is again <math>\textstyle X = 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| \le C</math> ====<br />
The second form a linear constraint can exist in is <math>\displaystyle |X|\leq C</math>. In this case, an equivalent feasible solution can be described by splitting the constraint into two:<br />
<br />
<math> \begin{align}<br />
X &\leq C \\<br />
-X &\leq C<br />
\end{align} </math><br />
<br />
The solution can be understood visually since <math>\textstyle X</math> must lie between <math>\textstyle -C</math> and <math>\textstyle C</math>, as shown below:<br />
<br />
[[File:Number Line X Less Than C.png|none|thumb]]<br />
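This equivalence can be sketched in Python (an illustrative check with assumed sample values, not part of the source):<br />

```python
# A quick check that |X| <= C holds exactly when the pair of linear
# constraints X <= C and -X <= C both hold.
C = 2.0
for x in (-3.0, -2.0, -0.5, 0.0, 1.9, 2.0, 2.1):
    assert (abs(x) <= C) == (x <= C and -x <= C)
```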
<br />
==== Form when <math>\displaystyle |X| \ge C</math> ====<br />
Visually, the solution space for the last form is the complement of the second solution above, resulting in the following representation:[[File:Number Line for X Greater Than C.png|none|thumb]]In expression form, one of the following must hold:<br />
<br />
<math> \begin{align}<br />
X &\geq C \\<br />
-X &\geq C<br />
\end{align} </math><br />
<br />
As seen visually, the feasible region has a gap and is thus non-convex. The two expressions can also never hold simultaneously. This means that it is not possible to transform constraints in this form to linear equations. <ref> ''Optimization Methods in Management Science / Operations Research.'' Massachusetts Institute of Technology, Spring 2013, https://ocw.mit.edu/courses/sloan-school-of-management/15-053-optimization-methods-in-management-science-spring-2013/tutorials/MIT15_053S13_tut04.pdf. Accessed 20 Nov. 2020. </ref> <br />
<br />
An approach to reach a solution for this particular case exists in the form of Mixed-Integer Linear Programming, where only one of the equations above is “active”.<br />
<br />
The inequality can be reformulated into the following:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge C \\<br />
-&X + N*(1-Y) \ge C \\<br />
&Y = 0, 1 <br />
\end{align} </math><br />
<br />
With this new set of constraints, a large constant <math>\textstyle N</math> is introduced, along with a binary variable <math>\textstyle Y</math>. So long as <math>\textstyle N</math> is sufficiently large (at least <math>\textstyle C</math> plus the upper bound of <math>\textstyle |X|</math>), the large constant multiplied with the binary variable ensures that one of the constraints must be satisfied. For instance, if <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge C \\<br />
-&X + N \ge C<br />
\end{align} </math><br />
<br />
Since <math>\textstyle N</math> is sufficiently large, the latter constraint will always be satisfied, leaving only one relation active: <math>\textstyle X \ge C</math>. Functionally, this allows for the XOR logical operation of <math>\textstyle X \geq C</math> and <math>\textstyle -X \geq C</math>.<br />
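The reformulation above can be sanity-checked in a few lines of Python (an illustrative sketch; the bound on <math>\textstyle |X|</math>, the sampled values, and the helper <code>bigM_feasible</code> are assumptions, not part of the source):<br />

```python
# True if some binary Y satisfies both big-M constraints; this should
# happen exactly when |x| >= C.
def bigM_feasible(x, C, N):
    return any(x + N * y >= C and -x + N * (1 - y) >= C for y in (0, 1))

C = 3.0                      # constraint constant
B = 100.0                    # assumed upper bound on |X|
N = B + C + 1.0              # sufficiently large constant
for x in (-50.0, -3.0, -1.5, 0.0, 1.5, 3.0, 50.0):
    assert bigM_feasible(x, C, N) == (abs(x) >= C)
```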
<br />
=== Absolute Values in Objective Functions ===<br />
To leverage these transformations for absolute value terms in the objective function, all constraints must be linear.<br />
<br />
Similar to the case of absolute values in constraints, there are different approaches to the reformulation of the objective function, depending on whether the sign constraints are satisfied. The sign constraints are satisfied when the coefficients of the absolute value terms are all either:<br />
<br />
* Positive for a minimization problem<br />
* Negative for a maximization problem<br />
<br />
==== Sign Constraints are Satisfied ====<br />
At a high level, the transformation works similarly to the second case of absolute value in constraints – aiming to bound the solution space for the absolute value term with a new variable, <math>\textstyle Z</math>.<br />
<br />
If <math>\textstyle |X|</math> is the absolute value term in our objective function, two additional constraints are added to the linear program:<br />
<br />
<math> \begin{align}<br />
&X\leq Z \\<br />
-&X\leq Z <br />
\end{align} </math><br />
<br />
The <math>\textstyle |X|</math> term in the objective function is then replaced by <math>\textstyle Z</math>, relaxing the original function into a collection of linear constraints.<br />
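The substitution is exact for such problems because the smallest <math>\textstyle Z</math> allowed by the two constraints is precisely <math>\textstyle |X|</math>, as this minimal Python sketch (with assumed sample values) demonstrates:<br />

```python
# The constraints X <= Z and -X <= Z admit any Z >= max(X, -X), so a
# minimization drives Z down to exactly |X|.
def smallest_feasible_z(x):
    return max(x, -x)

for x in (-2.5, 0.0, 4.0):
    assert smallest_feasible_z(x) == abs(x)
```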
<br />
==== Sign Constraints are Not Satisfied ====<br />
In order to transform problems where the coefficient signs of the absolute terms do not fulfill the conditions above, a similar conclusion is reached to that of the last case for absolute values in constraints – the use of integer variables is needed to reach an LP format. <br />
<br />
The following constraints need to be added to the problem:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge Z \\<br />
-&X + N*(1-Y) \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z \\<br />
&Y = 0, 1<br />
\end{align} </math><br />
<br />
Again, <math>\textstyle N</math> is a large constant, <math>\textstyle Z</math> is a replacement variable for <math>\textstyle |X|</math> in the objective function, and <math>\textstyle Y</math> is a binary variable. The first two constraints ensure that exactly one of them is active while the other is automatically satisfied, following the same logic as above. The third and fourth constraints ensure that <math>\textstyle Z \ge |X|</math>; together with the active constraint from the first pair, they force <math>\textstyle Z</math> to equal <math>\textstyle |X|</math>. For instance, for the case of <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge Z \\<br />
-&X + N \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z<br />
\end{align} </math><br />
<br />
As <math>\textstyle N</math> is sufficiently large (<math>\textstyle N</math> must be at least <math>\textstyle 2|X|</math> for this approach), the second constraint must be satisfied. Since <math>\textstyle Z</math> is non-negative, the fourth constraint must also be satisfied. The remaining constraints, <math>\textstyle X \ge Z</math> and <math>\textstyle X \le Z</math>, can only be satisfied when <math>\textstyle Z = X</math> with <math>\textstyle X</math> non-negative. Together, these constraints allow for the selection of the largest <math>\textstyle |X|</math> for maximization problems (or smallest for minimization problems).<br />
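A small Python check (illustrative; the candidate grid for <math>\textstyle Z</math> and the helper function are assumptions, not part of the source) confirms that, for a suitable <math>\textstyle N</math>, the only <math>\textstyle Z</math> satisfying all four constraints for some binary <math>\textstyle Y</math> is <math>\textstyle Z = |X|</math>:<br />

```python
# Collect the candidate Z values that satisfy the full constraint set for
# some binary Y; only Z = |x| should survive.
def feasible_z_values(x, N, candidates):
    out = set()
    for z in candidates:
        for y in (0, 1):
            if (x + N * y >= z and -x + N * (1 - y) >= z
                    and x <= z and -x <= z):
                out.add(z)
    return out

x, N = -4.0, 100.0
zs = feasible_z_values(x, N, [k * 0.5 for k in range(0, 41)])  # Z in 0.0 .. 20.0
assert zs == {4.0}   # only Z = |x| remains feasible
```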
<br />
=== Absolute Values in Nonlinear Optimization Problems ===<br />
An objective function containing absolute value quantities is a nonlinear optimization problem, so it must be reformulated before proceeding. Replacing each absolute value with a new variable <math> (X_a) </math> linearizes the objective, and additional constraints must be added to account for the added variable.<br />
<br />
==Numerical Example==<br />
'''Example when All Sign Constraints are Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| + |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> -U_3 \le x_3 \le U_3 </math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 + U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
-&U_3 \le x_3 \le U_3 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
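The reported optimum can be checked with a short brute-force search in Python (an illustrative sketch, not part of the original example; the grid range and step are assumptions). The equality constraint gives <math>x_3 = (14 - 2x_1 + x_2)/4</math>, so only <math>(x_1, x_2)</math> need to be scanned:<br />

```python
# Evaluate the original objective on the feasible set, eliminating x3
# through the equality constraint.
def abs_objective(x1, x2):
    x3 = (14 - 2 * x1 + x2) / 4            # from 2*x1 - x2 + 4*x3 = 14
    if x1 + 2 * x2 - 3 * x3 > 8:           # inequality constraint
        return float("inf")
    return 2 * abs(x1) + 3 * abs(x2) + abs(x3)

grid = [i * 0.5 for i in range(-20, 21)]   # values from -10.0 to 10.0
best = min(abs_objective(a, b) for a in grid for b in grid)
assert abs(best - 3.5) < 1e-9              # attained at x1 = x2 = 0, x3 = 3.5
```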
<br />
'''Example when Sign Constraints are not Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| - |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> \begin{align}<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1<br />
\end{align}</math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 - U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>-3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
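The same brute-force check (an illustrative sketch with assumed grid bounds, not part of the original example) also verifies this optimum:<br />

```python
# Evaluate the mixed-sign objective on the feasible set, eliminating x3
# through the equality constraint.
def mixed_objective(x1, x2):
    x3 = (14 - 2 * x1 + x2) / 4            # from 2*x1 - x2 + 4*x3 = 14
    if x1 + 2 * x2 - 3 * x3 > 8:           # inequality constraint
        return float("inf")
    return 2 * abs(x1) + 3 * abs(x2) - abs(x3)

grid = [i * 0.5 for i in range(-20, 21)]   # values from -10.0 to 10.0
best = min(mixed_objective(a, b) for a in grid for b in grid)
assert abs(best + 3.5) < 1e-9              # attained at x1 = x2 = 0, x3 = 3.5
```

The search also illustrates why the problem is bounded despite the negative coefficient: increasing <math>|x_3|</math> forces compensating increases in <math>2|x_1|</math> or <math>3|x_2|</math> that cost more than the gain.<br />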
<br />
== Applications ==<br />
<br />
<br />
Consider the problem <math>Ax=b; \quad \max \quad z= \textstyle\sum_j c_j \left\vert x_j \right\vert</math>. This problem cannot, in general, be solved with the simplex method. The problem has a simplex method solution (with unrestricted basis entry) only if the <math>c_j</math> are nonpositive (non-negative for minimization problems).<br />
<br />
The primary application of absolute-value functionals in linear programming has been for absolute-value or <math>L_1</math>-metric regression analysis. Such an application is always a minimization problem with all <math>c_j</math> equal to 1, so that the required conditions for valid use of the simplex method are met. <br />
<br />
By reformulating the original problem into a Mixed-Integer Linear Program (MILP), we can utilize known programs to solve for the optimal solution(s). <br />
<br />
=== Application in Finance: Portfolio Selection ===<br />
Here, the same '''reduction to a linear programming problem''' performed in the Numerical Example section is applied again to bring the problem into a solvable form. An example is given below. <br />
<br />
<br />
<br />
A portfolio is determined by what fraction of one's assets to put into each investment. <ref> Vanderbei R.J. (2008) Financial Applications. In: Linear Programming. International Series in Operations Research & Management Science, vol 114. Springer, Boston, MA. <nowiki>https://doi.org/10.1007/978-0-387-74388-2_13</nowiki> https://link.springer.com/chapter/10.1007/978-0-387-74388-2_13 </ref> It can be denoted as a collection of nonnegative numbers <math>\textstyle x_j</math>, where <math> j = 1, 2,...,n </math>. Because each <math> \textstyle x_j </math> stands for a portion of the assets, they sum to one. To find the mix of assets with the highest reward, let the positive parameter <math>\mu</math> denote the importance of risk relative to the return, and let <math>\textstyle R_j</math> denote the return in the next time period on investment <math>j, j = 1, 2,..., n</math>. The total return one would obtain from the investment is <math>R = \sum_{j}\!x_j\!R_j </math>. The expected return is <math>\mathbb{E}\!R = \sum_{j}\!x_j\mathbb{E}\!R_j </math>, and the mean absolute deviation from the mean (MAD) is <math>\mathbb{E}\left\vert \!R - \mathbb{E}\!R \right\vert = \mathbb{E}\left\vert \sum_{j}\!x_j\tilde{R}_j \right\vert </math>. <br />
<br />
maximize <math display="inline">\mu\sum_j\!x_j\mathbb{E}\!R_j - \mathbb{E}\left\vert \sum_j \!x_j\tilde{R}_j \right\vert </math><br />
<br />
subject to <math>\sum_j\!x_j = 1</math><br />
<br />
<math>x_j \geq 0,</math> <math> j = 1,2,...,n.</math><br />
<br />
where <math>\tilde{R}_j = \!R_j - \mathbb{E}\!R_j </math><br />
<br />
<br />
<br />
This is not yet a linear programming problem. Similar to the numerical example shown above, the right approach is to replace each absolute value with a new variable and impose inequality constraints to ensure that the new variable is the appropriate absolute value once an optimal value is obtained. To simplify the program, an average of the historical returns can be taken in order to get the mean expected return: <math>r_j = \mathbb{E}\!R_j = \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!R_j(t)</math>. Thus the objective function becomes: <math>\mu\sum_{j}\!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert</math><br />
<br />
Now, replace <math>\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert</math> with a new variable <math>y_t</math>, and the problem can be rewritten as:<br />
<br />
<br />
maximize <math>\mu \sum_j \!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!y_t</math><br />
<br />
subject to <math>-\!y_t \leq \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \leq y_t,</math> <math>t = 1, 2,...,T</math><br />
<br />
<math>\sum_j \!x_j = 1</math><br />
<br />
<math>x_j\geq 0,</math> <math>j = 1, 2,...,n</math><br />
<br />
<math>y_t \geq 0,</math> <math>t = 1, 2,...,T</math><br />
<br />
<br />
After these simplifications and substitutions, the original problem is converted into a linear program, which is easier to solve.<br />
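The effect of the <math>y_t</math> substitution can be illustrated with made-up data in Python (two assets and three historical periods; all numbers are assumptions, not data from the reference):<br />

```python
# Check that setting each y_t to its smallest feasible value, namely
# |sum_j x_j*(R_j(t) - r_j)|, reproduces the original MAD objective.
R = [[0.05, 0.12, -0.02],          # returns R_j(t) for asset j = 0
     [0.02, 0.03, 0.04]]           # and asset j = 1
x = [0.6, 0.4]                     # portfolio weights (sum to one)
mu = 2.0                           # assumed risk/return trade-off parameter
T = len(R[0])
r = [sum(Rj) / T for Rj in R]      # mean returns r_j
reward = mu * sum(x[j] * r[j] for j in range(len(x)))

# LP form: smallest y_t with -y_t <= sum_j x_j*(R_j(t) - r_j) <= y_t
dev = [sum(x[j] * (R[j][t] - r[j]) for j in range(len(x))) for t in range(T)]
y = [abs(d) for d in dev]
lp_obj = reward - sum(y) / T

# Original form: E|R - ER| computed from the total return in each period
total = [sum(x[j] * R[j][t] for j in range(len(x))) for t in range(T)]
expected = sum(x[j] * r[j] for j in range(len(x)))
mad_obj = reward - sum(abs(rt - expected) for rt in total) / T
assert abs(lp_obj - mad_obj) < 1e-12
```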
<br />
<br />
===Data Transfer Rate===<br />
Faster-than-Nyquist signaling (FTNS) is a framework for transmitting signals beyond the Nyquist rate. The reference for this section proposed a 24.7% faster symbol rate by utilizing sum-of-absolute-values (SOAV) optimization.<br />
<br />
The initial model is defined as follows:<br />
<math>\displaystyle x_0 (t) = \sum^N_{n=1} x_{n,0} h_n (t), t \in [0,T] </math><br />
<br />
where <math>t \in \mathbb{R}</math> denotes the continuous time index, <math>N \in \mathbb{N}</math> is the number of transmitted symbols in each transmission period, <math>T > 0</math> is the interval of one period, <math>x_{n,0} \in \{+1, -1\}</math> are independent and identically distributed (i.i.d.) binary symbols [i.e., binary phase shift keying (BPSK)], and <math>h_n (t)\ (n = 1,...,N) </math> are the modulation pulses.<br />
<br />
By reformulating the problem as a convex optimization problem and repeatedly applying Newton's method with absolute values, an approximate solution can be obtained:<br />
<math>\displaystyle \min_{z \in R^N} (\lambda \Vert y - Hz \Vert^2_2 + \frac{1}{2} \Vert z - 1_N \Vert_1 + \frac{1}{2} \Vert z + 1_N \Vert_1 ) </math><br />
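For intuition, the objective can be evaluated directly for a toy two-symbol system (all sizes and values below are illustrative assumptions, not data from the paper):<br />

```python
# Evaluate the sum-of-absolute-values objective
# lambda*||y - Hz||_2^2 + 0.5*||z - 1||_1 + 0.5*||z + 1||_1
def soav_objective(z, y, H, lam):
    n = len(z)
    resid = [y[i] - sum(H[i][j] * z[j] for j in range(n)) for i in range(len(y))]
    fit = lam * sum(r * r for r in resid)
    penalty = sum(0.5 * abs(zj - 1) + 0.5 * abs(zj + 1) for zj in z)
    return fit + penalty

H = [[1.0, 0.3], [0.3, 1.0]]      # toy channel/modulation matrix
y = [0.7, -0.7]                   # noiseless observation of symbols (+1, -1)
# Exact fit: only the penalty remains, contributing 1 per binary symbol.
assert abs(soav_objective([1.0, -1.0], y, H, lam=5.0) - 2.0) < 1e-9
```

On any binary vector <math>z \in \{+1,-1\}^N</math> the penalty term contributes exactly <math>N</math>, while values outside <math>[-1, 1]</math> are penalized linearly, which is what steers the minimizer toward BPSK-like solutions.<br />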
<br />
<br />
== Conclusion ==<br />
The presence of an absolute value within the objective function prevents the use of certain optimization methods. Solving these problems requires that the function be manipulated in order to continue with linear programming techniques like the simplex method.<br />
<br />
== References ==<br />
<references /><br />
<br />
<br />
<br />
<br />
#Shanno, David F., and Roman L. Weil. “'Linear' Programming with Absolute-Value Functionals.” Operations Research, vol. 19, no. 1, 1971, pp. 120–124. JSTOR, www.jstor.org/stable/168871. Accessed 13 Dec. 2020.<br />
#Sasahara, Hampei & Hayashi, Kazunori & Nagahara, Masaaki. (2016). Symbol Detection for Faster-Than-Nyquist Signaling by Sum-of-Absolute-Values Optimization. IEEE Signal Processing Letters. 10.1109/LSP.2016.2625839. https://www.researchgate.net/publication/309745511_Symbol_Detection_for_Faster-Than-Nyquist_Signaling_by_Sum-of-Absolute-Values_Optimization</div>
<hr />
<div>Authors: Matthew Chan (mdc297), Yilian Yin (yy896), Brian Amado (ba392), Peter (pmw99), Dewei Xiao (dx58) - SYSEN 5800 Fall 2020<br />
<br />
Steward: Fengqi You<br />
<br />
== Introduction ==<br />
Absolute values can make it relatively difficult to determine the optimal solution when handled without first converting to standard form. This conversion of the objective function is a good first step in solving optimization problems with absolute values. As a result, one can go on to solve the problem using linear programing techniques. With the addition of a new variable (ex: <math>\textstyle X^a </math>) in the objective function the problem is considered nonlinear. Additional constraints must be added to find the optimal solution.<br />
<br />
== Method ==<br />
<br />
=== Defining Absolute Values ===<br />
An absolute value of a real number can be described as its distance away from zero, or the non-negative magnitude of the number. <ref> Mendelson, Elliott, Schaum's Outline of Beginning Calculus, McGraw-Hill Professional, 2008. https://books.google.com/books?id=A8hAm38zsCMC&pg=PA2#v=onepage&q&f=false </ref> Thus,<br />
<br />
<math>\displaystyle |x|={\begin{cases}-x,&{\text{if }}x<0\\x,&{\text{if }}x\geq 0\end{cases}}</math><br />
<br />
Absolute values can exist in linear optimization problems in two primary instances: in constraints and in the objective function. <ref> "Absolute Values." ''lp_solve'', http://lpsolve.sourceforge.net/. Accessed 20 Nov. 2020. </ref><br />
<br />
=== Absolute Values in Constraints ===<br />
Within constraints, absolute value relations can be transformed into one of the following forms:<br />
<br />
<math> \begin{align}<br />
|X| &= 0 \\<br />
|X| &\le C \\<br />
|X| &\ge C<br />
\end{align} </math><br />
<br />
Where <math>\textstyle X</math> is a linear combination (<math>\textstyle ax_1 + bx_2 + ...</math> where <math>\textstyle a, b</math> are constants) and <math>\textstyle C</math> is a constant <math>\textstyle > 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| = 0</math> ====<br />
In this form, the only possible solution is if <math>\displaystyle X = 0</math> simplifying the constraint. Note that this solution also occurs if the constraint is in the form <math>\displaystyle |X| \le 0</math> due to the same conclusion that the only possible solution is <math>\textstyle X = 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| \le C</math> ====<br />
The second form a linear constraint can exist in is <math>\displaystyle |X|\leq C</math>. In this case, an equivalent feasible solution can be described by splitting the constraint into two:<br />
<br />
<math> \begin{align}<br />
X &\leq C \\<br />
-X &\leq C<br />
\end{align} </math><br />
<br />
The solution can be understood visually since <math>\textstyle X</math> must lie between <math>\textstyle -C</math> and <math>\textstyle C</math>, as shown below:<br />
<br />
[[File:Number Line X Less Than C.png|none|thumb]]<br />
<br />
==== Form when <math>\displaystyle |X| \ge C</math> ====<br />
Visually, the solution space for the last form is the complement of the second solution above, resulting in the following representation:[[File:Number Line for X Greater Than C.png|none|thumb]]In expression form, the solutions can be written as:<br />
<br />
<math> \begin{align}<br />
X &\geq C \\<br />
-X &\geq C<br />
\end{align} </math><br />
<br />
As seen visually, the feasible region has a gap and thus non-convex. The expressions also make it impossible for both to simultaneously hold true. This means that it is not possible to transform constraints in this form to linear equations. <ref> ''Optimization Methods in Management Science / Operations Research.'' Massachusetts Institute of Technology, Spring 2013, https://ocw.mit.edu/courses/sloan-school-of-management/15-053-optimization-methods-in-management-science-spring-2013/tutorials/MIT15_053S13_tut04.pdf. Accessed 20 Nov. 2020. </ref> <br />
<br />
An approach to reach a solution for this particular case exists in the form of Mixed-Integer Linear Programming, where only one of the equations above is “active”.<br />
<br />
The inequality can be reformulated into the following:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge C \\<br />
-&X + N*(1-Y) \ge C \\<br />
&Y = 0, 1 <br />
\end{align} </math><br />
<br />
With this new set of constraints, a large constant <math>\textstyle N</math> is introduced, along with a binary variable <math>\textstyle Y</math>. So long as <math>\textstyle N</math> is sufficiently larger than the upper bound of <math>\textstyle X + C</math>, the large constant multiplied with the binary variable ensures that one of the constraints must be satisfied. For instance, if <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge C \\<br />
-&X + N \ge C<br />
\end{align} </math><br />
<br />
Since <math>\textstyle N</math> is sufficiently large, the latter constraint will always be satisfied, leaving only one relation active: <math>\textstyle X \ge C</math>. Functionally, this allows for the XOR logical operation of <math>\textstyle X \geq C</math> and <math>\textstyle -X \geq C</math>.<br />
<br />
=== Absolute Values in Objective Functions ===<br />
In objective functions, to leverage transformations of absolute functions, all constraints must be linear.<br />
<br />
Similar to the case of absolute values in constraints, there are different approaches to the reformation of the objective function, depending on the satisfaction of sign constraints. The satisfaction of sign constraints is when the coefficient signs of the absolute terms must all be either:<br />
<br />
* Positive for a minimization problem<br />
* Negative for a maximization problem<br />
<br />
==== Sign Constraints are Satisfied ====<br />
At a high level, the transformation works similarly to the second case of absolute value in constraints – aiming to bound the solution space for the absolute value term with a new variable, <math>\textstyle Z</math>.<br />
<br />
If <math>\textstyle |X|</math> is the absolute value term in our objective function, two additional constraints are added to the linear program:<br />
<br />
<math> \begin{align}<br />
&X\leq Z \\<br />
-&X\leq Z <br />
\end{align} </math><br />
<br />
The <math>\textstyle |X|</math> term in the objective function is then replaced by <math>\textstyle Z</math>, relaxing the original function into a collection of linear constraints.<br />
<br />
==== Sign Constraints are Not Satisfied ====<br />
In order to transform problems where the coefficient signs of the absolute terms do not fulfill the conditions above, a similar conclusion is reached to that of the last case for absolute values in constraints – the use of integer variables is needed to reach an LP format. <br />
<br />
The following constraints need to be added to the problem:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge Z \\<br />
-&X + N*(1-Y) \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z \\<br />
&Y = 0, 1<br />
\end{align} </math><br />
<br />
Again, <math>\textstyle N</math> is a large constant, <math>\textstyle Z</math> is a replacement variable for <math>\textstyle |X|</math> in the objective function, and <math>\textstyle Y</math> is a binary variable. The first two constraints ensure that one and only one constraint is active while the other will be automatically satisfied, following the same logic as above. The third and fourth constraints ensure that <math>\textstyle Z</math> must be equal to <math>\textstyle |X|</math> and has either a positive or negative value. For instance, for the case of <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge Z \\<br />
-&X + N \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z<br />
\end{align} </math><br />
<br />
As <math>\textstyle N</math> is sufficiently large (<math>\textstyle N</math> must be at least <math>\textstyle 2|X|</math> for this approach), the second constraint must be satisfied. Since <math>\textstyle Z</math> is non-negative, the fourth constraint must also be satisfied. The remaining constraints, <math>\textstyle X \ge Z</math> and <math>\textstyle X \le Z</math> can only be satisfied when <math>\textstyle Z = X</math> and is of non-negative signage. Together, these constraints will allow for the selection of the largest <math>\textstyle |X|</math> for maximization problems (or smallest for minimization problems).<br />
<br />
=== Absolute Values in Nonlinear Optimization Problems ===<br />
The addition of a new variable <math> (X_a) </math> to an objective function with absolute value quantities forms a nonlinear optimization problem. The absolute value quantities would require that the problem be reformatted before proceeding. Additional constraints must be added to account for the added variable.<br />
<br />
==Numerical Example==<br />
'''Example when All Sign Constraints are Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| + |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> -U_3 \le x_3 \le U_3 </math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 + U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
-&U_3 \le x_3 \le U_3 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
<br />
'''Example when Sign Constraints are not Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| - |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> \begin{align}<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1<br />
\end{align}</math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 - U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>-3.5</math>, which occur when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
<br />
== Applications ==<br />
<br />
<br />
Consider the problem <math>Ax=b; \quad max \quad z= x c,jx,i</math>. This problem cannot, in general, be solved with the simplex method. The problem has a simplex method solution (with unrestricted basis entry) only if c, are nonpositive (non-negative for minimizing problems).<br />
<br />
The primary application of absolute-value functionals in linear programming has been for absolute-value or L(i)-metric regression analysis. Such application is always a minimization problem with all C(j) equal to 1 so that the required conditions for valid use of the simplex method are met. <br />
<br />
By reformulating the original problem into a Mixed-Integer Linear Program (MILP), we can utilize known programs to solve for the optimal solution(s). <br />
<br />
=== Application in Financial: Portfolio Selection===<br />
Under this topic, the same tricks played in the Numerical Example section to perform '''Reduction to a Linear Programming Problem''' will be applied here again, to reform the problem into a MILP in order to solve the problem. An example is given as below. <br />
<br />
<br />
<br />
A portfolio is determined by what fraction of one's assets to put into each investment. <ref> Vanderbei R.J. (2008) Financial Applications. In: Linear Programming. International Series in Operations Research & Management Science, vol 114. Springer, Boston, MA. <nowiki>https://doi.org/10.1007/978-0-387-74388-2_13</nowiki> https://link.springer.com/chapter/10.1007/978-0-387-74388-2_13 </ref> It can be denoted as a collection of nonnegative numbers <math>\textstyle x_j</math>, where <math> j = 1, 2,...,n </math>. Because each <math> \textstyle x_j </math>stands for a portion of the assets, it sums to one. In order to get a highest reward through finding a right mix of assets, let <math>\mu</math>, the positive parameter, denote the importance of risk relative to the return, and <math>/textstyle Rj</math> denote the return in the next time period on investment <math>j, j = 1, 2,..., n</math>. The total return one would obtain from the investment is <math>R = \sum_{j}\!x_j\!R_j </math>. The expected return is <math>\mathbb{E}\!R = \sum_{j}\!x_j\mathbb{E}\!R_j </math>. And the Mean Absolute Deviation from the Mean (MAD) is <math>\mathbb{E}\left\vert \!R - \mathbb{E}\!R \right\vert = \mathbb{E}\left\vert \sum_{j}\!x_j\tilde{R}_j \right\vert </math>. <br />
<br />
maximize <math display="inline">\mu\sum_j\!x_j\mathbb{E}\!R_j - \mathbb{E}\left\vert \sum_j \!x_j\tilde{R}_j \right\vert </math><br />
<br />
subject to <math>\sum_j\!x_j = 1</math><br />
<br />
<math>x_j \geq 0</math> <math> j = 1,2,..n.</math><br />
<br />
where <math>\tilde{R}_j = \!R_j - \mathbb{E}\!R_j </math><br />
<br />
<br />
<br />
Clearly, this problem is not yet a linear program. As in the numerical example above, each absolute value should be replaced with a new variable, with inequality constraints imposed to ensure that the new variable equals the corresponding absolute value once an optimal value is obtained. To simplify the program, an average of the historical returns can be taken to obtain the mean expected return: <math>r_j = \mathbb{E}\!R_j = \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!R_j(t)</math>. The objective function then becomes: <math>\mu\sum_{j}\!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert</math><br />
<br />
Now, replacing <math>\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert</math> with a new variable <math>y_t</math>, the problem can be rewritten as:<br />
<br />
<br />
maximize <math>\mu \sum_j \!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!y_t</math><br />
<br />
subject to <math>-\!y_t \leq \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \leq y_t,</math> <math>t = 1, 2, \ldots, T</math><br />
<br />
<math>\sum_j \!x_j = 1</math><br />
<br />
<math>x_j \geq 0,</math> <math>j = 1, 2, \ldots, n</math><br />
<br />
<math>y_t \geq 0,</math> <math>t = 1, 2, \ldots, T</math><br />
<br />
<br />
Finally, after these simplifications and substitutions, the original problem is converted into a linear program that can be solved with standard techniques.<br />
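The resulting linear program can be handed to any LP solver. Below is a minimal sketch using SciPy's ''linprog''; the historical return matrix and the value of <math>\mu</math> are assumed purely for illustration.<br />

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical historical returns R_j(t): T = 4 periods, n = 3 assets.
R = np.array([
    [1.05, 1.02, 0.99],
    [0.98, 1.03, 1.04],
    [1.10, 1.01, 0.97],
    [1.03, 1.02, 1.02],
])
T, n = R.shape
r = R.mean(axis=0)           # mean expected returns r_j
A = R - r                    # centered returns R_j(t) - r_j
mu = 2.0                     # risk/return trade-off parameter (assumed)

# Decision vector: [x_1..x_n, y_1..y_T]; linprog minimizes, so negate the reward.
c = np.concatenate([-mu * r, np.full(T, 1.0 / T)])

# Inequalities encoding  -y_t <= sum_j x_j (R_j(t) - r_j) <= y_t :
#   A x - y <= 0   and   -A x - y <= 0
A_ub = np.block([[A, -np.eye(T)], [-A, -np.eye(T)]])
b_ub = np.zeros(2 * T)

# Equality: fractions sum to one.
A_eq = np.concatenate([np.ones(n), np.zeros(T)]).reshape(1, -1)
b_eq = np.array([1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + T))
x_opt = res.x[:n]            # optimal portfolio fractions
print(res.status, x_opt)
```

At the optimum, each <math>y_t</math> equals the absolute deviation in period <math>t</math>, exactly as the inequality constraints intend.<br />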
<br />
<br />
===Data Transfer Rate===<br />
Faster-than-Nyquist signaling, or FTNS, is a framework for transmitting signals beyond the Nyquist rate. The reference for this section proposed a 24.7% faster symbol rate by utilizing Sum-of-Absolute-Values (SOAV) optimization.<br />
<br />
The initial model is defined as follows:<br />
<math>\displaystyle x_0 (t) = \sum^N_{n=1} x_{n,0} h_n (t), t \in [0,T] </math><br />
<br />
where <math>t \in \mathbb{R}</math> denotes the continuous time index, <math>N \in \mathbb{N}</math> is the number of transmitted symbols in each transmission period, <math>T > 0</math> is the interval of one period, <math>x_{n,0} \in \{+1, -1\} </math> are independent and identically distributed (i.i.d.) binary symbols [i.e., binary phase shift keying (BPSK)], and <math>h_n (t)\ (n = 1,...,N) </math> are the modulation pulses.<br />
<br />
By reformulating symbol detection as the following convex optimization problem and iteratively applying a Newton-type method to the absolute-value terms, approximate solutions can be obtained.<br />
<math>\displaystyle \min_{z \in \mathbb{R}^N} \left(\lambda \Vert y - Hz \Vert^2_2 + \frac{1}{2} \Vert z - 1_N \Vert_1 + \frac{1}{2} \Vert z + 1_N \Vert_1 \right) </math><br />
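As a hedged illustration (not the algorithm of the reference, which uses proximal splitting), this objective can be minimized by a simple proximal-gradient iteration: the smooth least-squares term supplies the gradient step, and the two <math>\ell_1</math> terms admit a closed-form proximal operator that shrinks entries toward the interval <math>[-1, 1]</math>. The channel matrix <math>H</math>, observation <math>y</math>, and <math>\lambda</math> below are synthetic assumptions.<br />

```python
import numpy as np

def prox_soav(v, t):
    """Prox of g(u) = 0.5|u-1| + 0.5|u+1|: g is constant on [-1, 1],
    so entries outside that interval are shrunk toward it by step t."""
    out = np.where(v > 1.0, np.maximum(v - t, 1.0), v)
    out = np.where(v < -1.0, np.minimum(v + t, -1.0), out)
    return out

rng = np.random.default_rng(0)
M, N = 20, 8                               # observations, symbols (synthetic)
H = rng.standard_normal((M, N))            # synthetic channel matrix
z_true = rng.choice([-1.0, 1.0], size=N)   # BPSK symbols
y = H @ z_true                             # noiseless observation for simplicity

lam = 1.0
# Step size = 1 / Lipschitz constant of the gradient of lam*||y - Hz||^2.
step = 1.0 / (2.0 * lam * np.linalg.norm(H, 2) ** 2)
z = np.zeros(N)
for _ in range(500):
    grad = 2.0 * lam * H.T @ (H @ z - y)   # gradient of the smooth term
    z = prox_soav(z - step * grad, step)

detected = np.sign(z)                       # recovered BPSK symbols
```

With a well-conditioned <math>H</math> and no noise, the iterates converge to the true <math>\pm 1</math> symbol vector.<br />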
<br />
<br />
== Conclusion ==<br />
The presence of an absolute value within the objective function prevents the use of certain optimization methods. Solving these problems requires that the function be manipulated in order to continue with linear programming techniques like the simplex method.<br />
<br />
== References ==<br />
<references /><br />
<br />
<br />
<br />
<br />
#Shanno, David F., and Roman L. Weil. “'Linear' Programming with Absolute-Value Functionals.” Operations Research, vol. 19, no. 1, 1971, pp. 120–124. JSTOR, www.jstor.org/stable/168871. Accessed 13 Dec. 2020.<br />
#Sasahara, Hampei, Kazunori Hayashi, and Masaaki Nagahara. "Symbol Detection for Faster-Than-Nyquist Signaling by Sum-of-Absolute-Values Optimization." IEEE Signal Processing Letters, 2016. doi:10.1109/LSP.2016.2625839. https://www.researchgate.net/publication/309745511_Symbol_Detection_for_Faster-Than-Nyquist_Signaling_by_Sum-of-Absolute-Values_Optimization</div>
<hr />
<div>Authors: Matthew Chan (mdc297), Yilian Yin (yy896), Brian Amado (ba392), Peter (pmw99), Dewei Xiao (dx58) - SYSEN 5800 Fall 2020<br />
<br />
Steward: Fengqi You<br />
<br />
== Introduction ==<br />
Absolute values can make it relatively difficult to determine the optimal solution when handled without first converting to standard form. This conversion of the objective function is a good first step in solving optimization problems with absolute values. As a result, one can go on to solve the problem using linear programing techniques. With the addition of a new variable (ex: <math>\textstyle X^a </math>) in the objective function the problem is considered nonlinear. Additional constraints must be added to find the optimal solution.<br />
<br />
== Method ==<br />
<br />
=== Defining Absolute Values ===<br />
An absolute value of a real number can be described as its distance away from zero, or the non-negative magnitude of the number. <ref> Mendelson, Elliott, Schaum's Outline of Beginning Calculus, McGraw-Hill Professional, 2008. https://books.google.com/books?id=A8hAm38zsCMC&pg=PA2#v=onepage&q&f=false </ref> Thus,<br />
<br />
<math>\displaystyle |x|={\begin{cases}-x,&{\text{if }}x<0\\x,&{\text{if }}x\geq 0\end{cases}}</math><br />
<br />
Absolute values can exist in linear optimization problems in two primary instances: in constraints and in the objective function. <ref> "Absolute Values." ''lp_solve'', http://lpsolve.sourceforge.net/. Accessed 20 Nov. 2020. </ref><br />
<br />
=== Absolute Values in Constraints ===<br />
Within constraints, absolute value relations can be transformed into one of the following forms:<br />
<br />
<math> \begin{align}<br />
|X| &= 0 \\<br />
|X| &\le C \\<br />
|X| &\ge C<br />
\end{align} </math><br />
<br />
Where <math>\textstyle X</math> is a linear combination (<math>\textstyle ax_1 + bx_2 + ...</math> where <math>\textstyle a, b</math> are constants) and <math>\textstyle C</math> is a constant <math>\textstyle > 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| = 0</math> ====<br />
In this form, the only possible solution is if <math>\displaystyle X = 0</math> simplifying the constraint. Note that this solution also occurs if the constraint is in the form <math>\displaystyle |X| \le 0</math> due to the same conclusion that the only possible solution is <math>\textstyle X = 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| \le C</math> ====<br />
The second form a linear constraint can exist in is <math>\displaystyle |X|\leq C</math>. In this case, an equivalent feasible solution can be described by splitting the constraint into two:<br />
<br />
<math> \begin{align}<br />
X &\leq C \\<br />
-X &\leq C<br />
\end{align} </math><br />
<br />
The solution can be understood visually since <math>\textstyle X</math> must lie between <math>\textstyle -C</math> and <math>\textstyle C</math>, as shown below:<br />
<br />
[[File:Number Line X Less Than C.png|none|thumb]]<br />
<br />
==== Form when <math>\displaystyle |X| \ge C</math> ====<br />
Visually, the solution space for the last form is the complement of the second solution above, resulting in the following representation:[[File:Number Line for X Greater Than C.png|none|thumb]]In expression form, the solutions can be written as:<br />
<br />
<math> \begin{align}<br />
X &\geq C \\<br />
-X &\geq C<br />
\end{align} </math><br />
<br />
As seen visually, the feasible region has a gap and thus non-convex. The expressions also make it impossible for both to simultaneously hold true. This means that it is not possible to transform constraints in this form to linear equations. <ref> ''Optimization Methods in Management Science / Operations Research.'' Massachusetts Institute of Technology, Spring 2013, https://ocw.mit.edu/courses/sloan-school-of-management/15-053-optimization-methods-in-management-science-spring-2013/tutorials/MIT15_053S13_tut04.pdf. Accessed 20 Nov. 2020. </ref> <br />
<br />
An approach to reach a solution for this particular case exists in the form of Mixed-Integer Linear Programming, where only one of the equations above is “active”.<br />
<br />
The inequality can be reformulated into the following:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge C \\<br />
-&X + N*(1-Y) \ge C \\<br />
&Y = 0, 1 <br />
\end{align} </math><br />
<br />
With this new set of constraints, a large constant <math>\textstyle N</math> is introduced, along with a binary variable <math>\textstyle Y</math>. So long as <math>\textstyle N</math> is sufficiently larger than the upper bound of <math>\textstyle X + C</math>, the large constant multiplied with the binary variable ensures that one of the constraints must be satisfied. For instance, if <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge C \\<br />
-&X + N \ge C<br />
\end{align} </math><br />
<br />
Since <math>\textstyle N</math> is sufficiently large, the latter constraint will always be satisfied, leaving only one relation active: <math>\textstyle X \ge C</math>. Functionally, this allows for the XOR logical operation of <math>\textstyle X \geq C</math> and <math>\textstyle -X \geq C</math>.<br />
<br />
=== Absolute Values in Objective Functions ===<br />
In objective functions, to leverage transformations of absolute functions, all constraints must be linear.<br />
<br />
Similar to the case of absolute values in constraints, there are different approaches to the reformation of the objective function, depending on the satisfaction of sign constraints. The satisfaction of sign constraints is when the coefficient signs of the absolute terms must all be either:<br />
<br />
* Positive for a minimization problem<br />
* Negative for a maximization problem<br />
<br />
==== Sign Constraints are Satisfied ====<br />
At a high level, the transformation works similarly to the second case of absolute value in constraints – aiming to bound the solution space for the absolute value term with a new variable, <math>\textstyle Z</math>.<br />
<br />
If <math>\textstyle |X|</math> is the absolute value term in our objective function, two additional constraints are added to the linear program:<br />
<br />
<math> \begin{align}<br />
&X\leq Z \\<br />
-&X\leq Z <br />
\end{align} </math><br />
<br />
The <math>\textstyle |X|</math> term in the objective function is then replaced by <math>\textstyle Z</math>, relaxing the original function into a collection of linear constraints.<br />
<br />
==== Sign Constraints are Not Satisfied ====<br />
In order to transform problems where the coefficient signs of the absolute terms do not fulfill the conditions above, a similar conclusion is reached to that of the last case for absolute values in constraints – the use of integer variables is needed to reach an LP format. <br />
<br />
The following constraints need to be added to the problem:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge Z \\<br />
-&X + N*(1-Y) \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z \\<br />
&Y = 0, 1<br />
\end{align} </math><br />
<br />
Again, <math>\textstyle N</math> is a large constant, <math>\textstyle Z</math> is a replacement variable for <math>\textstyle |X|</math> in the objective function, and <math>\textstyle Y</math> is a binary variable. The first two constraints ensure that one and only one constraint is active while the other will be automatically satisfied, following the same logic as above. The third and fourth constraints ensure that <math>\textstyle Z</math> must be equal to <math>\textstyle |X|</math> and has either a positive or negative value. For instance, for the case of <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge Z \\<br />
-&X + N \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z<br />
\end{align} </math><br />
<br />
As <math>\textstyle N</math> is sufficiently large (<math>\textstyle N</math> must be at least <math>\textstyle 2|X|</math> for this approach), the second constraint must be satisfied. Since <math>\textstyle Z</math> is non-negative, the fourth constraint must also be satisfied. The remaining constraints, <math>\textstyle X \ge Z</math> and <math>\textstyle X \le Z</math> can only be satisfied when <math>\textstyle Z = X</math> and is of non-negative signage. Together, these constraints will allow for the selection of the largest <math>\textstyle |X|</math> for maximization problems (or smallest for minimization problems).<br />
<br />
=== Absolute Values in Nonlinear Optimization Problems ===<br />
The addition of a new variable <math> (X_a) </math> to an objective function with absolute value quantities forms a nonlinear optimization problem. The absolute value quantities would require that the problem be reformatted before proceeding. Additional constraints must be added to account for the added variable.<br />
<br />
==Numerical Example==<br />
'''Example when All Sign Constraints are Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| + |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> -U_3 \le x_3 \le U_3 </math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 + U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
-&U_3 \le x_3 \le U_3 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
<br />
'''Example when Sign Constraints are not Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| - |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> \begin{align}<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1<br />
\end{align}</math><br />
<br />
The problem has now been reformulated as a mixed-integer linear programming (MILP) problem, since <math>Y</math> is binary, and can be solved with standard MILP techniques:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 - U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>-3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
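Because the only integer variable in this MILP is the binary <math>Y</math>, the model can be checked without a dedicated MILP solver by enumerating both values of <math>Y</math> and solving the resulting LP for each. A minimal sketch, assuming SciPy is available and taking a big-M constant of <math>M = 100</math>:<br />

```python
# Sketch of the MILP above: enumerate the binary Y, solve the LP that results
# from fixing Y, and keep the better of the two solutions.
import numpy as np
from scipy.optimize import linprog

M = 100.0
c = [0, 0, 0, 2, 3, -1]  # objective: 2*U1 + 3*U2 - U3
best = None
for Y in (0, 1):
    # Variable order: [x1, x2, x3, U1, U2, U3]; all rows as A_ub @ v <= b_ub
    A_ub = [
        [1, 2, -3, 0, 0, 0],   # x1 + 2*x2 - 3*x3 <= 8
        [1, 0, 0, -1, 0, 0],   # |x1| <= U1
        [-1, 0, 0, -1, 0, 0],
        [0, 1, 0, 0, -1, 0],   # |x2| <= U2
        [0, -1, 0, 0, -1, 0],
        [0, 0, -1, 0, 0, 1],   # U3 - x3 <= M*Y        (x3 + M*Y >= U3)
        [0, 0, 1, 0, 0, 1],    # U3 + x3 <= M*(1 - Y)  (-x3 + M*(1-Y) >= U3)
        [0, 0, 1, 0, 0, -1],   # x3 <= U3
        [0, 0, -1, 0, 0, -1],  # -x3 <= U3
    ]
    b_ub = [8, 0, 0, 0, 0, M * Y, M * (1 - Y), 0, 0]
    A_eq = [[2, -1, 4, 0, 0, 0]]  # 2*x1 - x2 + 4*x3 = 14
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[14],
                  bounds=[(None, None)] * 3 + [(0, None)] * 3)
    if res.status == 0 and (best is None or res.fun < best.fun):
        best = res
print(best.fun, best.x[:3])  # optimal value -3.5 at (x1, x2, x3) = (0, 0, 3.5)
```

The <math>Y = 0</math> branch forces <math>U_3 = x_3 \ge 0</math> and attains <math>-3.5</math>; the <math>Y = 1</math> branch forces <math>x_3 = -U_3 \le 0</math> and attains a worse value, so the first branch wins.<br />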
<br />
== Applications ==<br />
<br />
<br />
Consider the problem <math>\max \quad z = \sum_j c_j |x_j| \quad \text{s.t.} \quad Ax = b</math>. This problem cannot, in general, be solved with the simplex method. The problem has a simplex method solution (with unrestricted basis entry) only if the <math>c_j</math> are nonpositive (non-negative for minimization problems).<br />
<br />
The primary application of absolute-value functionals in linear programming has been for absolute-value or <math>L_1</math>-metric regression analysis. Such an application is always a minimization problem with all <math>c_j</math> equal to 1, so that the required conditions for valid use of the simplex method are met. <br />
<br />
By reformulating the original problem into a Mixed-Integer Linear Program (MILP), standard solvers can be used to find the optimal solution(s). <br />
<br />
=== Application in Financial: Portfolio Selection===<br />
In portfolio selection, the same reduction used in the Numerical Example section is applied again: absolute value terms are replaced with auxiliary variables and linear constraints, reducing the problem to a linear program. An example is given below. <br />
<br />
<br />
<br />
A portfolio is determined by what fraction of one's assets to put into each investment. <ref> Vanderbei R.J. (2008) Financial Applications. In: Linear Programming. International Series in Operations Research & Management Science, vol 114. Springer, Boston, MA. <nowiki>https://doi.org/10.1007/978-0-387-74388-2_13</nowiki> https://link.springer.com/chapter/10.1007/978-0-387-74388-2_13 </ref> It can be denoted as a collection of nonnegative numbers <math>\textstyle x_j</math>, where <math> j = 1, 2,...,n </math>. Because each <math> \textstyle x_j </math> stands for a portion of the assets, they sum to one. To find the mix of assets with the highest reward, let the positive parameter <math>\mu</math> weight the expected reward relative to the risk, and let <math>\textstyle R_j</math> denote the return in the next time period on investment <math>j, j = 1, 2,..., n</math>. The total return one would obtain from the investment is <math>R = \sum_{j}\!x_j\!R_j </math>. The expected return is <math>\mathbb{E}\!R = \sum_{j}\!x_j\mathbb{E}\!R_j </math>. And the Mean Absolute Deviation from the Mean (MAD) is <math>\mathbb{E}\left\vert \!R - \mathbb{E}\!R \right\vert = \mathbb{E}\left\vert \sum_{j}\!x_j\tilde{R}_j \right\vert </math>. <br />
<br />
maximize <math display="inline">\mu\sum_j\!x_j\mathbb{E}\!R_j - \mathbb{E}\left\vert \sum_j \!x_j\tilde{R}_j \right\vert </math><br />
<br />
subject to <math>\sum_j\!x_j = 1</math><br />
<br />
<math>x_j \geq 0,</math> <math> j = 1, 2, \ldots, n.</math><br />
<br />
where <math>\tilde{R}_j = \!R_j - \mathbb{E}\!R_j </math><br />
<br />
<br />
<br />
This problem is not yet a linear program. As in the numerical example above, each absolute value is replaced with a new variable, and inequality constraints are imposed to ensure that the new variable equals the appropriate absolute value once an optimal value is obtained. To simplify the program, an average of the historical returns can be taken in order to estimate the mean expected return: <math>r_j = \mathbb{E}\!R_j = \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!R_j(t)
</math>. The objective function then becomes: <math>\mu\sum_{j}\!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert 
</math><br />
<br />
Now, replacing <math>\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert 
</math> with a new variable <math>y_t
</math>, the problem can be rewritten as:<br />
<br />
<br />
maximize <math>\mu \sum_j \!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!y_t</math><br />
<br />
subject to <math>-\!y_t \leq \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \leq y_t, \quad t = 1, 2, \ldots, T</math><br />
<br />
<math>\sum_j \!x_j = 1</math><br />
<br />
<math>x_j\geq 0, \quad j = 1, 2, \ldots, n</math><br />
<br />
<math>y_t \geq 0, \quad t = 1, 2, \ldots, T</math><br />
<br />
<br />
So finally, after these substitutions, the original problem is converted into a linear program, which is easier to solve.<br />
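A minimal numerical sketch of this portfolio LP is shown below; the historical returns, the number of assets, and the value of <math>\mu</math> are all hypothetical, chosen only to illustrate the reformulation (SciPy is assumed available):<br />

```python
# Toy sketch of the MAD portfolio LP above, using made-up historical
# returns R[t][j] for n = 3 assets over T = 4 periods.
import numpy as np
from scipy.optimize import linprog

R = np.array([[1.10, 1.02, 1.00],
              [0.95, 1.03, 1.00],
              [1.20, 1.01, 1.00],
              [0.90, 1.04, 1.00]])
T, n = R.shape
r = R.mean(axis=0)          # r_j: mean historical return of asset j
D = R - r                   # deviations R_j(t) - r_j
mu = 2.0                    # weight on expected reward vs. risk

# Variable order: [x_1..x_n, y_1..y_T]; maximize => minimize the negation.
c = np.concatenate([-mu * r, np.full(T, 1.0 / T)])

# -y_t <= sum_j x_j * D[t, j] <= y_t, written as two A_ub rows per period t
A_ub = np.vstack([np.hstack([D, -np.eye(T)]),
                  np.hstack([-D, -np.eye(T)])])
b_ub = np.zeros(2 * T)
A_eq = np.hstack([np.ones((1, n)), np.zeros((1, T))])  # sum_j x_j = 1

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (n + T))
x, y = res.x[:n], res.x[n:]
print(x, x.sum())  # portfolio weights summing to 1
```

At the optimum each <math>y_t</math> equals the absolute deviation <math>\left\vert \sum_j x_j (R_j(t) - r_j) \right\vert</math>, exactly as the inequality constraints intend.<br />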
<br />
<br />
===Data Transfer Rate===<br />
Faster-than-Nyquist signaling, or FTNS, is a framework to transmit signals beyond the Nyquist rate. The reference for this section proposes a 24.7% faster symbol rate by utilizing Sum-of-Absolute-Values (SOAV) optimization.<br />
<br />
The initial model is defined as follows:<br />
<math>\displaystyle x_0 (t) = \sum^N_{n=1} x_{n,0} h_n (t), t \in [0,T] </math><br />
<br />
where <math>t \in \mathbb{R}</math> denotes the continuous time index, <math>N \in \mathbb{N}</math> is the number of transmitted symbols in each transmission period, <math>T > 0</math> is the interval of one period, <math>x_{n,0} \in \{+1, -1\} </math> are independent and identically distributed (i.i.d.) binary symbols [i.e., binary phase shift keying (BPSK)], and <math>h_n (t)\ (n = 1,...,N) </math> are the modulation pulses.<br />
<br />
By reformulating symbol detection as the following convex optimization problem and solving it iteratively, approximate solutions can be obtained:<br />
<math>\displaystyle \min_{z \in \mathbb{R}^N} \left(\lambda \Vert y - Hz \Vert^2_2 + \frac{1}{2} \Vert z - 1_N \Vert_1 + \frac{1}{2} \Vert z + 1_N \Vert_1 \right) </math><br />
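As an illustration only (the reference develops its own dedicated algorithm), the SOAV objective can be minimized on a toy instance with a general-purpose derivative-free method; the channel matrix <math>H</math> and the transmitted symbol vector below are made up:<br />

```python
# Toy sketch of the SOAV detection problem on a tiny hypothetical channel.
# Powell's derivative-free method is used because the objective is nonsmooth.
import numpy as np
from scipy.optimize import minimize

N = 4
H = np.eye(N) + 0.1 * np.eye(N, k=1)     # mild intersymbol interference
z_true = np.array([1.0, -1.0, 1.0, 1.0])  # BPSK symbols actually sent
y = H @ z_true                            # noiseless received samples
lam = 1.0

def soav(z):
    # lambda*||y - Hz||_2^2 + 0.5*||z - 1||_1 + 0.5*||z + 1||_1
    return (lam * np.sum((y - H @ z) ** 2)
            + 0.5 * np.abs(z - 1).sum() + 0.5 * np.abs(z + 1).sum())

res = minimize(soav, x0=np.zeros(N), method="Powell")
symbols = np.sign(res.x)
print(symbols)  # expected to match the signs of z_true
```

Note that <math>\tfrac{1}{2}|z_i - 1| + \tfrac{1}{2}|z_i + 1| = \max(|z_i|, 1)</math>, so on this noiseless instance the global minimizer is exactly the transmitted vector <math>z</math> with objective value <math>N</math>.<br />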
<br />
<br />
== Conclusion ==<br />
The presence of an absolute value within the objective function prevents the use of certain optimization methods. Solving these problems requires that the function be manipulated in order to continue with linear programming techniques like the simplex method.<br />
<br />
== References ==<br />
<references /><br />
<br />
<br />
<br />
<br />
#Shanno, David F., and Roman L. Weil. “'Linear' Programming with Absolute-Value Functionals.” Operations Research, vol. 19, no. 1, 1971, pp. 120–124. JSTOR, www.jstor.org/stable/168871. Accessed 13 Dec. 2020.<br />
#Sasahara, Hampei & Hayashi, Kazunori & Nagahara, Masaaki. (2016). Symbol Detection for Faster-Than-Nyquist Signaling by Sum-of-Absolute-Values Optimization. IEEE Signal Processing Letters. PP. 1-1. 10.1109/LSP.2016.2625839. https://www.researchgate.net/publication/309745511_Symbol_Detection_for_Faster-Than-Nyquist_Signaling_by_Sum-of-Absolute-Values_Optimization</div>
<hr />
<div>Authors: Matthew Chan (mdc297), Yilian Yin (yy896), Brian Amado (ba392), Peter (pmw99), Dewei Xiao (dx58) - SYSEN 5800 Fall 2020<br />
<br />
Steward: Fengqi You<br />
<br />
== Introduction ==<br />
Absolute values can make it relatively difficult to determine the optimal solution when handled without first converting to standard form. This conversion of the objective function is a good first step in solving optimization problems with absolute values. As a result, one can go on to solve the problem using linear programing techniques. With the addition of a new variable (ex: <math>\textstyle X^a </math>) in the objective function the problem is considered nonlinear. Additional constraints must be added to find the optimal solution.<br />
<br />
== Method ==<br />
<br />
=== Defining Absolute Values ===<br />
An absolute value of a real number can be described as its distance away from zero, or the non-negative magnitude of the number. <ref> Mendelson, Elliott, Schaum's Outline of Beginning Calculus, McGraw-Hill Professional, 2008. https://books.google.com/books?id=A8hAm38zsCMC&pg=PA2#v=onepage&q&f=false </ref> Thus,<br />
<br />
<math>\displaystyle |x|={\begin{cases}-x,&{\text{if }}x<0\\x,&{\text{if }}x\geq 0\end{cases}}</math><br />
<br />
Absolute values can exist in linear optimization problems in two primary instances: in constraints and in the objective function. <ref> "Absolute Values." ''lp_solve'', http://lpsolve.sourceforge.net/. Accessed 20 Nov. 2020. </ref><br />
<br />
=== Absolute Values in Constraints ===<br />
Within constraints, absolute value relations can be transformed into one of the following forms:<br />
<br />
<math> \begin{align}<br />
|X| &= 0 \\<br />
|X| &\le C \\<br />
|X| &\ge C<br />
\end{align} </math><br />
<br />
Where <math>\textstyle X</math> is a linear combination (<math>\textstyle ax_1 + bx_2 + ...</math> where <math>\textstyle a, b</math> are constants) and <math>\textstyle C</math> is a constant <math>\textstyle > 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| = 0</math> ====<br />
In this form, the only possible solution is if <math>\displaystyle X = 0</math> simplifying the constraint. Note that this solution also occurs if the constraint is in the form <math>\displaystyle |X| \le 0</math> due to the same conclusion that the only possible solution is <math>\textstyle X = 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| \le C</math> ====<br />
The second form a linear constraint can exist in is <math>\displaystyle |X|\leq C</math>. In this case, an equivalent feasible solution can be described by splitting the constraint into two:<br />
<br />
<math> \begin{align}<br />
X &\leq C \\<br />
-X &\leq C<br />
\end{align} </math><br />
<br />
The solution can be understood visually since <math>\textstyle X</math> must lie between <math>\textstyle -C</math> and <math>\textstyle C</math>, as shown below:<br />
<br />
[[File:Number Line X Less Than C.png|none|thumb]]<br />
<br />
==== Form when <math>\displaystyle |X| \ge C</math> ====<br />
Visually, the solution space for the last form is the complement of the second solution above, resulting in the following representation:[[File:Number Line for X Greater Than C.png|none|thumb]]In expression form, the solutions can be written as:<br />
<br />
<math> \begin{align}<br />
X &\geq C \\<br />
-X &\geq C<br />
\end{align} </math><br />
<br />
As seen visually, the feasible region has a gap and thus non-convex. The expressions also make it impossible for both to simultaneously hold true. This means that it is not possible to transform constraints in this form to linear equations. <ref> ''Optimization Methods in Management Science / Operations Research.'' Massachusetts Institute of Technology, Spring 2013, https://ocw.mit.edu/courses/sloan-school-of-management/15-053-optimization-methods-in-management-science-spring-2013/tutorials/MIT15_053S13_tut04.pdf. Accessed 20 Nov. 2020. </ref> <br />
<br />
An approach to reach a solution for this particular case exists in the form of Mixed-Integer Linear Programming, where only one of the equations above is “active”.<br />
<br />
The inequality can be reformulated into the following:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge C \\<br />
-&X + N*(1-Y) \ge C \\<br />
&Y = 0, 1 <br />
\end{align} </math><br />
<br />
With this new set of constraints, a large constant <math>\textstyle N</math> is introduced, along with a binary variable <math>\textstyle Y</math>. So long as <math>\textstyle N</math> is sufficiently larger than the upper bound of <math>\textstyle X + C</math>, the large constant multiplied with the binary variable ensures that one of the constraints must be satisfied. For instance, if <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge C \\<br />
-&X + N \ge C<br />
\end{align} </math><br />
<br />
Since <math>\textstyle N</math> is sufficiently large, the latter constraint will always be satisfied, leaving only one relation active: <math>\textstyle X \ge C</math>. Functionally, this allows for the XOR logical operation of <math>\textstyle X \geq C</math> and <math>\textstyle -X \geq C</math>.<br />
<br />
=== Absolute Values in Objective Functions ===<br />
In objective functions, to leverage transformations of absolute functions, all constraints must be linear.<br />
<br />
Similar to the case of absolute values in constraints, there are different approaches to the reformation of the objective function, depending on the satisfaction of sign constraints. The satisfaction of sign constraints is when the coefficient signs of the absolute terms must all be either:<br />
<br />
* Positive for a minimization problem<br />
* Negative for a maximization problem<br />
<br />
==== Sign Constraints are Satisfied ====<br />
At a high level, the transformation works similarly to the second case of absolute value in constraints – aiming to bound the solution space for the absolute value term with a new variable, <math>\textstyle Z</math>.<br />
<br />
If <math>\textstyle |X|</math> is the absolute value term in our objective function, two additional constraints are added to the linear program:<br />
<br />
<math> \begin{align}<br />
&X\leq Z \\<br />
-&X\leq Z <br />
\end{align} </math><br />
<br />
The <math>\textstyle |X|</math> term in the objective function is then replaced by <math>\textstyle Z</math>, relaxing the original function into a collection of linear constraints.<br />
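To see why this relaxation is exact at an optimum: for a fixed <math>X</math>, the smallest <math>Z</math> satisfying both constraints is <math>\max(X, -X) = |X|</math>, and a minimization with a positive coefficient on <math>Z</math> drives <math>Z</math> down to that bound. A minimal sketch:

```python
def tightest_z(x):
    # Smallest Z satisfying X <= Z and -X <= Z; a minimization with a
    # positive coefficient on Z pushes Z down to exactly this value.
    return max(x, -x)

for x in (-4.5, 0.0, 7.25):
    assert tightest_z(x) == abs(x)
```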
<br />
==== Sign Constraints are Not Satisfied ====<br />
In order to transform problems where the coefficient signs of the absolute terms do not fulfill the conditions above, a similar conclusion is reached to that of the last case for absolute values in constraints – the use of integer variables is needed to reach an LP format. <br />
<br />
The following constraints need to be added to the problem:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge Z \\<br />
-&X + N*(1-Y) \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z \\<br />
&Y = 0, 1<br />
\end{align} </math><br />
<br />
Again, <math>\textstyle N</math> is a large constant, <math>\textstyle Z</math> is a replacement variable for <math>\textstyle |X|</math> in the objective function, and <math>\textstyle Y</math> is a binary variable. The first two constraints ensure that one and only one of them is active while the other is automatically satisfied, following the same logic as above. The third and fourth constraints ensure that <math>\textstyle Z \ge |X|</math>; combined with the first two, they force <math>\textstyle Z = |X|</math>. For instance, for the case of <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge Z \\<br />
-&X + N \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z<br />
\end{align} </math><br />
<br />
As <math>\textstyle N</math> is sufficiently large (<math>\textstyle N</math> must be at least <math>\textstyle 2|X|</math> for this approach), the second constraint must be satisfied. Since <math>\textstyle Z</math> is non-negative, the fourth constraint must also be satisfied. The remaining constraints, <math>\textstyle X \ge Z</math> and <math>\textstyle X \le Z</math>, can only be satisfied when <math>\textstyle Z = X</math>, which requires <math>\textstyle X</math> to be non-negative. Together, these constraints allow for the selection of the largest <math>\textstyle |X|</math> for maximization problems (or smallest for minimization problems).<br />
<br />
=== Absolute Values in Nonlinear Optimization Problems ===<br />
The addition of a new variable <math> (X_a) </math> to an objective function with absolute value quantities forms a nonlinear optimization problem. The absolute value quantities would require that the problem be reformatted before proceeding. Additional constraints must be added to account for the added variable.<br />
<br />
==Numerical Example==<br />
'''Example when All Sign Constraints are Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| + |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> -U_3 \le x_3 \le U_3 </math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 + U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
-&U_3 \le x_3 \le U_3 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
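As a quick sanity check on this optimum (a coarse grid scan, illustrative only and not a substitute for an LP solver; the ranges and step size are arbitrary choices), one can eliminate <math>x_2</math> using the equality constraint and scan the remaining two variables:

```python
def objective(x1, x2, x3):
    # Original absolute-value objective: 2|x1| + 3|x2| + |x3|
    return 2 * abs(x1) + 3 * abs(x2) + abs(x3)

step = 0.25
grid = [i * step for i in range(-40, 41)]  # x1, x3 scanned over [-10, 10]
best = float("inf")
for x1 in grid:
    for x3 in grid:
        x2 = 2 * x1 + 4 * x3 - 14          # enforce 2*x1 - x2 + 4*x3 = 14
        if x1 + 2 * x2 - 3 * x3 <= 8:      # inequality constraint
            best = min(best, objective(x1, x2, x3))

assert abs(best - 3.5) < 1e-9  # matches the reported optimum at (0, 0, 3.5)
```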
<br />
'''Example when Sign Constraints are not Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| - |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> \begin{align}<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1<br />
\end{align}</math><br />
<br />
The problem has now been reformulated as a mixed-integer linear programming problem that can be solved with standard MILP methods:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 - U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>-3.5</math>, which occurs when <math>x_1 = 0 </math>, <math>x_2 = 0 </math>, and <math>x_3 = 3.5 </math>.<br />
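The effect of the binary variable in this reformulation can be checked by fixing <math>x_3</math> and scanning which values of <math>U_3</math> satisfy all four constraints for some <math>Y \in \{0, 1\}</math>; with <math>M</math> sufficiently large, only <math>U_3 = |x_3|</math> survives. A small sketch (the value of <math>M</math>, the grid step, and the range are arbitrary illustrative choices):

```python
def feasible_u3_values(x3, m=100.0, step=0.5, u_max=20.0):
    # U3 grid values feasible for some binary Y under:
    #   x3 + m*Y >= U3,  -x3 + m*(1-Y) >= U3,  x3 <= U3,  -x3 <= U3
    vals = []
    u = 0.0
    while u <= u_max:
        if any(x3 + m * y >= u and -x3 + m * (1 - y) >= u
               and x3 <= u and -x3 <= u for y in (0, 1)):
            vals.append(u)
        u += step
    return vals

for x3 in (-7.0, 0.0, 3.5):
    assert feasible_u3_values(x3) == [abs(x3)]  # U3 is pinned to |x3|
```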
<br />
== Applications ==<br />
<br />
<br />
Consider the problem <math>\max \quad z = \sum_j c_j |x_j| \quad \text{s.t.} \quad Ax = b</math>. This problem cannot, in general, be solved with the simplex method. It has a simplex-method solution (with unrestricted basis entry) only if the <math>c_j</math> are nonpositive (non-negative for minimization problems).<br />
<br />
The primary application of absolute-value functionals in linear programming has been absolute-value (<math>L_1</math>-metric) regression analysis. Such an application is always a minimization problem with all <math>c_j</math> equal to 1, so the required conditions for valid use of the simplex method are met. <br />
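A concrete instance of <math>L_1</math>-metric regression is least-absolute-deviations line fitting, which is far less sensitive to outliers than least squares. In the sketch below (the data are invented for illustration), a one-parameter slope is recovered by a simple grid scan rather than the simplex method:

```python
# y is roughly 2*x, except for one large outlier in the last point
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 40.0]

def lad_loss(b):
    # L1 regression loss: sum of absolute residuals for slope b
    return sum(abs(y - b * x) for x, y in zip(xs, ys))

grid = [i * 0.01 for i in range(0, 1001)]  # candidate slopes in [0, 10]
best = min(grid, key=lad_loss)
assert abs(best - 2.0) < 1e-6  # the outlier barely moves the L1 fit
```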
<br />
By reformulating the original problem into a Mixed-Integer Linear Program (MILP), we can utilize known programs to solve for the optimal solution(s). <br />
<br />
=== Application in Finance: Portfolio Selection===<br />
Here, the same reduction to a linear programming problem used in the Numerical Example section is applied to reformulate the portfolio-selection problem into a solvable form. An example is given below. <br />
<br />
<br />
<br />
A portfolio is determined by the fraction of one's assets put into each investment. <ref> Vanderbei R.J. (2008) Financial Applications. In: Linear Programming. International Series in Operations Research & Management Science, vol 114. Springer, Boston, MA. <nowiki>https://doi.org/10.1007/978-0-387-74388-2_13</nowiki> https://link.springer.com/chapter/10.1007/978-0-387-74388-2_13 </ref> It can be denoted as a collection of nonnegative numbers <math>\textstyle x_j</math>, where <math> j = 1, 2,...,n </math>. Because each <math>\textstyle x_j</math> stands for a portion of the assets, they sum to one. To find the mix of assets with the highest reward, let the positive parameter <math>\mu</math> denote the importance of return relative to risk, and let <math>\textstyle R_j</math> denote the return in the next time period on investment <math>j, j = 1, 2,..., n</math>. The total return on the portfolio is <math>R = \sum_{j}x_j R_j </math>, and the expected return is <math>\mathbb{E}R = \sum_{j}x_j\mathbb{E}R_j </math>. The Mean Absolute Deviation from the mean (MAD), used here as the risk measure, is <math>\mathbb{E}\left\vert R - \mathbb{E}R \right\vert = \mathbb{E}\left\vert \sum_{j}x_j\tilde{R}_j \right\vert </math>. <br />
<br />
maximize <math display="inline">\mu\sum_j x_j\mathbb{E}R_j - \mathbb{E}\left\vert \sum_j x_j\tilde{R}_j \right\vert </math><br />
<br />
subject to <math>\sum_j x_j = 1</math><br />
<br />
<math>x_j \geq 0, \quad j = 1,2,...,n</math><br />
<br />
where <math>\tilde{R}_j = R_j - \mathbb{E}R_j </math><br />
<br />
<br />
<br />
This is not yet a linear programming problem. As in the numerical example above, each absolute value is replaced with a new variable, and inequality constraints are imposed to ensure that the new variable equals the appropriate absolute value at an optimal solution. To simplify the program, the mean expected return can be estimated by averaging the historical returns: <math>r_j = \mathbb{E}R_j = \left ( \frac{1}{T} \right ) \sum_{t=1}^T R_j(t)</math>. The objective function then becomes: <math>\mu\sum_{j}x_j r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T\left\vert \sum_{j} x_j \bigl(R_j (t) - r_j\bigr) \right\vert</math><br />
<br />
Now, replacing <math>\left\vert \sum_{j} x_j \bigl(R_j (t) - r_j\bigr) \right\vert</math> with a new variable <math>y_t</math>, the problem can be rewritten as:<br />
<br />
<br />
maximize <math>\mu \sum_j x_j r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T y_t</math><br />
<br />
subject to <math>-y_t \leq \sum_{j} x_j \bigl(R_j (t) - r_j\bigr) \leq y_t, \quad t = 1, 2,...,T</math><br />
<br />
<math>\sum_j x_j = 1</math><br />
<br />
<math>x_j \geq 0, \quad j = 1, 2,...,n</math><br />
<br />
<math>y_t \geq 0, \quad t = 1, 2,...,T</math><br />
<br />
<br />
After these substitutions, the original problem is converted into a linear program that can be solved with standard methods.<br />
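The role of <math>\mu</math> can be illustrated with a toy two-asset, two-period example (the return histories below are invented for illustration). A large <math>\mu</math> weights expected return heavily and selects the risky asset; a small <math>\mu</math> weights the MAD term and selects the safe one. The single free weight is found here by a grid scan:

```python
returns = {"risky": [0.0, 0.5], "safe": [0.1, 0.1]}  # hypothetical histories
T = 2
means = {j: sum(r) / T for j, r in returns.items()}

def mad_objective(x_risky, mu):
    # mu * expected return - mean absolute deviation of the portfolio return
    x = {"risky": x_risky, "safe": 1.0 - x_risky}
    expected = sum(x[j] * means[j] for j in returns)
    mad = sum(abs(sum(x[j] * (returns[j][t] - means[j]) for j in returns))
              for t in range(T)) / T
    return mu * expected - mad

def best_mix(mu, step=0.05):
    n = round(1 / step)                     # number of grid intervals
    grid = [round(i * step, 2) for i in range(n + 1)]
    return max(grid, key=lambda w: mad_objective(w, mu))

assert best_mix(mu=2.0) == 1.0   # return-hungry: all-in on the risky asset
assert best_mix(mu=0.5) == 0.0   # risk-averse: all-in on the safe asset
```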
<br />
<br />
===Data Transfer Rate===<br />
Faster-than-Nyquist signaling (FTNS) is a framework for transmitting signals beyond the Nyquist rate. The reference for this section proposed a 24.7% faster symbol rate by utilizing Sum-of-Absolute-Values (SOAV) optimization.<br />
<br />
The initial model is defined as follows:<br />
<math>\displaystyle x_0 (t) = \sum^N_{n=1} x_{n,0} h_n (t), t \in [0,T] </math><br />
<br />
where <math>t \in \mathbb{R}</math> denotes the continuous time index, <math>N \in \mathbb{N}</math> is the number of transmitted symbols in each transmission period, <math>T > 0</math> is the interval of one period, <math>x_{n,0} \in \{+1, -1\}</math> are independent and identically distributed (i.i.d.) binary symbols [i.e., binary phase shift keying (BPSK)], and <math>h_n(t)</math> <math>(n = 1,...,N)</math> are the modulation pulses.<br />
<br />
By reformulating detection as the following convex optimization problem and iteratively applying a Newton-type method to handle the absolute values, approximate solutions can be obtained:<br />
<math>\displaystyle \min_{z \in R^N} (\lambda \Vert y - Hz \Vert^2_2 + \frac{1}{2} \Vert z - 1_N \Vert_1 + \frac{1}{2} \Vert z + 1_N \Vert_1 ) </math><br />
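A useful way to see why this penalty promotes BPSK-like solutions (an observation added here for illustration, stated for scalar <math>z</math>): <math>\tfrac{1}{2}|z-1| + \tfrac{1}{2}|z+1| = \max(|z|, 1)</math>, so the penalty is flat on <math>[-1, 1]</math> and grows linearly outside it, pushing each component of <math>z</math> toward the interval containing the symbols <math>\pm 1</math>:

```python
def soav_penalty(z):
    # Scalar form of the sum-of-absolute-values penalty in the objective
    return 0.5 * abs(z - 1.0) + 0.5 * abs(z + 1.0)

for z in (-3.0, -1.0, -0.25, 0.0, 0.5, 1.0, 3.0):
    assert soav_penalty(z) == max(abs(z), 1.0)  # flat on [-1, 1], |z| outside
```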
<br />
<br />
== Conclusion ==<br />
The presence of an absolute value within the objective function prevents the use of certain optimization methods. Solving these problems requires that the function be manipulated in order to continue with linear programming techniques like the simplex method.<br />
<br />
== References ==<br />
<references /><br />
<br />
<br />
<br />
<br />
#Shanno, David F., and Roman L. Weil. “'Linear' Programming with Absolute-Value Functionals.” Operations Research, vol. 19, no. 1, 1971, pp. 120–124. JSTOR, www.jstor.org/stable/168871. Accessed 13 Dec. 2020.<br />
#Sasahara, Hampei, Hayashi, Kazunori, and Nagahara, Masaaki. "Symbol Detection for Faster-Than-Nyquist Signaling by Sum-of-Absolute-Values Optimization." IEEE Signal Processing Letters, 2016. doi:10.1109/LSP.2016.2625839. https://www.researchgate.net/publication/309745511_Symbol_Detection_for_Faster-Than-Nyquist_Signaling_by_Sum-of-Absolute-Values_Optimization</div>
<hr />
<div>Authors: Matthew Chan (mdc297), Yilian Yin (yy896), Brian Amado (ba392), Peter (pmw99), Dewei Xiao (dx58) - SYSEN 5800 Fall 2020<br />
<br />
Steward: Fengqi You<br />
<br />
== Introduction ==<br />
Absolute values can make it relatively difficult to determine the optimal solution when handled without first converting to standard form. This conversion of the objective function is a good first step in solving optimization problems with absolute values. As a result, one can go on to solve the problem using linear programing techniques. With the addition of a new variable (ex: <math>\textstyle X^a </math>) in the objective function the problem is considered nonlinear. Additional constraints must be added to find the optimal solution.<br />
<br />
== Method ==<br />
<br />
=== Defining Absolute Values ===<br />
An absolute value of a real number can be described as its distance away from zero, or the non-negative magnitude of the number. <ref> Mendelson, Elliott, Schaum's Outline of Beginning Calculus, McGraw-Hill Professional, 2008. https://books.google.com/books?id=A8hAm38zsCMC&pg=PA2#v=onepage&q&f=false </ref> Thus,<br />
<br />
<math>\displaystyle |x|={\begin{cases}-x,&{\text{if }}x<0\\x,&{\text{if }}x\geq 0\end{cases}}</math><br />
<br />
Absolute values can exist in linear optimization problems in two primary instances: in constraints and in the objective function. <ref> "Absolute Values." ''lp_solve'', http://lpsolve.sourceforge.net/. Accessed 20 Nov. 2020. </ref><br />
<br />
=== Absolute Values in Constraints ===<br />
Within constraints, absolute value relations can be transformed into one of the following forms:<br />
<br />
<math> \begin{align}<br />
|X| &= 0 \\<br />
|X| &\le C \\<br />
|X| &\ge C<br />
\end{align} </math><br />
<br />
Where <math>\textstyle X</math> is a linear combination (<math>\textstyle ax_1 + bx_2 + ...</math> where <math>\textstyle a, b</math> are constants) and <math>\textstyle C</math> is a constant <math>\textstyle > 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| = 0</math> ====<br />
In this form, the only possible solution is if <math>\displaystyle X = 0</math> simplifying the constraint. Note that this solution also occurs if the constraint is in the form <math>\displaystyle |X| \le 0</math> due to the same conclusion that the only possible solution is <math>\textstyle X = 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| \le C</math> ====<br />
The second form a linear constraint can exist in is <math>\displaystyle |X|\leq C</math>. In this case, an equivalent feasible solution can be described by splitting the constraint into two:<br />
<br />
<math> \begin{align}<br />
X &\leq C \\<br />
-X &\leq C<br />
\end{align} </math><br />
<br />
The solution can be understood visually since <math>\textstyle X</math> must lie between <math>\textstyle -C</math> and <math>\textstyle C</math>, as shown below:<br />
<br />
[[File:Number Line X Less Than C.png|none|thumb]]<br />
<br />
==== Form when <math>\displaystyle |X| \ge C</math> ====<br />
Visually, the solution space for the last form is the complement of the second solution above, resulting in the following representation:[[File:Number Line for X Greater Than C.png|none|thumb]]In expression form, the solutions can be written as:<br />
<br />
<math> \begin{align}<br />
X &\geq C \\<br />
-X &\geq C<br />
\end{align} </math><br />
<br />
As seen visually, the feasible region has a gap and thus non-convex. The expressions also make it impossible for both to simultaneously hold true. This means that it is not possible to transform constraints in this form to linear equations. <ref> ''Optimization Methods in Management Science / Operations Research.'' Massachusetts Institute of Technology, Spring 2013, https://ocw.mit.edu/courses/sloan-school-of-management/15-053-optimization-methods-in-management-science-spring-2013/tutorials/MIT15_053S13_tut04.pdf. Accessed 20 Nov. 2020. </ref> <br />
<br />
An approach to reach a solution for this particular case exists in the form of Mixed-Integer Linear Programming, where only one of the equations above is “active”.<br />
<br />
The inequality can be reformulated into the following:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge C \\<br />
-&X + N*(1-Y) \ge C \\<br />
&Y = 0, 1 <br />
\end{align} </math><br />
<br />
With this new set of constraints, a large constant <math>\textstyle N</math> is introduced, along with a binary variable <math>\textstyle Y</math>. So long as <math>\textstyle N</math> is sufficiently larger than the upper bound of <math>\textstyle X + C</math>, the large constant multiplied with the binary variable ensures that one of the constraints must be satisfied. For instance, if <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge C \\<br />
-&X + N \ge C<br />
\end{align} </math><br />
<br />
Since <math>\textstyle N</math> is sufficiently large, the latter constraint will always be satisfied, leaving only one relation active: <math>\textstyle X \ge C</math>. Functionally, this allows for the XOR logical operation of <math>\textstyle X \geq C</math> and <math>\textstyle -X \geq C</math>.<br />
<br />
=== Absolute Values in Objective Functions ===<br />
In objective functions, to leverage transformations of absolute functions, all constraints must be linear.<br />
<br />
Similar to the case of absolute values in constraints, there are different approaches to the reformation of the objective function, depending on the satisfaction of sign constraints. The satisfaction of sign constraints is when the coefficient signs of the absolute terms must all be either:<br />
<br />
* Positive for a minimization problem<br />
* Negative for a maximization problem<br />
<br />
==== Sign Constraints are Satisfied ====<br />
At a high level, the transformation works similarly to the second case of absolute value in constraints – aiming to bound the solution space for the absolute value term with a new variable, <math>\textstyle Z</math>.<br />
<br />
If <math>\textstyle |X|</math> is the absolute value term in our objective function, two additional constraints are added to the linear program:<br />
<br />
<math> \begin{align}<br />
&X\leq Z \\<br />
-&X\leq Z <br />
\end{align} </math><br />
<br />
The <math>\textstyle |X|</math> term in the objective function is then replaced by <math>\textstyle Z</math>, relaxing the original function into a collection of linear constraints.<br />
<br />
==== Sign Constraints are Not Satisfied ====<br />
In order to transform problems where the coefficient signs of the absolute terms do not fulfill the conditions above, a similar conclusion is reached to that of the last case for absolute values in constraints – the use of integer variables is needed to reach an LP format. <br />
<br />
The following constraints need to be added to the problem:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge Z \\<br />
-&X + N*(1-Y) \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z \\<br />
&Y = 0, 1<br />
\end{align} </math><br />
<br />
Again, <math>\textstyle N</math> is a large constant, <math>\textstyle Z</math> is a replacement variable for <math>\textstyle |X|</math> in the objective function, and <math>\textstyle Y</math> is a binary variable. The first two constraints ensure that one and only one constraint is active while the other will be automatically satisfied, following the same logic as above. The third and fourth constraints ensure that <math>\textstyle Z</math> must be equal to <math>\textstyle |X|</math> and has either a positive or negative value. For instance, for the case of <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge Z \\<br />
-&X + N \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z<br />
\end{align} </math><br />
<br />
As <math>\textstyle N</math> is sufficiently large (<math>\textstyle N</math> must be at least <math>\textstyle 2|X|</math> for this approach), the second constraint must be satisfied. Since <math>\textstyle Z</math> is non-negative, the fourth constraint must also be satisfied. The remaining constraints, <math>\textstyle X \ge Z</math> and <math>\textstyle X \le Z</math> can only be satisfied when <math>\textstyle Z = X</math> and is of non-negative signage. Together, these constraints will allow for the selection of the largest <math>\textstyle |X|</math> for maximization problems (or smallest for minimization problems).<br />
<br />
=== Absolute Values in Nonlinear Optimization Problems ===<br />
The addition of a new variable <math> (X_a) </math> to an objective function with absolute value quantities forms a nonlinear optimization problem. The absolute value quantities would require that the problem be reformatted before proceeding. Additional constraints must be added to account for the added variable.<br />
<br />
==Numerical Example==<br />
'''Example when All Sign Constraints are Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| + |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> -U_3 \le x_3 \le U_3 </math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 + U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
-&U_3 \le x_3 \le U_3 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
<br />
'''Example when Sign Constraints are not Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| - |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> \begin{align}<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1<br />
\end{align}</math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 - U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>-3.5</math>, which occur when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
<br />
== Applications ==<br />
<br />
<br />
Consider the problem <math>Ax=b; \quad max \quad z= x c,jx,i</math>. This problem cannot, in general, be solved with the simplex method. The problem has a simplex method solution (with unrestricted basis entry) only if c, are nonpositive (non-negative for minimizing problems).<br />
<br />
The primary application of absolute-value functionals in linear programming has been for absolute-value or L(i)-metric regression analysis. Such application is always a minimization problem with all C(j) equal to 1 so that the required conditions for valid use of the simplex method are met. <br />
<br />
By reformulating the original problem into a Mixed-Integer Linear Program (MILP), we can utilize known programs to solve for the optimal solution(s). <br />
<br />
=== Application in Financial: Portfolio Selection===<br />
Under this topic, the same tricks played in the Numerical Example section to perform '''Reduction to a Linear Programming Problem''' will be applied here again, to reform the problem into a MILP in order to solve the problem. An example is given as below. <br />
<br />
<br />
<br />
A portfolio is determined by what fraction of one's assets to put into each investment. <ref> Vanderbei R.J. (2008) Financial Applications. In: Linear Programming. International Series in Operations Research & Management Science, vol 114. Springer, Boston, MA. <nowiki>https://doi.org/10.1007/978-0-387-74388-2_13</nowiki> https://link.springer.com/chapter/10.1007/978-0-387-74388-2_13 </ref> It can be denoted as a collection of nonnegative numbers <math>\textstyle x_j</math>, where <math> j = 1, 2,...,n </math>. Because each <math> \textstyle x_j </math>stands for a portion of the assets, it sums to one. In order to get a highest reward through finding a right mix of assets, let <math>\mu</math>, the positive parameter, denote the importance of risk relative to the return, and <math>/textstyle Rj</math> denote the return in the next time period on investment <math>j, j = 1, 2,..., n</math>. The total return one would obtain from the investment is <math>R = \sum_{j}\!x_j\!R_j </math>. The expected return is <math>\mathbb{E}\!R = \sum_{j}\!x_j\mathbb{E}\!R_j </math>. And the Mean Absolute Deviation from the Mean (MAD) is <math>\mathbb{E}\left\vert \!R - \mathbb{E}\!R \right\vert = \mathbb{E}\left\vert \sum_{j}\!x_j\tilde{R}_j \right\vert </math>. <br />
<br />
maximize <math display="inline">\mu\sum_j\!x_j\mathbb{E}\!R_j - \mathbb{E}\left\vert \sum_j \!x_j\tilde{R}_j \right\vert </math><br />
<br />
subject to <math>\sum_j\!x_j = 1</math><br />
<br />
<math>x_j \geq 0</math> <math> j = 1,2,..n.</math><br />
<br />
where <math>\tilde{R}_j = \!R_j - \mathbb{E}\!R_j </math><br />
<br />
<br />
<br />
Very obviously, this problem is not a linear programming problem yet. Similar to the numerical example showed above, the right thing to do is to replace each absolute value with a new variable and impose inequality constraints to ensure that the new variable is the appropriate absolute value once an optimal value is obtained. To simplify the program, an average of the historical returns can be taken in order to get the mean expected return: <math>r_j = \mathbb{E}\!R_j = \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!R_j(t)<br />
</math>. Thus the objective function is turned into: <math>\mu\sum_{j}\!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert <br />
</math><br />
<br />
Now, replace <math>\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert <br />
</math> with a new variable <math>y_t<br />
</math>and thus the problem can be rewrote as:<br />
<br />
<br />
maximize <math>\mu \sum_j \!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!y_t<br />
<br />
</math><br />
<br />
subject to <math>-\!y_t \leq \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \leq y_t <br />
</math>. t = 1, 2,...,T<br />
<br />
where <math>\sum_j \!x_j = 1<br />
<br />
</math><br />
<br />
<math>x_j\geq 0<br />
<br />
</math>. j = 1, 2,...,n<br />
<br />
<math>y_t \geq 0<br />
<br />
</math>. t = 1, 2,...,T<br />
<br />
<br />
So finally, after some simplifications methods and some tricks applied, the original problem is converted into a linear programming which is easier to be solved further.<br />
<br />
<br />
===Data Transfer Rate===<br />
Faster-than-nyquist, or FTNS, is a framework to transmit signals beyond the Nyquist rate. The refence to this section proposed a 24.7% faster symbol rate by utilizing Sum-of-Absolute-Values optimization.<br />
<br />
The initial model is defined as follows:<br />
<math>\displaystyle x_0 (t) = \sum^N_{n=1} x_{n,0} h_n (t), t \in [0,T] </math><br />
<br />
where t ∈ R denotes the continuous time index, N ∈ N is the number of transmitted symbols in each transmission period, T > 0 is the interval of one period, xn,0 ∈ {+1, −1} are independent and identically distributed (i.i.d.) binary symbols [i.e., binary phase shift keying (BPSK)], and hn (t) (n = 1,...,N) are the modulation pulses.<br />
<br />
Reformulated as a convex optimization problem and repeating Newton’s method with absolute values, the solution approximates can be achieved.<br />
<math>\displaystyle \min_{z \in R^N} (\lambda \Vert y - Hz \Vert^2_2 + \frac{1}{2} \Vert z - 1_N \Vert_1 + \frac{1}{2} \Vert z + 1_N \Vert_1 ) </math><br />
<br />
<br />
== Conclusion ==<br />
The presence of an absolute value within the objective function prevents the use of certain optimization methods. Solving these problems requires that the function be manipulated in order to continue with linear programming techniques like the simplex method.<br />
<br />
== References ==<br />
<references /><br />
<br />
<br />
<br />
<br />
Alqahtani, J., 2019. https://iarjset.com/wp-content/uploads/2019/12/IARJSET.2019.61204.pdf. IARJSET, 6(12), pp.14-16. https://www.ise.ncsu.edu/fuzzy-neural/wp-content/uploads/sites/9/2019/08/LP-Abs-Value.pdf<br />
Sasahara, Hampei & Hayashi, Kazunori & Nagahara, Masaaki. (2016). Symbol Detection for Faster-Than-Nyquist Signaling by Sum-of-Absolute-Values Optimization. IEEE Signal Processing Letters. PP. 1-1. 10.1109/LSP.2016.2625839. https://www.researchgate.net/publication/309745511_Symbol_Detection_for_Faster-Than-Nyquist_Signaling_by_Sum-of-Absolute-Values_Optimization</div>Yilian Yinhttps://optimization.cbe.cornell.edu/index.php?title=Optimization_with_absolute_values&diff=2485Optimization with absolute values2020-12-13T16:54:26Z<p>Yilian Yin: </p>
<hr />
<div>Authors: Matthew Chan (mdc297), Yilian Yin (yy896), Brian Amado (ba392), Peter (pmw99), Dewei Xiao (dx58) - SYSEN 5800 Fall 2020<br />
<br />
Steward: Fengqi You<br />
<br />
== Introduction ==<br />
Absolute values can make it relatively difficult to determine the optimal solution when handled without first converting to standard form. This conversion of the objective function is a good first step in solving optimization problems with absolute values. As a result, one can go on to solve the problem using linear programing techniques. With the addition of a new variable (ex: <math>\textstyle X^a </math>) in the objective function the problem is considered nonlinear. Additional constraints must be added to find the optimal solution.<br />
<br />
== Method ==<br />
<br />
=== Defining Absolute Values ===<br />
An absolute value of a real number can be described as its distance away from zero, or the non-negative magnitude of the number. <ref> Mendelson, Elliott, Schaum's Outline of Beginning Calculus, McGraw-Hill Professional, 2008. https://books.google.com/books?id=A8hAm38zsCMC&pg=PA2#v=onepage&q&f=false </ref> Thus,<br />
<br />
<math>\displaystyle |x|={\begin{cases}-x,&{\text{if }}x<0\\x,&{\text{if }}x\geq 0\end{cases}}</math><br />
<br />
Absolute values can exist in linear optimization problems in two primary instances: in constraints and in the objective function. <ref> "Absolute Values." ''lp_solve'', http://lpsolve.sourceforge.net/. Accessed 20 Nov. 2020. </ref><br />
<br />
=== Absolute Values in Constraints ===<br />
Within constraints, absolute value relations can be transformed into one of the following forms:<br />
<br />
<math> \begin{align}<br />
|X| &= 0 \\<br />
|X| &\le C \\<br />
|X| &\ge C<br />
\end{align} </math><br />
<br />
Where <math>\textstyle X</math> is a linear combination (<math>\textstyle ax_1 + bx_2 + ...</math> where <math>\textstyle a, b</math> are constants) and <math>\textstyle C</math> is a constant <math>\textstyle > 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| = 0</math> ====<br />
In this form, the only possible solution is <math>\displaystyle X = 0</math>, which simplifies the constraint. Note that the same conclusion holds if the constraint is written as <math>\displaystyle |X| \le 0</math>, since the only possible solution is again <math>\textstyle X = 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| \le C</math> ====<br />
The second form a linear constraint can exist in is <math>\displaystyle |X|\leq C</math>. In this case, an equivalent feasible solution can be described by splitting the constraint into two:<br />
<br />
<math> \begin{align}<br />
X &\leq C \\<br />
-X &\leq C<br />
\end{align} </math><br />
<br />
The solution can be understood visually since <math>\textstyle X</math> must lie between <math>\textstyle -C</math> and <math>\textstyle C</math>, as shown below:<br />
<br />
[[File:Number Line X Less Than C.png|none|thumb]]<br />
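A quick numeric check confirms that the split pair is equivalent to the original absolute-value constraint (with an illustrative <math>\textstyle C = 5</math>):<br />

```python
# Numeric check that splitting |x| <= C into (x <= C, -x <= C) is equivalent.
C = 5.0  # illustrative bound
for x in (-9.0, -5.0, -1.0, 0.0, 3.0, 5.0, 7.0):
    split_holds = (x <= C) and (-x <= C)
    assert split_holds == (abs(x) <= C)
print("split pair matches |x| <= C on all sample points")
```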
<br />
==== Form when <math>\displaystyle |X| \ge C</math> ====<br />
Visually, the solution space for the last form is the complement of the second solution above, resulting in the following representation:[[File:Number Line for X Greater Than C.png|none|thumb]]In expression form, the solution must satisfy exactly one of the following:<br />
<br />
<math> \begin{align}<br />
X &\geq C \\<br />
-X &\geq C<br />
\end{align} </math><br />
<br />
As seen visually, the feasible region has a gap and is thus non-convex. The two expressions also can never hold simultaneously. This means that it is not possible to transform constraints of this form into linear equations. <ref> ''Optimization Methods in Management Science / Operations Research.'' Massachusetts Institute of Technology, Spring 2013, https://ocw.mit.edu/courses/sloan-school-of-management/15-053-optimization-methods-in-management-science-spring-2013/tutorials/MIT15_053S13_tut04.pdf. Accessed 20 Nov. 2020. </ref> <br />
<br />
An approach to reach a solution for this particular case exists in the form of Mixed-Integer Linear Programming, where only one of the equations above is “active”.<br />
<br />
The inequality can be reformulated into the following:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge C \\<br />
-&X + N*(1-Y) \ge C \\<br />
&Y = 0, 1 <br />
\end{align} </math><br />
<br />
With this new set of constraints, a large constant <math>\textstyle N</math> is introduced, along with a binary variable <math>\textstyle Y</math>. So long as <math>\textstyle N</math> is sufficiently larger than the upper bound of <math>\textstyle |X| + C</math>, the term involving the binary variable ensures that one of the constraints is enforced while the other is trivially satisfied. For instance, if <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge C \\<br />
-&X + N \ge C<br />
\end{align} </math><br />
<br />
Since <math>\textstyle N</math> is sufficiently large, the latter constraint will always be satisfied, leaving only one relation active: <math>\textstyle X \ge C</math>. Functionally, this allows for the XOR logical operation of <math>\textstyle X \geq C</math> and <math>\textstyle -X \geq C</math>.<br />
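A quick numeric check of this Big-M construction (with illustrative values <math>\textstyle C = 5</math> and <math>\textstyle N = 1000</math>): some choice of the binary <math>\textstyle Y</math> makes both reformulated constraints feasible exactly when <math>\textstyle |X| \ge C</math>:<br />

```python
# Numeric check of the Big-M reformulation of |x| >= C.
# C and N are illustrative values (N just needs to exceed max|x| + C).
C, N = 5.0, 1000.0

def pair_feasible(x, y):
    """True when both reformulated constraints hold for x with binary Y = y."""
    return x + N * y >= C and -x + N * (1 - y) >= C

# Some binary choice of Y makes the pair feasible exactly when |x| >= C.
for x in (-9.0, -5.0, -1.0, 0.0, 3.0, 5.0, 7.0):
    assert (pair_feasible(x, 0) or pair_feasible(x, 1)) == (abs(x) >= C)
print("Big-M pair matches |x| >= C on all sample points")
```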
<br />
=== Absolute Values in Objective Functions ===<br />
To leverage these transformations for absolute values in the objective function, all constraints must be linear.<br />
<br />
Similar to the case of absolute values in constraints, there are different approaches to reformulating the objective function, depending on whether the sign constraints are satisfied. The sign constraints are satisfied when the coefficients of the absolute value terms are all either:<br />
<br />
* Positive for a minimization problem<br />
* Negative for a maximization problem<br />
<br />
==== Sign Constraints are Satisfied ====<br />
At a high level, the transformation works similarly to the second case of absolute value in constraints – aiming to bound the solution space for the absolute value term with a new variable, <math>\textstyle Z</math>.<br />
<br />
If <math>\textstyle |X|</math> is the absolute value term in our objective function, two additional constraints are added to the linear program:<br />
<br />
<math> \begin{align}<br />
&X\leq Z \\<br />
-&X\leq Z <br />
\end{align} </math><br />
<br />
The <math>\textstyle |X|</math> term in the objective function is then replaced by <math>\textstyle Z</math>, relaxing the original function into a collection of linear constraints.<br />
<br />
==== Sign Constraints are Not Satisfied ====<br />
To transform problems where the coefficient signs of the absolute terms do not fulfill the conditions above, a conclusion similar to that of the last case for absolute values in constraints is reached – integer variables are needed to reach an LP format. <br />
<br />
The following constraints need to be added to the problem:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge Z \\<br />
-&X + N*(1-Y) \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z \\<br />
&Y = 0, 1<br />
\end{align} </math><br />
<br />
Again, <math>\textstyle N</math> is a large constant, <math>\textstyle Z</math> is a replacement variable for <math>\textstyle |X|</math> in the objective function, and <math>\textstyle Y</math> is a binary variable. The first two constraints ensure that exactly one of them is active while the other is automatically satisfied, following the same logic as above. The third and fourth constraints ensure that <math>\textstyle Z \ge |X|</math>; combined with the active constraint from the first pair, they force <math>\textstyle Z</math> to equal <math>\textstyle |X|</math> whether <math>\textstyle X</math> is positive or negative. For instance, for the case of <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge Z \\<br />
-&X + N \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z<br />
\end{align} </math><br />
<br />
As <math>\textstyle N</math> is sufficiently large (<math>\textstyle N</math> must be at least <math>\textstyle 2|X|</math> for this approach), the second constraint is automatically satisfied. The remaining constraints, <math>\textstyle X \ge Z</math> and <math>\textstyle X \le Z</math>, can only be satisfied when <math>\textstyle Z = X</math>; the fourth constraint, <math>\textstyle -X \le Z</math>, then forces <math>\textstyle Z</math> (and hence <math>\textstyle X</math>) to be non-negative. Together, these constraints allow for the selection of the largest <math>\textstyle |X|</math> for maximization problems (or the smallest for minimization problems).<br />
<br />
=== Absolute Values in Nonlinear Optimization Problems ===<br />
The addition of a new variable (<math>\textstyle X^a</math>) to an objective function with absolute value quantities forms a nonlinear optimization problem. The absolute value quantities require that the problem be reformulated before proceeding, and additional constraints must be added to account for the added variable.<br />
<br />
==Numerical Example==<br />
'''Example when All Sign Constraints are Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| + |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> -U_3 \le x_3 \le U_3 </math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 + U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
-&U_3 \le x_3 \le U_3 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
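This result can be checked with an off-the-shelf LP solver; a sketch using SciPy's <code>linprog</code> (SciPy is an assumption here – any LP solver works):<br />

```python
# Solve: min 2*U1 + 3*U2 + U3 with Ui standing in for |xi|.
# Variable order: [x1, x2, x3, U1, U2, U3].
from scipy.optimize import linprog

c = [0, 0, 0, 2, 3, 1]
A_ub = [
    [1, 2, -3, 0, 0, 0],    # x1 + 2*x2 - 3*x3 <= 8
    [1, 0, 0, -1, 0, 0],    # x1 <= U1
    [-1, 0, 0, -1, 0, 0],   # -U1 <= x1
    [0, 1, 0, 0, -1, 0],    # x2 <= U2
    [0, -1, 0, 0, -1, 0],   # -U2 <= x2
    [0, 0, 1, 0, 0, -1],    # x3 <= U3
    [0, 0, -1, 0, 0, -1],   # -U3 <= x3
]
b_ub = [8, 0, 0, 0, 0, 0, 0]
A_eq = [[2, -1, 4, 0, 0, 0]]    # 2*x1 - x2 + 4*x3 = 14
b_eq = [14]
bounds = [(None, None)] * 3 + [(0, None)] * 3   # xi free, Ui >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.fun, res.x[:3])   # expect objective 3.5 at x = (0, 0, 3.5)
```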
<br />
'''Example when Sign Constraints are not Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| - |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> \begin{align}<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1<br />
\end{align}</math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 - U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>-3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
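Because SciPy's <code>linprog</code> has no binary variables, one way to check this small MILP is to enumerate both values of <math>\textstyle Y</math> and solve each branch as a plain LP (for <math>\textstyle Y = 0</math> the constraints pin <math>\textstyle U_3 = x_3</math>; for <math>\textstyle Y = 1</math>, <math>\textstyle U_3 = -x_3</math>). The enumeration below is a sketch that stands in for a true MILP solver:<br />

```python
# Check the Big-M example by enumerating the binary Y and solving two LPs.
# Variable order: [x1, x2, x3, U1, U2, U3]; objective 2*U1 + 3*U2 - U3.
from scipy.optimize import linprog

c = [0, 0, 0, 2, 3, -1]
A_ub = [
    [1, 2, -3, 0, 0, 0],                          # x1 + 2*x2 - 3*x3 <= 8
    [1, 0, 0, -1, 0, 0], [-1, 0, 0, -1, 0, 0],    # -U1 <= x1 <= U1
    [0, 1, 0, 0, -1, 0], [0, -1, 0, 0, -1, 0],    # -U2 <= x2 <= U2
    [0, 0, 1, 0, 0, -1], [0, 0, -1, 0, 0, -1],    # -U3 <= x3 <= U3
]
b_ub = [8, 0, 0, 0, 0, 0, 0]
bounds = [(None, None)] * 3 + [(0, None)] * 3     # xi free, Ui >= 0

best = None
for sign in (1, -1):          # Y = 0 forces U3 = x3; Y = 1 forces U3 = -x3
    A_eq = [[2, -1, 4, 0, 0, 0],      # 2*x1 - x2 + 4*x3 = 14
            [0, 0, sign, 0, 0, -1]]   # sign*x3 - U3 = 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[14, 0],
                  bounds=bounds)
    if res.success and (best is None or res.fun < best):
        best = res.fun
print(best)   # expect -3.5
```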
<br />
== Applications ==<br />
<br />
<br />
Consider the problem <math>\max \quad z = \textstyle\sum_{j} c_j |x_j| \quad \text{s.t.} \quad Ax = b</math>. This problem cannot, in general, be solved with the simplex method. The problem has a simplex method solution (with unrestricted basis entry) only if the <math>\textstyle c_j</math> are nonpositive (non-negative for minimization problems).<br />
<br />
The primary application of absolute-value functionals in linear programming has been absolute-value (<math>\textstyle L_1</math>-metric) regression analysis. Such an application is always a minimization problem with all <math>\textstyle c_j</math> equal to 1, so that the required conditions for valid use of the simplex method are met. <br />
<br />
By reformulating the original problem into a Mixed-Integer Linear Program (MILP), we can utilize known programs to solve for the optimal solution(s). <br />
<br />
=== Application in Finance: Portfolio Selection ===<br />
Here, the same technique used in the Numerical Example section – '''reduction to a linear programming problem''' – is applied again to reformulate the portfolio problem into a solvable linear program. An example is given below. <br />
<br />
<br />
<br />
A portfolio is determined by what fraction of one's assets to put into each investment. <ref> Vanderbei R.J. (2008) Financial Applications. In: Linear Programming. International Series in Operations Research & Management Science, vol 114. Springer, Boston, MA. <nowiki>https://doi.org/10.1007/978-0-387-74388-2_13</nowiki> https://link.springer.com/chapter/10.1007/978-0-387-74388-2_13 </ref> It can be denoted as a collection of nonnegative numbers <math>\textstyle x_j</math>, where <math> j = 1, 2,...,n </math>. Because each <math> \textstyle x_j </math> stands for a portion of the assets, they sum to one. To obtain the highest reward by finding the right mix of assets, let the positive parameter <math>\mu</math> denote the importance of the expected reward relative to risk, and let <math>\textstyle R_j</math> denote the return in the next time period on investment <math>j, j = 1, 2,..., n</math>. The total return one would obtain from the investment is <math>R = \sum_{j}\!x_j\!R_j </math>. The expected return is <math>\mathbb{E}\!R = \sum_{j}\!x_j\mathbb{E}\!R_j </math>, and the mean absolute deviation from the mean (MAD), used here as the measure of risk, is <math>\mathbb{E}\left\vert \!R - \mathbb{E}\!R \right\vert = \mathbb{E}\left\vert \sum_{j}\!x_j\tilde{R}_j \right\vert </math>. <br />
<br />
maximize <math display="inline">\mu\sum_j\!x_j\mathbb{E}\!R_j - \mathbb{E}\left\vert \sum_j \!x_j\tilde{R}_j \right\vert </math><br />
<br />
subject to <math>\sum_j\!x_j = 1</math><br />
<br />
<math>x_j \geq 0</math> <math> j = 1,2,...,n.</math><br />
<br />
where <math>\tilde{R}_j = \!R_j - \mathbb{E}\!R_j </math><br />
<br />
<br />
<br />
This problem is clearly not yet a linear program. As in the numerical example shown above, the approach is to replace each absolute value with a new variable and impose inequality constraints that ensure the new variable equals the appropriate absolute value once an optimal value is obtained. To simplify the program, the mean expected return can be estimated by averaging the historical returns: <math>r_j = \mathbb{E}\!R_j = \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!R_j(t)</math>. The objective function then becomes: <math>\mu\sum_{j}\!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert</math><br />
<br />
Now, replace <math>\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert</math> with a new variable <math>y_t</math>, and the problem can be rewritten as:<br />
<br />
<br />
maximize <math>\mu \sum_j \!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!y_t</math><br />
<br />
subject to <math>-\!y_t \leq \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \leq y_t,</math> <math>t = 1, 2,...,T</math><br />
<br />
<math>\sum_j \!x_j = 1</math><br />
<br />
<math>x_j\geq 0,</math> <math>j = 1, 2,...,n</math><br />
<br />
<math>y_t \geq 0,</math> <math>t = 1, 2,...,T</math><br />
<br />
<br />
Finally, after these simplifications and substitutions, the original problem is converted into a linear program, which is easier to solve.<br />
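To make the reduction concrete, here is a small sketch of the MAD model with made-up return data for two assets over four periods (the data, <math>\mu</math>, and the use of SciPy are all assumptions of the sketch):<br />

```python
# MAD portfolio selection as an LP: variables are [x_1..x_n, y_1..y_T].
import numpy as np
from scipy.optimize import linprog

R = np.array([[1.10, 1.00],    # R[t, j]: return of asset j in period t
              [0.95, 1.02],
              [1.05, 1.01],
              [1.02, 1.01]])
T, n = R.shape
r = R.mean(axis=0)             # mean returns r_j from historical averages
mu = 2.0                       # reward-vs-risk trade-off parameter (assumed)

# linprog minimizes, so negate:  maximize mu*sum(x_j*r_j) - (1/T)*sum(y_t)
c = np.concatenate([-mu * r, np.full(T, 1.0 / T)])
D = R - r                      # deviations R_j(t) - r_j
# Encode -y_t <= sum_j x_j*D[t, j] <= y_t as two blocks of inequalities.
A_ub = np.block([[D, -np.eye(T)], [-D, -np.eye(T)]])
b_ub = np.zeros(2 * T)
A_eq = np.concatenate([np.ones(n), np.zeros(T)]).reshape(1, -1)  # sum x_j = 1
b_eq = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)  # defaults: all vars >= 0
weights = res.x[:n]
print(weights)                 # nonnegative portfolio weights summing to 1
```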
<br />
<br />
===Data Transfer Rate===<br />
Faster-than-Nyquist signaling (FTNS) is a framework for transmitting signals beyond the Nyquist rate. The reference for this section proposes a 24.7% faster symbol rate by utilizing Sum-of-Absolute-Values optimization.<br />
<br />
The initial model is defined as follows:<br />
<math>\displaystyle x_0 (t) = \sum^N_{n=1} x_{n,0} h_n (t), t \in [0,T] </math><br />
<br />
where <math>t \in \mathbb{R}</math> denotes the continuous time index, <math>N \in \mathbb{N}</math> is the number of transmitted symbols in each transmission period, <math>T > 0</math> is the interval of one period, <math>x_{n,0} \in \{+1, -1\}</math> are independent and identically distributed (i.i.d.) binary symbols [i.e., binary phase shift keying (BPSK)], and <math>h_n(t)</math> <math>(n = 1,...,N)</math> are the modulation pulses.<br />
<br />
By reformulating symbol detection as the following convex optimization problem and repeatedly applying Newton's method adapted to the absolute values, approximate solutions can be obtained:<br />
<math>\displaystyle \min_{z \in R^N} (\lambda \Vert y - Hz \Vert^2_2 + \frac{1}{2} \Vert z - 1_N \Vert_1 + \frac{1}{2} \Vert z + 1_N \Vert_1 ) </math><br />
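As a numerical sketch (not the paper's algorithm – a proximal-gradient loop is used here in place of the Newton-type iteration, and the data, sizes, and <math>\lambda</math> are made up), note that <math>\tfrac{1}{2}|z_i - 1| + \tfrac{1}{2}|z_i + 1| = \max(|z_i|, 1)</math>, whose proximal operator is easy to write down:<br />

```python
# Proximal-gradient sketch for: min_z lam*||y - H z||_2^2 + sum_i max(|z_i|, 1),
# since 0.5*|z - 1| + 0.5*|z + 1| = max(|z|, 1) elementwise.
import numpy as np

def prox(v, t):
    """Prox of t*max(|z|, 1): shrink by t toward [-1, 1]; identity inside it."""
    out = np.where(v > 1, np.maximum(v - t, 1.0), v)
    return np.where(v < -1, np.minimum(v + t, -1.0), out)

rng = np.random.default_rng(0)
N = 8
H = np.eye(N) + 0.1 * rng.standard_normal((N, N))  # well-conditioned test channel
z_true = rng.choice([-1.0, 1.0], size=N)           # BPSK symbols
y = H @ z_true                                     # noiseless received signal
lam = 5.0

z = np.zeros(N)
t = 1.0 / (2 * lam * np.linalg.norm(H, 2) ** 2)    # step from the Lipschitz bound
for _ in range(2000):
    grad = 2 * lam * H.T @ (H @ z - y)             # gradient of the data term
    z = prox(z - t * grad, t)

print(np.sign(z))   # should recover z_true on this noiseless instance
```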
<br />
<br />
== Conclusion ==<br />
The presence of an absolute value within the objective function prevents the use of certain optimization methods. Solving these problems requires that the function be manipulated in order to continue with linear programming techniques like the simplex method.<br />
<br />
== References ==<br />
<references /><br />
<br />
<br />
<br />
<br />
# https://www.ise.ncsu.edu/fuzzy-neural/wp-content/uploads/sites/9/2019/08/LP-Abs-Value.pdf<br />
# Sasahara, Hampei & Hayashi, Kazunori & Nagahara, Masaaki. (2016). Symbol Detection for Faster-Than-Nyquist Signaling by Sum-of-Absolute-Values Optimization. IEEE Signal Processing Letters. 10.1109/LSP.2016.2625839. https://www.researchgate.net/publication/309745511_Symbol_Detection_for_Faster-Than-Nyquist_Signaling_by_Sum-of-Absolute-Values_Optimization</div>
<hr />
<div>Authors: Matthew Chan (mdc297), Yilian Yin (yy896), Brian Amado (ba392), Peter (pmw99), Dewei Xiao (dx58) - SYSEN 5800 Fall 2020<br />
<br />
Steward: Fengqi You<br />
<br />
== Introduction ==<br />
Absolute values can make it relatively difficult to determine the optimal solution when handled without first converting to standard form. This conversion of the objective function is a good first step in solving optimization problems with absolute values. As a result, one can go on to solve the problem using linear programing techniques.<br />
<br />
== Method ==<br />
<br />
=== Defining Absolute Values ===<br />
An absolute value of a real number can be described as its distance away from zero, or the non-negative magnitude of the number. <ref> Mendelson, Elliott, Schaum's Outline of Beginning Calculus, McGraw-Hill Professional, 2008. https://books.google.com/books?id=A8hAm38zsCMC&pg=PA2#v=onepage&q&f=false </ref> Thus,<br />
<br />
<math>\displaystyle |x|={\begin{cases}-x,&{\text{if }}x<0\\x,&{\text{if }}x\geq 0\end{cases}}</math><br />
<br />
Absolute values can exist in linear optimization problems in two primary instances: in constraints and in the objective function. <ref> "Absolute Values." ''lp_solve'', http://lpsolve.sourceforge.net/. Accessed 20 Nov. 2020. </ref><br />
<br />
=== Absolute Values in Constraints ===<br />
Within constraints, absolute value relations can be transformed into one of the following forms:<br />
<br />
<math> \begin{align}<br />
|X| &= 0 \\<br />
|X| &\le C \\<br />
|X| &\ge C<br />
\end{align} </math><br />
<br />
Where <math>\textstyle X</math> is a linear combination (<math>\textstyle ax_1 + bx_2 + ...</math> where <math>\textstyle a, b</math> are constants) and <math>\textstyle C</math> is a constant <math>\textstyle > 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| = 0</math> ====<br />
In this form, the only possible solution is if <math>\displaystyle X = 0</math> simplifying the constraint. Note that this solution also occurs if the constraint is in the form <math>\displaystyle |X| \le 0</math> due to the same conclusion that the only possible solution is <math>\textstyle X = 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| \le C</math> ====<br />
The second form a linear constraint can exist in is <math>\displaystyle |X|\leq C</math>. In this case, an equivalent feasible solution can be described by splitting the constraint into two:<br />
<br />
<math> \begin{align}<br />
X &\leq C \\<br />
-X &\leq C<br />
\end{align} </math><br />
<br />
The solution can be understood visually since <math>\textstyle X</math> must lie between <math>\textstyle -C</math> and <math>\textstyle C</math>, as shown below:<br />
<br />
[[File:Number Line X Less Than C.png|none|thumb]]<br />
<br />
==== Form when <math>\displaystyle |X| \ge C</math> ====<br />
Visually, the solution space for the last form is the complement of the second solution above, resulting in the following representation:[[File:Number Line for X Greater Than C.png|none|thumb]]In expression form, the solutions can be written as:<br />
<br />
<math> \begin{align}<br />
X &\geq C \\<br />
-X &\geq C<br />
\end{align} </math><br />
<br />
As seen visually, the feasible region has a gap and thus non-convex. The expressions also make it impossible for both to simultaneously hold true. This means that it is not possible to transform constraints in this form to linear equations. <ref> ''Optimization Methods in Management Science / Operations Research.'' Massachusetts Institute of Technology, Spring 2013, https://ocw.mit.edu/courses/sloan-school-of-management/15-053-optimization-methods-in-management-science-spring-2013/tutorials/MIT15_053S13_tut04.pdf. Accessed 20 Nov. 2020. </ref> <br />
<br />
An approach to reach a solution for this particular case exists in the form of Mixed-Integer Linear Programming, where only one of the equations above is “active”.<br />
<br />
The inequality can be reformulated into the following:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge C \\<br />
-&X + N*(1-Y) \ge C \\<br />
&Y = 0, 1 <br />
\end{align} </math><br />
<br />
With this new set of constraints, a large constant <math>\textstyle N</math> is introduced, along with a binary variable <math>\textstyle Y</math>. So long as <math>\textstyle N</math> is sufficiently larger than the upper bound of <math>\textstyle X + C</math>, the large constant multiplied with the binary variable ensures that one of the constraints must be satisfied. For instance, if <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge C \\<br />
-&X + N \ge C<br />
\end{align} </math><br />
<br />
Since <math>\textstyle N</math> is sufficiently large, the latter constraint will always be satisfied, leaving only one relation active: <math>\textstyle X \ge C</math>. Functionally, this allows for the XOR logical operation of <math>\textstyle X \geq C</math> and <math>\textstyle -X \geq C</math>.<br />
<br />
=== Absolute Values in Objective Functions ===<br />
In objective functions, to leverage transformations of absolute functions, all constraints must be linear.<br />
<br />
Similar to the case of absolute values in constraints, there are different approaches to the reformation of the objective function, depending on the satisfaction of sign constraints. The satisfaction of sign constraints is when the coefficient signs of the absolute terms must all be either:<br />
<br />
* Positive for a minimization problem<br />
* Negative for a maximization problem<br />
<br />
==== Sign Constraints are Satisfied ====<br />
At a high level, the transformation works similarly to the second case of absolute value in constraints – aiming to bound the solution space for the absolute value term with a new variable, <math>\textstyle Z</math>.<br />
<br />
If <math>\textstyle |X|</math> is the absolute value term in our objective function, two additional constraints are added to the linear program:<br />
<br />
<math> \begin{align}<br />
&X\leq Z \\<br />
-&X\leq Z <br />
\end{align} </math><br />
<br />
The <math>\textstyle |X|</math> term in the objective function is then replaced by <math>\textstyle Z</math>, relaxing the original function into a collection of linear constraints.<br />
<br />
==== Sign Constraints are Not Satisfied ====<br />
In order to transform problems where the coefficient signs of the absolute terms do not fulfill the conditions above, a similar conclusion is reached to that of the last case for absolute values in constraints – the use of integer variables is needed to reach an LP format. <br />
<br />
The following constraints need to be added to the problem:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge Z \\<br />
-&X + N*(1-Y) \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z \\<br />
&Y = 0, 1<br />
\end{align} </math><br />
<br />
Again, <math>\textstyle N</math> is a large constant, <math>\textstyle Z</math> is a replacement variable for <math>\textstyle |X|</math> in the objective function, and <math>\textstyle Y</math> is a binary variable. The first two constraints ensure that one and only one constraint is active while the other will be automatically satisfied, following the same logic as above. The third and fourth constraints ensure that <math>\textstyle Z</math> must be equal to <math>\textstyle |X|</math> and has either a positive or negative value. For instance, for the case of <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge Z \\<br />
-&X + N \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z<br />
\end{align} </math><br />
<br />
As <math>\textstyle N</math> is sufficiently large (<math>\textstyle N</math> must be at least <math>\textstyle 2|X|</math> for this approach), the second constraint must be satisfied. Since <math>\textstyle Z</math> is non-negative, the fourth constraint must also be satisfied. The remaining constraints, <math>\textstyle X \ge Z</math> and <math>\textstyle X \le Z</math> can only be satisfied when <math>\textstyle Z = X</math> and is of non-negative signage. Together, these constraints will allow for the selection of the largest <math>\textstyle |X|</math> for maximization problems (or smallest for minimization problems).<br />
<br />
==Numerical Example==<br />
* '''Example when all sign constraints are satisfied'''<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| + |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> -U_3 \le x_3 \le U_3 </math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 + U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
-&U_3 \le x_3 \le U_3 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
<br />
* '''Example when some sign constraints are not satisfied'''<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| - |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> \begin{align}<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1<br />
\end{align}</math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 - U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>-3.5</math>, which occur when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
<br />
== Applications ==<br />
<br />
<br />
Consider the problem <math>Ax=b; \quad max \quad z= x c,jx,i</math>. This problem cannot, in general, be solved with the simplex method. The problem has a simplex method solution (with unrestricted basis entry) only if c, are nonpositive (non-negative for minimizing problems).<br />
<br />
The primary application of absolute-value functionals in linear programming has been for absolute-value or L(i)-metric regression analysis. Such application is always a minimization problem with all C(j) equal to 1 so that the required conditions for valid use of the simplex method are met. <br />
<br />
By reformulating the original problem into a Mixed-Integer Linear Program (MILP), we can utilize known programs to solve for the optimal solution(s). <br />
<br />
=== Application in Financial: Portfolio Selection===<br />
Under this topic, the same tricks played in the Numerical Example section to perform '''Reduction to a Linear Programming Problem''' will be applied here again, to reform the problem into a MILP in order to solve the problem. An example is given as below. <br />
<br />
<br />
<br />
A portfolio is determined by what fraction of one's assets to put into each investment. It can be denoted as a collection of nonnegative numbers <math>\textstyle x_j</math>, where <math> j = 1, 2,...,n </math>. Because each <math> \textstyle x_j </math>stands for a portion of the assets, it sums to one. In order to get a highest reward through finding a right mix of assets, let <math>\mu</math>, the positive parameter, denote the importance of risk relative to the return, and <math>/textstyle Rj</math> denote the return in the next time period on investment <math>j, j = 1, 2,..., n</math>. The total return one would obtain from the investment is <math>R = \sum_{j}\!x_j\!R_j </math>. The expected return is <math>\mathbb{E}\!R = \sum_{j}\!x_j\mathbb{E}\!R_j </math>. And the Mean Absolute Deviation from the Mean (MAD) is <math>\mathbb{E}\left\vert \!R - \mathbb{E}\!R \right\vert = \mathbb{E}\left\vert \sum_{j}\!x_j\tilde{R}_j \right\vert </math>. <br />
<br />
maximize <math display="inline">\mu\sum_j\!x_j\mathbb{E}\!R_j - \mathbb{E}\left\vert \sum_j \!x_j\tilde{R}_j \right\vert </math><br />
<br />
subject to <math>\sum_j\!x_j = 1</math><br />
<br />
<math>x_j \geq 0</math> <math> j = 1,2,..n.</math><br />
<br />
where <math>\tilde{R}_j = \!R_j - \mathbb{E}\!R_j </math><br />
<br />
<br />
<br />
Very obviously, this problem is not a linear programming problem yet. Similar to the numerical example showed above, the right thing to do is to replace each absolute value with a new variable and impose inequality constraints to ensure that the new variable is the appropriate absolute value once an optimal value is obtained. To simplify the program, an average of the historical returns can be taken in order to get the mean expected return: <math>r_j = \mathbb{E}\!R_j = \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!R_j(t)<br />
</math>. Thus the objective function is turned into: <math>\mu\sum_{j}\!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert <br />
</math><br />
<br />
Now, replace <math>\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert <br />
</math> with a new variable <math>y_t<br />
</math>and thus the problem can be rewrote as:<br />
<br />
<br />
maximize <math>\mu \sum_j \!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!y_t<br />
<br />
</math><br />
<br />
subject to <math>-\!y_t \leq \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \leq y_t <br />
</math>. t = 1, 2,...,T<br />
<br />
where <math>\sum_j \!x_j = 1<br />
<br />
</math><br />
<br />
<math>x_j\geq 0<br />
<br />
</math>. j = 1, 2,...,n<br />
<br />
<math>y_t \geq 0<br />
<br />
</math>. t = 1, 2,...,T<br />
<br />
<br />
So finally, after some simplifications methods and some tricks applied, the original problem is converted into a linear programming which is easier to be solved further.<br />
<br />
<br />
===Data Transfer Rate===<br />
Faster-than-nyquist, or FTNS, is a framework to transmit signals beyond the Nyquist rate. The refence to this section proposed a 24.7% faster symbol rate by utilizing Sum-of-Absolute-Values optimization.<br />
<br />
The initial model is defined as follows:<br />
<math>\displaystyle x_0 (t) = \sum^N_{n=1} x_{n,0} h_n (t), t \in [0,T] </math><br />
<br />
where t ∈ R denotes the continuous time index, N ∈ N is the number of transmitted symbols in each transmission period, T > 0 is the interval of one period, xn,0 ∈ {+1, −1} are independent and identically distributed (i.i.d.) binary symbols [i.e., binary phase shift keying (BPSK)], and hn (t) (n = 1,...,N) are the modulation pulses.<br />
<br />
Reformulating symbol detection as the following convex optimization problem, approximate solutions can then be computed iteratively:<br />
<math>\displaystyle \min_{z \in R^N} (\lambda \Vert y - Hz \Vert^2_2 + \frac{1}{2} \Vert z - 1_N \Vert_1 + \frac{1}{2} \Vert z + 1_N \Vert_1 ) </math><br />
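The objective can be evaluated directly for small instances. The sketch below (toy data; <math>H</math>, <math>y</math>, and <math>\lambda</math> are illustrative, not taken from the reference) shows why the two <math>\ell_1</math> terms favor symbols in <math>[-1, 1]^N</math>: inside that box they contribute a constant <math>N</math>, so only the data-fit term varies:<br />

```python
# SOAV objective: f(z) = lam*||y - Hz||_2^2 + 0.5*||z - 1||_1 + 0.5*||z + 1||_1.
# For |z_i| <= 1, |z_i - 1| + |z_i + 1| = 2, so the l1 part equals N on the box.
lam = 10.0
H = [[1.0, 0.5], [0.2, 1.0], [0.3, 0.3]]   # 3x2 "channel" matrix (made up)
z_true = [1.0, -1.0]                        # transmitted BPSK symbols
N = len(z_true)

def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

y = matvec(H, z_true)                       # noiseless received samples

def soav(z):
    resid = [yi - hi for yi, hi in zip(y, matvec(H, z))]
    l2 = sum(r * r for r in resid)
    l1 = 0.5 * sum(abs(zi - 1) for zi in z) + 0.5 * sum(abs(zi + 1) for zi in z)
    return lam * l2 + l1

assert abs(soav(z_true) - N) < 1e-12        # zero residual; l1 part equals N
assert soav([0.5, -0.5]) > soav(z_true)     # in-box perturbation adds only l2 cost
```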
<br />
<br />
== Conclusion ==<br />
The presence of an absolute value within the objective function prevents the use of certain optimization methods. Solving these problems requires that the function be manipulated in order to continue with linear programming techniques like the simplex method.<br />
<br />
== References ==<br />
<references /><br />
<br />
<br />
<br />
<br />
# https://www.ise.ncsu.edu/fuzzy-neural/wp-content/uploads/sites/9/2019/08/LP-Abs-Value.pdf<br />
# Vanderbei R.J. (2008) Financial Applications. In: Linear Programming. International Series in Operations Research & Management Science, vol 114. Springer, Boston, MA. <nowiki>https://doi.org/10.1007/978-0-387-74388-2_13</nowiki> https://link.springer.com/chapter/10.1007/978-0-387-74388-2_13<br />
# Sasahara, Hampei & Hayashi, Kazunori & Nagahara, Masaaki. (2016). Symbol Detection for Faster-Than-Nyquist Signaling by Sum-of-Absolute-Values Optimization. IEEE Signal Processing Letters. PP. 1-1. 10.1109/LSP.2016.2625839. https://www.researchgate.net/publication/309745511_Symbol_Detection_for_Faster-Than-Nyquist_Signaling_by_Sum-of-Absolute-Values_Optimization</div>Yilian Yinhttps://optimization.cbe.cornell.edu/index.php?title=Optimization_with_absolute_values&diff=2367Optimization with absolute values2020-12-12T20:32:38Z<p>Yilian Yin: </p>
<hr />
<div>Authors: Matthew Chan (mdc297), Yilian Yin (yy896), Brian Amado (ba392), Peter Williams (pmw99), Dewei Xiao (dx58) - SYSEN 5800 Fall 2020<br />
<br />
Steward: Fengqi You<br />
<br />
== Introduction ==<br />
Absolute values can make it relatively difficult to determine the optimal solution when handled without first converting to standard form. This conversion of the objective function is a good first step in solving optimization problems with absolute values. As a result, one can go on to solve the problem using linear programming techniques.<br />
<br />
== Method ==<br />
<br />
=== Defining Absolute Values ===<br />
An absolute value of a real number can be described as its distance away from zero, or the non-negative magnitude of the number. Thus,<br />
<br />
<math>\displaystyle |x|={\begin{cases}-x,&{\text{if }}x<0\\x,&{\text{if }}x\geq 0\end{cases}}</math><br />
<br />
Absolute values can exist in linear optimization problems in two primary instances: in constraints and in the objective function.<br />
<br />
=== Absolute Values in Constraints ===<br />
Within constraints, absolute value relations can be transformed into one of the following forms:<br />
<br />
<math>\displaystyle |X| = 0</math><br />
<br />
<math>\displaystyle |X| \le C</math><br />
<br />
<math>\displaystyle |X| \ge C</math><br />
<br />
Where <math>\textstyle X</math> is a linear combination (<math>\textstyle ax_1 + bx_2 + ...</math> where <math>\textstyle a, b</math> are constants) and <math>\textstyle C</math> is a constant <math>\textstyle > 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| = 0</math> ====<br />
In this form, the only possible solution is if <math>\displaystyle X = 0</math> simplifying the constraint. Note that this solution also occurs if the constraint is in the form <math>\displaystyle |X| \le 0</math> due to the same conclusion that the only possible solution is <math>\textstyle X = 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| \le C</math> ====<br />
The second form a linear constraint can exist in is <math>\displaystyle |X|\leq C</math>. In this case, an equivalent feasible solution can be described by splitting the constraint into two:<br />
<br />
<math>\displaystyle X\leq C</math><br />
<br />
<math>\displaystyle -X\leq C</math><br />
<br />
The solution can be understood visually since <math>\textstyle X</math> must lie between <math>\textstyle -C</math> and <math>\textstyle C</math>, as shown below:<br />
<br />
[[File:Number Line X Less Than C.png|none|thumb]]<br />
<br />
==== Form when <math>\displaystyle |X| \ge C</math> ====<br />
Visually, the solution space for the last form is the complement of the second solution above, resulting in the following representation:[[File:Number Line for X Greater Than C.png|none|thumb]]In expression form, the solutions can be written as:<br />
<br />
<math>\displaystyle X\geq C</math><br />
<br />
<math>\displaystyle -X\geq C</math><br />
<br />
As seen visually, the feasible region has a gap and is thus non-convex. The two expressions also cannot hold simultaneously, so constraints of this form cannot be transformed directly into linear equations. <br />
<br />
An approach to reach a solution for this particular case exists in the form of Mixed-Integer Linear Programming, where only one of the equations above is “active”.<br />
<br />
The inequality can be reformulated into the following:<br />
<br />
<math>\displaystyle X + N*Y \ge C</math><br />
<br />
<math>\displaystyle -X + N*(1-Y) \ge C</math><br />
<br />
<math>\displaystyle Y = 0, 1</math><br />
<br />
With this new set of constraints, a large constant <math>\textstyle N</math> is introduced, along with a binary variable <math>\textstyle Y</math>. So long as <math>\textstyle N</math> is sufficiently larger than the upper bound of <math>\textstyle X + C</math>, the large constant multiplied with the binary variable ensures that one of the constraints must be satisfied. For instance, if <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math>\displaystyle X \ge C</math><br />
<br />
<math>\displaystyle -X + N \ge C</math><br />
<br />
Since <math>\textstyle N</math> is sufficiently large, the latter constraint will always be satisfied, leaving only one relation active: <math>\textstyle X \ge C</math>. Functionally, this allows for the XOR logical operation of <math>\textstyle X \geq C</math> and <math>\textstyle -X \geq C</math>.<br />
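This construction is easy to verify numerically. The sketch below (illustrative values <math>C = 5</math>, <math>N = 100</math>) brute-forces the check that a point <math>X</math> satisfies one of the big-M constraints for some binary <math>Y</math> exactly when <math>|X| \ge C</math>:<br />

```python
C = 5.0
N = 100.0   # "sufficiently large" constant

def feasible_milp(x):
    # X is feasible iff some choice of the binary Y satisfies both constraints.
    return any(x + N * y >= C and -x + N * (1 - y) >= C for y in (0, 1))

# Scan X over [-20, 20] and compare with the non-convex set |X| >= C.
for i in range(-200, 201):
    x = i / 10.0
    assert feasible_milp(x) == (abs(x) >= C)
```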
<br />
=== Absolute Values in Objective Functions ===<br />
In objective functions, to leverage transformations of absolute functions, all constraints must be linear.<br />
<br />
Similar to the case of absolute values in constraints, there are different approaches to the reformation of the objective function, depending on the satisfaction of sign constraints. The satisfaction of sign constraints is when the coefficient signs of the absolute terms must all be either:<br />
<br />
* Positive for a minimization problem<br />
* Negative for a maximization problem<br />
<br />
==== Sign Constraints are Satisfied ====<br />
At a high level, the transformation works similarly to the second case of absolute value in constraints – aiming to bound the solution space for the absolute value term with a new variable, <math>\textstyle Z</math>.<br />
<br />
If <math>\textstyle |X|</math> is the absolute value term in our objective function, two additional constraints are added to the linear program:<br />
<br />
<math>\displaystyle X\leq Z</math><br />
<br />
<math>\displaystyle -X\leq Z</math><br />
<br />
The <math>\textstyle |X|</math> term in the objective function is then replaced by <math>\textstyle Z</math>, relaxing the original function into a collection of linear constraints.<br />
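The substitution is exact at the optimum because the smallest <math>Z</math> satisfying both constraints is <math>\max(X, -X) = |X|</math>, as this minimal sketch illustrates:<br />

```python
def min_feasible_z(x):
    # Z must satisfy Z >= X and Z >= -X, so the tightest value is max(X, -X).
    return max(x, -x)

# For a minimization objective, Z is driven down to exactly |X|.
for x in (-3.5, -1.0, 0.0, 2.0, 7.25):
    assert min_feasible_z(x) == abs(x)
```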
<br />
==== Sign Constraints are Not Satisfied ====<br />
In order to transform problems where the coefficient signs of the absolute terms do not fulfill the conditions above, a similar conclusion is reached to that of the last case for absolute values in constraints – the use of integer variables is needed to reach an LP format. <br />
<br />
The following constraints need to be added to the problem:<br />
<br />
<math>\displaystyle X + N*Y \ge Z</math><br />
<br />
<math>\displaystyle -X + N*(1-Y) \ge Z</math><br />
<br />
<math>\displaystyle X \le Z</math><br />
<br />
<math>\displaystyle -X \le Z</math><br />
<br />
<math>\displaystyle Y = 0, 1</math><br />
<br />
Again, <math>\textstyle N</math> is a large constant, <math>\textstyle Z</math> is a replacement variable for <math>\textstyle |X|</math> in the objective function, and <math>\textstyle Y</math> is a binary variable. The first two constraints ensure that one and only one constraint is active while the other will be automatically satisfied, following the same logic as above. The third and fourth constraints ensure that <math>\textstyle Z</math> must be equal to <math>\textstyle |X|</math> and has either a positive or negative value. For instance, for the case of <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math>\displaystyle X \ge Z</math><br />
<br />
<math>\displaystyle -X + N \ge Z</math><br />
<br />
<math>\displaystyle X \le Z</math><br />
<br />
<math>\displaystyle -X \le Z</math><br />
<br />
As <math>\textstyle N</math> is sufficiently large (<math>\textstyle N</math> must be at least <math>\textstyle 2|X|</math> for this approach), the second constraint must be satisfied. Since <math>\textstyle Z</math> is non-negative, the fourth constraint must also be satisfied. The remaining constraints, <math>\textstyle X \ge Z</math> and <math>\textstyle X \le Z</math> can only be satisfied when <math>\textstyle Z = X</math> and is of non-negative signage. Together, these constraints will allow for the selection of the largest <math>\textstyle |X|</math> for maximization problems (or smallest for minimization problems).<br />
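The four inequality constraints together with the binary <math>Y</math> pin <math>Z</math> down exactly. The sketch below (illustrative <math>N = 100</math>) brute-forces, for several fixed values of <math>X</math>, the set of grid values of <math>Z</math> that are feasible for some binary <math>Y</math>, and finds only <math>|X|</math>:<br />

```python
N = 100.0

def feasible(z, x, y):
    # The four constraints from the reformulation above, for a fixed binary y.
    return (x + N * y >= z and -x + N * (1 - y) >= z
            and x <= z and -x <= z)

for x in (-7.0, -2.5, 0.0, 3.0, 9.5):
    zs = [k / 2.0 for k in range(-40, 41)]          # candidate Z grid, step 0.5
    feas = [z for z in zs if any(feasible(z, x, y) for y in (0, 1))]
    assert feas == [abs(x)]                         # exactly one feasible value
```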
<br />
==Numerical Example==<br />
* '''Example when all sign constraints are satisfied'''<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| + |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> -U_3 \le x_3 \le U_3 </math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 + U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
-&U_3 \le x_3 \le U_3 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
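The reported optimum can be sanity-checked without a solver by eliminating <math>x_3</math> through the equality constraint and brute-forcing a grid (a coarse numerical check, not a proof of optimality):<br />

```python
# Minimize 2|x1| + 3|x2| + |x3| with x1 + 2*x2 - 3*x3 <= 8 and
# 2*x1 - x2 + 4*x3 = 14; the equality pins down x3 = (14 - 2*x1 + x2) / 4.
best = float("inf")
for i in range(-40, 41):
    for j in range(-40, 41):
        x1, x2 = i / 4.0, j / 4.0          # grid step 0.25 over [-10, 10]
        x3 = (14 - 2 * x1 + x2) / 4.0      # enforce the equality constraint
        if x1 + 2 * x2 - 3 * x3 <= 8:      # keep only feasible points
            best = min(best, 2 * abs(x1) + 3 * abs(x2) + abs(x3))

assert abs(best - 3.5) < 1e-9              # attained at (x1, x2, x3) = (0, 0, 3.5)
```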
<br />
* '''Example when some sign constraints are not satisfied'''<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| - |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> x_3 + M*Y \ge U_3 </math><br />
<br />
<math> -x_3 + M*(1-Y) \ge U_3 </math><br />
<br />
<math> x_3 \le U_3 </math><br />
<br />
<math> -x_3 \le U_3 </math><br />
<br />
<math> Y = 0,1 </math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 - U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>-3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
<br />
== Applications ==<br />
<br />
<br />
Consider the problem: maximize <math>z = \sum_j c_j |x_j|</math> subject to <math>Ax=b</math>. This problem cannot, in general, be solved with the simplex method. The problem has a simplex-method solution (with unrestricted basis entry) only if the <math>c_j</math> are nonpositive (non-negative for minimizing problems).<br />
<br />
The primary application of absolute-value functionals in linear programming has been absolute-value or <math>L_1</math>-metric regression analysis. Such an application is always a minimization problem with all <math>c_j</math> equal to 1, so the required conditions for valid use of the simplex method are met. <br />
<br />
By reformulating the original problem into a Mixed-Integer Linear Program (MILP), we can utilize known programs to solve for the optimal solution(s). <br />
<br />
* '''Application in Finance: Portfolio Selection'''<br />
Here, the same reduction to a linear programming problem demonstrated in the Numerical Example section is applied again to reformulate the portfolio problem into a tractable form. An example is given below. <br />
<br />
<br />
<br />
A portfolio is determined by what fraction of one's assets to put into each investment. It can be denoted as a collection of nonnegative numbers <math>\textstyle x_j</math>, where <math> j = 1, 2,...,n </math>. Because each <math> \textstyle x_j </math> stands for a portion of the assets, the fractions sum to one. To find the mix of assets with the highest reward, let the positive parameter <math>\mu</math> denote the importance of risk relative to return, and let <math>\textstyle R_j</math> denote the return in the next time period on investment <math>j, j = 1, 2,..., n</math>. The total return one would obtain from the investment is <math>R = \sum_{j}\!x_j\!R_j </math>. The expected return is <math>\mathbb{E}\!R = \sum_{j}\!x_j\mathbb{E}\!R_j </math>. And the Mean Absolute Deviation from the Mean (MAD) is <math>\mathbb{E}\left\vert \!R - \mathbb{E}\!R \right\vert = \mathbb{E}\left\vert \sum_{j}\!x_j\tilde{R}_j \right\vert </math>. <br />
<br />
maximize <math display="inline">\mu\sum_j\!x_j\mathbb{E}\!R_j - \mathbb{E}\left\vert \sum_j \!x_j\tilde{R}_j \right\vert </math><br />
<br />
subject to <math>\sum_j\!x_j = 1</math><br />
<br />
<math>x_j \geq 0,</math> <math> j = 1,2,\ldots,n</math><br />
<br />
where <math>\tilde{R}_j = \!R_j - \mathbb{E}\!R_j </math><br />
<br />
<br />
<br />
This problem is clearly not yet a linear programming problem. As in the numerical example above, we replace each absolute value with a new variable and impose inequality constraints ensuring that, at an optimal solution, the new variable equals the appropriate absolute value. To simplify the program, an average of the historical returns can be taken to estimate the expected return: <math>r_j = \mathbb{E}\!R_j = \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!R_j(t)</math>. The objective function then becomes: <math>\mu\sum_{j}\!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert</math><br />
<br />
Now, replacing <math>\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert</math> with a new variable <math>y_t</math>, the problem can be rewritten as:<br />
<br />
<br />
maximize <math>\mu \sum_j \!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!y_t<br />
<br />
</math><br />
<br />
subject to <math>-\!y_t \leq \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \leq y_t, \quad t = 1, 2,\ldots,T</math><br />
<br />
<math>\sum_j \!x_j = 1</math><br />
<br />
<math>x_j\geq 0, \quad j = 1, 2,\ldots,n</math><br />
<br />
<math>y_t \geq 0, \quad t = 1, 2,\ldots,T</math><br />
<br />
<br />
Finally, after these substitutions, the original problem is converted into a linear program, which is far easier to solve.<br />
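To make this concrete, a small sketch with <code>scipy.optimize.linprog</code> is given below; the return history <math>R_j(t)</math> and the value <math>\mu = 1</math> are hypothetical choices for illustration only:<br />

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical return history R[t, j]: T = 4 periods, n = 3 assets.
R = np.array([
    [0.02, 0.10, 0.01],
    [0.01, -0.05, 0.01],
    [0.03, 0.20, 0.01],
    [0.02, -0.01, 0.01],
])
T, n = R.shape
mu = 1.0                 # assumed risk/return trade-off parameter
r = R.mean(axis=0)       # mean returns r_j
D = R - r                # centered returns R_j(t) - r_j

# Decision vector [x_1..x_n, y_1..y_T]; linprog minimizes, so negate.
c = np.concatenate([-mu * r, np.ones(T) / T])

#  sum_j D[t,j]*x_j - y_t <= 0   and   -sum_j D[t,j]*x_j - y_t <= 0
A_ub = np.block([[D, -np.eye(T)], [-D, -np.eye(T)]])
b_ub = np.zeros(2 * T)
A_eq = np.concatenate([np.ones(n), np.zeros(T)]).reshape(1, -1)  # sum x_j = 1

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (n + T))
weights = res.x[:n]
print(weights, -res.fun)  # portfolio weights and reward-minus-MAD value
```

At an optimum, each <math>y_t</math> equals the absolute deviation in period <math>t</math>, exactly as the inequality pair above requires.<br />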
<br />
<br />
*'''Data transfer rate'''<br />
Faster-than-Nyquist signaling (FTNS) is a framework for transmitting signals beyond the Nyquist rate. The reference for this section proposed a 24.7% faster symbol rate by utilizing sum-of-absolute-values (SOAV) optimization.<br />
<br />
The initial model is defined as follows:<br />
<math> x_0(t) = \sum_{n=1}^{N} x_{n,0}\,h_n(t), \quad t \in [0,T] </math><br />
<br />
where <math>t \in \mathbb{R}</math> denotes the continuous time index, <math>N \in \mathbb{N}</math> is the number of transmitted symbols in each transmission period, <math>T > 0</math> is the interval of one period, <math>x_{n,0} \in \{+1, -1\}</math> are independent and identically distributed (i.i.d.) binary symbols [i.e., binary phase shift keying (BPSK)], and <math>h_n(t)</math> <math>(n = 1,\ldots,N)</math> are the modulation pulses.<br />
<br />
By reformulating this as a convex optimization problem and repeatedly applying Newton's method to the absolute-value terms, approximate solutions can be obtained:<br />
<math> \min_{z \in \mathbb{R}^N} \; \lambda \left\| y - Hz \right\|_2^2 + \frac{1}{2} \left\| z - \mathbf{1}_N \right\|_1 + \frac{1}{2} \left\| z + \mathbf{1}_N \right\|_1 </math><br />
<br />
<br />
== Conclusion ==<br />
The presence of an absolute value within the objective function prevents the use of certain optimization methods. Solving these problems requires that the function be manipulated in order to continue with linear programming techniques like the simplex method.<br />
<br />
== References ==<br />
# "Absolute Values." ''lp_solve'', http://lpsolve.sourceforge.net/5.1/absolute.htm. Accessed 20 Nov. 2020.<br />
# ''Optimization Methods in Management Science / Operations Research.'' Massachusetts Institute of Technology, Spring 2013, https://ocw.mit.edu/courses/sloan-school-of-management/15-053-optimization-methods-in-management-science-spring-2013/tutorials/MIT15_053S13_tut04.pdf. Accessed 20 Nov. 2020.<br />
# https://www.ise.ncsu.edu/fuzzy-neural/wp-content/uploads/sites/9/2019/08/LP-Abs-Value.pdf<br />
# Vanderbei R.J. (2008) Financial Applications. In: Linear Programming. International Series in Operations Research & Management Science, vol 114. Springer, Boston, MA. <nowiki>https://doi.org/10.1007/978-0-387-74388-2_13</nowiki></div>
<hr />
<div>Authors: Matthew Chan (mdc297), Yilian Yin (yy896), Brian Amado (ba392), Peter Williams (pmw99), Dewei Xiao (dx58) - SYSEN 5800 Fall 2020<br />
<br />
Steward: Fengqi You<br />
<br />
== Introduction ==<br />
Absolute values can make it relatively difficult to determine the optimal solution when handled without first converting to standard form. This conversion of the objective function is a good first step in solving optimization problems with absolute values. As a result, one can go on to solve the problem using linear programming techniques.<br />
<br />
== Method ==<br />
<br />
=== Defining Absolute Values ===<br />
An absolute value of a real number can be described as its distance away from zero, or the non-negative magnitude of the number. Thus,<br />
<br />
<math>\displaystyle |x|={\begin{cases}-x,&{\text{if }}x<0\\x,&{\text{if }}x\geq 0\end{cases}}</math><br />
<br />
Absolute values can exist in linear optimization problems in two primary instances: in constraints and in the objective function.<br />
<br />
=== Absolute Values in Constraints ===<br />
Within constraints, absolute value relations can be transformed into one of the following forms:<br />
<br />
<math>\displaystyle |X| = 0</math><br />
<br />
<math>\displaystyle |X| \le C</math><br />
<br />
<math>\displaystyle |X| \ge C</math><br />
<br />
Where <math>\textstyle X</math> is a linear combination (<math>\textstyle ax_1 + bx_2 + ...</math> where <math>\textstyle a, b</math> are constants) and <math>\textstyle C</math> is a constant <math>\textstyle > 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| = 0</math> ====<br />
In this form, the only possible solution is if <math>\displaystyle X = 0</math> simplifying the constraint. Note that this solution also occurs if the constraint is in the form <math>\displaystyle |X| \le 0</math> due to the same conclusion that the only possible solution is <math>\textstyle X = 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| \le C</math> ====<br />
The second form a linear constraint can exist in is <math>\displaystyle |X|\leq C</math>. In this case, an equivalent feasible solution can be described by splitting the constraint into two:<br />
<br />
<math>\displaystyle X\leq C</math><br />
<br />
<math>\displaystyle -X\leq C</math><br />
<br />
The solution can be understood visually since <math>\textstyle X</math> must lie between <math>\textstyle -C</math> and <math>\textstyle C</math>, as shown below:<br />
<br />
[[File:Number Line X Less Than C.png|none|thumb]]<br />
<br />
==== Form when <math>\displaystyle |X| \ge C</math> ====<br />
Visually, the solution space for the last form is the complement of the second solution above, resulting in the following representation:[[File:Number Line for X Greater Than C.png|none|thumb]]In expression form, the solutions can be written as:<br />
<br />
<math>\displaystyle X\geq C</math><br />
<br />
<math>\displaystyle -X\geq C</math><br />
<br />
As seen visually, the feasible region has a gap and is thus non-convex. The expressions also make it impossible for both to hold true simultaneously. This means that constraints in this form cannot be transformed into linear equations. <br />
<br />
An approach to reach a solution for this particular case exists in the form of Mixed-Integer Linear Programming, where only one of the equations above is “active”.<br />
<br />
The inequality can be reformulated into the following:<br />
<br />
<math>\displaystyle X + N*Y \ge C</math><br />
<br />
<math>\displaystyle -X + N*(1-Y) \ge C</math><br />
<br />
<math>\displaystyle Y = 0, 1</math><br />
<br />
With this new set of constraints, a large constant <math>\textstyle N</math> is introduced, along with a binary variable <math>\textstyle Y</math>. So long as <math>\textstyle N</math> is sufficiently larger than the upper bound of <math>\textstyle X + C</math>, the large constant multiplied with the binary variable ensures that one of the constraints must be satisfied. For instance, if <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math>\displaystyle X \ge C</math><br />
<br />
<math>\displaystyle -X + N \ge C</math><br />
<br />
Since <math>\textstyle N</math> is sufficiently large, the latter constraint will always be satisfied, leaving only one relation active: <math>\textstyle X \ge C</math>. Functionally, this allows for the XOR logical operation of <math>\textstyle X \geq C</math> and <math>\textstyle -X \geq C</math>.<br />
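The effect of this big-M construction can be verified with a quick brute-force check; the constants below (<math>C = 5</math>, <math>N = 1000</math>) are illustrative values chosen for this sketch:<br />

```python
# Brute-force check that the big-M reformulation with a binary
# selector y reproduces the non-convex constraint |x| >= C.
C = 5.0
N = 1000.0  # "big" constant; must dominate the magnitudes of x and C

def satisfies_reformulation(x):
    # x is feasible iff some choice of y in {0, 1} satisfies
    # both  x + N*y >= C  and  -x + N*(1 - y) >= C
    return any(x + N * y >= C and -x + N * (1 - y) >= C for y in (0, 1))

for x in (-10.0, -5.0, -4.9, 0.0, 4.9, 5.0, 10.0):
    assert satisfies_reformulation(x) == (abs(x) >= C)
print("big-M reformulation matches |x| >= C on all test points")
```

With <math>\textstyle y = 1</math> the first constraint is slack and <math>\textstyle -x \ge C</math> binds; with <math>\textstyle y = 0</math> the roles reverse, which is exactly the XOR behaviour described above.<br />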
<br />
=== Absolute Values in Objective Functions ===<br />
To apply these transformations to absolute values in the objective function, all constraints must be linear.<br />
<br />
Similar to the case of absolute values in constraints, there are different approaches to reforming the objective function, depending on whether the sign constraints are satisfied. The sign constraints are satisfied when the coefficients of the absolute-value terms are all either:<br />
<br />
* Positive for a minimization problem<br />
* Negative for a maximization problem<br />
<br />
==== Sign Constraints are Satisfied ====<br />
At a high level, the transformation works similarly to the second case of absolute value in constraints – aiming to bound the solution space for the absolute value term with a new variable, <math>\textstyle Z</math>.<br />
<br />
If <math>\textstyle |X|</math> is the absolute value term in our objective function, two additional constraints are added to the linear program:<br />
<br />
<math>\displaystyle X\leq Z</math><br />
<br />
<math>\displaystyle -X\leq Z</math><br />
<br />
The <math>\textstyle |X|</math> term in the objective function is then replaced by <math>\textstyle Z</math>, relaxing the original function into a collection of linear constraints.<br />
<br />
==== Sign Constraints are Not Satisfied ====<br />
In order to transform problems where the coefficient signs of the absolute terms do not fulfill the conditions above, a similar conclusion is reached to that of the last case for absolute values in constraints – the use of integer variables is needed to reach an LP format. <br />
<br />
The following constraints need to be added to the problem:<br />
<br />
<math>\displaystyle X + N*Y \ge Z</math><br />
<br />
<math>\displaystyle -X + N*(1-Y) \ge Z</math><br />
<br />
<math>\displaystyle X \le Z</math><br />
<br />
<math>\displaystyle -X \le Z</math><br />
<br />
<math>\displaystyle Y = 0, 1</math><br />
<br />
Again, <math>\textstyle N</math> is a large constant, <math>\textstyle Z</math> is a replacement variable for <math>\textstyle |X|</math> in the objective function, and <math>\textstyle Y</math> is a binary variable. The first two constraints ensure that one and only one constraint is active while the other will be automatically satisfied, following the same logic as above. The third and fourth constraints ensure that <math>\textstyle Z</math> must be equal to <math>\textstyle |X|</math> and has either a positive or negative value. For instance, for the case of <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math>\displaystyle X \ge Z</math><br />
<br />
<math>\displaystyle -X + N \ge Z</math><br />
<br />
<math>\displaystyle X \le Z</math><br />
<br />
<math>\displaystyle -X \le Z</math><br />
<br />
As <math>\textstyle N</math> is sufficiently large (<math>\textstyle N</math> must be at least <math>\textstyle 2|X|</math> for this approach), the second constraint must be satisfied. Since <math>\textstyle Z</math> is non-negative, the fourth constraint must also be satisfied. The remaining constraints, <math>\textstyle X \ge Z</math> and <math>\textstyle X \le Z</math>, can only be satisfied when <math>\textstyle Z = X</math> and <math>\textstyle X</math> is non-negative. Together, these constraints allow for the selection of the largest <math>\textstyle |X|</math> for maximization problems (or smallest for minimization problems).<br />
<br />
==Numerical Example==<br />
* '''Example when all sign constraints are satisfied'''<br />
<math> \begin{align}<br />
\min{2|x_1| + 3|x_2| + |x_3|} \\<br />
s.t. x_1 + 2x_2 - 3x_3 \le 8 \\<br />
2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> -U_3 \le x_3 \le U_3 </math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<br />
<math> \begin{align}<br />
\min{ 2U_1 + 3U_2 + U_3} \\<br />
s.t. x_1 + 2x_2 - 3x_3 \le 8 \\<br />
2x_1 - x_2 + 4x_3= 14 \\<br />
-U_1 \le x_1 \le U_1 \\<br />
-U_2 \le x_2 \le U_2 \\<br />
-U_3 \le x_3 \le U_3 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
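This result can be checked numerically. A minimal sketch with <code>scipy.optimize.linprog</code> follows (SciPy is assumed to be available; variables are ordered as <math>x_1, x_2, x_3, U_1, U_2, U_3</math>):<br />

```python
from scipy.optimize import linprog

# minimize 2*U1 + 3*U2 + U3 over [x1, x2, x3, U1, U2, U3]
c = [0, 0, 0, 2, 3, 1]
A_ub = [
    [1, 2, -3, 0, 0, 0],   # x1 + 2*x2 - 3*x3 <= 8
    [1, 0, 0, -1, 0, 0],   #  x1 <= U1
    [-1, 0, 0, -1, 0, 0],  # -x1 <= U1
    [0, 1, 0, 0, -1, 0],   #  x2 <= U2
    [0, -1, 0, 0, -1, 0],  # -x2 <= U2
    [0, 0, 1, 0, 0, -1],   #  x3 <= U3
    [0, 0, -1, 0, 0, -1],  # -x3 <= U3
]
b_ub = [8, 0, 0, 0, 0, 0, 0]
A_eq = [[2, -1, 4, 0, 0, 0]]  # 2*x1 - x2 + 4*x3 = 14
b_eq = [14]
bounds = [(None, None)] * 3 + [(0, None)] * 3  # x free, U >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.fun)  # optimal value 3.5, attained at x = (0, 0, 3.5)
```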
<br />
* '''Example when some sign constraints are not satisfied'''<br />
<math> \begin{align}<br />
\min{2|x_1| + 3|x_2| - |x_3|} \\<br />
s.t. x_1 + 2x_2 - 3x_3 \le 8 \\<br />
2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> x_3 + M*Y \ge U_3 </math><br />
<br />
<math> -x_3 + M*(1-Y) \ge U_3 </math><br />
<br />
<math> x_3 \le U_3 </math><br />
<br />
<math> -x_3 \le U_3 </math><br />
<br />
<math> Y = 0,1 </math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<br />
<math> \begin{align}<br />
\min{ 2U_1 + 3U_2 - U_3} \\<br />
s.t. x_1 + 2x_2 - 3x_3 \le 8 \\<br />
2x_1 - x_2 + 4x_3= 14 \\<br />
-U_1 \le x_1 \le U_1 \\<br />
-U_2 \le x_2 \le U_2 \\<br />
x_3 + M*Y \ge U_3 \\<br />
-x_3 + M*(1-Y) \ge U_3 \\<br />
x_3 \le U_3 \\<br />
-x_3 \le U_3 \\<br />
Y = 0,1 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>-3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
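Because of the binary variable <math>Y</math>, this is a mixed-integer program; a simple way to check it with only an LP solver is to enumerate <math>Y \in \{0, 1\}</math> and keep the better solution. A sketch follows (SciPy is assumed to be available, with <math>M = 100</math> chosen large enough for this instance):<br />

```python
from scipy.optimize import linprog

M = 100.0  # big-M constant, assumed large enough for this instance
best = None
for Y in (0, 1):  # enumerate the binary variable; solve an LP for each
    # Variables: [x1, x2, x3, U1, U2, U3]; minimize 2*U1 + 3*U2 - U3
    c = [0, 0, 0, 2, 3, -1]
    A_ub = [
        [1, 2, -3, 0, 0, 0],   # x1 + 2*x2 - 3*x3 <= 8
        [1, 0, 0, -1, 0, 0],   #  x1 <= U1
        [-1, 0, 0, -1, 0, 0],  # -x1 <= U1
        [0, 1, 0, 0, -1, 0],   #  x2 <= U2
        [0, -1, 0, 0, -1, 0],  # -x2 <= U2
        [0, 0, -1, 0, 0, 1],   # U3 - x3 <= M*Y
        [0, 0, 1, 0, 0, 1],    # U3 + x3 <= M*(1 - Y)
        [0, 0, 1, 0, 0, -1],   #  x3 <= U3
        [0, 0, -1, 0, 0, -1],  # -x3 <= U3
    ]
    b_ub = [8, 0, 0, 0, 0, M * Y, M * (1 - Y), 0, 0]
    A_eq = [[2, -1, 4, 0, 0, 0]]  # 2*x1 - x2 + 4*x3 = 14
    b_eq = [14]
    bounds = [(None, None)] * 3 + [(0, None)] * 3
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds)
    if res.status == 0 and (best is None or res.fun < best):
        best = res.fun
print(best)  # -3.5
```

A dedicated MILP solver would handle <math>Y</math> directly; explicit enumeration is shown here only because the problem has a single binary variable.<br />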
<br />
== Applications ==<br />
<br />
<br />
Consider the problem <math>\max \quad z = \sum_j c_j\,|x_j| \quad \text{subject to} \quad Ax = b</math>. This problem cannot, in general, be solved with the simplex method. It has a simplex-method solution (with unrestricted basis entry) only if the <math>c_j</math> are nonpositive (non-negative for minimization problems).<br />
<br />
The primary application of absolute-value functionals in linear programming has been absolute-value, or <math>L_1</math>-metric, regression analysis. Such applications are always minimization problems with all <math>c_j</math> equal to 1, so the conditions for valid use of the simplex method are met. <br />
<br />
By reformulating the original problem into a Mixed-Integer Linear Program (MILP), we can utilize known programs to solve for the optimal solution(s). <br />
<br />
* '''Application in Finance: Portfolio Selection'''<br />
Here, the same technique used in the Numerical Example section to perform a '''Reduction to a Linear Programming Problem''' is applied again to reformulate the problem into a solvable form. An example is given below. <br />
<br />
<br />
<br />
A portfolio is determined by what fraction of one's assets to put into each investment. It can be denoted as a collection of nonnegative numbers <math>\textstyle x_j</math>, where <math> j = 1, 2, \ldots, n </math>. Because each <math>\textstyle x_j</math> stands for a portion of the assets, they sum to one. To find the mix of assets with the highest reward, let the positive parameter <math>\mu</math> denote the importance of reward relative to risk, and let <math>\textstyle R_j</math> denote the return in the next time period on investment <math>j</math>, <math> j = 1, 2, \ldots, n</math>. The total return one would obtain from the investment is <math>R = \sum_{j}\!x_j\!R_j </math>. The expected return is <math>\mathbb{E}\!R = \sum_{j}\!x_j\mathbb{E}\!R_j </math>, and the mean absolute deviation from the mean (MAD) is <math>\mathbb{E}\left\vert \!R - \mathbb{E}\!R \right\vert = \mathbb{E}\left\vert \sum_{j}\!x_j\tilde{R}_j \right\vert </math>. <br />
<br />
maximize <math display="inline">\mu\sum_j\!x_j\mathbb{E}\!R_j - \mathbb{E}\left\vert \sum_j \!x_j\tilde{R}_j \right\vert </math><br />
<br />
subject to <math>\sum_j\!x_j = 1</math><br />
<br />
<math>x_j \geq 0,</math> <math> j = 1, 2, \ldots, n</math><br />
<br />
where <math>\tilde{R}_j = \!R_j - \mathbb{E}\!R_j </math><br />
<br />
<br />
<br />
This is not yet a linear programming problem. As in the numerical example above, each absolute value is replaced with a new variable, and inequality constraints are imposed to ensure that the new variable equals the appropriate absolute value at an optimal solution. To simplify the program, the mean expected return can be estimated as the average of the historical returns: <math>r_j = \mathbb{E}\!R_j = \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!R_j(t)</math>. The objective function then becomes: <math>\mu\sum_{j}\!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert</math><br />
<br />
Now, replace <math>\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert</math> with a new variable <math>y_t</math>, and the problem can be rewritten as:<br />
<br />
<br />
maximize <math>\mu \sum_j \!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!y_t</math><br />
<br />
subject to <math>-\!y_t \leq \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \leq y_t, \quad t = 1, 2, \ldots, T</math><br />
<br />
<math>\sum_j \!x_j = 1</math><br />
<br />
<math>x_j \geq 0, \quad j = 1, 2, \ldots, n</math><br />
<br />
<math>y_t \geq 0, \quad t = 1, 2, \ldots, T</math><br />
<br />
<br />
Finally, after these substitutions and simplifications, the original problem has been converted into a linear program, which is far easier to solve.<br />
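The resulting linear program can be assembled and solved numerically. The sketch below uses SciPy's <code>linprog</code> with a small synthetic table of historical returns; the data, dimensions, and the value of <math>\mu</math> are illustrative assumptions:<br />

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
T, n = 12, 4                                   # assumed: 12 periods, 4 assets
R = 1.0 + 0.05 * rng.standard_normal((T, n))   # synthetic historical returns R_j(t)
r = R.mean(axis=0)                             # mean expected returns r_j
D = R - r                                      # deviations R_j(t) - r_j
mu = 2.0                                       # assumed reward-vs-risk weight

# Variables: x_1..x_n, y_1..y_T; maximize mu*sum(x*r) - (1/T)*sum(y),
# which linprog handles as a minimization of the negated objective.
c = np.concatenate([-mu * r, np.full(T, 1.0 / T)])

# -y_t <= sum_j x_j * D[t, j] <= y_t  ->  two "<=" rows per period t
A_ub = np.vstack([
    np.hstack([ D, -np.eye(T)]),   #  D @ x - y <= 0
    np.hstack([-D, -np.eye(T)]),   # -D @ x - y <= 0
])
b_ub = np.zeros(2 * T)

A_eq = np.hstack([np.ones(n), np.zeros(T)])[None, :]  # sum_j x_j = 1
b_eq = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + T))
print(res.x[:n])  # optimal portfolio weights, summing to 1
```

At the optimum each <math>y_t</math> equals the corresponding absolute deviation, since any slack would only add cost.<br />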
<br />
== Conclusion ==<br />
The presence of an absolute value within the objective function prevents the use of certain optimization methods. Solving these problems requires that the function be manipulated in order to continue with linear programming techniques like the simplex method.<br />
<br />
== References ==<br />
# "Absolute Values." ''lp_solve'', http://lpsolve.sourceforge.net/5.1/absolute.htm. Accessed 20 Nov. 2020.<br />
# ''Optimization Methods in Management Science / Operations Research.'' Massachusetts Institute of Technology, Spring 2013, https://ocw.mit.edu/courses/sloan-school-of-management/15-053-optimization-methods-in-management-science-spring-2013/tutorials/MIT15_053S13_tut04.pdf. Accessed 20 Nov. 2020.<br />
# https://www.ise.ncsu.edu/fuzzy-neural/wp-content/uploads/sites/9/2019/08/LP-Abs-Value.pdf<br />
# Vanderbei R.J. (2008) Financial Applications. In: Linear Programming. International Series in Operations Research & Management Science, vol 114. Springer, Boston, MA. <nowiki>https://doi.org/10.1007/978-0-387-74388-2_13</nowiki></div>Yilian Yinhttps://optimization.cbe.cornell.edu/index.php?title=Optimization_with_absolute_values&diff=2258Optimization with absolute values2020-12-10T01:59:00Z<p>Yilian Yin: </p>
<hr />
<div>Authors: Matthew Chan (mdc297), Yilian Yin (yy896), Brian Amado (ba392), Peter (pmw99), Dewei Xiao (dx58) - SYSEN 5800 Fall 2020<br />
<br />
Steward: Fengqi You<br />
<br />
== Introduction ==<br />
Absolute values can make it relatively difficult to determine the optimal solution when handled without first converting to standard form. This conversion of the objective function is a good first step in solving optimization problems with absolute values. As a result, one can go on to solve the problem using linear programming techniques.<br />
<br />
== Method ==<br />
<br />
=== Defining Absolute Values ===<br />
An absolute value of a real number can be described as its distance away from zero, or the non-negative magnitude of the number. Thus,<br />
<br />
<math>|x| = \begin{cases} -x, & \text{if }x < 0 \\ x, & \text{if }x \ge 0 \end{cases} </math><br />
<br />
Absolute values can exist in optimization problems in two primary instances: in constraints and in the objective function.<br />
<br />
=== Absolute Values in Constraints ===<br />
Within linear equations, linear constraints can exist in several forms.<br />
<br />
The first form exists as <math>|X| = 0 </math>, where <math display="inline">X </math> is a linear combination of variables.<br />
<br />
In this case, the only solution is if <math>|X| = 0 </math>, simplifying the constraint to <math>X = 0 </math>. Note that the same conclusion holds if the constraint is of the form <math>|X| \le 0 </math> (the only solution being <math>X = 0 </math>).<br />
<br />
The second form a linear constraint can exist in is <math>|X| \le C </math> where <math display="inline">X </math> remains a linear combination of variables and constant <math display="inline">C > 0 </math>.<br />
<br />
In this case, we can describe an equivalent feasible solution by splitting the inequality into<br />
<br />
<math>X \le C </math><br />
<br />
<math>-X \le C </math><br />
<br />
We can understand this visually as the solution <math display="inline">X </math> must lie between <math display="inline">-C<br />
</math> and <math display="inline">C </math>, as shown below:<br />
[[File:Number Line X Less Than C.png|none|thumb]]<br />
<br />
The last case for linear constraints is when <math>|X| \ge C </math>.<br />
<br />
Visually, the solution space is the complement of the second solution above, resulting in the following representation:<br />
[[File:Number Line for X Greater Than C.png|none|thumb]]<br />
<br />
In expression form, the solutions can be written as:<br />
<br />
<math>X \ge C </math><br />
<br />
<math>-X \ge C </math><br />
<br />
As seen visually, the feasible region has a gap and is thus non-convex. The expressions also make it impossible for both to hold true simultaneously. This means that it is not possible to transform constraints of this form into linear equations. An approach to reach a solution for this particular case exists in the form of [[Mixed-Integer Linear Programming]], where only one of the equations above is “active”.<br />
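As a sketch of that mixed-integer approach, the disjunction <math>X \ge C</math> or <math>-X \ge C</math> can be encoded with one binary variable and a big-M constant. The toy problem below (minimize <math>x</math> subject to <math>|x| \ge 2</math> and <math>-4 \le x \le 4</math>) and its constants are illustrative assumptions, solved with SciPy's <code>milp</code>:<br />

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

M = 100.0  # assumed big-M, large enough to deactivate one branch
# Variables: x, b (binary). b = 0 activates x >= 2; b = 1 activates -x >= 2.
c = np.array([1.0, 0.0])          # minimize x

A = np.array([
    [-1.0, -M],   #  x >= 2 - M*b      ->  -x - M*b <= -2
    [ 1.0,  M],   # -x >= 2 - M*(1-b)  ->   x + M*b <= M - 2
])
con = LinearConstraint(A, -np.inf, [-2.0, M - 2.0])

res = milp(c=c, constraints=[con],
           integrality=[0, 1],                  # b is binary
           bounds=Bounds([-4.0, 0.0], [4.0, 1.0]))
print(res.x)  # x = -4 on the branch b = 1 (the region x <= -2)
```

Exactly one of the two inequalities is binding for a given value of the binary variable; the other is relaxed by the big-M term.<br />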
<br />
=== Absolute Values in Objective Functions ===<br />
In objective functions, to leverage transformations of absolute functions, several conditions must be fulfilled:<br />
<br />
# First, the constraints must all be linear.<br />
# Secondly, the coefficient signs of the absolute terms must all be:<br />
#* Positive for a minimization problem<br />
#* Negative for a maximization problem<br />
<br />
At a high level, the transformation works similarly to the second case of absolute value in constraints – we aim to bound the solution space for the absolute value term with a new variable, <math display="inline">Y </math>.<br />
<br />
If <math>|X| </math> is the absolute value term in our objective function, where <math display="inline">X </math> is a linear combination of variables,<br />
<br />
Two additional constraints are added to the linear program:<br />
<br />
<math>X \le Y </math><br />
<br />
<math>-X \le Y </math><br />
<br />
This is a relaxation of the original function into a collection of linear constraints.<br />
<br />
In order to transform problems where the coefficient signs of the absolute terms do not fulfill the conditions above, a similar conclusion is reached to that of the last case for absolute values in constraints – the use of integer variables is needed to reach an LP format.<br />
<br />
==Numerical Example==<br />
<br />
<math>\min{|x_1| + 2|x_2| + |x_3|} </math><br />
<br />
<math> \begin{align}<br />
\ s.t. x_1 + x_2 - x_3 \le 10 \\<br />
x_1 - 3x_2 + 2x_3= 12<br />
\end{align}</math><br />
<br />
We replace the absolute value quantities with a single variable:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> -U_3 \le x_3 \le U_3 </math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math>\min{ U_1 + 2U_2 + U_3} </math><br />
<br />
<math> \begin{align}<br />
\ s.t. x_1 + x_2 - x_3 \le 10 \\<br />
x_1 - 3x_2 + 2x_3= 12<br />
\end{align}</math><br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> -U_3 \le x_3 \le U_3 </math><br />
<br />
The optimum value for the objective function is <math>6</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 6 </math>.<br />
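The reformulated linear program above can be verified numerically; the sketch below uses SciPy's <code>linprog</code> (the variable ordering is an incidental choice):<br />

```python
import numpy as np
from scipy.optimize import linprog

# Variables: x1, x2, x3, U1, U2, U3
c = np.array([0, 0, 0, 1, 2, 1], dtype=float)  # min U1 + 2*U2 + U3

A_ub = np.array([
    [ 1,  1, -1,  0,  0,  0],  # x1 + x2 - x3 <= 10
    [ 1,  0,  0, -1,  0,  0],  # x1 <= U1
    [-1,  0,  0, -1,  0,  0],  # -x1 <= U1
    [ 0,  1,  0,  0, -1,  0],  # x2 <= U2
    [ 0, -1,  0,  0, -1,  0],  # -x2 <= U2
    [ 0,  0,  1,  0,  0, -1],  # x3 <= U3
    [ 0,  0, -1,  0,  0, -1],  # -x3 <= U3
], dtype=float)
b_ub = np.array([10, 0, 0, 0, 0, 0, 0], dtype=float)

A_eq = np.array([[1, -3, 2, 0, 0, 0]], dtype=float)  # x1 - 3*x2 + 2*x3 = 12
b_eq = [12.0]

bounds = [(None, None)] * 3 + [(0, None)] * 3  # x_i free, U_i >= 0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.fun, res.x[:3])  # objective 6 at x1 = x2 = 0, x3 = 6
```

No binary variable is needed here because every absolute-value term appears with a positive coefficient in a minimization.<br />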
<br />
== Applications ==<br />
There is no single dedicated application of optimization with absolute values; rather, absolute values must be handled whenever they arise, since the simplex method cannot process them directly.<br />
<br />
Consider the problem of maximizing <math>z = \textstyle\sum_j c_j |x_j|</math> subject to <math>Ax=b</math>. This problem cannot, in general, be solved with the simplex method. The problem has a simplex method solution (with unrestricted basis entry) only if the <math>c_j</math> are nonpositive (non-negative for minimization problems).<br />
<br />
The primary application of absolute-value functionals in linear programming has been absolute-value (<math>L_1</math>-metric) regression analysis. Such an application is always a minimization problem with all <math>c_j</math> equal to 1, so the required conditions for valid use of the simplex method are met. <br />
<br />
By reformulating the original problem into a Mixed-Integer Linear Program (MILP), in most cases it can be solved with modeling tools such as GAMS, Pyomo, or JuliaOpt. <br />
<br />
* '''Application in Finance: Portfolio Selection'''<br />
Here, the same reduction used in the Numerical Example section is applied again to reformulate the problem as a linear program. An example is given below. <br />
<br />
<br />
<br />
A portfolio is determined by what fraction of one's assets to put into each investment. It can be denoted as a collection of nonnegative numbers <math>\textstyle x_j</math>, where <math> j = 1, 2, \ldots, n </math>. Because each <math>\textstyle x_j</math> stands for a portion of the assets, they sum to one. To find the mix of assets with the highest reward, let the positive parameter <math>\mu</math> denote the importance of reward relative to risk, and let <math>\textstyle R_j</math> denote the return in the next time period on investment <math>j</math>, <math> j = 1, 2, \ldots, n</math>. The total return one would obtain from the investment is <math>R = \sum_{j}\!x_j\!R_j </math>. The expected return is <math>\mathbb{E}\!R = \sum_{j}\!x_j\mathbb{E}\!R_j </math>, and the mean absolute deviation from the mean (MAD) is <math>\mathbb{E}\left\vert \!R - \mathbb{E}\!R \right\vert = \mathbb{E}\left\vert \sum_{j}\!x_j\tilde{R}_j \right\vert </math>. <br />
<br />
maximize <math display="inline">\mu\sum_j\!x_j\mathbb{E}\!R_j - \mathbb{E}\left\vert \sum_j \!x_j\tilde{R}_j \right\vert </math><br />
<br />
subject to <math>\sum_j\!x_j = 1</math><br />
<br />
<math>x_j \geq 0,</math> <math> j = 1, 2, \ldots, n</math><br />
<br />
where <math>\tilde{R}_j = \!R_j - \mathbb{E}\!R_j </math><br />
<br />
<br />
<br />
This is not yet a linear programming problem. As in the numerical example above, each absolute value is replaced with a new variable, and inequality constraints are imposed to ensure that the new variable equals the appropriate absolute value at an optimal solution. To simplify the program, the mean expected return can be estimated as the average of the historical returns: <math>r_j = \mathbb{E}\!R_j = \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!R_j(t)</math>. The objective function then becomes: <math>\mu\sum_{j}\!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert</math><br />
<br />
Now, replace <math>\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert</math> with a new variable <math>y_t</math>, and the problem can be rewritten as:<br />
<br />
<br />
maximize <math>\mu \sum_j \!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!y_t</math><br />
<br />
subject to <math>-\!y_t \leq \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \leq y_t, \quad t = 1, 2, \ldots, T</math><br />
<br />
<math>\sum_j \!x_j = 1</math><br />
<br />
<math>x_j \geq 0, \quad j = 1, 2, \ldots, n</math><br />
<br />
<math>y_t \geq 0, \quad t = 1, 2, \ldots, T</math><br />
<br />
<br />
Finally, after these substitutions and simplifications, the original problem has been converted into a linear program, which is far easier to solve.<br />
<br />
<br />
== Conclusion ==<br />
The presence of an absolute value within the objective function prevents the use of certain optimization methods. Solving these problems requires that the function be manipulated in order to continue with linear programming techniques like the simplex method.<br />
<br />
== References ==<br />
# "Absolute Values." ''lp_solve'', http://lpsolve.sourceforge.net/5.1/absolute.htm. Accessed 20 Nov. 2020.<br />
# ''Optimization Methods in Management Science / Operations Research.'' Massachusetts Institute of Technology, Spring 2013, https://ocw.mit.edu/courses/sloan-school-of-management/15-053-optimization-methods-in-management-science-spring-2013/tutorials/MIT15_053S13_tut04.pdf. Accessed 20 Nov. 2020.<br />
# https://www.ise.ncsu.edu/fuzzy-neural/wp-content/uploads/sites/9/2019/08/LP-Abs-Value.pdf<br />
# Vanderbei R.J. (2008) Financial Applications. In: Linear Programming. International Series in Operations Research & Management Science, vol 114. Springer, Boston, MA. <nowiki>https://doi.org/10.1007/978-0-387-74388-2_13</nowiki></div>Yilian Yinhttps://optimization.cbe.cornell.edu/index.php?title=Optimization_with_absolute_values&diff=2020Optimization with absolute values2020-12-06T00:36:04Z<p>Yilian Yin: </p>
<hr />
<div>Authors: Matthew Chan (mdc297), Yilian Yin (yy896), Brian Amado (ba392), Peter (pmw99), Dewei Xiao (dx58) - SYSEN 5800 Fall 2020<br />
<br />
Steward: Fengqi You<br />
<br />
== Introduction ==<br />
Absolute values can make it relatively difficult to determine the optimal solution when handled without first converting to standard form. This conversion of the objective function is a good first step in solving optimization problems with absolute values. As a result, one can go on to solve the problem using linear programming techniques.<br />
<br />
== Method ==<br />
<br />
=== Defining Absolute Values ===<br />
An absolute value of a real number can be described as its distance away from zero, or the non-negative magnitude of the number. Thus,<br />
<br />
<math>|x| = \begin{cases} -x, & \text{if }x < 0 \\ x, & \text{if }x \ge 0 \end{cases} </math><br />
<br />
Absolute values can exist in optimization problems in two primary instances: in constraints and in the objective function.<br />
<br />
=== Absolute Values in Constraints ===<br />
Within linear equations, linear constraints can exist in several forms.<br />
<br />
The first form exists as <math>|X| = 0 </math>, where <math display="inline">X </math> is a linear combination of variables.<br />
<br />
In this case, the only solution is if <math>|X| = 0 </math>, simplifying the constraint to <math>X = 0 </math>. Note that the same conclusion holds if the constraint is of the form <math>|X| \le 0 </math> (the only solution being <math>X = 0 </math>).<br />
<br />
The second form a linear constraint can exist in is <math>|X| \le C </math> where <math display="inline">X </math> remains a linear combination of variables and constant <math display="inline">C > 0 </math>.<br />
<br />
In this case, we can describe an equivalent feasible solution by splitting the inequality into<br />
<br />
<math>X \le C </math><br />
<br />
<math>-X \le C </math><br />
<br />
We can understand this visually as the solution <math display="inline">X </math> must lie between <math display="inline">-C<br />
</math> and <math display="inline">C </math>, as shown below:<br />
[[File:Number Line X Less Than C.png|none|thumb]]<br />
<br />
The last case for linear constraints is when <math>|X| \ge C </math>.<br />
<br />
Visually, the solution space is the complement of the second solution above, resulting in the following representation:<br />
[[File:Number Line for X Greater Than C.png|none|thumb]]<br />
<br />
In expression form, the solutions can be written as:<br />
<br />
<math>X \ge C </math><br />
<br />
<math>-X \ge C </math><br />
<br />
As seen visually, the feasible region has a gap and is thus non-convex. The expressions also make it impossible for both to hold true simultaneously. This means that it is not possible to transform constraints of this form into linear equations. An approach to reach a solution for this particular case exists in the form of [[Mixed-Integer Linear Programming]], where only one of the equations above is “active”.<br />
<br />
=== Absolute Values in Objective Functions ===<br />
In objective functions, to leverage transformations of absolute functions, several conditions must be fulfilled:<br />
<br />
# First, the constraints must all be linear.<br />
# Secondly, the coefficient signs of the absolute terms must all be:<br />
#* Positive for a minimization problem<br />
#* Negative for a maximization problem<br />
<br />
At a high level, the transformation works similarly to the second case of absolute value in constraints – we aim to bound the solution space for the absolute value term with a new variable, <math display="inline">Y </math>.<br />
<br />
If <math>|X| </math> is the absolute value term in our objective function, where <math display="inline">X </math> is a linear combination of variables,<br />
<br />
Two additional constraints are added to the linear program:<br />
<br />
<math>X \le Y </math><br />
<br />
<math>-X \le Y </math><br />
<br />
This is a relaxation of the original function into a collection of linear constraints.<br />
<br />
In order to transform problems where the coefficient signs of the absolute terms do not fulfill the conditions above, a similar conclusion is reached to that of the last case for absolute values in constraints – the use of integer variables is needed to reach an LP format.<br />
<br />
==Numerical Example==<br />
<br />
<math>\min{|x_1| + 2|x_2| + |x_3|} </math><br />
<br />
<math> \begin{align}<br />
\ s.t. x_1 + x_2 - x_3 \le 10 \\<br />
x_1 - 3x_2 + 2x_3= 12<br />
\end{align}</math><br />
<br />
We replace the absolute value quantities with a single variable:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> -U_3 \le x_3 \le U_3 </math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math>\min{ U_1 + 2U_2 + U_3} </math><br />
<br />
<math> \begin{align}<br />
\ s.t. x_1 + x_2 - x_3 \le 10 \\<br />
x_1 - 3x_2 + 2x_3= 12<br />
\end{align}</math><br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> -U_3 \le x_3 \le U_3 </math><br />
<br />
The optimum value for the objective function is <math>6</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 6 </math>.<br />
<br />
== Applications ==<br />
There is no single dedicated application of optimization with absolute values; rather, absolute values must be handled whenever they arise, since the simplex method cannot process them directly.<br />
<br />
Consider the problem of maximizing <math>z = \textstyle\sum_j c_j |x_j|</math> subject to <math>Ax=b</math>. This problem cannot, in general, be solved with the simplex method. The problem has a simplex method solution (with unrestricted basis entry) only if the <math>c_j</math> are nonpositive (non-negative for minimization problems).<br />
<br />
The primary application of absolute-value functionals in linear programming has been absolute-value (<math>L_1</math>-metric) regression analysis. Such an application is always a minimization problem with all <math>c_j</math> equal to 1, so the required conditions for valid use of the simplex method are met. <br />
<br />
By reformulating the original problem into a Mixed-Integer Linear Program (MILP), in most cases it can be solved with modeling tools such as GAMS, Pyomo, or JuliaOpt. <br />
<br />
* '''Application in Finance: Portfolio Selection'''<br />
Here, the same reduction used in the Numerical Example section is applied to reformulate the problem as a linear program. Consider the following example. <br />
<br />
<br />
<br />
A portfolio is determined by what fraction of one's assets to put into each investment. It can be denoted as a collection of nonnegative numbers <math>\textstyle x_j</math>, where <math> j = 1, 2, \ldots, n </math>. Because each <math>\textstyle x_j</math> stands for a portion of the assets, they sum to one. To find the mix of assets with the highest reward, let the positive parameter <math>\mu</math> denote the importance of reward relative to risk, and let <math>\textstyle R_j</math> denote the return in the next time period on investment <math>j</math>, <math> j = 1, 2, \ldots, n</math>. The total return one would obtain from the investment is <math>R = \sum_{j}\!x_j\!R_j </math>. The expected return is <math>\mathbb{E}\!R = \sum_{j}\!x_j\mathbb{E}\!R_j </math>, and the mean absolute deviation from the mean (MAD) is <math>\mathbb{E}\left\vert \!R - \mathbb{E}\!R \right\vert = \mathbb{E}\left\vert \sum_{j}\!x_j\tilde{R}_j \right\vert </math>. <br />
<br />
maximize <math display="inline">\mu\sum_j\!x_j\mathbb{E}\!R_j - \mathbb{E}\left\vert \sum_j \!x_j\tilde{R}_j \right\vert </math><br />
<br />
subject to <math>\sum_j\!x_j = 1</math><br />
<br />
<math>x_j \geq 0,</math> <math> j = 1, 2, \ldots, n</math><br />
<br />
where <math>\tilde{R}_j = \!R_j - \mathbb{E}\!R_j </math><br />
<br />
<br />
<br />
This is not yet a linear programming problem. As in the numerical example above, each absolute value is replaced with a new variable, and inequality constraints are imposed to ensure that the new variable equals the appropriate absolute value at an optimal solution. To simplify the program, the mean expected return can be estimated as the average of the historical returns: <math>r_j = \mathbb{E}\!R_j = \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!R_j(t)</math>. The objective function then becomes: <math>\mu\sum_{j}\!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert</math><br />
<br />
Now, replace <math>\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert</math> with a new variable <math>y_t</math>, and the problem can be rewritten as:<br />
<br />
<br />
maximize <math>\mu \sum_j \!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!y_t</math><br />
<br />
subject to <math>-\!y_t \leq \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \leq y_t, \quad t = 1, 2, \ldots, T</math><br />
<br />
<math>\sum_j \!x_j = 1</math><br />
<br />
<math>x_j \geq 0, \quad j = 1, 2, \ldots, n</math><br />
<br />
<math>y_t \geq 0, \quad t = 1, 2, \ldots, T</math><br />
<br />
<br />
Finally, after these substitutions and simplifications, the original problem has been converted into a linear program, which is far easier to solve.<br />
<br />
<br />
<br />
<br />
Other applications include:<br />
<br />
* '''Minimizing the sum of absolute deviations'''<br />
<br />
* '''Minimizing the maximum of absolute values'''<br />
<br />
== Conclusion ==<br />
The presence of an absolute value within the objective function prevents the use of certain optimization methods. Solving these problems requires that the function be manipulated in order to continue with linear programming techniques like the simplex method.<br />
<br />
== References ==<br />
# "Absolute Values." ''lp_solve'', http://lpsolve.sourceforge.net/5.1/absolute.htm. Accessed 20 Nov. 2020.<br />
# ''Optimization Methods in Management Science / Operations Research.'' Massachusetts Institute of Technology, Spring 2013, https://ocw.mit.edu/courses/sloan-school-of-management/15-053-optimization-methods-in-management-science-spring-2013/tutorials/MIT15_053S13_tut04.pdf. Accessed 20 Nov. 2020.<br />
# https://www.ise.ncsu.edu/fuzzy-neural/wp-content/uploads/sites/9/2019/08/LP-Abs-Value.pdf<br />
# Vanderbei R.J. (2008) Financial Applications. In: Linear Programming. International Series in Operations Research & Management Science, vol 114. Springer, Boston, MA. <nowiki>https://doi.org/10.1007/978-0-387-74388-2_13</nowiki></div>Yilian Yinhttps://optimization.cbe.cornell.edu/index.php?title=Optimization_with_absolute_values&diff=1629Optimization with absolute values2020-11-22T19:59:25Z<p>Yilian Yin: /* Applications */</p>
<hr />
<div>Authors: Matthew Chan (mdc297), Yilian Yin (), Brian Amado (ba392), Peter (pmw99), Dewei Xiao (dx58) - SYSEN 5800 Fall 2020<br />
<br />
Steward: Fengqi You<br />
<br />
== Introduction ==<br />
Absolute values can make it relatively difficult to determine the optimal solution when handled without first converting to standard form. This conversion of the objective function is a good first step in solving optimization problems with absolute values. As a result, you can go on to solve the problem using linear programming techniques.<br />
<br />
== Method ==<br />
<br />
=== Defining Absolute Values ===<br />
An absolute value of a real number can be described as its distance away from zero, or the non-negative magnitude of the number. Thus,<br />
<br />
<math>|x| = \begin{cases} -x, & \text{if }x < 0 \\ x, & \text{if }x \ge 0 \end{cases} </math><br />
<br />
Absolute values can exist in optimization problems in two primary instances: in constraints and in the objective function.<br />
<br />
=== Absolute Values in Constraints ===<br />
Within linear equations, linear constraints can exist in several forms.<br />
<br />
The first form exists as <math>|X| = 0 </math>, where <math display="inline">X </math> is a linear combination of variables.<br />
<br />
In this case, the only solution is if <math>|X| = 0 </math>, simplifying the constraint to <math>X = 0 </math>. Note that the same conclusion holds if the constraint is of the form <math>|X| \le 0 </math> (the only solution being <math>X = 0 </math>).<br />
<br />
<br />
The second form a linear constraint can exist in is <math>|X| \le C </math> where <math display="inline">X </math> remains a linear combination of variables and constant <math display="inline">C > 0 </math>.<br />
<br />
In this case, we can describe an equivalent feasible solution by splitting the inequality into<br />
<br />
<math>X \le C </math><br />
<br />
<math>-X \le C </math><br />
<br />
We can understand this visually as the solution <math display="inline">X </math> must lie between <math display="inline">-C<br />
</math> and <math display="inline">C </math>, as shown below:<br />
[[File:Number Line X Less Than C.png|none|thumb]]<br />
<br />
<br />
The last case for linear constraints is when <math>|X| \ge C </math>.<br />
<br />
Visually, the solution space is the complement of the second solution above, resulting in the following representation:<br />
[[File:Number Line for X Greater Than C.png|none|thumb]]<br />
<br />
<br />
In expression form, the solutions can be written as:<br />
<br />
<math>X \ge C </math><br />
<br />
<math>-X \ge C </math><br />
<br />
As seen visually, the feasible region has a gap and is thus non-convex. The expressions also make it impossible for both to hold true simultaneously. This means that it is not possible to transform constraints of this form into linear equations. An approach to reach a solution for this particular case exists in the form of [[Mixed-Integer Linear Programming]], where only one of the equations above is “active”.<br />
<br />
=== Absolute Values in Objective Functions ===<br />
WIP<br />
<br />
==Numerical Example==<br />
<br />
<math>\min{|x_1| + 2|x_2| + |x_3|} </math><br />
<br />
<math> \begin{align}<br />
\ s.t. x_1 + x_2 - x_3 \le 10 \\<br />
x_1 - 3x_2 + 2x_3= 12<br />
\end{align}</math><br />
<br />
We replace the absolute value quantities with a single variable:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> -U_3 \le x_3 \le U_3 </math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math>\min{ U_1 + 2U_2 + U_3} </math><br />
<br />
<math> \begin{align}<br />
\ s.t. x_1 + x_2 - x_3 \le 10 \\<br />
x_1 - 3x_2 + 2x_3= 12<br />
\end{align}</math><br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> -U_3 \le x_3 \le U_3 </math><br />
<br />
The optimum value for the objective function is <math>6</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 6 </math>.<br />
<br />
== Applications ==<br />
There is no single dedicated application of optimization with absolute values; rather, absolute values must be handled whenever they arise, since the simplex method cannot process them directly.<br />
<br />
Consider the problem of maximizing <math>z = \textstyle\sum_j c_j |x_j|</math> subject to <math>Ax=b</math>. This problem cannot, in general, be solved with the simplex method. The problem has a simplex method solution (with unrestricted basis entry) only if the <math>c_j</math> are nonpositive (non-negative for minimization problems).<br />
<br />
The primary application of absolute-value functionals in linear programming has been absolute-value (<math>L_1</math>-metric) regression analysis. Such an application is always a minimization problem with all <math>c_j</math> equal to 1, so the required conditions for valid use of the simplex method are met. <br />
<br />
By reformulating the original problem into a Mixed-Integer Linear Program (MILP), in most cases it can be solved with modeling tools such as GAMS, Pyomo, or JuliaOpt. <br />
<br />
* '''Application in Finance: Portfolio Selection'''<br />
Here, the same reduction used in the Numerical Example section is applied to reformulate the problem as a linear program. Consider the following example. <br />
<br />
<br />
<br />
A portfolio is determined by the fraction of one's assets put into each investment, denoted by a collection of nonnegative numbers <math>x_j</math>, <math>j = 1, 2, \ldots, n</math>. Because each <math>x_j</math> represents a portion of the assets, the fractions sum to one. To find the mix of assets that best trades reward against risk, let the positive parameter <math>\mu</math> denote the importance placed on reward relative to risk, and let <math>R_j</math> denote the return in the next time period on investment <math>j</math>, <math>j = 1, 2, \ldots, n</math>. The total return on the portfolio is <math>R = \sum_{j} x_j R_j </math>, and the expected return is <math>\mathbb{E}R = \sum_{j} x_j\mathbb{E}R_j </math>. The Mean Absolute Deviation from the mean (MAD), used here as the measure of risk, is <math>\mathbb{E}\left\vert R - \mathbb{E}R \right\vert = \mathbb{E}\left\vert \sum_{j} x_j\tilde{R}_j \right\vert </math>. <br />
<br />
maximize <math display="inline">\mu\sum_j\!x_j\mathbb{E}\!R_j - \mathbb{E}\left\vert \sum_j \!x_j\tilde{R}_j \right\vert </math><br />
<br />
subject to <math>\sum_j\!x_j = 1</math><br />
<br />
<math>x_j \geq 0, \quad j = 1, 2, \ldots, n</math><br />
<br />
where <math>\tilde{R}_j = \!R_j - \mathbb{E}\!R_j </math><br />
<br />
<br />
<br />
As formulated, this is not yet a linear programming problem, so the same trick from the numerical example is applied: each absolute value is replaced with a new variable, and inequality constraints are imposed to ensure the new variable equals the appropriate absolute value at the optimum. To make the model concrete, the expected return is estimated by averaging the historical returns: <math>r_j = \mathbb{E}R_j = \frac{1}{T} \sum_{t=1}^T R_j(t)</math>. The objective function then becomes: <math>\mu\sum_{j} x_j r_j - \frac{1}{T} \sum_{t=1}^T\left\vert \sum_{j} x_j \bigl(R_j(t) - r_j\bigr) \right\vert</math><br />
<br />
Now, replacing <math>\left\vert \sum_{j} x_j \bigl(R_j(t) - r_j\bigr) \right\vert</math> with a new variable <math>y_t</math>, the problem can be rewritten as:<br />
<br />
<br />
maximize <math>\mu \sum_j x_j r_j - \frac{1}{T} \sum_{t=1}^T y_t</math><br />
<br />
subject to <math>-y_t \leq \sum_{j} x_j \bigl(R_j(t) - r_j\bigr) \leq y_t, \quad t = 1, 2, \ldots, T</math><br />
<br />
<math>\sum_j x_j = 1</math><br />
<br />
<math>x_j \geq 0, \quad j = 1, 2, \ldots, n</math><br />
<br />
<math>y_t \geq 0, \quad t = 1, 2, \ldots, T</math><br />
<br />
<br />
After these substitutions, the original problem is converted into a linear program that can be solved with standard techniques.<br />
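The MAD portfolio model above can be built and solved directly. The sketch below is illustrative only: it assumes SciPy is available and uses made-up historical returns for two assets and a hypothetical weight <math>\mu = 0.5</math>. The variables are stacked as <math>(x_1, \ldots, x_n, y_1, \ldots, y_T)</math>, and the maximization is posed as minimizing the negated objective:<br />

```python
# Sketch: the MAD portfolio model as an LP (hypothetical returns data).
from scipy.optimize import linprog

mu = 0.5                                  # reward-vs-risk weight (assumed)
R = [[1.10, 1.00, 0.90],                  # returns of asset j over T periods
     [1.05, 1.05, 1.05]]
n, T = len(R), len(R[0])
r = [sum(Rj) / T for Rj in R]             # mean return per asset

# variables: x_1..x_n, y_1..y_T ; maximize => minimize the negation
c = [-mu * rj for rj in r] + [1.0 / T] * T

A_ub, b_ub = [], []
for t in range(T):
    dev = [R[j][t] - r[j] for j in range(n)]
    y = [0.0] * T
    y[t] = -1.0
    A_ub.append(dev + y)                  # sum_j x_j*(R_j(t)-r_j) <= y_t
    b_ub.append(0.0)
    A_ub.append([-d for d in dev] + y)    # -sum_j x_j*(R_j(t)-r_j) <= y_t
    b_ub.append(0.0)

A_eq = [[1.0] * n + [0.0] * T]            # sum_j x_j = 1
b_eq = [1.0]
bounds = [(0, None)] * (n + T)            # x >= 0, y >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
weights = res.x[:n]
print(weights, -res.fun)                  # portfolio weights and objective
```

With this toy data the second asset has both the higher mean return and zero deviation, so the solver puts the entire portfolio into it.<br />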
<br />
<br />
<br />
<br />
Other common applications include:<br />
<br />
* '''Minimizing the sum of absolute deviations'''<br />
<br />
* '''Minimizing the maximum of absolute values'''<br />
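The second of these also reduces to a linear program by bounding every absolute value with a single variable <math>z</math> and minimizing <math>z</math>. The sketch below, assuming SciPy is available and using made-up data, fits a line by minimizing the maximum absolute deviation (Chebyshev fitting):<br />

```python
# Sketch: minimizing the maximum of absolute values as an LP.
# min max_i |y_i - a - b*t_i|  ->  min z  s.t.  -z <= y_i - a - b*t_i <= z
from scipy.optimize import linprog

t = [0.0, 1.0, 2.0]            # toy data (hypothetical)
y = [0.0, 1.0, 5.0]

c = [0.0, 0.0, 1.0]            # variables: [a, b, z]; minimize z
A_ub, b_ub = [], []
for ti, yi in zip(t, y):
    A_ub.append([-1.0, -ti, -1.0])   # y_i - a - b*t_i <= z
    b_ub.append(-yi)
    A_ub.append([1.0, ti, -1.0])     # -(y_i - a - b*t_i) <= z
    b_ub.append(yi)

bounds = [(None, None), (None, None), (0, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.fun)                 # worst-case absolute deviation
```

For these three points the optimal line equioscillates, with residuals of <math>+0.75, -0.75, +0.75</math>, so the minimized maximum deviation is 0.75.<br />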
<br />
== Conclusion ==<br />
The presence of an absolute value within the objective function prevents the use of certain optimization methods. Solving these problems requires that the function be manipulated in order to continue with linear programming techniques like the simplex method.<br />
<br />
== References ==<br />
<br />
# http://lpsolve.sourceforge.net/5.1/absolute.htm<br />
# https://ocw.mit.edu/courses/sloan-school-of-management/15-053-optimization-methods-in-management-science-spring-2013/tutorials/MIT15_053S13_tut04.pdf<br />
# https://www.ise.ncsu.edu/fuzzy-neural/wp-content/uploads/sites/9/2019/08/LP-Abs-Value.pdf<br />
# Vanderbei R.J. (2008) Financial Applications. In: Linear Programming. International Series in Operations Research & Management Science, vol 114. Springer, Boston, MA. <nowiki>https://doi.org/10.1007/978-0-387-74388-2_13</nowiki></div>Yilian Yinhttps://optimization.cbe.cornell.edu/index.php?title=Optimization_with_absolute_values&diff=1627Optimization with absolute values2020-11-22T19:26:48Z<p>Yilian Yin: /* Applications */</p>
<hr />
<div>Authors: Matthew Chan (mdc297), Yilian Yin (), Brian Amado (ba392), Peter (pmw99), Dewei Xiao (dx58) - SYSEN 5800 Fall 2020<br />
<br />
Steward: Fengqi You<br />
<br />
== Introduction ==<br />
Absolute values can make it relatively difficult to determine the optimal solution when handled without first converting to standard form. This conversion of the objective function is a good first step in solving optimization problems with absolute values. As a result, one can go on to solve the problem using linear programming techniques.<br />
<br />
== Method ==<br />
<br />
=== Defining Absolute Values ===<br />
An absolute value of a real number can be described as its distance away from zero, or the non-negative magnitude of the number. Thus,<br />
<br />
<math>|x| = \begin{cases} -x, & \text{if }x < 0 \\ x, & \text{if }x \ge 0 \end{cases} </math><br />
<br />
Absolute values can exist in optimization problems in two primary instances: in constraints and in the objective function.<br />
<br />
=== Absolute Values in Constraints ===<br />
Within linear programs, constraints involving absolute values can take several forms.<br />
<br />
The first form exists as <math>|X| = 0 </math>, where <math display="inline">X </math> is a linear combination of variables.<br />
<br />
In this case, the only solution is if <math>|X| = 0 </math>, simplifying the constraint to <math>X = 0 </math>. Note that the same conclusion holds if the constraint is in the form <math>|X| \le 0 </math> (only solution <math>X = 0 </math>).<br />
<br />
<br />
The second form a linear constraint can take is <math>|X| \le C </math>, where <math display="inline">X </math> remains a linear combination of variables and the constant <math display="inline">C > 0 </math>.<br />
<br />
In this case, we can describe an equivalent feasible solution by splitting the inequality into<br />
<br />
<math>X \le C </math><br />
<br />
<math>-X \le C </math><br />
<br />
We can understand this visually as the solution <math display="inline">X </math> must lie between <math display="inline">-C<br />
</math> and <math display="inline">C </math>, as shown below:<br />
[[File:Number Line X Less Than C.png|none|thumb]]<br />
<br />
<br />
The last case for linear constraints is when <math>|X| \ge C </math>.<br />
<br />
Visually, the solution space is the complement of the second solution above, resulting in the following representation:<br />
[[File:Number Line for X Greater Than C.png|none|thumb]]<br />
<br />
<br />
In expression form, the solutions can be written as:<br />
<br />
<math>X \ge C </math><br />
<br />
<math>-X \ge C </math><br />
<br />
As seen visually, the feasible region has a gap and is thus non-convex. The expressions also cannot both hold simultaneously. This means that it is not possible to transform constraints in this form into a system of linear constraints. An approach to reach a solution for this particular case exists in the form of [[Mixed-Integer Linear Programming]], where only one of the inequalities above is “active”.<br />
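A minimal sketch of the big-M device used in such mixed-integer formulations: a binary variable y selects which of the two inequalities is active. The constants C = 3 and M = 100 are illustrative assumptions, and the enumeration over y stands in for a real MILP solver:

```python
# |x| >= C (non-convex) via big-M: a binary y picks the active branch.
#   x >= C - M*y        (active when y = 0)
#  -x >= C - M*(1 - y)  (active when y = 1)
# M is a large constant assumed to bound |x|; C = 3, M = 100 are illustrative.
C, M = 3.0, 100.0

def feasible(x):
    """True iff some choice of the binary y satisfies both big-M inequalities."""
    return any(x >= C - M * y and -x >= C - M * (1 - y) for y in (0, 1))

print([feasible(v) for v in (-5, -3, 0, 2, 3, 5)])
# -> [True, True, False, False, True, True]
```

Points with |x| < C are infeasible for both choices of y, reproducing the gap in the number line above.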
<br />
=== Absolute Values in Objective Functions ===<br />
When the objective minimizes a sum of absolute value terms with non-negative coefficients, each term <math>|x_i|</math> can be replaced by a new variable <math>U_i</math> together with the constraints <math>-U_i \le x_i \le U_i</math>; because the objective drives each <math>U_i</math> downward, <math>U_i = |x_i|</math> holds at the optimum. The numerical example below demonstrates this substitution. Maximizing a sum of absolute values, by contrast, yields a non-convex problem that generally requires mixed-integer techniques.<br />
<br />
==Numerical Example==<br />
<br />
<math>\min{|x_1| + 2|x_2| + |x_3|} </math><br />
<br />
<math> \begin{align}<br />
\text{s.t.} \quad x_1 + x_2 - x_3 &\le 10 \\<br />
x_1 - 3x_2 + 2x_3 &= 12<br />
\end{align}</math><br />
<br />
We replace each absolute value quantity with a new variable:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> -U_3 \le x_3 \le U_3 </math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math>\min{ U_1 + 2U_2 + U_3} </math><br />
<br />
<math> \begin{align}<br />
\text{s.t.} \quad x_1 + x_2 - x_3 &\le 10 \\<br />
x_1 - 3x_2 + 2x_3 &= 12<br />
\end{align}</math><br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> -U_3 \le x_3 \le U_3 </math><br />
<br />
The optimum value for the objective function is <math>6</math>, which occurs at <math>x_1 = 0 </math>, <math>x_2 = 0 </math>, and <math>x_3 = 6 </math>.<br />
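The reformulated LP can be solved directly with any LP solver; the sketch below uses SciPy's linprog (the solver choice is incidental). The decision vector stacks the original variables and the auxiliary ones:

```python
from scipy.optimize import linprog

# Decision vector: (x1, x2, x3, U1, U2, U3).
c = [0, 0, 0, 1, 2, 1]                       # minimize U1 + 2*U2 + U3
A_ub = [[ 1,  1, -1,  0,  0,  0],            #  x1 + x2 - x3 <= 10
        [ 1,  0,  0, -1,  0,  0],            #  x1 - U1 <= 0   ( x1 <= U1)
        [-1,  0,  0, -1,  0,  0],            # -x1 - U1 <= 0   (-U1 <= x1)
        [ 0,  1,  0,  0, -1,  0],
        [ 0, -1,  0,  0, -1,  0],
        [ 0,  0,  1,  0,  0, -1],
        [ 0,  0, -1,  0,  0, -1]]
b_ub = [10, 0, 0, 0, 0, 0, 0]
A_eq = [[1, -3, 2, 0, 0, 0]]                 # x1 - 3*x2 + 2*x3 = 12
b_eq = [12]
bounds = [(None, None)] * 6                  # all free (linprog defaults to x >= 0)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(round(res.fun, 6))   # -> 6.0
```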
<br />
== Applications ==<br />
Optimization with absolute values is a general modeling technique rather than one tied to specific applications; however, absolute values must be accounted for when utilizing the simplex method.<br />
<br />
Consider the problem <math>\max z = \textstyle\sum_j c_j |x_j|</math> subject to <math>Ax = b</math>. This problem cannot, in general, be solved with the simplex method. The problem has a simplex method solution (with unrestricted basis entry) only if the <math>c_j</math> are nonpositive (non-negative for minimization problems).<br />
<br />
The primary application of absolute-value functionals in linear programming has been absolute-value, or <math>L_1</math>-metric, regression analysis. Such an application is always a minimization problem with all <math>c_j</math> equal to 1, so the required conditions for valid use of the simplex method are met. <br />
<br />
By reformulating the original problem into a mixed-integer linear program, in most cases the problem can then be solved with standard tools such as GAMS/Pyomo/JuliaOPT. <br />
<br />
<br />
* '''Application in Finance: Reduction to a Linear Programming Problem'''<br />
<br />
maximize <math display="inline">\mu\sum_{j}\!x_j\mathbb{E}\!R_j - \mathbb{E}\left\vert \sum_{j} \!x_j\tilde{R}_j \right\vert </math><br />
<br />
subject to <math>\sum_j\!x_j = 1</math><br />
<br />
<math>\!x_j \geq 0, \quad j = 1, 2, \ldots, n.</math><br />
<br />
where <math>\!R = \sum_j\!x_j\!R_j</math><br />
<br />
As formulated, this problem is not a linear programming problem. The same trick used in the numerical example applies: replace each absolute value with a new variable, then impose inequality constraints that ensure the new variable will indeed equal the appropriate absolute value once an optimal solution has been obtained. In practice, the expected value operation is also replaced by a simple average over the given historical data.<br />
<br />
<br />
<br />
<br />
<br />
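The scenario-based version of this reformulation can be sketched as follows. The return data, the risk-tolerance parameter, and the problem sizes below are synthetic illustrations, not values from the original text; one auxiliary variable per scenario replaces the absolute value of the portfolio's deviation:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
T, n, mu = 60, 3, 1.0                  # scenarios, assets, risk tolerance (illustrative)
R = rng.normal([0.05, 0.08, 0.12], [0.05, 0.10, 0.20], size=(T, n))
mean = R.mean(axis=0)
dev = R - mean                         # R~_j: deviation of each return from its mean

# Variables: (x_1..x_n, y_1..y_T).
# Minimize (1/T)*sum(y_t) - mu*sum(x_j*mean_j), with y_t >= |sum_j x_j*dev_tj|
# enforced by two inequalities per scenario.
c = np.concatenate([-mu * mean, np.full(T, 1.0 / T)])
A_ub = np.block([[ dev, -np.eye(T)],
                 [-dev, -np.eye(T)]])
b_ub = np.zeros(2 * T)
A_eq = np.concatenate([np.ones(n), np.zeros(T)])[None, :]   # weights sum to 1
b_eq = [1.0]
bounds = [(0, None)] * n + [(None, None)] * T               # x_j >= 0, y_t free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[:n])   # optimal portfolio weights
```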
Let's see some other applications:<br />
<br />
* '''Minimizing the sum of absolute deviations'''<br />
<br />
* '''Minimizing the maximum of absolute values'''<br />
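The second pattern, minimizing the maximum of absolute values, reduces to minimizing a single bound <math>t</math> with two linear constraints per term. A small sketch with illustrative data (finding the scalar x that minimizes the largest deviation |a_i - x|, which is the midrange of the data):

```python
from scipy.optimize import linprog

# min t  s.t.  |a_i - x| <= t for all i, i.e.  a_i - x <= t  and  x - a_i <= t.
a = [1.0, 4.0, 9.0]                     # illustrative data points
A_ub, b_ub = [], []
for ai in a:                            # variables: (x, t)
    A_ub.append([-1.0, -1.0]); b_ub.append(-ai)   # a_i - x <= t
    A_ub.append([ 1.0, -1.0]); b_ub.append( ai)   #  x - a_i <= t
res = linprog(c=[0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (None, None)])
print(res.x)   # -> x = 5.0 (midrange of [1, 9]), t = 4.0
```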
<br />
== Conclusion ==<br />
The presence of an absolute value within the objective function prevents the use of certain optimization methods. Solving these problems requires that the function be manipulated in order to continue with linear programming techniques like the simplex method.<br />
<br />
== References ==<br />
<br />
# http://lpsolve.sourceforge.net/5.1/absolute.htm<br />
# https://ocw.mit.edu/courses/sloan-school-of-management/15-053-optimization-methods-in-management-science-spring-2013/tutorials/MIT15_053S13_tut04.pdf<br />
# https://www.ise.ncsu.edu/fuzzy-neural/wp-content/uploads/sites/9/2019/08/LP-Abs-Value.pdf</div>