https://optimization.cbe.cornell.edu/index.php?title=Adaptive_robust_optimization&feed=atom&action=history
Adaptive robust optimization - Revision history
2024-03-29T15:35:51Z
Revision history for this page on the wiki
MediaWiki 1.40.1
https://optimization.cbe.cornell.edu/index.php?title=Adaptive_robust_optimization&diff=2740&oldid=prev
Wc593 at 11:41, 21 December 2020
2020-12-21T11:41:15Z
<p></p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 07:41, 21 December 2020</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l1">Line 1:</td>
<td colspan="2" class="diff-lineno">Line 1:</td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Author: Ralph Wang (ChemE 6800 Fall 2020)</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Author: Ralph Wang (ChemE 6800 Fall 2020)</div></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div><del style="font-weight: bold; text-decoration: none;"></del></div></td><td colspan="2" class="diff-side-added"></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div><del style="font-weight: bold; text-decoration: none;">Steward: Allen Yang, Fengqi You</del></div></td><td colspan="2" class="diff-side-added"></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>== Introduction ==</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>== Introduction ==</div></td></tr>
</table>
Wc593
https://optimization.cbe.cornell.edu/index.php?title=Adaptive_robust_optimization&diff=2684&oldid=prev
Ralph Wang: Minor error with the allowed polyhedral uncertainty set, second sentence in paragraph "Under what conditions are ADRs optimal?" number of vertices was off by one, increased to the right number now
2020-12-16T00:46:02Z
<p>Minor error with the allowed polyhedral uncertainty set, second sentence in paragraph "Under what conditions are ADRs optimal?" number of vertices was off by one, increased to the right number now</p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 20:46, 15 December 2020</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l89">Line 89:</td>
<td colspan="2" class="diff-lineno">Line 89:</td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>The above solution method, although simple and tractable, suffers from the potential suboptimality of the ADR. Indeed, Ben-Tal motivates this assumption citing only the tractability of the result. In real-world scenarios, this suboptimality can be mitigated by using the ADR to make the initial decision, then re-solving the problem after <math>u</math> is revealed. That is, if solving the L2ARO gives <math>x_1^*</math> as the optimal <math>x_1</math> and <math>x_2^*(u)</math> as the optimal <math>x_2(u)</math>, decision <math>x_1^*</math> is implemented immediately; when <math>u</math> is revealed (to be, say, <math>u^*</math>), decision <math>x_2</math> is decided not by computing <math>x_2^*(u^*)</math>, but by re-solving the whole problem fixing <math>x_1</math> to <math>x_1^*</math> and fixing <math>u</math> to <math>u^*</math>. This method reflects the wait-and-see nature of the decision <math>x_2</math>: the ADR is used to find a good <math>x_1</math>, then <math>u</math> is revealed, and that information is used to solve for the optimal <math>x_2</math> in that circumstance.<ref>Ben-Tal, A., Golany, B., Nemirovski, A., Vial, J. (2005). Retailer-Supplier Flexible Commitments Contracts: A Robust Optimization Approach. <i>Manufacturing and Service Operations Management</i> 7(3):248-271.</ref> This iterative, stage-by-stage solution performs better than using only the ADR, but is feasible only when there is enough time between stages to re-solve the problem. Further, numerical experiments indicate that classical robust optimization models yield equally good, if not better, initial decisions than ADR on L2ARO, limiting ADR on L2ARO to situations where the problem cannot be feasibly re-solved, or to the special cases where the ADR approximation actually generates the optimal solution.<ref>Gorissen, B., Yanikoglu, I., den Hertog, D. (2015). A Practical Guide to Robust Optimization. <i>Omega</i> 53:124-137.</ref></div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>The above solution method, although simple and tractable, suffers from the potential suboptimality of the ADR. Indeed, Ben-Tal motivates this assumption citing only the tractability of the result. In real-world scenarios, this suboptimality can be mitigated by using the ADR to make the initial decision, then re-solving the problem after <math>u</math> is revealed. That is, if solving the L2ARO gives <math>x_1^*</math> as the optimal <math>x_1</math> and <math>x_2^*(u)</math> as the optimal <math>x_2(u)</math>, decision <math>x_1^*</math> is implemented immediately; when <math>u</math> is revealed (to be, say, <math>u^*</math>), decision <math>x_2</math> is decided not by computing <math>x_2^*(u^*)</math>, but by re-solving the whole problem fixing <math>x_1</math> to <math>x_1^*</math> and fixing <math>u</math> to <math>u^*</math>. This method reflects the wait-and-see nature of the decision <math>x_2</math>: the ADR is used to find a good <math>x_1</math>, then <math>u</math> is revealed, and that information is used to solve for the optimal <math>x_2</math> in that circumstance.<ref>Ben-Tal, A., Golany, B., Nemirovski, A., Vial, J. (2005). Retailer-Supplier Flexible Commitments Contracts: A Robust Optimization Approach. <i>Manufacturing and Service Operations Management</i> 7(3):248-271.</ref> This iterative, stage-by-stage solution performs better than using only the ADR, but is feasible only when there is enough time between stages to re-solve the problem. Further, numerical experiments indicate that classical robust optimization models yield equally good, if not better, initial decisions than ADR on L2ARO, limiting ADR on L2ARO to situations where the problem cannot be feasibly re-solved, or to the special cases where the ADR approximation actually generates the optimal solution.<ref>Gorissen, B., Yanikoglu, I., den Hertog, D. (2015). A Practical Guide to Robust Optimization. <i>Omega</i> 53:124-137.</ref></div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>This leads to the natural question: under what conditions are ADRs optimal? Bertsimas and Goyal showed in 2012 that if both <math>A(u)</math> matrices are independent of <math>u</math>, <math>x_1</math> and <math>x_2</math> are restricted to vectors with nonnegative entries, and <math>b(u)</math> is restricted to be vectors with nonpositive entries, then ADRs are optimal if <math>b(u)</math> is restricted to a polyhedral set with <del style="font-weight: bold; text-decoration: none;">the same </del>number of vertices <del style="font-weight: bold; text-decoration: none;">as </del><math>b(u)</math>'s dimension.<ref>Bertsimas, D., Goyal, V. On the power and limitations of affine policies in two-stage adaptive optimization. <i>Math. Program.</i> 134, 491–531 (2012).</ref> In a 2016 paper, Ben-Tal and colleagues noted that whenever the <math>A(u)</math> matrices are independent of <math>u</math>, a <i>piecewise</i> ADR can be optimal, albeit one with a large number of pieces.<ref>Ben-Tal, A., El Housni, O. & Goyal, V. A tractable approach for designing piecewise affine policies in two-stage adjustable robust optimization. <i>Math. Program.</i> 182, 57–102 (2020).</ref> ADRs can be optimal in other, more specific cases, but these cases will not be discussed here.<ref>Iancu, D.A., Parrilo, P.A. (2010). Optimality of Affine Policies in Multistage Robust Optimization. <i>Mathematics of Operations Research</i> 35(2):363-394.</ref><ref>Iancu, D.A., Sharma, M., Sviridenko, M. (2013). Supermodularity and Affine Policies in Dynamic Robust Optimization. <i>Operations Research</i> 61(4):941-956.</ref></div></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>This leads to the natural question: under what conditions are ADRs optimal? Bertsimas and Goyal showed in 2012 that if both <math>A(u)</math> matrices are independent of <math>u</math>, <math>x_1</math> and <math>x_2</math> are restricted to vectors with nonnegative entries, and <math>b(u)</math> is restricted to be vectors with nonpositive entries, then ADRs are optimal if <math>b(u)</math> is restricted to a polyhedral set with <ins style="font-weight: bold; text-decoration: none;">a </ins>number of vertices <ins style="font-weight: bold; text-decoration: none;">one more than </ins><math>b(u)</math>'s dimension.<ref>Bertsimas, D., Goyal, V. On the power and limitations of affine policies in two-stage adaptive optimization. <i>Math. Program.</i> 134, 491–531 (2012).</ref> In a 2016 paper, Ben-Tal and colleagues noted that whenever the <math>A(u)</math> matrices are independent of <math>u</math>, a <i>piecewise</i> ADR can be optimal, albeit one with a large number of pieces.<ref>Ben-Tal, A., El Housni, O. & Goyal, V. A tractable approach for designing piecewise affine policies in two-stage adjustable robust optimization. <i>Math. Program.</i> 182, 57–102 (2020).</ref> ADRs can be optimal in other, more specific cases, but these cases will not be discussed here.<ref>Iancu, D.A., Parrilo, P.A. (2010). Optimality of Affine Policies in Multistage Robust Optimization. <i>Mathematics of Operations Research</i> 35(2):363-394.</ref><ref>Iancu, D.A., Sharma, M., Sviridenko, M. (2013). Supermodularity and Affine Policies in Dynamic Robust Optimization. <i>Operations Research</i> 61(4):941-956.</ref></div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>In most cases, however, ADRs are suboptimal, and it becomes useful to characterize their degree of suboptimality. The most common approach is to generate upper and lower bounds on the optimal value of the objective function. If the goal is to minimize the objective function, then any valid solution (via ADRs or some other method) gives an upper bound, so the problem reduces to computing lower bounds. A simple approach is to approximate the uncertainty set using a small number of well-chosen points (“sampling” the uncertainty set), solve the model at each of these points, and take the worst case among these sampled solutions. Since the true worst case must be at least as bad as every sampled point, the resulting value can be no greater than the true optimum, and is therefore a lower bound on the objective.<ref>M. J. Hadjiyiannis, P. J. Goulart and D. Kuhn, "A scenario approach for estimating the suboptimality of linear decision rules in two-stage robust optimization," 2011 50th IEEE Conference on Decision and Control and European Control Conference, Orlando, FL, 2011, pp. 7386-7391.</ref> This method, although simple, generates excessively optimistic lower bounds unless a large number of points are sampled, and solving the model at many points can take a long time. Authors have therefore investigated methods for choosing fewer points that better represent the whole uncertainty set, improving both the lower bound quality and the computation time.<ref>Ayoub, J., Poss, M. Decomposition for adjustable robust linear optimization subject to uncertainty polytope. <i>Comput Manag Sci</i> 13, 219–239 (2016).</ref> For example, Bertsimas and de Ruiter discovered that constructing the dual and sampling the dual uncertainty set gives better bounds and faster computation time.<ref>Bertsimas, D., de Ruiter, F. J. C. T. (2016). Duality in Two-Stage Adaptive Linear Optimization: Faster Computation and Stronger Bounds. <i>INFORMS Journal on Computing</i> 28(3):500-511.</ref></div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>In most cases, however, ADRs are suboptimal, and it becomes useful to characterize their degree of suboptimality. The most common approach is to generate upper and lower bounds on the optimal value of the objective function. If the goal is to minimize the objective function, then any valid solution (via ADRs or some other method) gives an upper bound, so the problem reduces to computing lower bounds. A simple approach is to approximate the uncertainty set using a small number of well-chosen points (“sampling” the uncertainty set), solve the model at each of these points, and take the worst case among these sampled solutions. Since the true worst case must be at least as bad as every sampled point, the resulting value can be no greater than the true optimum, and is therefore a lower bound on the objective.<ref>M. J. Hadjiyiannis, P. J. Goulart and D. Kuhn, "A scenario approach for estimating the suboptimality of linear decision rules in two-stage robust optimization," 2011 50th IEEE Conference on Decision and Control and European Control Conference, Orlando, FL, 2011, pp. 7386-7391.</ref> This method, although simple, generates excessively optimistic lower bounds unless a large number of points are sampled, and solving the model at many points can take a long time. Authors have therefore investigated methods for choosing fewer points that better represent the whole uncertainty set, improving both the lower bound quality and the computation time.<ref>Ayoub, J., Poss, M. Decomposition for adjustable robust linear optimization subject to uncertainty polytope. <i>Comput Manag Sci</i> 13, 219–239 (2016).</ref> For example, Bertsimas and de Ruiter discovered that constructing the dual and sampling the dual uncertainty set gives better bounds and faster computation time.<ref>Bertsimas, D., de Ruiter, F. J. C. T. (2016). Duality in Two-Stage Adaptive Linear Optimization: Faster Computation and Stronger Bounds. <i>INFORMS Journal on Computing</i> 28(3):500-511.</ref></div></td></tr>
</table>
Ralph Wang
https://optimization.cbe.cornell.edu/index.php?title=Adaptive_robust_optimization&diff=2683&oldid=prev
Ralph Wang: fixed typos
2020-12-16T00:43:34Z
<p>fixed typos</p>
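The scenario-sampling lower bound discussed in the revision above can be illustrated with a minimal sketch. The objective, decision set, and sample points below are hypothetical toy choices (not from the article): the worst case of f(u, x) = u·x is minimized over decisions x ≥ 1 with uncertainty set U = [0, 1]. Because the sampled points are a subset of U, the inner maximum can only shrink, so the sampled robust value is a lower bound on the true robust optimum.

```python
# Sketch of the scenario-sampling lower bound for a robust minimization
# problem: min over x of max over u in U of f(u, x). All numbers here
# are illustrative assumptions, not values taken from the article.

def robust_value(f, decisions, scenarios):
    """Worst-case-optimal value over finite decision and scenario sets."""
    return min(max(f(u, x) for u in scenarios) for x in decisions)

f = lambda u, x: u * x                 # toy objective
decisions = [1.0, 1.5, 2.0]            # feasible decisions (x >= 1)
full_U = [i / 10 for i in range(11)]   # fine discretization of U = [0, 1]
sampled_U = [0.0, 0.5]                 # a small sample of U

lower_bound = robust_value(f, decisions, sampled_U)  # 0.5
true_value = robust_value(f, decisions, full_U)      # 1.0

# Sampling a subset of U can only shrink the inner max, so:
assert lower_bound <= true_value
```

As the text notes, the gap (here 0.5 versus 1.0) can be large when few points are sampled, which is why the choice of sample points matters.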
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 20:43, 15 December 2020</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l9">Line 9:</td>
<td colspan="2" class="diff-lineno">Line 9:</td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>== Problem Formulation ==</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>== Problem Formulation ==</div></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>Suppose, for an optimization problem of interest, <math>S</math> is the set of allowed decisions and <math>x</math> is a decision in <math>S</math>. Let <math>u</math> <del style="font-weight: bold; text-decoration: none;">bee </del>a vector representing the set of parameters of interest in this problem. If the goal is to minimize some function <math>f(u, x)</math>, and we want <math>x</math> to adhere to a set of constraints <math>g(u, x) \leq 0</math>, then the problem may be formulated as:</div></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>Suppose, for an optimization problem of interest, <math>S</math> is the set of allowed decisions and <math>x</math> is a decision in <math>S</math>. Let <math>u</math> <ins style="font-weight: bold; text-decoration: none;">be </ins>a vector representing the set of parameters of interest in this problem. If the goal is to minimize some function <math>f(u, x)</math>, and we want <math>x</math> to adhere to a set of constraints <math>g(u, x) \leq 0</math>, then the problem may be formulated as:</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div><math>\begin{align}\text{minimize, choosing x: } f&(u, x)\\</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div><math>\begin{align}\text{minimize, choosing x: } f&(u, x)\\</div></td></tr>
<tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l21">Line 21:</td>
<td colspan="2" class="diff-lineno">Line 21:</td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>In this formulation, we call <math>f</math> the objective function and <math>g</math> the constraint function.</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>In this formulation, we call <math>f</math> the objective function and <math>g</math> the constraint function.</div></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>If <math>u</math> is known, then the problem can be solved using methods such as branch and cut or <del style="font-weight: bold; text-decoration: none;">KKT </del>conditions. However, in many real world scenarios, <math>u</math> is not known. To address this uncertainty, the robust optimization approach generates the set of possible values of <math>u</math>, called the uncertainty set <math>U</math>, and solves for the decision <math>x</math> such that the constraint <math>g</math> is satisfied in all cases and <math>f</math> is optimized for the worst case. The problem can be written as:</div></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>If <math>u</math> is known, then the problem can be solved using methods such as branch and cut or <ins style="font-weight: bold; text-decoration: none;">Karush-Kuhn-Tucker </ins>conditions. However, in many real world scenarios, <math>u</math> is not known. To address this uncertainty, the robust optimization approach generates the set of possible values of <math>u</math>, called the uncertainty set <math>U</math>, and solves for the decision <math>x</math> such that the constraint <math>g</math> is satisfied in all cases and <math>f</math> is optimized for the worst case. The problem can be written as:</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div><math>\begin{align}\text{min}(x)\text{ max}(u)\;&f(u, x)\\</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div><math>\begin{align}\text{min}(x)\text{ max}(u)\;&f(u, x)\\</div></td></tr>
</table>
Ralph Wang
https://optimization.cbe.cornell.edu/index.php?title=Adaptive_robust_optimization&diff=1963&oldid=prev
Ralph Wang: fixed a reference
2020-11-30T19:32:57Z
<p>fixed a reference</p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 15:32, 30 November 2020</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l51">Line 51:</td>
<td colspan="2" class="diff-lineno">Line 51:</td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Where <math>f(x_1)</math> was redefined to be the part of <math>x_1</math> representing <math>t</math>.</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Where <math>f(x_1)</math> was redefined to be the part of <math>x_1</math> representing <math>t</math>.</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>For many problems of interest, the functions <math>f</math> and <math>g</math> vary linearly with <math>x_1</math> and <math>x_2</math>, that is, they are affine functions of <math>x_1</math> and <math>x_2</math><del style="font-weight: bold; text-decoration: none;">[FIX THIS REF]</del>. In such cases, if <math>x_1</math> and <math>x_2</math> are treated as vectors, then we can write:</div></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>For many problems of interest, the functions <math>f</math> and <math>g</math> vary linearly with <math>x_1</math> and <math>x_2</math>, that is, they are affine functions of <math>x_1</math> and <math>x_2</math>.<ins style="font-weight: bold; text-decoration: none;"><ref>B. Hu and L. Wu, "Robust SCUC Considering Continuous/Discrete Uncertainties and Quick-Start Units: A Two-Stage Robust Optimization With Mixed-Integer Recourse," in <i>IEEE Transactions on Power Systems</i>, vol. 31, no. 2, pp. 1407-1419, March 2016, doi: 10.1109/TPWRS.2015.2418158.</ref><ref>J. Warrington, C. Hohl, P. J. Goulart and M. Morari, "Rolling Unit Commitment and Dispatch With Multi-Stage Recourse Policies for Heterogeneous Devices," in <i>IEEE Transactions on Power Systems</i>, vol. 31, no. 1, pp. 187-197, Jan. 2016, doi: 10.1109/TPWRS.2015.2391233.</ref><ref>Chuen-Teck See, Melvyn Sim, (2010) Robust Approximation to Multiperiod Inventory Management. <i>Operations Research</i> 58(3):583-594.</ref><ref>Marcus Ang, Yun Fong Lim, Melvyn Sim, (2012) Robust Storage Assignment in Unit-Load Warehouses. 
<i>Management Science</i> 58(11):2114-2130.</ref><ref>Mattia, S., Rossi, F., Servilio, M., Smriglio, S. (2017). Staffing and Scheduling Flexible Call Centers by Two-Stage Robust Optimization. <i>Omega</i> 72:25-37.</ref><ref>Gong, J. and You, F. (2017), Optimal processing network design under uncertainty for producing fuels and value‐added bioproducts from microalgae: Two‐stage adaptive robust mixed integer fractional programming model and computationally efficient solution algorithm. <i>AIChE J.</i>, 63: 582-600.</ref> </ins>In such cases, if <math>x_1</math> and <math>x_2</math> are treated as vectors, then we can write:</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div><math>f(x_1) = c^Tx_1</math></div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div><math>f(x_1) = c^Tx_1</math></div></td></tr>
</table>Ralph Wanghttps://optimization.cbe.cornell.edu/index.php?title=Adaptive_robust_optimization&diff=1962&oldid=prevRalph Wang: /* Problem Formulation */ technical edit on rewriting x_2 as x_2(u)2020-11-30T19:32:00Z<p><span dir="auto"><span class="autocomment">Problem Formulation: </span> technical edit on rewriting x_2 as x_2(u)</span></p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 15:32, 30 November 2020</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l33">Line 33:</td>
<td colspan="2" class="diff-lineno">Line 33:</td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>The reasoning used in this construction can be extended to multi-stage formulations.</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>The reasoning used in this construction can be extended to multi-stage formulations.</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>In the literature, adaptive robust optimization problems are usually formulated differently but equivalently. Note that because <math>x_2</math> is selected only after the uncertain parameter <math>u</math> is revealed, <math>x_2</math> is a function of <math>u</math>. <del style="font-weight: bold; text-decoration: none;">Then, the function </del><math>x_2<del style="font-weight: bold; text-decoration: none;">(u)</del></math> <del style="font-weight: bold; text-decoration: none;">is independent </del>of <math>u</math><del style="font-weight: bold; text-decoration: none;">, so </del>the function <del style="font-weight: bold; text-decoration: none;">can be decided </del>before <math>u</math> <del style="font-weight: bold; text-decoration: none;">is revealed. This </del>allows <del style="font-weight: bold; text-decoration: none;">us to rewrite </del>the problem as:</div></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>In the literature, adaptive robust optimization problems are usually formulated differently but equivalently. Note that because <math>x_2</math> is selected only after the uncertain parameter <math>u</math> is revealed, <math>x_2</math> is a function of <math>u</math>. 
<ins style="font-weight: bold; text-decoration: none;">Expressing </ins><math>x_2</math> <ins style="font-weight: bold; text-decoration: none;">as a function </ins>of <math>u</math> <ins style="font-weight: bold; text-decoration: none;">allows us to choose </ins>the function <ins style="font-weight: bold; text-decoration: none;"><math>x_2(u)</math> </ins>before <ins style="font-weight: bold; text-decoration: none;">learning </ins><math>u</math><ins style="font-weight: bold; text-decoration: none;">, which </ins>allows the problem <ins style="font-weight: bold; text-decoration: none;">to be rewritten </ins>as:</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div><math>\begin{align}\text{min}(x_1, x_2(u))\text{ max}(u) \; &f(u, x_1, x_2(u))\\</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div><math>\begin{align}\text{min}(x_1, x_2(u))\text{ max}(u) \; &f(u, x_1, x_2(u))\\</div></td></tr>
<tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l51">Line 51:</td>
<td colspan="2" class="diff-lineno">Line 51:</td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Where <math>f(x_1)</math> was redefined to be the part of <math>x_1</math> representing <math>t</math>.</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Where <math>f(x_1)</math> was redefined to be the part of <math>x_1</math> representing <math>t</math>.</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>For many problems of interest, the functions <math>f</math> and <math>g</math> vary linearly with <math>x_1</math> and <math>x_2</math>, that is, they are affine functions of <math>x_1</math> and <math>x_2</math>[<del style="font-weight: bold; text-decoration: none;">ref various papers</del>]. In such cases, if <math>x_1</math> and <math>x_2</math> are treated as vectors, then we can write:</div></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>For many problems of interest, the functions <math>f</math> and <math>g</math> vary linearly with <math>x_1</math> and <math>x_2</math>, that is, they are affine functions of <math>x_1</math> and <math>x_2</math>[<ins style="font-weight: bold; text-decoration: none;">FIX THIS REF</ins>]. In such cases, if <math>x_1</math> and <math>x_2</math> are treated as vectors, then we can write:</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div><math>f(x_1) = c^Tx_1</math></div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div><math>f(x_1) = c^Tx_1</math></div></td></tr>
</table>Ralph Wanghttps://optimization.cbe.cornell.edu/index.php?title=Adaptive_robust_optimization&diff=1961&oldid=prevRalph Wang: /* Problem Formulation */ minor formatting edit2020-11-30T19:20:47Z<p><span dir="auto"><span class="autocomment">Problem Formulation: </span> minor formatting edit</span></p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 15:20, 30 November 2020</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l26">Line 26:</td>
<td colspan="2" class="diff-lineno">Line 26:</td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>\text{s.t.}\;&g(u, x) \leq 0 \end{align}</math></div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>\text{s.t.}\;&g(u, x) \leq 0 \end{align}</math></div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>Adaptive robust optimization expands this robust optimization framework by separating the decision <math>x</math> into multiple stages. For simplicity, assume there are two stages of decisions. In the first stage, only the urgent, here-and-now decisions are made. After these decisions are made, the true values of the parameters <math>u</math> are revealed, then the remaining, wait-and-see decisions are decided. The model is like a game of poker: the player needs to make initial bets based on incomplete information (the cards in his hand), then makes further bets as more and more cards are dealt. Mathematically, let the set of possible decisions in the first stage be <math>S_1</math> and the set of possible decisions in the second stage be <math>S_2</math>, so that the objective and constraint functions become functions of the parameters <math>u</math>, the first stage decision <math>x_1</math> (<math>x_1 in S_1</math>), and the second stage decision <math>x_2</math> (<math>x_2</math> in <math>S_2</math>). Then, we can formulate the problem as:</div></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>Adaptive robust optimization expands this robust optimization framework by separating the decision <math>x</math> into multiple stages. For simplicity, assume there are two stages of decisions. In the first stage, only the urgent, here-and-now decisions are made. After these decisions are made, the true values of the parameters <math>u</math> are revealed, then the remaining, wait-and-see decisions are decided. 
The model is like a game of poker: the player needs to make initial bets based on incomplete information (the cards in his hand), then makes further bets as more and more cards are dealt. Mathematically, let the set of possible decisions in the first stage be <math>S_1</math> and the set of possible decisions in the second stage be <math>S_2</math>, so that the objective and constraint functions become functions of the parameters <math>u</math>, the first stage decision <math>x_1</math> (<math>x_1<ins style="font-weight: bold; text-decoration: none;"></math> </ins>in <ins style="font-weight: bold; text-decoration: none;"><math></ins>S_1</math>), and the second stage decision <math>x_2</math> (<math>x_2</math> in <math>S_2</math>). Then, we can formulate the problem as:</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div><math>\begin{align} \text{min}(x_1)\text{ max}(u)\text{ min}(x_2)\;&f(u, x_1, x_2)\\</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div><math>\begin{align} \text{min}(x_1)\text{ max}(u)\text{ min}(x_2)\;&f(u, x_1, x_2)\\</div></td></tr>
</table>Ralph Wanghttps://optimization.cbe.cornell.edu/index.php?title=Adaptive_robust_optimization&diff=1960&oldid=prevRalph Wang: /* Problem Formulation */ added a few words noting the time-separation of decision stages2020-11-30T19:19:21Z<p><span dir="auto"><span class="autocomment">Problem Formulation: </span> added a few words noting the time-separation of decision stages</span></p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 15:19, 30 November 2020</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l26">Line 26:</td>
<td colspan="2" class="diff-lineno">Line 26:</td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>\text{s.t.}\;&g(u, x) \leq 0 \end{align}</math></div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>\text{s.t.}\;&g(u, x) \leq 0 \end{align}</math></div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>Adaptive robust optimization expands this robust optimization framework by separating the decision <math>x</math> into multiple stages. For simplicity, assume there are two stages of decisions<del style="font-weight: bold; text-decoration: none;">: in </del>the first stage, only <del style="font-weight: bold; text-decoration: none;">a subset of </del>the decisions are made. After these decisions are made, the true values of the parameters <math>u</math> are revealed, then the remaining <del style="font-weight: bold; text-decoration: none;">set of </del>decisions <del style="font-weight: bold; text-decoration: none;">need to be made</del>. The model is like a game of poker: the player needs to make initial bets based on incomplete information (the cards in his hand), then makes further bets as more and more cards are dealt. Mathematically, let the set of possible decisions in the first stage be <math>S_1</math> and the set of possible decisions in the second stage be <math>S_2</math>, so that the objective and constraint functions become functions of the parameters <math>u</math>, the first stage decision <math>x_1</math> (<math>x_1 in S_1</math>), and the second stage decision <math>x_2</math> (<math>x_2</math> in <math>S_2</math>). Then, we can formulate the problem as:</div></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>Adaptive robust optimization expands this robust optimization framework by separating the decision <math>x</math> into multiple stages. 
For simplicity, assume there are two stages of decisions<ins style="font-weight: bold; text-decoration: none;">. In </ins>the first stage, only the <ins style="font-weight: bold; text-decoration: none;">urgent, here-and-now </ins>decisions are made. After these decisions are made, the true values of the parameters <math>u</math> are revealed, then the remaining<ins style="font-weight: bold; text-decoration: none;">, wait-and-see </ins>decisions <ins style="font-weight: bold; text-decoration: none;">are decided</ins>. The model is like a game of poker: the player needs to make initial bets based on incomplete information (the cards in his hand), then makes further bets as more and more cards are dealt. Mathematically, let the set of possible decisions in the first stage be <math>S_1</math> and the set of possible decisions in the second stage be <math>S_2</math>, so that the objective and constraint functions become functions of the parameters <math>u</math>, the first stage decision <math>x_1</math> (<math>x_1 in S_1</math>), and the second stage decision <math>x_2</math> (<math>x_2</math> in <math>S_2</math>). Then, we can formulate the problem as:</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div><math>\begin{align} \text{min}(x_1)\text{ max}(u)\text{ min}(x_2)\;&f(u, x_1, x_2)\\</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div><math>\begin{align} \text{min}(x_1)\text{ max}(u)\text{ min}(x_2)\;&f(u, x_1, x_2)\\</div></td></tr>
</table>Ralph Wanghttps://optimization.cbe.cornell.edu/index.php?title=Adaptive_robust_optimization&diff=1959&oldid=prevRalph Wang: Moved references at the end of the sentence to after the period.2020-11-30T18:58:12Z<p>Moved references at the end of the sentence to after the period.</p>
<a href="https://optimization.cbe.cornell.edu/index.php?title=Adaptive_robust_optimization&diff=1959&oldid=1735">Show changes</a>Ralph Wanghttps://optimization.cbe.cornell.edu/index.php?title=Adaptive_robust_optimization&diff=1735&oldid=prevRalph Wang: /* Algorithms and Methodology */ Minor tweaks and edits2020-11-24T16:01:49Z<p><span dir="auto"><span class="autocomment">Algorithms and Methodology: </span> Minor tweaks and edits</span></p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 12:01, 24 November 2020</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l91">Line 91:</td>
<td colspan="2" class="diff-lineno">Line 91:</td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>This leads to the natural question, under what conditions are ADRs optimal? Bertsimas and Goyal showed in 2012 that if both <math>A(u)</math> matrices are independent of <math>u</math>, <math>x_1</math> and <math>x_2</math> are restricted to vectors with nonnegative entries, and <math>b(u)</math> is restricted to be vectors with nonpositive entries, then ADRs are optimal if <math>b(u)</math> is restricted to a polyhedral set with the same number of vertices as <math>b(u)</math>'s dimension<ref>Bertsimas, D., Goyal, V. On the power and limitations of affine policies in two-stage adaptive optimization. <i>Math. Program.</i> 134, 491–531 (2012).</ref>. In a 2016 paper, Ben-Tal and colleagues noted that whenever the <math>A(u)</math> matrices are independent of <math>u</math>, then a <i>piecewise</i> ADR can be optimal, albeit one with a large number of pieces<ref>Ben-Tal, A., El Housni, O. & Goyal, V. A tractable approach for designing piecewise affine policies in two-stage adjustable robust optimization. <i>Math. Program.</i> 182, 57–102 (2020).</ref>. ADRs can be optimal in other, more specific cases, but these cases will not be discussed here<ref>Iancu, D.A., Parrilo, P.A.(2010). Optimality of Affine Policies in Multistage Robust Optimization. <i>Mathematics of Operations Research</i> 35(2):363-394</ref><ref>Dan A. Iancu, Mayank Sharma, Maxim Sviridenko (2013) Supermodularity and Affine Policies in Dynamic Robust Optimization. 
<i>Operations Research</i> 61(4):941-956</ref>.</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>This leads to the natural question, under what conditions are ADRs optimal? Bertsimas and Goyal showed in 2012 that if both <math>A(u)</math> matrices are independent of <math>u</math>, <math>x_1</math> and <math>x_2</math> are restricted to vectors with nonnegative entries, and <math>b(u)</math> is restricted to be vectors with nonpositive entries, then ADRs are optimal if <math>b(u)</math> is restricted to a polyhedral set with the same number of vertices as <math>b(u)</math>'s dimension<ref>Bertsimas, D., Goyal, V. On the power and limitations of affine policies in two-stage adaptive optimization. <i>Math. Program.</i> 134, 491–531 (2012).</ref>. In a 2016 paper, Ben-Tal and colleagues noted that whenever the <math>A(u)</math> matrices are independent of <math>u</math>, then a <i>piecewise</i> ADR can be optimal, albeit one with a large number of pieces<ref>Ben-Tal, A., El Housni, O. & Goyal, V. A tractable approach for designing piecewise affine policies in two-stage adjustable robust optimization. <i>Math. Program.</i> 182, 57–102 (2020).</ref>. ADRs can be optimal in other, more specific cases, but these cases will not be discussed here<ref>Iancu, D.A., Parrilo, P.A.(2010). Optimality of Affine Policies in Multistage Robust Optimization. <i>Mathematics of Operations Research</i> 35(2):363-394</ref><ref>Dan A. Iancu, Mayank Sharma, Maxim Sviridenko (2013) Supermodularity and Affine Policies in Dynamic Robust Optimization. <i>Operations Research</i> 61(4):941-956</ref>.</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div><del style="font-weight: bold; text-decoration: none;">Still</del>, ADRs are <del style="font-weight: bold; text-decoration: none;">usually </del>suboptimal. If the goal is to minimize the objective function, then <del style="font-weight: bold; text-decoration: none;">the true optimal objective cannot exceed the optimal objective generated under </del>ADRs<del style="font-weight: bold; text-decoration: none;">, that is, applying ADRs actually </del>gives an upper bound <del style="font-weight: bold; text-decoration: none;">to the optimal solution. Thus</del>, <del style="font-weight: bold; text-decoration: none;">one approach to characterize </del>the <del style="font-weight: bold; text-decoration: none;">suboptimality of ADRs is to generate a lower bound </del>to <del style="font-weight: bold; text-decoration: none;">the optimal solution, and the difference between the upper and </del>lower bounds <del style="font-weight: bold; text-decoration: none;">can be used to estimate suboptimality. Several approaches exist for doing so</del>. A simple approach is to approximate the uncertainty set using a small number of well-chosen points (“sampling” the uncertainty set), <del style="font-weight: bold; text-decoration: none;">and to </del>solve the model at each of these points. Since the true worst case scenario must be at least as bad as one of the selected points, this sampling approach must generate a solution no worse than the true optimal, or a lower bound to the objective<ref>M. J. Hadjiyiannis, P. J. Goulart and D. 
Kuhn, "A scenario approach for estimating the suboptimality of linear decision rules in two-stage robust optimization," 2011 50th IEEE Conference on Decision and Control and European Control Conference, Orlando, FL, 2011, pp. 7386-7391.</ref>. This method, although simple, <del style="font-weight: bold; text-decoration: none;">suffers from being </del>excessively optimistic unless a large number of points are sampled, but <del style="font-weight: bold; text-decoration: none;">trying to solve </del>the model at <del style="font-weight: bold; text-decoration: none;">a large number of </del>points can take a long time. Thus, authors have investigated methods for choosing fewer points that can better represent the whole uncertainty set to improve both the <del style="font-weight: bold; text-decoration: none;">optimality </del>and computation time <del style="font-weight: bold; text-decoration: none;">of </del>this method<ref>Ayoub, J., Poss, M. Decomposition for adjustable robust linear optimization subject to uncertainty polytope. <i>Comput Manag Sci</i> 13, 219–239 (2016).</ref>. Bertsimas and De Ruiter <del style="font-weight: bold; text-decoration: none;">improve on this method by </del>constructing the dual and sampling the dual<del style="font-weight: bold; text-decoration: none;">'s </del>uncertainty set <del style="font-weight: bold; text-decoration: none;">to improve </del>computation time <del style="font-weight: bold; text-decoration: none;">and approximation precision</del><ref>Bertsimas, D., de Ruiter, F. J. C. T. (2016). Duality in Two-Stage Adaptive Linear Optimization: Faster Computation and Stronger Bounds. <i>INFORMS Journal on Computing</i> 28(3):500-511.</ref>. 
</div></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;">In most cases, however</ins>, ADRs are suboptimal<ins style="font-weight: bold; text-decoration: none;">, and it becomes useful to characterize its degree of suboptimality. The most common approach is to generate upper and lower bounds on the optimal value of the objective function</ins>. If the goal is to minimize the objective function, then <ins style="font-weight: bold; text-decoration: none;">any valid solution (via </ins>ADRs <ins style="font-weight: bold; text-decoration: none;">or some other method) </ins>gives an upper bound, <ins style="font-weight: bold; text-decoration: none;">so </ins>the <ins style="font-weight: bold; text-decoration: none;">problem reduces </ins>to <ins style="font-weight: bold; text-decoration: none;">computing </ins>lower bounds. A simple approach <ins style="font-weight: bold; text-decoration: none;">to doing so </ins>is to approximate the uncertainty set using a small number of well-chosen points (“sampling” the uncertainty set), solve the model at each of these points<ins style="font-weight: bold; text-decoration: none;">, and find the worst case among these sampled solutions</ins>. Since the true worst case scenario must be at least as bad as one of the selected points, this sampling approach must generate a solution no worse than the true optimal, or a lower bound to the objective<ref>M. J. Hadjiyiannis, P. J. Goulart and D. Kuhn, "A scenario approach for estimating the suboptimality of linear decision rules in two-stage robust optimization," 2011 50th IEEE Conference on Decision and Control and European Control Conference, Orlando, FL, 2011, pp. 7386-7391.</ref>. 
This method, although simple, <ins style="font-weight: bold; text-decoration: none;">generates </ins>excessively optimistic <ins style="font-weight: bold; text-decoration: none;">lower bounds </ins>unless a large number of points are sampled, but <ins style="font-weight: bold; text-decoration: none;">solving </ins>the model at <ins style="font-weight: bold; text-decoration: none;">many such </ins>points can take a long time. Thus, authors have investigated methods for choosing fewer points that can better represent the whole uncertainty set to improve both the <ins style="font-weight: bold; text-decoration: none;">lower bound quality </ins>and computation time <ins style="font-weight: bold; text-decoration: none;">for </ins>this method<ref>Ayoub, J., Poss, M. Decomposition for adjustable robust linear optimization subject to uncertainty polytope. <i>Comput Manag Sci</i> 13, 219–239 (2016).</ref>. <ins style="font-weight: bold; text-decoration: none;">For example, </ins>Bertsimas and de Ruiter <ins style="font-weight: bold; text-decoration: none;">discovered that </ins>constructing the dual and sampling the dual uncertainty set <ins style="font-weight: bold; text-decoration: none;">gives better bounds and faster </ins>computation time<ref>Bertsimas, D., de Ruiter, F. J. C. T. (2016). Duality in Two-Stage Adaptive Linear Optimization: Faster Computation and Stronger Bounds. <i>INFORMS Journal on Computing</i> 28(3):500-511.</ref>. </div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>The other important assumption in the given solution methodology is the <i>fixed recourse condition</i>, that <math>A_2(u)</math> is fixed to some matrix <math>V</math>. If this does not hold, and <math>A_2(u)</math> is instead some affine function of <math>u</math>, then even under the ADR assumption, the problem is intractable<ref>Guslitser, E. (2002). Uncertainty-Immunized Solutions in Linear Programming (Master’s Thesis, Technion-Israel Institute of Technology).</ref>. However, Ben-Tal has proposed a tight approximation method for <del style="font-weight: bold; text-decoration: none;">when </del>the uncertainty set <math>U</math> is the intersection of ellipsoidal sets, an approximation that becomes exact if <math>U</math> is an ellipsoidal set<ref>Ben-Tal, A., Goryashko, A., Guslitzer, E. et al. Adjustable robust solutions of uncertain linear programs. <i>Math. Program.</i>, Ser. A 99, 351–376 (2004).</ref>.</div></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>The other important assumption in the given solution methodology is the <i>fixed recourse condition</i>, that <math>A_2(u)</math> is fixed to some matrix <math>V</math>. If this does not hold, and <math>A_2(u)</math> is instead some affine function of <math>u</math>, then even under the ADR assumption, the problem is intractable<ref>Guslitser, E. (2002). Uncertainty-Immunized Solutions in Linear Programming (Master’s Thesis, Technion-Israel Institute of Technology).</ref>. 
However, Ben-Tal has proposed a tight approximation method for <ins style="font-weight: bold; text-decoration: none;">cases where </ins>the uncertainty set <math>U</math> is the intersection of ellipsoidal sets, an approximation that becomes exact if <math>U</math> <ins style="font-weight: bold; text-decoration: none;">itself </ins>is an ellipsoidal set<ref>Ben-Tal, A., Goryashko, A., Guslitzer, E. et al. Adjustable robust solutions of uncertain linear programs. <i>Math. Program.</i>, Ser. A 99, 351–376 (2004).</ref>.</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>==Numerical Example==</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>==Numerical Example==</div></td></tr>
</table>Ralph Wang https://optimization.cbe.cornell.edu/index.php?title=Adaptive_robust_optimization&diff=1734&oldid=prev Ralph Wang: /* Algorithms and Methodology */ minor edits, added refs 2020-11-24T15:34:06Z<p><span dir="auto"><span class="autocomment">Algorithms and Methodology: </span> minor edits, added refs</span></p>
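The sampling lower bound discussed in the revision above can be sketched as a small worked example. The toy two-stage problem below (first-stage cost 1, recourse cost 2, demand <math>u \in [0,1]</math>) and the helper name <code>scenario_lower_bound</code> are illustrative assumptions, not taken from the article or the cited papers; SciPy's <code>linprog</code> stands in for a generic LP solver.

```python
# Sketch of the scenario ("sampling") lower bound for a toy two-stage
# adjustable robust LP with full adaptability at the sampled points:
#   min_{x1 >= 0}  max_{u in [0,1]}  min_{x2 >= 0}  x1 + 2*x2
#   s.t.  x1 + x2 >= u        (second stage covers remaining demand)
# Replacing the uncertainty set [0,1] by a finite sample {u_1,...,u_k}
# can only shrink the inner maximum, so the scenario optimum is a lower
# bound on the true worst-case optimum.
import numpy as np
from scipy.optimize import linprog

def scenario_lower_bound(samples):
    """Solve: min t  s.t. for each sampled u_i there is a recourse x2_i
    with x1 + 2*x2_i <= t and x1 + x2_i >= u_i; all variables >= 0."""
    k = len(samples)
    n = k + 2                        # variables: [x1, x2_1..x2_k, t]
    c = np.zeros(n); c[-1] = 1.0     # minimize t
    A_ub, b_ub = [], []
    for i, u in enumerate(samples):
        cost = np.zeros(n); cost[0] = 1.0; cost[1 + i] = 2.0; cost[-1] = -1.0
        A_ub.append(cost); b_ub.append(0.0)    # x1 + 2*x2_i - t <= 0
        dem = np.zeros(n); dem[0] = -1.0; dem[1 + i] = -1.0
        A_ub.append(dem); b_ub.append(-u)      # x1 + x2_i >= u_i
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, method="highs")
    return res.fun

# Too few samples give an optimistic bound; adding the true worst case
# u = 1 closes the gap for this toy instance.
print(scenario_lower_bound([0.0, 0.5]))       # 0.5, below the true optimum 1
print(scenario_lower_bound([0.0, 0.5, 1.0]))  # 1.0, the bound is now tight
```

The gap between this lower bound and the upper bound obtained from any feasible affine decision rule then estimates the rule's suboptimality, as described in the text.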
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 11:34, 24 November 2020</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l89">Line 89:</td>
<td colspan="2" class="diff-lineno">Line 89:</td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>The above solution method, although simple and tractable, suffers from potential suboptimality of ADR. Indeed, Ben-Tal motivates this assumption citing only the tractability of the result. In real-world scenarios, this suboptimality can be mitigated by using ADR to make the initial decision, then re-solving the problem after <math>u</math> is revealed. That is, if solving the L2ARO gives <math>x_1^*</math> as the optimal <math>x_1</math> and <math>x_2^*(u)</math> as the optimal <math>x_2(u)</math>, decision <math>x_1^*</math> is implemented immediately; when <math>u</math> is revealed (to be, say, <math>u^*</math>), decision <math>x_2</math> is decided not by computing <math>x_2^*(u^*)</math>, but by re-solving the whole problem fixing <math>x_1</math> to <math>x_1^*</math> and fixing <math>u</math> to <math>u^*</math>. This method reflects the wait-and-see nature of the decision <math>x_2</math>: ADR is used to find a pretty-good <math>x_1</math>, then <math>u</math> is revealed, then the information is used to solve for the optimal <math>x_2</math> in that circumstance<ref>Ben-Tal, A., Golany, B., Nemirovski, A., Vial, J. (2005). Retailer-Supplier Flexible Commitments Contracts: A Robust Optimization Approach. <i>Manufacturing and Service Operations Management</i> 7(3):248-271.</ref>. This iterative, stage-by-stage solution performs better than using only ADR, but is feasible only when there is enough time between stages to re-solve the problem. 
Further, numerical experiments indicate that classical robust optimization models yield equally good, if not better, initial decisions than ADR on L2ARO, limiting ADR on L2ARO to situations where the problem cannot be feasibly re-solved, or in the special cases where the ADR approximation actually generates the optimal solution<ref>Gorissen, B., Yanikoglu, I., den Hertog, D. (2015). A Practical Guide to Robust Optimization. <i>Omega</i> 53:124-137.</ref>.</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>The above solution method, although simple and tractable, suffers from potential suboptimality of ADR. Indeed, Ben-Tal motivates this assumption citing only the tractability of the result. In real-world scenarios, this suboptimality can be mitigated by using ADR to make the initial decision, then re-solving the problem after <math>u</math> is revealed. That is, if solving the L2ARO gives <math>x_1^*</math> as the optimal <math>x_1</math> and <math>x_2^*(u)</math> as the optimal <math>x_2(u)</math>, decision <math>x_1^*</math> is implemented immediately; when <math>u</math> is revealed (to be, say, <math>u^*</math>), decision <math>x_2</math> is decided not by computing <math>x_2^*(u^*)</math>, but by re-solving the whole problem fixing <math>x_1</math> to <math>x_1^*</math> and fixing <math>u</math> to <math>u^*</math>. This method reflects the wait-and-see nature of the decision <math>x_2</math>: ADR is used to find a pretty-good <math>x_1</math>, then <math>u</math> is revealed, then the information is used to solve for the optimal <math>x_2</math> in that circumstance<ref>Ben-Tal, A., Golany, B., Nemirovski, A., Vial, J. (2005). Retailer-Supplier Flexible Commitments Contracts: A Robust Optimization Approach. 
<i>Manufacturing and Service Operations Management</i> 7(3):248-271.</ref>. This iterative, stage-by-stage solution performs better than using only ADR, but is feasible only when there is enough time between stages to re-solve the problem. Further, numerical experiments indicate that classical robust optimization models yield equally good, if not better, initial decisions than ADR on L2ARO, limiting ADR on L2ARO to situations where the problem cannot be feasibly re-solved, or in the special cases where the ADR approximation actually generates the optimal solution<ref>Gorissen, B., Yanikoglu, I., den Hertog, D. (2015). A Practical Guide to Robust Optimization. <i>Omega</i> 53:124-137.</ref>.</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>This leads to the natural question: under what conditions are ADRs optimal? Bertsimas and Goyal showed in 2012 that if both <math>A(u)</math> matrices are independent of <math>u</math>, <math>x_1</math> and <math>x_2</math> are restricted to vectors with nonnegative entries, and <math>b(u)</math> is restricted to be vectors with nonpositive entries, then ADRs are optimal if <math>b(u)</math> is restricted to a polyhedral set with the same number of vertices as <math>b(u)</math>'s dimension<ref>Bertsimas, D., Goyal, V. On the power and limitations of affine policies in two-stage adaptive optimization. <i>Math. Program.</i> 134, 491–531 (2012).</ref>. In a 2016 paper, Ben-Tal and colleagues noted that <del style="font-weight: bold; text-decoration: none;">if the only restrictions were that </del>the <math>A(u)</math> matrices are independent of <math>u</math>, then <i>piecewise</i> <del style="font-weight: bold; text-decoration: none;">ADRs </del>can be optimal, <del style="font-weight: bold; text-decoration: none;">although there may be </del>a large number of pieces<ref>Ben-Tal, A., El Housni, O. & Goyal, V. A tractable approach for designing piecewise affine policies in two-stage adjustable robust optimization. <i>Math. Program.</i> 182, 57–102 (2020).</ref>. ADRs can be optimal in other, more specific cases, but these cases will not be discussed here.</div></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>This leads to the natural question: under what conditions are ADRs optimal? 
Bertsimas and Goyal showed in 2012 that if both <math>A(u)</math> matrices are independent of <math>u</math>, <math>x_1</math> and <math>x_2</math> are restricted to vectors with nonnegative entries, and <math>b(u)</math> is restricted to be vectors with nonpositive entries, then ADRs are optimal if <math>b(u)</math> is restricted to a polyhedral set with the same number of vertices as <math>b(u)</math>'s dimension<ref>Bertsimas, D., Goyal, V. On the power and limitations of affine policies in two-stage adaptive optimization. <i>Math. Program.</i> 134, 491–531 (2012).</ref>. In a 2016 paper, Ben-Tal and colleagues noted that <ins style="font-weight: bold; text-decoration: none;">whenever </ins>the <math>A(u)</math> matrices are independent of <math>u</math>, then <ins style="font-weight: bold; text-decoration: none;">a </ins><i>piecewise</i> <ins style="font-weight: bold; text-decoration: none;">ADR </ins>can be optimal, <ins style="font-weight: bold; text-decoration: none;">albeit one with </ins>a large number of pieces<ref>Ben-Tal, A., El Housni, O. & Goyal, V. A tractable approach for designing piecewise affine policies in two-stage adjustable robust optimization. <i>Math. Program.</i> 182, 57–102 (2020).</ref>. ADRs can be optimal in other, more specific cases, but these cases will not be discussed here<ins style="font-weight: bold; text-decoration: none;"><ref>Iancu, D.A., Parrilo, P.A. (2010). Optimality of Affine Policies in Multistage Robust Optimization. <i>Mathematics of Operations Research</i> 35(2):363-394.</ref><ref>Iancu, D.A., Sharma, M., Sviridenko, M. (2013). Supermodularity and Affine Policies in Dynamic Robust Optimization. <i>Operations Research</i> 61(4):941-956.</ref></ins>.</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Still, ADRs are usually suboptimal. If the goal is to minimize the objective function, then the true optimal objective cannot exceed the optimal objective generated under ADRs, that is, applying ADRs actually gives an upper bound to the optimal solution. Thus, one approach to characterize the suboptimality of ADRs is to generate a lower bound to the optimal solution, and the difference between the upper and lower bounds can be used to estimate suboptimality. Several approaches exist for doing so. A simple approach is to approximate the uncertainty set using a small number of well-chosen points (“sampling” the uncertainty set), and to solve the model at each of these points. Since the true worst case scenario must be at least as bad as one of the selected points, this sampling approach must generate a solution no worse than the true optimal, or a lower bound to the objective<ref>M. J. Hadjiyiannis, P. J. Goulart and D. Kuhn, "A scenario approach for estimating the suboptimality of linear decision rules in two-stage robust optimization," 2011 50th IEEE Conference on Decision and Control and European Control Conference, Orlando, FL, 2011, pp. 7386-7391.</ref>. This method, although simple, suffers from being excessively optimistic unless a large number of points are sampled, but trying to solve the model at a large number of points can take a long time. Thus, authors have investigated methods for choosing fewer points that can better represent the whole uncertainty set to improve both the optimality and computation time of this method<ref>Ayoub, J., Poss, M. Decomposition for adjustable robust linear optimization subject to uncertainty polytope. <i>Comput Manag Sci</i> 13, 219–239 (2016).</ref>. 
Bertsimas and de Ruiter improve on this method by constructing the dual and sampling the dual's uncertainty set to improve computation time and approximation precision<ref>Bertsimas, D., de Ruiter, F. J. C. T. (2016). Duality in Two-Stage Adaptive Linear Optimization: Faster Computation and Stronger Bounds. <i>INFORMS Journal on Computing</i> 28(3):500-511.</ref>. </div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Still, ADRs are usually suboptimal. If the goal is to minimize the objective function, then the true optimal objective cannot exceed the optimal objective generated under ADRs, that is, applying ADRs actually gives an upper bound to the optimal solution. Thus, one approach to characterize the suboptimality of ADRs is to generate a lower bound to the optimal solution, and the difference between the upper and lower bounds can be used to estimate suboptimality. Several approaches exist for doing so. A simple approach is to approximate the uncertainty set using a small number of well-chosen points (“sampling” the uncertainty set), and to solve the model at each of these points. Since the true worst case scenario must be at least as bad as one of the selected points, this sampling approach must generate a solution no worse than the true optimal, or a lower bound to the objective<ref>M. J. Hadjiyiannis, P. J. Goulart and D. Kuhn, "A scenario approach for estimating the suboptimality of linear decision rules in two-stage robust optimization," 2011 50th IEEE Conference on Decision and Control and European Control Conference, Orlando, FL, 2011, pp. 7386-7391.</ref>. This method, although simple, suffers from being excessively optimistic unless a large number of points are sampled, but trying to solve the model at a large number of points can take a long time. 
Thus, authors have investigated methods for choosing fewer points that can better represent the whole uncertainty set to improve both the optimality and computation time of this method<ref>Ayoub, J., Poss, M. Decomposition for adjustable robust linear optimization subject to uncertainty polytope. <i>Comput Manag Sci</i> 13, 219–239 (2016).</ref>. Bertsimas and de Ruiter improve on this method by constructing the dual and sampling the dual's uncertainty set to improve computation time and approximation precision<ref>Bertsimas, D., de Ruiter, F. J. C. T. (2016). Duality in Two-Stage Adaptive Linear Optimization: Faster Computation and Stronger Bounds. <i>INFORMS Journal on Computing</i> 28(3):500-511.</ref>. </div></td></tr>
</table>Ralph Wang
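The stage-by-stage re-solve scheme described in the revisions above (implement <math>x_1^*</math>, observe <math>u^*</math>, then re-solve with <math>x_1</math> fixed instead of evaluating a precomputed decision rule <math>x_2(u)</math>) can also be sketched. The toy instance and the helper name <code>second_stage_cost</code> are hypothetical, chosen only to illustrate the wait-and-see step; SciPy's <code>linprog</code> again stands in for a generic LP solver.

```python
# Sketch of the re-solve step on an assumed toy problem: the here-and-now
# decision x1 is already implemented, then once the uncertain demand u is
# revealed the second stage is re-solved exactly with x1 fixed:
#   min_{x2 >= 0}  2*x2   s.t.  x1 + x2 >= u
from scipy.optimize import linprog

def second_stage_cost(x1_fixed, u_revealed):
    """Re-solve the recourse problem after u is observed, with x1 fixed."""
    res = linprog(c=[2.0],                  # minimize 2*x2
                  A_ub=[[-1.0]],            # -x2 <= x1 - u, i.e. x2 >= u - x1
                  b_ub=[x1_fixed - u_revealed],
                  bounds=[(0, None)],
                  method="highs")
    return res.fun

# A conservative here-and-now choice x1 = 1 never needs recourse on [0,1];
# an aggressive x1 = 0.4 pays for recourse when the revealed demand is high.
print(second_stage_cost(1.0, 0.7))   # 0.0
print(second_stage_cost(0.4, 0.9))   # 1.0  (x2 = 0.5 at unit cost 2)
```

As the text notes, this exact recourse is at least as good as evaluating an affine rule <math>x_2^*(u^*)</math>, but it is only available when there is time to re-solve between stages.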