Adaptive robust optimization

From Cornell University Computational Optimization Open Textbook - Optimization Wiki

Revision as of 07:55, 22 November 2020

Author: Ralph Wang (ChemE 6800 Fall 2020)

Steward: Allen Yang, Fengqi You

Introduction

Adaptive Robust Optimization (ARO), also known as adjustable robust optimization, models situations where decision makers make two types of decisions: here-and-now decisions that must be made immediately, and wait-and-see decisions that can be made at some point in the future. ARO improves on the robust optimization framework by accounting for information the decision maker does not know now but may learn before making future decisions. In the real world, ARO is applicable whenever past decisions and new information together influence future decisions. Common applications include power systems control, inventory management, shift scheduling, and other resource allocation problems.

Compared to traditional robust optimization models, ARO gives less conservative and more realistic solutions; however, this improvement comes at the cost of computation time. Indeed, even the general linear ARO with linear uncertainty has been proven computationally intractable. Nevertheless, researchers have developed a wide variety of solution and approximation methods for specific industrial ARO problems over the past 15 years, and the field continues to grow rapidly.

Problem Formulation

Suppose, for an optimization problem of interest, <math>S</math> is the set of allowed decisions and <math>x</math> is a decision in <math>S</math>. Let <math>u</math> be a vector representing the set of parameters of interest in this problem. If the goal is to minimize some function <math>f(u, x)</math>, and we want to adhere to a set of constraints <math>g_i(u, x) \leq 0</math>, <math>i = 1, \dots, m</math>, then the problem may be formulated as:

<math>\begin{align}\min_{x \in S}\;&f(u, x)\\
\text{s.t.}\;&g_i(u, x) \leq 0, \; i = 1, \dots, m\end{align}</math>

Or more simply, stacking the constraints <math>g_i</math> into a single vector-valued function <math>g</math>:

<math>\begin{align}\min_{x \in S}\;&f(u, x)\\
\text{s.t.}\;&g(u, x) \leq 0\end{align}</math>

In this formulation, we call <math>f</math> the objective function and <math>g</math> the constraint function. If <math>u</math> is known, then the problem can be solved using methods such as branch and cut or the KKT conditions. However, in many real-world scenarios, <math>u</math> is not known. To address this uncertainty, the robust optimization approach generates the set of possible values of <math>u</math>, called the uncertainty set <math>U</math>, and solves for the decision <math>x</math> such that the constraint is satisfied in all cases and the objective is optimized for the worst case. The problem can be written as:

<math>\begin{align}\min_{x \in S}\;\max_{u \in U}\;&f(u, x)\\
\text{s.t.}\;&g(u, x) \leq 0 \;\text{ for all } u \in U\end{align}</math>
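For intuition, when the uncertainty set is finite the worst-case problem can be solved by brute force: keep only decisions that are feasible for every <math>u</math> in <math>U</math>, score each by its worst-case objective, and take the best. The following is a minimal sketch on an invented toy instance (the functions and numbers are illustrative, not from the article):

```python
def f(u, x):          # objective: cost grows with the uncertain parameter u
    return u * x[0] + x[1]

def g(u, x):          # constraint g(u, x) <= 0: capacity x[0]+x[1] must cover u
    return u - (x[0] + x[1])

U = [1.0, 1.5, 2.0]                                   # finite uncertainty set
candidates = [(a, b) for a in range(4) for b in range(4)]   # small decision set S

def robust_value(x):
    """Worst-case objective of x, or None if x is infeasible for some u in U."""
    if any(g(u, x) > 0 for u in U):
        return None
    return max(f(u, x) for u in U)

scored = [(robust_value(x), x) for x in candidates if robust_value(x) is not None]
best_val, best_x = min(scored)
print(best_x, best_val)   # → (0, 2) 2.0
```

Note that the robust optimum hedges against the worst <math>u</math> (here <math>u = 2</math>), not any single nominal value.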

Adaptive robust optimization expands this robust optimization framework by separating the decision <math>x</math> into multiple stages. For simplicity, assume there are two stages of decisions: in the first stage, only a subset of the decisions are made. After these decisions are made, the true values of the parameters <math>u</math> are revealed, and then the remaining decisions need to be made. The model is like a game of poker: the player makes initial bets based on incomplete information (the cards in his hand), then makes further bets as more cards are dealt. Mathematically, let the set of possible decisions in the first stage be <math>S_1</math> and the set of possible decisions in the second stage be <math>S_2</math>, so that the objective and constraint functions become functions of the parameters <math>u</math>, the first-stage decision <math>x_1 \in S_1</math>, and the second-stage decision <math>x_2 \in S_2</math>. Then, we can formulate the problem as:

<math>\begin{align}\min_{x_1 \in S_1}\;\max_{u \in U}\;\min_{x_2 \in S_2}\;&f(u, x_1, x_2)\\
\text{s.t.}\;&g(u, x_1, x_2) \leq 0 \;\text{ for all } u \in U\end{align}</math>

The reasoning used in this construction can be extended to multi-stage formulations.

In the literature, adaptive robust optimization problems are usually formulated differently but equivalently. Note that because <math>x_2</math> is selected only after the uncertain parameter <math>u</math> is revealed, <math>x_2</math> is a function of <math>u</math>. While the value <math>x_2(u)</math> depends on <math>u</math>, the function <math>x_2(\cdot)</math> itself does not, so the function can be decided before <math>u</math> is revealed. This allows us to rewrite the problem as:

<math>\begin{align}\min_{x_1,\, x_2(\cdot)}\;\max_{u \in U}\;&f(u, x_1, x_2(u))\\
\text{s.t.}\;&g(u, x_1, x_2(u)) \leq 0 \;\text{ for all } u \in U\end{align}</math>

And if we introduce a variable <math>t = \max_{u \in U} f(u, x_1, x_2(u))</math>, then we can rewrite the problem as:

<math>\begin{align}\min_{x_1,\, x_2(\cdot),\, t}\;&t\\
\text{s.t.}\;&f(u, x_1, x_2(u)) \leq t \;\text{ for all } u \in U\\
&g(u, x_1, x_2(u)) \leq 0 \;\text{ for all } u \in U\end{align}</math>

This removes <math>u</math> from the objective function. Since <math>x_1</math> represents all the variables decided immediately, <math>t</math> can be absorbed into <math>x_1</math>; similarly, the first constraint can be absorbed into the second. This yields the formulation most commonly seen in the literature (up to a change of variable names and of the functions <math>f</math> and <math>g</math>):

<math>\begin{align}\min_{x_1,\, x_2(\cdot)}\;&f(x_1)\\
\text{s.t.}\;&g(u, x_1, x_2(u)) \leq 0 \;\text{ for all } u \in U\end{align}</math>

Where <math>f(x_1)</math> is redefined to extract the component of <math>x_1</math> representing <math>t</math>.
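This reformulation treats the second-stage decision as a rule mapping each revealed <math>u</math> to an action, chosen before <math>u</math> is known. A standard way to make the search over functions tractable is to restrict <math>x_2(u)</math> to a parametric family, such as affine decision rules <math>x_2(u) = q + Qu</math>, and optimize over the parameters. Below is a minimal scalar sketch of this idea; the constraint, cost, and grid are invented for illustration and are not from the article:

```python
# Affine decision rule: instead of one number, the second-stage decision is
# the function u -> q + Q*u; we search over (q, Q) by a tiny grid search.
U = [0.0, 0.5, 1.0]                       # finite uncertainty set

def make_affine_rule(q, Q):
    return lambda u: q + Q * u

# Toy requirements: x2(u) >= u for every u (recourse must cover the
# realization), and we minimize the worst-case recourse max_u x2(u).
best = None
for q in [i / 4 for i in range(9)]:       # q in 0, 0.25, ..., 2.0
    for Q in [i / 4 for i in range(9)]:   # Q in 0, 0.25, ..., 2.0
        rule = make_affine_rule(q, Q)
        if all(rule(u) >= u for u in U):  # feasible for every u in U
            worst = max(rule(u) for u in U)
            if best is None or worst < best[0]:
                best = (worst, q, Q)

worst, q, Q = best
print(worst, q, Q)   # → 1.0 0.0 1.0
```

On this toy instance the grid search recovers the rule <math>x_2(u) = u</math>: buying exactly what the realization requires is worst-case optimal.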

For many problems of interest, the functions <math>f</math> and <math>g</math> vary linearly with <math>x_1</math> and <math>x_2</math>; that is, they are affine functions of <math>x_1</math> and <math>x_2</math>. In such cases, if <math>x_1</math> and <math>x_2</math> are treated as vectors, then we can write:

<math>f(x_1) = c^Tx_1</math>

Where <math>c</math> is some vector, and

<math>g(u, x_1, x_2) = A_1(u)x_1 + A_2(u)x_2(u) - b(u)</math>

Where the <math>A_i(u)</math>'s are matrices and <math>b(u)</math> is a vector, to give the linear, two-stage ARO (<i>L2ARO</i>):

<math>\begin{align}\min_{x_1,\, x_2(\cdot)}\;&c^Tx_1\\
\text{s.t.}\;&A_1(u)x_1 + A_2(u)x_2(u) \leq b(u) \;\text{ for all } u \in U\end{align}</math>

This L2ARO will be the primary focus of the Algorithms section.
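Before turning to specialized algorithms, it can help to see the min-max-min structure evaluated directly. When the uncertainty set and both decision sets are finite (or discretized), the two-stage problem can be solved by nested enumeration: for each first-stage choice, find the best recourse under each <math>u</math>, take the worst <math>u</math>, and pick the first-stage choice with the smallest worst case. A minimal sketch on an invented toy instance (units built now vs. bought later at a premium; all numbers are illustrative, not from the article):

```python
U  = [1, 2, 3]                  # finite uncertainty set (demand scenarios)
X1 = range(4)                   # first-stage choices: units built now
X2 = range(4)                   # second-stage (recourse) choices: units bought later

def cost(u, x1, x2):
    return 1.0 * x1 + 3.0 * x2  # recourse is three times as expensive

def feasible(u, x1, x2):
    return x1 + x2 >= u         # total supply must meet realized demand

def worst_case(x1):
    """max over u of (min over feasible x2); None if some u admits no recourse."""
    worst = 0.0
    for u in U:
        options = [cost(u, x1, x2) for x2 in X2 if feasible(u, x1, x2)]
        if not options:
            return None
        worst = max(worst, min(options))
    return worst

best_x1 = min((x1 for x1 in X1 if worst_case(x1) is not None), key=worst_case)
print(best_x1, worst_case(best_x1))   # → 3 3.0
```

Here the optimal first-stage decision builds for the worst scenario up front, because recourse is expensive; the algorithms discussed later avoid this exhaustive enumeration.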