Adaptive robust optimization
Author: Ralph Wang (ChemE 6800 Fall 2020)
Steward: Allen Yang, Fengqi You
Introduction
Adaptive Robust Optimization (ARO), also known as adjustable robust optimization, models situations where decision makers make two types of decisions: here-and-now decisions that must be made immediately, and wait-and-see decisions that can be made at some point in the future. ARO improves on the robust optimization framework by accounting for any information the decision maker does not know now but may learn before making future decisions. In the real world, ARO is applicable whenever past decisions and new information together influence future decisions. Common applications include power systems control, inventory management, shift scheduling, and other resource allocation problems.
Compared to traditional robust optimization models, ARO gives less conservative and more realistic solutions; however, this improvement comes at the cost of computation time. Indeed, even the general linear ARO problem with linear uncertainty is provably computationally intractable. Nevertheless, researchers have developed a wide variety of solution and approximation methods for specific industrial ARO problems over the past 15 years, and the field continues to grow rapidly.
Problem Formulation
Suppose, for an optimization problem of interest, <math>S</math> is the set of allowed decisions and <math>x</math> is a decision in <math>S</math>. Let <math>u</math> be a vector representing the set of parameters of interest in this problem. If the goal is to minimize some function <math>f(u, x)</math>, and we want to adhere to a set of constraints <math>g(u, x) \leq 0</math>, then the problem may be formulated as:

<math>\begin{align}\text{minimize, choosing } x\text{: } \; & f(u, x)\\
\text{under constraints: } \; & g(u, x) \leq 0\end{align}</math>
Or more simply:

<math>\begin{align}\min_x \; & f(u, x)\\
\text{s.t. } \; & g(u, x) \leq 0\end{align}</math>
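As a concrete illustration, the minimal sketch below solves a problem of this form numerically for a fixed, known <math>u</math>. The particular <math>f</math>, <math>g</math>, and parameter values are illustrative assumptions, not taken from the article or any specific application.

<syntaxhighlight lang="python">
# Minimal sketch: solve the nominal problem min_x f(u, x) s.t. g(u, x) <= 0
# for a fixed, known u. The functions and numbers below are illustrative only.
import numpy as np
from scipy.optimize import minimize

u = np.array([2.0, 1.0, 2.5])   # assumed known parameter vector (illustrative)

def f(u, x):
    # objective: squared distance from x to the target point (u[0], u[1])
    return (x[0] - u[0])**2 + (x[1] - u[1])**2

def g(u, x):
    # constraint x[0] + x[1] <= u[2], written in the form g(u, x) <= 0
    return x[0] + x[1] - u[2]

# SciPy expects inequality constraints as c(x) >= 0, so negate g.
result = minimize(lambda x: f(u, x), x0=np.zeros(2),
                  constraints=[{"type": "ineq", "fun": lambda x: -g(u, x)}])
print(result.x, result.fun)   # optimal decision and objective value
</syntaxhighlight>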
In this formulation, we call <math>f</math> the objective function and <math>g</math> the constraint function. If <math>u</math> is known, then the problem can be solved using methods such as branch and cut or the KKT conditions. However, in many real-world scenarios, <math>u</math> is not known. To address this uncertainty, the robust optimization approach generates the set of possible values of <math>u</math>, called the uncertainty set <math>U</math>, and solves for the decision <math>x</math> such that the constraint <math>g</math> is satisfied in all cases and <math>f</math> is optimized for the worst case. The problem can be written as:

<math>\begin{align}\min_x \max_{u \in U} \; & f(u, x)\\
\text{s.t. } \; & g(u, x) \leq 0 \;\; \text{for all } u \in U\end{align}</math>
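When <math>U</math> is a small finite set, one way to make this min-max structure concrete is brute-force enumeration: keep only the decisions that are feasible for every scenario in <math>U</math>, then pick the one with the smallest worst-case objective. The sketch below does exactly that; the scenarios, the decision grid, and the functions are all made-up assumptions for illustration.

<syntaxhighlight lang="python">
# Minimal sketch: robust problem min_x max_{u in U} f(u, x)
# s.t. g(u, x) <= 0 for all u in U, solved by brute-force search
# over a grid of candidate decisions. Scenarios and functions are illustrative.
import itertools
import numpy as np

U = [np.array([2.0, 1.0, 2.5]),   # a small finite uncertainty set (three scenarios)
     np.array([1.5, 1.5, 2.0]),
     np.array([2.5, 0.5, 3.0])]

def f(u, x):
    return (x[0] - u[0])**2 + (x[1] - u[1])**2

def g(u, x):
    return x[0] + x[1] - u[2]

grid = np.linspace(-1.0, 3.0, 81)          # candidate values for each decision variable
best_x, best_val = None, np.inf
for x in itertools.product(grid, grid):
    if all(g(u, x) <= 0 for u in U):       # feasible in every scenario
        worst = max(f(u, x) for u in U)    # worst-case objective over U
        if worst < best_val:
            best_x, best_val = x, worst
print(best_x, best_val)
</syntaxhighlight>

In practice, robust counterparts are reformulated and passed to an LP or MIP solver rather than enumerated; the loop above only makes the worst-case logic explicit.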
Adaptive robust optimization expands this robust optimization framework by separating the decision <math>x</math> into multiple stages. For simplicity, assume there are two stages of decisions: in the first stage, only a subset of the decisions is made. After these decisions are made, the true values of the parameters <math>u</math> are revealed, and then the remaining decisions must be made. The model is like a game of poker: the player makes initial bets based on incomplete information (the cards in their hand), then makes further bets as more cards are dealt. Mathematically, let the set of possible decisions in the first stage be <math>S_1</math> and the set of possible decisions in the second stage be <math>S_2</math>, so that the objective and constraint functions become functions of the parameters <math>u</math>, the first-stage decision <math>x_1 \in S_1</math>, and the second-stage decision <math>x_2 \in S_2</math>. Then, we can formulate the problem as:
<math>\begin{align}\min_{x_1} \max_{u \in U} \min_{x_2} \; & f(u, x_1, x_2)\\
\text{s.t. } \; & g(u, x_1, x_2) \leq 0 \;\; \text{for all } u \in U\end{align}</math>
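When <math>S_1</math>, <math>S_2</math>, and <math>U</math> are all small finite sets, this nested min-max-min can be evaluated directly: for each candidate first-stage decision, compute the best feasible recourse under every scenario and keep the worst-case cost. The sketch below is a minimal illustration under assumed sets and cost/constraint functions (none of which come from the article); it is only meant to make the two-stage structure concrete, not to serve as a practical solution method.

<syntaxhighlight lang="python">
# Minimal sketch: two-stage problem min_{x1} max_{u in U} min_{x2} f(u, x1, x2)
# s.t. g(u, x1, x2) <= 0, evaluated by exhaustive enumeration over small finite sets.
# All sets and functions below are illustrative assumptions.
import numpy as np

S1 = [0.0, 0.5, 1.0, 1.5, 2.0]      # candidate first-stage (here-and-now) decisions
S2 = np.linspace(0.0, 3.0, 61)      # candidate second-stage (wait-and-see) decisions
U  = [1.0, 1.5, 2.5]                # finite uncertainty set

def f(u, x1, x2):
    return x1 + 2.0 * x2            # e.g., cheap first-stage capacity, expensive recourse

def g(u, x1, x2):
    return u - (x1 + x2)            # demand u must be covered: x1 + x2 >= u, i.e. g <= 0

def worst_case_cost(x1):
    # For a fixed x1, the adversary picks u after x1 is chosen, and the best
    # feasible recourse x2 is chosen only after u is revealed.
    costs = []
    for u in U:
        feasible = [f(u, x1, x2) for x2 in S2 if g(u, x1, x2) <= 0]
        costs.append(min(feasible) if feasible else np.inf)
    return max(costs)

best_x1 = min(S1, key=worst_case_cost)
print(best_x1, worst_case_cost(best_x1))
</syntaxhighlight>

The enumeration grows quickly with the sizes of the decision and uncertainty sets, which reflects the intractability of general ARO noted in the introduction.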