Adaptive robust optimization


Author: Ralph Wang (ChemE 6800 Fall 2020)

Steward: Allen Yang, Fengqi You

Introduction

Adaptive Robust Optimization (ARO), also known as adjustable robust optimization, models situations where decision makers make two types of decisions: here-and-now decisions that must be made immediately, and wait-and-see decisions that can be made at some point in the future. ARO improves on the robust optimization framework by accounting for any information the decision maker does not know now but may learn before making future decisions. In the real world, ARO is applicable whenever past decisions and new information together influence future decisions. Common applications include power systems control, inventory management, shift scheduling, and other resource allocation problems.

Compared to traditional robust optimization models, ARO gives less conservative and more realistic solutions; however, this improvement comes at the cost of computation time. Indeed, even the general linear ARO with linear uncertainty has been proven computationally intractable. Nevertheless, researchers have developed a wide variety of solution and approximation methods for specific industrial ARO problems over the past 15 years, and the field continues to grow rapidly.

Problem Formulation

Suppose, for an optimization problem of interest, $X$ is the set of allowed decisions and $x$ is a decision in $X$. Let $u$ be a vector representing the set of parameters of interest in this problem. If the goal is to minimize some function $f(x, u)$, and we want to adhere to a set of constraints $g(x, u) \le 0$, then the problem may be formulated as:

$$\min_{x} \; f(x, u) \quad \text{subject to} \quad g(x, u) \le 0, \quad x \in X$$

Or more simply:

$$\min_{x \in X} \left\{ f(x, u) \;:\; g(x, u) \le 0 \right\}$$

In this formulation, we call $f$ the objective function and $g$ the constraint function. If $u$ is known, then the problem can be solved using methods such as branch and cut or the KKT conditions. However, in many real-world scenarios, $u$ is not known. To address this uncertainty, the robust optimization approach generates the set of possible values of $u$, called the uncertainty set $U$, and solves for the decision $x$ such that the constraint is satisfied in all cases and $f$ is optimized for the worst case. The problem can be written as:

$$\min_{x \in X} \left\{ \max_{u \in U} f(x, u) \;:\; g(x, u) \le 0 \;\; \forall u \in U \right\}$$
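
As a small, hypothetical illustration (the numbers are chosen here only for concreteness and do not come from a particular application), suppose a decision maker orders $x \ge 0$ units of a product at a unit cost of 1 and must meet an uncertain demand $u$ known only to lie in $U = [20, 60]$. The robust counterpart is

$$\min_{x \ge 0} \left\{ x \;:\; u - x \le 0 \;\; \forall u \in [20, 60] \right\}$$

Since the constraint must hold for the worst case $u = 60$, the robust solution is $x^* = 60$ with a guaranteed cost of 60, regardless of the demand that actually materializes.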

Adaptive robust optimization expands this robust optimization framework by separating the decision into multiple stages. For simplicity, assume there are two stages of decisions: in the first stage, only a subset of the decisions is made. After these decisions are made, the true values of the parameters are revealed, and then the remaining decisions are made. The model is like a game of poker: the player must make initial bets based on incomplete information (the cards in his hand), then makes further bets as more cards are dealt. Mathematically, let the set of possible decisions in the first stage be $X$ and the set of possible decisions in the second stage be $Y$, so that the objective and constraint functions become functions of the parameters $u$, the first-stage decision ($x$ in $X$), and the second-stage decision ($y$ in $Y$). Then, we can formulate the problem as:

$$\min_{x \in X} \; \max_{u \in U} \; \min_{y \in Y} \left\{ f(x, y, u) \;:\; g(x, y, u) \le 0 \right\}$$
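
Continuing the hypothetical inventory example, suppose that after the demand $u \in [20, 60]$ is revealed, the decision maker may place a second, more expensive order of $y \ge 0$ units at a unit cost of 3. The two-stage problem becomes

$$\min_{x \ge 0} \; \max_{u \in [20, 60]} \; \min_{y \ge 0} \left\{ x + 3y \;:\; x + y \ge u \right\}$$

The inner minimization gives $y^*(x, u) = \max(0, u - x)$, so the worst case is $u = 60$ and the optimal first-stage order is $x^* = 60$ with worst-case cost 60. If the second-stage unit cost were instead 0.5 (cheaper than ordering now), the optimum would be $x^* = 0$ with worst-case cost 30: the ability to wait and see changes the here-and-now decision.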

The reasoning used in this construction can be extended to multi-stage formulations.
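
For example, with $T$ decision stages, one common way to write the nested formulation (using stage-$t$ decision sets $X_t$ and uncertainty sets $U_t$, introduced here only for illustration) is

$$\min_{x_1 \in X_1} \; \max_{u_1 \in U_1} \; \min_{x_2 \in X_2} \; \max_{u_2 \in U_2} \cdots \min_{x_T \in X_T} \left\{ f(x_1, \ldots, x_T, u_1, \ldots, u_{T-1}) \;:\; g(x_1, \ldots, x_T, u_1, \ldots, u_{T-1}) \le 0 \right\}$$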

In the literature, adaptive robust optimization problems are usually formulated differently but equivalently. Note that because $y$ is selected only after the uncertain parameter $u$ is revealed, $y$ is a function of $u$. The function $y(\cdot)$ itself, however, is independent of $u$ (only its value $y(u)$ depends on $u$), so the function can be decided before $u$ is revealed. This allows us to rewrite the problem as:

$$\min_{x \in X, \; y(\cdot): U \to Y} \; \max_{u \in U} \left\{ f(x, y(u), u) \;:\; g(x, y(u), u) \le 0 \;\; \forall u \in U \right\}$$

If we introduce an auxiliary variable $t$, then we can rewrite the problem as:

$$\min_{x \in X, \; y(\cdot): U \to Y, \; t} \left\{ t \;:\; f(x, y(u), u) \le t, \;\; g(x, y(u), u) \le 0 \;\; \forall u \in U \right\}$$

This allows us to remove $f$ from the objective function. Since $x$ represents all of the variables to be decided immediately, $t$ can be collapsed into $x$; similarly, the first constraint can be collapsed into the second. This generates the formulation most commonly seen in the literature (up to a change of variable names and of the functions $f$ and $g$):

$$\min_{x \in X, \; y(\cdot): U \to Y} \left\{ f(x) \;:\; g(x, y(u), u) \le 0 \;\; \forall u \in U \right\}$$

Where $f$ was redefined to return the part of $x$ representing $t$.
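
To make this concrete with the hypothetical inventory example, collect the here-and-now variables into $x = (x_1, t)$, where $x_1$ is the initial order and $t$ bounds the total cost. The common literature form then reads

$$\min_{x, \; y(\cdot)} \left\{ t \;:\; x_1 + 3y(u) \le t, \;\; x_1 + y(u) \ge u, \;\; x_1 \ge 0, \;\; y(u) \ge 0 \;\; \forall u \in [20, 60] \right\}$$

which has the same optimal value (60) as the nested min-max-min version above.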

For many problems of interest, the functions $f$ and $g$ vary linearly with $x$ and $y$; that is, they are affine functions of $x$ and $y$. In such cases, if $x$ and $y$ are treated as vectors, then we can write:

$$f(x) = c^{\top} x$$

Where $c$ is some vector, and

$$g(x, y, u) = A(u)x + B(u)y - d(u)$$

Where $A(u)$ and $B(u)$ are matrices and $d(u)$ is a vector (each of which may depend on $u$), to give the linear, two-stage ARO (L2ARO):

$$\min_{x \in X, \; y(\cdot): U \to Y} \left\{ c^{\top} x \;:\; A(u)x + B(u)y(u) \le d(u) \;\; \forall u \in U \right\}$$
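
As an illustration (again using the hypothetical inventory numbers, with here-and-now vector $x = (x_1, t)^{\top}$ and objective vector $c = (0, 1)^{\top}$), one possible instantiation of the L2ARO data is

$$A(u) = \begin{pmatrix} 1 & -1 \\ -1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad B(u) = \begin{pmatrix} 3 \\ -1 \\ -1 \end{pmatrix}, \qquad d(u) = \begin{pmatrix} 0 \\ -u \\ 0 \end{pmatrix}$$

whose three rows encode $x_1 + 3y \le t$, $x_1 + y \ge u$, and $y \ge 0$; here only the right-hand side $d(u)$ depends on the uncertain demand.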

This L2ARO will be the primary focus of the Algorithms section.
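
To show the L2ARO structure in executable form, the following minimal Python sketch (not one of the specialized ARO algorithms discussed in that section) solves the hypothetical inventory example with scipy.optimize.linprog. It replaces the interval $U = [20, 60]$ with its two endpoints and introduces one copy of the second-stage variable per scenario, which turns the problem into an ordinary linear program; for this particular example the endpoints suffice, because feasible recourse for intermediate demands can be obtained by interpolating the endpoint recourse decisions.

import numpy as np
from scipy.optimize import linprog

# Hypothetical inventory example: first-stage order x1 at unit cost 1,
# second-stage order y(u) at unit cost 3 after demand u is revealed,
# with x1 + y(u) >= u required for every u in the scenario set.
scenarios = [20.0, 60.0]              # finite stand-in for U = [20, 60]
K = len(scenarios)

# Decision vector: [x1, t, y_1, ..., y_K]; minimize the worst-case cost bound t.
n = 2 + K
c = np.zeros(n)
c[1] = 1.0

A_ub, b_ub = [], []
for k, u in enumerate(scenarios):
    cost_row = np.zeros(n)            # x1 + 3*y_k - t <= 0
    cost_row[0], cost_row[1], cost_row[2 + k] = 1.0, -1.0, 3.0
    A_ub.append(cost_row)
    b_ub.append(0.0)

    demand_row = np.zeros(n)          # -x1 - y_k <= -u  (i.e. x1 + y_k >= u)
    demand_row[0], demand_row[2 + k] = -1.0, -1.0
    A_ub.append(demand_row)
    b_ub.append(-u)

bounds = [(0, None), (None, None)] + [(0, None)] * K   # x1, y_k >= 0; t free

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
print("first-stage order:", res.x[0])                  # expected 60.0
print("worst-case cost:  ", res.x[1])                  # expected 60.0

The reported first-stage order and worst-case cost (both 60) match the hand calculation above; replacing the second-stage unit cost of 3 with 0.5 reproduces the deferred-ordering solution ($x_1^* = 0$, worst-case cost 30).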