Adaptive robust optimization


Author: Ralph Wang (ChemE 6800 Fall 2020)

Steward: Allen Yang, Fengqi You

Introduction

Adaptive Robust Optimization (ARO), also known as adjustable robust optimization, models situations where decision makers make two types of decisions: here-and-now decisions that must be made immediately, and wait-and-see decisions that can be made at some point in the future[ref Ben-Tal 2004]. ARO improves on the robust optimization framework by accounting for information the decision maker does not know now but may learn before making future decisions. In the real world, ARO is applicable whenever past decisions and new information together influence future decisions. Common applications include power systems control, inventory management, shift scheduling, and other resource allocation problems[ref various papers].

Compared to traditional robust optimization models, ARO gives less conservative and more realistic solutions; however, this improvement comes at the cost of computation time[ref basically any paper]. Indeed, even the general linear ARO with linear uncertainty has been proven computationally intractable[ref Ben-Tal 2004]. Nevertheless, researchers have developed a wide variety of solution and approximation methods for specific industrial ARO problems over the past 15 years, and the field continues to grow rapidly[ref various papers].

Problem Formulation

Suppose, for an optimization problem of interest, $X$ is the set of allowed decisions and $x$ is a decision in $X$. Let $u$ be a vector representing the set of parameters of interest in this problem. If the goal is to minimize some function $f(x, u)$ while adhering to a set of constraints $g(x, u) \le 0$, then the problem may be formulated as:

$$\min_{x \in X} \; f(x, u) \quad \text{s.t.} \quad g(x, u) \le 0$$

Or more simply:

$$\min_{x \in X} \; \{\, f(x, u) : g(x, u) \le 0 \,\}$$

In this formulation, we call $f$ the objective function and $g$ the constraint function. If $u$ is known, then the problem can be solved using methods such as branch and cut or the KKT conditions. However, in many real-world scenarios, $u$ is not known. To address this uncertainty, the robust optimization approach specifies the set of possible values of $u$, called the uncertainty set $U$, and solves for the decision $x$ such that the constraint is satisfied in all cases and the objective is optimized for the worst case. The problem can be written as:

$$\min_{x \in X} \; \max_{u \in U} \; \{\, f(x, u) : g(x, u) \le 0 \;\; \forall u \in U \,\}$$

Adaptive robust optimization expands this robust optimization framework by separating the decision into multiple stages. For simplicity, assume there are two stages of decisions: in the first stage, only a subset of the decisions is made. After these decisions are made, the true values of the parameters are revealed, and then the remaining decisions are made. The model is like a game of poker: the player makes initial bets based on incomplete information (the cards in his hand), then makes further bets as more cards are dealt. Mathematically, let the set of possible decisions in the first stage be $X$ and the set of possible decisions in the second stage be $Y$, so that the objective and constraint functions become functions of the parameters $u$, the first-stage decision $x \in X$, and the second-stage decision $y \in Y$. Then, we can formulate the problem as:

$$\min_{x \in X} \; \max_{u \in U} \; \min_{y \in Y} \; \{\, f(x, y, u) : g(x, y, u) \le 0 \,\}$$

The reasoning used in this construction can be extended to multi-stage formulations.
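As a small illustration of the difference between the static robust and adaptive formulations (a toy instance constructed for this page, not taken from the references), suppose all decisions are wait-and-see, with $y_1, y_2 \ge 0$ and uncertain $u \in [0, 1]$:

$$\text{static RO:} \quad \min_{y_1, y_2 \ge 0} \; y_1 + y_2 \quad \text{s.t.} \quad y_1 \ge u, \; y_2 \ge 1 - u \;\; \forall u \in [0, 1] \;\; \Rightarrow \;\; y_1 = y_2 = 1, \; \text{cost } 2$$

$$\text{ARO:} \quad \max_{u \in [0, 1]} \; \min_{y_1, y_2 \ge 0} \; \{\, y_1 + y_2 : y_1 \ge u, \; y_2 \ge 1 - u \,\} \;\; \Rightarrow \;\; y_1(u) = u, \; y_2(u) = 1 - u, \; \text{worst-case cost } 1$$

Because the wait-and-see decisions can react to the realized $u$, the adaptive model protects against the same uncertainty at half the worst-case cost; this is the sense in which ARO is less conservative than static robust optimization.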

In the literature, adaptive robust optimization problems are usually formulated differently but equivalently. Note that because $y$ is selected only after the uncertain parameter $u$ is revealed, $y$ is a function of $u$. The mapping $u \mapsto y(u)$ itself, however, does not depend on which $u$ is realized, so this function can be decided before $u$ is revealed. This allows us to rewrite the problem as:

$$\min_{x \in X, \; y(\cdot)} \; \max_{u \in U} \; \{\, f(x, y(u), u) : g(x, y(u), u) \le 0 \;\; \forall u \in U \,\}$$

And if we introduce an auxiliary variable $t$, then we can rewrite the problem as:

$$\min_{x \in X, \; y(\cdot), \; t} \; t \quad \text{s.t.} \quad f(x, y(u), u) \le t, \;\; g(x, y(u), u) \le 0 \quad \forall u \in U$$

This removes $f$ from the objective function. Since $t$, like $x$, must be decided immediately, $t$ can be collapsed into $x$; similarly, the first constraint can be collapsed into the second. This generates the formulation most commonly seen in the literature (up to a change of variable names and of the functions $f$ and $g$):

$$\min_{x \in X, \; y(\cdot)} \; f(x) \quad \text{s.t.} \quad g(x, y(u), u) \le 0 \quad \forall u \in U$$

Where $f$ has been redefined to pick out the component of $x$ representing $t$.

For many problems of interest, the functions $f$ and $g$ vary linearly with $x$ and $y$, that is, they are affine functions of $x$ and $y$[ref various papers]. In such cases, if $x$ and $y$ are treated as vectors, then we can write:

$$f(x) = c^{T} x$$

Where $c$ is some vector, and

$$g(x, y, u) = A(u)\, x + B(u)\, y - b(u)$$

Where $A(u)$ and $B(u)$ are matrices and $b(u)$ is a vector, each possibly depending on $u$, to give the linear, two-stage ARO (L2ARO):

$$\min_{x \in X, \; y(\cdot)} \; c^{T} x \quad \text{s.t.} \quad A(u)\, x + B(u)\, y(u) \le b(u) \quad \forall u \in U$$

This L2ARO will be the primary focus of the Algorithms section.

Algorithms and Methodology

General ARO problems are computationally intractable[ref basically any paper]. Taking the L2ARO as an example, deriving the optimal policy $y(\cdot)$ poses a tremendous challenge for many choices of the uncertainty set $U$. If $U$ is large or infinite, or is non-convex, deciding what $y(u)$ should be for each $u$ in $U$ may take a long time. In real-world applications, then, the uncertainty set must be chosen carefully: it should include a representative set of possible parameter values, but it must not be so large or complex that it renders the problem intractable. The L2ARO model has been proven tractable only for simple uncertainty sets or with restrictions imposed on the policy $y(\cdot)$[ref Ben-Tal 2004, other extensions of the work]. Therefore, ARO problems are usually solved on a case-by-case basis, using methods such as multi-level optimization, branch-and-cut, and decomposition[ref various papers]. This section will first present the L2ARO solution method using the affine decision rule approximation under fixed recourse conditions from Ben-Tal's 2004 paper[ref], then discuss how this method might be extended to other L2ARO problems.

General L2ARO problems were first proven intractable by Guslitser in his master's thesis[ref Guslitser master's thesis]. Ben-Tal took this result and suggested simplifying the problem by restricting $y(u)$ to be an affine function of $u$, that is,

$$y(u) = w + W u$$

Where $w$ is a vector and $W$ is a matrix, both of which are decision variables. This simplification is known as the affine decision rule (ADR). To further simplify the problem, Ben-Tal proposed that the matrix $B(u)$ be fixed to some constant matrix $B$ (the fixed recourse condition), and that $A(u)$ and $b(u)$ be affine functions of $u$:

$$A(u) = A_0 + \sum_{i} u_i A_i, \qquad b(u) = b_0 + \sum_{i} u_i b_i$$

Where $b_0$ and the $b_i$ are fixed vectors and $A_0$ and the $A_i$ are fixed matrices. Then, the overall problem can be rewritten as:

$$\min_{x, \, w, \, W} \; c^{T} x \quad \text{s.t.} \quad A(u)\, x + B\,(w + W u) \le b(u) \quad \forall u \in U$$

Now, both the objective function and the constraint function are affine in the decision variables $x$, $w$, and $W$, and affine in $u$, so the problem has been reduced to a robust linear program, for which solution methods already exist.
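To make this reduction concrete, the following Python sketch solves a small, hypothetical two-stage inventory-style instance under the ADR and fixed recourse assumptions; all of the numbers (unit costs, demand interval) are invented for illustration and are not taken from the references. Because every uncertain constraint is affine in $u$, enforcing it for all $u$ in an interval is equivalent to enforcing it at the interval's endpoints, so the robust linear program collapses to an ordinary linear program solved with scipy.optimize.linprog.

# A minimal sketch, not production code.  Decision vector z = [q1, t, w, W]:
#   q1   -- first-stage order quantity (here-and-now)
#   t    -- epigraph variable for the worst-case total cost (here-and-now)
#   w, W -- coefficients of the affine decision rule q2(u) = w + W*u (chosen now)
# Second stage: after demand u is revealed, order q2(u) at the higher unit cost.
import numpy as np
from scipy.optimize import linprog

C1, C2 = 1.0, 3.0        # assumed first- and second-stage unit costs
U_LO, U_HI = 0.0, 10.0   # assumed interval uncertainty set for the demand u

c = np.array([0.0, 1.0, 0.0, 0.0])   # minimize the worst-case cost t

A_ub, b_ub = [], []
for u in (U_LO, U_HI):                        # endpoints (vertices) of U
    A_ub.append([-1.0, 0.0, -1.0, -u]);  b_ub.append(-u)    # q1 + w + W*u >= u   (meet demand)
    A_ub.append([C1, -1.0, C2, C2 * u]); b_ub.append(0.0)   # C1*q1 + C2*(w + W*u) <= t
    A_ub.append([0.0, 0.0, -1.0, -u]);   b_ub.append(0.0)   # w + W*u >= 0        (no negative orders)

bounds = [(0, None), (0, None), (None, None), (None, None)]  # q1, t >= 0; w, W free
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)

q1, t, w, W = res.x
print(f"worst-case cost t = {t:.2f}, first-stage order q1 = {q1:.2f}")
print(f"affine decision rule: q2(u) = {w:.2f} + {W:.2f}*u")

Under these assumed costs the second-stage price is high enough that the optimal plan orders everything up front ($q_1 = 10$, $q_2(u) = 0$), so the affine rule happens to be exact here; in general the ADR only provides an upper bound on the true two-stage optimum.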

The above solution method, although simple and tractable, suffers from the potential suboptimality of the ADR. Indeed, Ben-Tal motivates this assumption by citing only the tractability of the result. In real-world scenarios, this suboptimality can be mitigated by using the ADR only to make the initial decision and then re-solving the problem after $u$ is revealed. That is, if solving the L2ARO gives $x^*$ as the optimal $x$ and $(w^*, W^*)$ as the optimal decision rule, the decision $x^*$ is implemented immediately; when $u$ is revealed (to be, say, $u^*$), the decision $y$ is determined not by computing $w^* + W^* u^*$, but by re-solving the whole problem with $x$ fixed to $x^*$ and $u$ fixed to $u^*$. This method reflects the wait-and-see nature of the decision: the ADR is used to find a good $x$, then $u$ is revealed, and the new information is used to solve for the optimal $y$ in that circumstance[ref that paper]. This iterative, stage-by-stage solution performs better than using the ADR alone, but it is feasible only when there is enough time between stages to re-solve the problem. Further, numerical experiments indicate that classical robust optimization models yield initial decisions that are as good as, if not better than, those from the ADR on the L2ARO, limiting the ADR to situations where the problem cannot feasibly be re-solved, or to the special cases where the ADR approximation actually generates the optimal solution[ref that paper].

This leads to a natural question: under what conditions are ADRs optimal? Bertsimas and Goyal showed in 2012 that if both constraint matrices are independent of $u$, the decisions $x$ and $y$ are restricted to vectors with nonnegative entries, and the remaining problem data is suitably sign-restricted, then ADRs are optimal when $U$ is restricted to a polyhedral set with the same number of vertices as the dimension of $u$ (a simplex)[ref B+G 2012]. In a 2016 paper, Ben-Tal and colleagues noted that if the only restriction is that the constraint matrices are independent of $u$, then piecewise affine decision rules can be optimal, although the number of pieces may be large[ref Ben-Tal et al 2016]. ADRs can be optimal in other, more specific cases, but these cases will not be discussed here.

Still, ADRs are usually suboptimal. If the goal is to minimize the objective function, then the true optimal objective cannot exceed the optimal objective obtained under an ADR; that is, applying an ADR gives an upper bound on the optimal value. Thus, one approach to characterizing the suboptimality of ADRs is to generate a lower bound on the optimal value, and the difference between the upper and lower bounds then bounds the suboptimality. Several approaches exist for doing so. A simple approach is to approximate the uncertainty set by a small number of well-chosen points ("sampling" the uncertainty set) and to solve the model over this finite sample. Since the true worst case must be at least as bad as the worst of the selected points, this sampling approach yields an objective value no greater than the true robust optimum, i.e., a lower bound. This method, although simple, is excessively optimistic unless a large number of points are sampled, yet solving the model over many points can take a long time. Authors have therefore investigated methods for choosing fewer points that better represent the whole uncertainty set, improving both the tightness and the computation time of this method. Other methods extend duality theory to the L2ARO problem to find tighter lower bounds, with varying degrees of complexity and tightness.
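Formally, if $\hat U = \{u_1, \dots, u_k\} \subseteq U$ denotes the finite sample, then replacing $U$ by $\hat U$ only removes constraints, which gives (in the notation of this page) the sandwich

$$\min_{x, \; y(\cdot)} \; \{\, c^{T} x : A(u)\, x + B(u)\, y(u) \le b(u) \;\; \forall u \in \hat U \,\} \;\; \le \;\; z^{*}_{\text{L2ARO}} \;\; \le \;\; z^{*}_{\text{ADR}},$$

where $z^{*}_{\text{L2ARO}}$ is the true two-stage optimum and $z^{*}_{\text{ADR}}$ is the value obtained with the affine decision rule. Because $\hat U$ is finite, the left-hand problem needs only one copy of $y$ per sampled scenario and is an ordinary deterministic program; the gap between the two outer values bounds the suboptimality of the ADR.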

The other important assumption in the given solution methodology is the fixed recourse condition, that is, that $B(u)$ is fixed to some constant matrix $B$. If this is not true, and $B(u)$ is instead an affine function of $u$, then even under the ADR assumption the problem is intractable[ref Guslitser]. However, Ben-Tal has proposed a tight approximation method for the case where the uncertainty set is the intersection of ellipsoidal sets, an approximation that becomes exact if $U$ is a single ellipsoidal set.

Numerical Example

Consider a simple inventory management problem over two business periods involving one product that loses all value at the end of the second period. Let the storage cost of every unused unit of product be $h$ per business period. Let the unit price of the product be $p_1$ in the first business period and $p_2$ in the second period, with $p_2 > p_1 + h$. Let the demand in each period be uncertain, but given the following information:

  1. Both demands are between $d_{\min}$ and $d_{\max}$ units.
  2. The total demand over the two periods is 150 units.

The problem is that the manager must decide the quantity of the product to purchase at the start of each business period so as to minimize storage and purchasing costs. If we denote the demand in the first business period by $d$ (so the demand in the second period is $150 - d$) and the quantity purchased at the start of the $i$th period by $q_i$, then we can formulate this as an L2ARO as follows:

The uncertain parameter is $d$, and the uncertainty set is the closed interval from $d_{\min}$ to $d_{\max}$. The first-stage decisions are $q_1$ and the auxiliary worst-case-cost variable; the second-stage decision is $q_2$. Rewriting $q_2$ as a function of $d$ and rearranging into matrix form:

Applying the affine decision rule $q_2(d) = w + W d$, and noting that, since $q_2$ and $d$ are scalars, $w$ and $W$ are $1 \times 1$ matrices, gives:

Which rearranges to:

Which is a robust linear program. A well-known result in robust linear programming states that when the uncertainty set places only linear restrictions on the uncertain parameter, it suffices to consider only the extreme points of the uncertainty set[citation needed]; intuitively, once the decision variables are fixed, each constraint is affine in $d$, so its worst case over an interval occurs at an endpoint. In this case, it suffices to check the constraints only for $d = d_{\min}$ and $d = d_{\max}$. Writing down the constraints for both values gives a deterministic linear program:

Which can be solved using the simplex algorithm. The solution is to set $q_1 = 150$ and $q_2(d) = 0$, which corresponds to buying all 150 demand units for the two periods at the start of the first period. This solution makes intuitive sense: buying at the start of the second period costs $p_2 - p_1$ more per unit, while storing an extra unit from the first period costs only $h$, so in any case the price increase outweighs the storage cost. This is indeed the optimal solution to the problem.
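This intuition can be checked symbolically: shifting $\Delta > 0$ units of the purchase from the first period to the second saves the storage cost $h\,\Delta$ but pays the price difference $(p_2 - p_1)\,\Delta$, changing the total cost by

$$(p_2 - p_1 - h)\,\Delta \; > \; 0,$$

so no plan that defers part of the purchase can beat $q_1 = 150$, $q_2(d) = 0$. (This check uses the cost structure described above; the article's specific numerical values are not reproduced here.)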

Note that the ADR approximation actually found the optimal solution in this case. This is not surprising, since the optimal solution requires $q_2$ to be zero in all cases, and the constant zero is (trivially) an affine function of the demand.

Applications

Applications of adaptive robust optimization typically involve multi-stage allocation of resources under uncertain supply or demand, including problems in energy systems, inventory management, and shift scheduling.

Energy Systems

Energy systems aim to meet energy demand while minimizing costs. An energy system may involve multiple units that can each be turned on or off, with corresponding startup, shutdown, and operating costs. A coal plant may be expensive to start up and shut down but cheap to operate, for example, while a solar farm may be easier to start up and shut down but more difficult to maintain. Let each day be partitioned into different blocks of time, and suppose the problem is to determine the optimal combination of units to run during each time block on a given day. The unit combination for the first block of time must be decided immediately, but decisions for the subsequent time blocks can wait until after learning the energy demand in preceding time blocks. However, past decisions influence future decisions: starting up the coal plant in the first time block allows it to produce cheap energy all day, potentially reducing reliance on other energy sources. Such a decision structure, where past decisions and new information guide future decisions, lends itself naturally to an ARO model. The decision is the combination of units to run in each time block, the uncertainty set is the set of possible energy demands for each time block, the constraint is that the power produced meets demand in each time block, and the objective is to minimize the total startup, shutdown, and operating costs for the day. For more detailed treatments of ARO applied to energy systems, we refer the reader to [insert sources here].

Inventory Management

The inventory management problem seeks to purchase goods at regular intervals and store them such that there is always enough product on hand to satisfy demand, while minimizing purchase and storage costs. Purchasing large amounts when prices are low saves on purchasing costs later, but incurs large storage costs. At the other extreme, keeping too little inventory risks running out of stock or requiring large purchases at inconvenient times. If an inventory planner wants to plan purchases for the next several time blocks (for simplicity, assuming purchases take place only at the start of each time block), then he must immediately decide how much to purchase for the first time block, and then use each past time block's prices and demands to decide how much to buy in future time blocks. As in the case of energy systems, the inventory management problem has a staggered decision structure in which past decisions and new information inform future decisions, and it translates naturally into an ARO model. The decisions are the quantities of product to purchase at the start of each time block, the uncertainty set is the set of possible prices and demands for the time blocks, the constraint is that the inventory has enough stock to meet demand in every time block, and the objective is to minimize purchasing and storage costs. For more detailed analyses of the inventory management problem, we refer readers to [insert sources].

Shift Scheduling

The shift scheduling problem involves choosing a shift for each employee such that the operation center has enough staff at all times. For example, a customer service line would ideally like to predict the frequency and length of calls at every hour of the day so that it employs exactly enough operators to handle the calls; however, call volume is hard to predict, and call centers often end up overstaffed or understaffed, so the company either overspends on wages or delivers unsatisfactory customer service. Mattia and colleagues note that consecutive periods of high call volume are not independent, so past call volumes help predict future call volumes, and they therefore formulated the shift scheduling problem as a two-stage ARO. In the first stage (over the weekend), employees' shifts are laid out for the workweek; in the second stage (during the week), employees are allocated to different jobs in the office. The uncertainty set is the set of possible call volume distributions through the week (represented as deviations in the number of staff needed for handling all other jobs), the constraint is that enough staff is present to handle the various office jobs, and the objective is to minimize the total cost of hiring the employees for the hours they work. For more detail on this problem, we refer readers to [insert reference].

Conclusion

Adaptive robust optimization models multi-stage decision making in which past decisions affect future decisions and new information is learned between decision stages. It finds less conservative solutions than traditional robust optimization without sacrificing robustness, at the expense of simplicity and computation time. Many ARO problems are computationally intractable in general, but solution methods have been developed for many specific problems in the field, and the field will continue to grow in the coming decades.