Stochastic programming
Revision as of 21:56, 15 December 2021
Authors: Roohi Menon, Hangyu Zhou, Gerald Ogbonna, Vikram Raghavan (SYSEN 6800 Fall 2021)
Introduction
Stochastic Programming, also referred to as Stochastic Optimization, is a mathematical framework for decision-making under uncertainty. [2] With uncertainties being widespread, Stochastic Programming is a risk-neutral mathematical framework that finds application in areas such as process systems engineering. [3] In process engineering, uncertainties relate to prices and purity of raw materials, customer demands, and yields of pilot reactors, among others. Batch processing is widely adopted across process industries, and for such manufacturing processes production scheduling is one of the most crucial decisions to be made. A deterministic optimization model considers parameters such as the prices and availability of raw materials, prices of different products, operation times and costs, and order demands without factoring in uncertainty. However, such assumptions are not realistic because uncertainty does not come with forewarning; hence, it must always be factored in. [4]
In conventional robust optimization, the assumption is made that all decision variables are realized before the uncertainty is realized, which makes the problem overly conservative. [4] Moreover, in manufacturing, not all decisions need to be made “in the present”; a staged (two-stage) approach to decision making can also be adopted. Beyond manufacturing, the Stochastic Programming framework can be applied to a variety of problems across sectors, such as electricity generation, financial planning, supply chain management, mitigation of climate change, and pollution control. [3] The field has evolved from deterministic linear programming by introducing random variables. [5]
Theory, methodology, and/or algorithmic discussion
Two-Stage Stochastic Linear Program with Fixed Recourse:
Modeling through stochastic programming is often adopted because of its proactive-reactive decision-making feature for addressing uncertainty. [2] Two-stage stochastic programming (TSP) is helpful when a problem requires the analysis of policy scenarios whose associated system information is inherently uncertain. [3] In a typical TSP, the decision variables of an optimization problem under uncertainty are categorized into two periods. Decisions regarding the first-stage variables must be made before the uncertain parameters are realized. The first-stage decision variables are proactive: they hedge against uncertainty and ensure that the solution performs well under all uncertainty realizations. [2] Once the uncertain events have been realized, it is possible to make further design improvements or operational adjustments through the values of the second-stage variables, also known as recourse, at a given cost. The second-stage variables are reactive in their response to the observed realization of uncertainty. [2] Thus, optimal decisions should be made on data available at the time the decision is made; future observations are not taken into consideration. [6] Two-stage stochastic programming is suited to problems with a hierarchical structure, such as integrated process design, and planning and scheduling. [2]
Methodology
The classical two-stage stochastic linear program with fixed recourse [7] is given below:

:<math>\min_{x} \; c^T x + E_{\xi}\left[\min_{y(\omega)} q(\omega)^T y(\omega)\right]</math>
:<math>\text{s.t.} \quad Ax = b,</math>
:<math>T(\omega)x + Wy(\omega) = h(\omega),</math>
:<math>x \ge 0, \quad y(\omega) \ge 0,</math>

where <math>c</math> is a known vector in <math>\mathbb{R}^{n_1}</math> and <math>b</math> is a known vector in <math>\mathbb{R}^{m_1}</math>. <math>A</math> and <math>W</math> are known matrices of size <math>m_1 \times n_1</math> and <math>m_2 \times n_2</math> respectively. <math>W</math> is known as the recourse matrix.

The first-stage decisions are represented by the vector <math>x</math>. Corresponding to <math>x</math> are the first-stage vectors and matrices <math>c</math>, <math>b</math>, and <math>A</math>. In the second stage, a number of random events <math>\omega \in \Omega</math> may realize. For a given realization <math>\omega</math>, the second-stage problem data <math>q(\omega)</math>, <math>h(\omega)</math>, and <math>T(\omega)</math> become known. [8]
Algorithm discussion
To solve two-stage linear stochastic programs more effectively, algorithms such as Benders decomposition or Lagrangean decomposition can be used. Benders decomposition, introduced by J.F. Benders, is a decomposition method widely used to solve mixed-integer problems. The algorithm is based on the principle of decomposing the main problem into subproblems. The master problem is defined with only the first-stage decision variables. Once the first-stage decisions are fixed at an optimal solution of the master problem, the subproblems are solved, and valid inequalities in <math>x</math> are derived and added to the master problem. The master problem is then solved again, and the algorithm iterates until the upper and lower bounds converge. [3] The Benders master problem is linear or mixed-integer with fewer technical constraints, while the subproblems may be linear or nonlinear. The subproblems' primary aim is to validate the feasibility of the master problem's solution. [7] Benders decomposition is also known as L-shaped decomposition because once the first-stage variables <math>x</math> are fixed, the remaining problem has a block-diagonal structure that can be decomposed by scenario and solved independently. [3]
We assume that the random vector <math>\xi</math> has finite support. Let <math>k = 1, \dots, K</math> index the possible second-stage realizations and let <math>p_1, \dots, p_K</math> be the corresponding probabilities. With that, we can write down the deterministic equivalent of the stochastic program. This form is created by associating one set of second-stage decisions, <math>y_k</math>, to each realization of <math>\xi</math>, i.e., to each realization of <math>q_k</math>, <math>h_k</math>, and <math>T_k</math>. This large-scale deterministic counterpart of the original stochastic program is known as the extensive form:

:<math>\min \; c^T x + \sum_{k=1}^{K} p_k q_k^T y_k</math>
:<math>\text{s.t.} \quad Ax = b,</math>
:<math>T_k x + W y_k = h_k, \quad k = 1, \dots, K,</math>
:<math>x \ge 0, \quad y_k \ge 0, \quad k = 1, \dots, K.</math>

The L-shaped block structure of this extensive form gives rise to the name of the L-shaped method.
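As a concrete illustration, the extensive form can be assembled and solved directly as one large LP. The sketch below uses a small hypothetical instance (not the article's example): one first-stage variable <math>x</math> with unit cost, a shortfall recourse variable per scenario at unit cost 1.5, and demand equal to 2 or 6 with probability 0.5 each.

```python
# Deterministic equivalent (extensive form) of a tiny hypothetical two-stage
# problem: min x + E[1.5 * y], where y_k >= d_k - x is the shortfall in
# scenario k, with demands d = (2, 6) and probabilities p = (0.5, 0.5).
from scipy.optimize import linprog

p = [0.5, 0.5]          # scenario probabilities
d = [2.0, 6.0]          # scenario demands
q = 1.5                 # unit recourse (shortfall) cost

# Variables: [x, y_1, y_2]; one block of recourse variables per scenario.
c = [1.0] + [pk * q for pk in p]        # first-stage cost + weighted recourse costs
A_ub = [[-1.0, -1.0, 0.0],              # y_1 >= d_1 - x  ->  -x - y_1 <= -d_1
        [-1.0, 0.0, -1.0]]              # y_2 >= d_2 - x  ->  -x - y_2 <= -d_2
b_ub = [-d[0], -d[1]]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 10), (0, None), (0, None)], method="highs")
print(f"x = {res.x[0]:.4f}, expected total cost = {res.fun:.4f}")
```

Every scenario contributes its own block of recourse variables and constraints; this per-scenario block structure is exactly what the L-shaped method exploits.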
L-Shaped Algorithm
Step 0. Set <math>r = s = \nu = 0</math>.

Step 1. Set <math>\nu = \nu + 1</math>. Solve the following linear program (master program):

:<math>\min \; z = c^T x + \theta</math>
:<math>\text{s.t.} \quad Ax = b, \quad (1)</math>
:<math>D_l x \ge d_l, \quad l = 1, \dots, r, \quad (2)</math>
:<math>E_l x + \theta \ge e_l, \quad l = 1, \dots, s, \quad (3)</math>
:<math>x \ge 0.</math>

Let <math>(x^\nu, \theta^\nu)</math> be an optimal solution. If there is no constraint (3), set <math>\theta^\nu = -\infty</math>; <math>x^\nu</math> is then defined by the remaining constraints.

Step 2. For <math>k = 1, \dots, K</math> solve the following linear program:

:<math>\min \; w' = e^T v^+ + e^T v^-</math>
:<math>\text{s.t.} \quad Wy + Iv^+ - Iv^- = h_k - T_k x^\nu,</math>
:<math>y \ge 0, \; v^+ \ge 0, \; v^- \ge 0,</math>

where <math>e^T = (1, \dots, 1)</math> and <math>I</math> is the identity matrix, until for some <math>k</math> the optimal value <math>w' > 0</math>. In this case, let <math>\sigma^\nu</math> be the associated simplex multipliers and define

:<math>D_{r+1} = (\sigma^\nu)^T T_k</math> and <math>d_{r+1} = (\sigma^\nu)^T h_k</math>

to generate a constraint (called a feasibility cut) of type (2). Set <math>r = r + 1</math>, add the constraint to the constraint set (2), and return to Step 1. If <math>w' = 0</math> for all <math>k</math>, go to Step 3.

Step 3. For <math>k = 1, \dots, K</math> solve the linear program

:<math>\min \; w = q_k^T y \quad \text{s.t.} \quad Wy = h_k - T_k x^\nu, \; y \ge 0. \quad (4)</math>

Let <math>\pi_k^\nu</math> be the simplex multipliers associated with the optimal solution of problem <math>k</math> of type (4). Define

:<math>E_{s+1} = \sum_{k=1}^{K} p_k (\pi_k^\nu)^T T_k</math> and <math>e_{s+1} = \sum_{k=1}^{K} p_k (\pi_k^\nu)^T h_k.</math>

Let <math>w^\nu = e_{s+1} - E_{s+1} x^\nu</math>. If <math>\theta^\nu \ge w^\nu</math>, stop; <math>(x^\nu, \theta^\nu)</math> is an optimal solution. Otherwise, set <math>s = s + 1</math>, add the constraint <math>E_s x + \theta \ge e_s</math> (called an optimality cut) to the constraint set (3), and return to Step 1.
This method approximates the expected second-stage cost <math>\mathcal{Q}(x)</math> using an outer linearization. This approximation is achieved by the master program (1)-(3). The method finds a proposal <math>x</math> and sends it to the second stage. Two types of constraints are sequentially added: (i) feasibility cuts (2), which determine <math>\{x \,|\, x \in K_2\}</math>, and (ii) optimality cuts (3), which are linear approximations to <math>\mathcal{Q}(x)</math> on its domain of finiteness.
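The steps above can be sketched in code. The instance below is hypothetical (first-stage cost <math>x</math> with <math>0 \le x \le 10</math>, shortfall recourse at unit cost 1.5, demand 2 or 6 with probability 0.5 each) and has complete recourse, so only optimality cuts arise; the scenario duals are written in closed form rather than read from an LP solver.

```python
# L-shaped method sketch on a hypothetical instance: min x + E[Q(x)],
# Q(x) = min {1.5*y : y >= d - x, y >= 0}, demand d in {2, 6} w.p. 0.5 each.
# Complete recourse: the second stage is always feasible, so Step 2
# (feasibility cuts) is skipped.
from scipy.optimize import linprog

c1 = 1.0                               # first-stage cost coefficient
q = 1.5                                # unit recourse (shortfall) cost
scenarios = [(0.5, 2.0), (0.5, 6.0)]   # (p_k, d_k)

cuts = []                              # optimality cuts E*x + theta >= e
for nu in range(1, 21):
    # Step 1: master over (x, theta).  linprog wants A_ub @ z <= b_ub, so
    # each cut E*x + theta >= e is stored negated.  theta gets a large
    # artificial lower bound so the first master solve stays bounded
    # (standing in for theta^1 = -infinity).
    A_ub = [[-E, -1.0] for (E, e) in cuts] or None
    b_ub = [-e for (E, e) in cuts] or None
    res = linprog([c1, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, 10), (-1e3, None)], method="highs")
    x, theta = res.x

    # Step 3: the scenario subproblem min {q*y : y >= d_k - x, y >= 0} has
    # dual max {pi*(d_k - x) : 0 <= pi <= q}: pi_k = q on shortfall, else 0.
    pi = [q if dk > x else 0.0 for _, dk in scenarios]
    E = sum(p * pik for (p, _), pik in zip(scenarios, pi))
    e = sum(p * pik * dk for (p, dk), pik in zip(scenarios, pi))
    w = e - E * x                      # w^nu = e_{s+1} - E_{s+1} x^nu
    if w <= theta + 1e-7:              # theta^nu >= w^nu: optimal, stop
        break
    cuts.append((E, e))                # add optimality cut, return to Step 1

obj = c1 * x + w                       # first-stage cost + E[Q(x)]
print(f"x* = {x:.4f}, objective = {obj:.4f}, cuts added = {len(cuts)}")
```

Each pass through the loop mirrors one iteration of the algorithm: the master proposes <math>(x^\nu, \theta^\nu)</math>, the scenario duals price the proposal, and a new cut tightens the outer linearization until <math>\theta^\nu</math> catches up with <math>w^\nu</math>.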
Numerical Example
To illustrate the algorithm mentioned above, let's take a look at the following numerical example.
where the random vector <math>\xi</math> takes one of two values, with probability 0.4 and 0.6 respectively.
We should note that in this case, as <math>x \ge 0</math>, <math>d \ge 0</math> and <math>y_1, y_2 \ge 0</math>, the second stage is always feasible, which means <math>x \in K_2</math> always holds true. So we can skip the feasibility cuts (Step 2) in all iterations. The L-shaped method iterations are shown below:
Iteration 1:
Step 1. With no <math>\theta</math>, the master program is <math>z = \min\{10x_1 + 15x_2 \,|\, x_1 + x_2 \le 12, x_1 \ge 4, x_2 \ge 2\}</math>. The result is <math>x^1 = (4, 2)^T</math>. Set <math>\theta^1 = -\infty</math>.
Step 2. No feasibility cut is needed.
Step 3.
- For <math>k = 1</math>, solve:
- The solution is
- For <math>k = 2</math>, solve:
- The solution is
- Since , we have
- Here, the matrix is the same in these two scenarios, which is . Therefore, we have
- Thus, <math>w^1 = -5.2 - (8.352, 18.048) \cdot x^1 = -74.704 > \theta^1 = -\infty</math>, so we add the cut <math>8.352x_1 + 18.048x_2 + \theta \ge -5.2</math>
Iteration 2:
Step 1. The master program is <math>z = \min\{10x_1 + 15x_2 + \theta \,|\, x_1 + x_2 \le 12, x_1 \ge 4, x_2 \ge 2, 8.352x_1 + 18.048x_2 + \theta \ge -5.2\}</math>. The result is <math>z = -22.992, x^2 = (4, 8)^T, \theta^2 = -182.992</math>.
Step 2. No feasibility cut is needed.
Step 3.
- For <math>k = 1</math>, solve:
- The solution is
- For <math>k = 2</math>, solve:
- The solution is
- Thus, <math>w^2 = -15.84 - (21.12, 0) \cdot x^2 = -100.32</math>
- Since <math>w^2 = -100.32 > \theta^2 = -182.992</math>, add the cut <math>21.12x_1 + \theta \ge -15.84</math>
Iteration 3:
Step 1. The master program is <math>z = \min\{10x_1 + 15x_2 + \theta \,|\, x_1 + x_2 \le 12, x_1 \ge 4, x_2 \ge 2, 8.352x_1 + 18.048x_2 + \theta \ge -5.2, 21.12x_1 + \theta \ge -15.84\}</math>. The result is <math>z = -10.39375, x^3 = (6.6828, 5.3172)^T, \theta^3 = -156.97994</math>.
Step 2. No feasibility cut is needed.
Step 3.
- For <math>k = 1</math>, solve:
- The solution is
- For <math>k = 2</math>, solve:
- The solution is
- Thus, <math>w^3 = -21.04 - (11.52, 9.6) \cdot x^3 \approx -149.071</math>
- Since <math>w^3 \approx -149.071 > \theta^3 = -156.97994</math>, add the cut <math>11.52x_1 + 9.6x_2 + \theta \ge -21.04</math>
Iteration 4:
Step 1. The master program is <math>z = \min\{10x_1 + 15x_2 + \theta \,|\, x_1 + x_2 \le 12, x_1 \ge 4, x_2 \ge 2, 8.352x_1 + 18.048x_2 + \theta \ge -5.2, 21.12x_1 + \theta \ge -15.84, 11.52x_1 + 9.6x_2 + \theta \ge -21.04\}</math>. The result is <math>z = -8.895, x^4 = (4, 3.375)^T, \theta^4 = -99.52</math>.
Step 2. No feasibility cut is needed.
Step 3.
- For <math>k = 1</math>, solve:
- The solution is
- For <math>k = 2</math>, solve:
- There are multiple optimal solutions. Selecting one of them, we have
- Since <math>w^4 = 0 - (13.344, 13.056) \cdot x^4 = -97.44 > \theta^4 = -99.52</math>, add the cut <math>13.344x_1 + 13.056x_2 + \theta \ge 0</math>
Iteration 5:
Step 1. The master program is <math>z = \min\{10x_1 + 15x_2 + \theta \,|\, x_1 + x_2 \le 12, x_1 \ge 4, x_2 \ge 2, 8.352x_1 + 18.048x_2 + \theta \ge -5.2, 21.12x_1 + \theta \ge -15.84, 11.52x_1 + 9.6x_2 + \theta \ge -21.04, 13.344x_1 + 13.056x_2 + \theta \ge 0\}</math>. The result is <math>z = -8.5583, x^5 = (4.6667, 3.625)^T, \theta^5 = -109.6</math>.
Step 2. No feasibility cut is needed.
Step 3.
- For <math>k = 1</math>, solve:
- The solution is
- For <math>k = 2</math>, solve:
- The solution is
- Since <math>\theta^5 \ge w^5</math>, stop.
- <math>x^5 = (4.6667, 3.625)^T</math> is the optimal solution, with objective value <math>z = -8.5583</math>.
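Since each master program is a small LP, the final iteration can be cross-checked with an off-the-shelf solver. The sketch below (using scipy, which is not part of the original example) re-solves the iteration 5 master program with all four accumulated cuts and reproduces the reported optimum.

```python
# Re-solving the final (iteration 5) master program from the example above.
# The constraint set and the four optimality cuts are taken verbatim from
# the text; variables are ordered [x1, x2, theta].
from scipy.optimize import linprog

c = [10, 15, 1]                         # objective 10*x1 + 15*x2 + theta
# linprog expects A_ub @ z <= b_ub, so each ">=" row is negated.
A_ub = [
    [1, 1, 0],                          # x1 + x2 <= 12
    [-1, 0, 0],                         # x1 >= 4
    [0, -1, 0],                         # x2 >= 2
    [-8.352, -18.048, -1],              # 8.352*x1 + 18.048*x2 + theta >= -5.2
    [-21.12, 0, -1],                    # 21.12*x1 + theta >= -15.84
    [-11.52, -9.6, -1],                 # 11.52*x1 + 9.6*x2 + theta >= -21.04
    [-13.344, -13.056, -1],             # 13.344*x1 + 13.056*x2 + theta >= 0
]
b_ub = [12, -4, -2, 5.2, 15.84, 21.04, 0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None), (None, None)], method="highs")
x1, x2, theta = res.x
print(f"z = {res.fun:.4f}, x = ({x1:.4f}, {x2:.4f}), theta = {theta:.1f}")
```

The solver confirms <math>z = -8.5583</math>, <math>x^5 = (4.6667, 3.625)^T</math>, <math>\theta^5 = -109.6</math>, with cuts 1, 3, and 4 active at the optimum.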
Applications
Apart from the process industry, two-stage linear stochastic programming finds application in other fields as well. For instance, in the optimal design of distributed energy systems, various uncertainties need to be considered: uncertainty in the demand and supply of energy, in economic factors such as unit investment cost and energy price, and in technical parameters such as efficiency. Zhou et al. [9] developed a two-stage stochastic programming model for the optimal design of a distributed energy system with a stage decomposition-based solution strategy. The authors accounted for both demand and supply uncertainty; they used a genetic algorithm for the first-stage variables and the Monte Carlo method for the second-stage variables.
Another application of two-stage linear stochastic programming is in bike-sharing systems (BSS). The system needs to ensure that bikes are available at all stations per the given demand. To maintain this balance, redistribution trucks transfer bikes from stations with a surplus to stations with a deficit; this is referred to as the bike repositioning problem (BRP). [10] Another challenge related to the BRP is the holding cost of the depot. While being transferred from one station to another, bikes can be damaged or lost, which leads to an imbalance between demand and supply in the BSS; bikes that cannot be balanced among the stations are placed back at the depot, which increases its holding cost. To address these concerns, Tang et al. [10] developed a two-stage stochastic program that captures the uncertainty in redistribution demand within the system. In the first stage, before the realization of redistribution demand, routing decisions are made. In the second stage, decisions regarding loading/unloading at each station and the depot are made. Holding cost is incorporated into the model, whose primary objective is to determine the best routes for the repositioning truck and the optimal loading/unloading quantities at each station and the depot. The model minimizes the expected total of the transportation cost, the penalty cost of all stations, and the holding cost of the depot.
Conclusion
From the previous examples, it is evident that two-stage linear stochastic programming finds applicability across many areas, such as the petrochemical and pharmaceutical industries, carbon capture, and energy storage, among others. [3] Stochastic programming can primarily be used to model two types of uncertainty: 1) exogenous uncertainty, the most widely considered type, and 2) endogenous uncertainty, where the realization of uncertainty depends on the decisions taken. The main challenge in stochastic programming is that the class of problems that can be solved is limited. An ‘ideal’ problem would be a multi-stage stochastic mixed-integer nonlinear program under both exogenous and endogenous uncertainty with an arbitrary, stagewise-dependent probability distribution. [3] However, current algorithms and computational resources are still limited in their ability to solve this ‘ideal’ problem.
References
- ↑ Li, C., & Grossmann, I. E. (2021). A review of stochastic programming methods for optimization of process systems under uncertainty. Frontiers in Chemical Engineering, 2. https://doi.org/10.3389/fceng.2020.622241
- ↑ 2.0 2.1 2.2 2.3 2.4 “Integration of Scheduling and Dynamic Optimization of Batch Processes under Uncertainty: Two-Stage Stochastic Programming Approach and Enhanced Generalized Benders Decomposition Algorithm,” Yunfei Chu and Fengqi You, Industrial & Engineering Chemistry Research 2013, 52 (47), 16851-16869. DOI: 10.1021/ie402621t
- ↑ 3.0 3.1 3.2 3.3 3.4 3.5 3.6 Li Can, Grossmann Ignacio E., “A Review of Stochastic Programming Methods for Optimization of Process Systems Under Uncertainty,” Frontiers in Chemical Engineering 2021, Vol. 2; DOI: 10.3389/fceng.2020.622241
- ↑ 4.0 4.1 “A Tutorial on Stochastic Programming,” Shapiro, Alexander and Philpott, Andy
- ↑ Powell, W. B. (2019). A unified framework for stochastic optimization. European Journal of Operational Research, 275(3), 795-821.
- ↑ Barik, S.K., Biswal, M.P. & Chakravarty, D. Two-stage stochastic programming problems involving interval discrete random variables. OPSEARCH 49, 280–298 (2012). https://doi.org/10.1007/s12597-012-0078-1
- ↑ 7.0 7.1 Soares, J., Canizes, B., Ghazvini, M. A. F., Vale, Z., & Venayagamoorthy, G. K. (2017). Two-stage stochastic model using benders’ decomposition for large-scale energy resource management in smart grids. IEEE Transactions on Industry Applications, 53(6), 5905-5914.
- ↑ Zhe Zhou, Jianyun Zhang, Pei Liu, Zheng Li, Michael C. Georgiadis, Efstratios N. Pistikopoulos, “A two-stage stochastic programming model for the optimal design of distributed energy systems,” Applied Energy, Volume 103, 2013, Pages 135-144, ISSN 0306-2619, https://doi.org/10.1016/j.apenergy.2012.09.019.
- ↑ 10.0 10.1 Qiong Tang, Zhuo Fu, Dezhi Zhang, Hao Guo, Minyi Li, “Addressing the Bike Repositioning Problem in Bike Sharing System: A Two-Stage Stochastic Programming Model,” Scientific Programming, vol. 2020, Article ID 8868892, 12 pages, 2020. https://doi.org/10.1155/2020/8868892