Chance-constraint method

Authors: Anthony Pacheco, Bryan Bausinger, John Maher, Martin Lis, Trevor Spaulding (SYSEN 5800 Fall 2021)

Introduction

The chance-constraint method of optimization programming is a process for working with random parameters within a problem while guaranteeing a certain level of performance. Uncertain variables in a project raise questions of reliability and risk, which make it difficult to determine the most likely result. In stochastic optimization, expected or nominal values are used for these uncertainties. While there is some risk in making these stochastic optimization decisions, they are usually considered standard since they were already accounted for in advance: a known "penalty". In practice, however, these penalties are not always known entities; extreme weather events are one example. In these unknown cases, it is still important to plan for such events and make decisions around them. A common line of thinking is to allow certain unexpected events to violate specific constraints, as long as overall constraint satisfaction is maintained with a given level of probability. In other words, certain levels of feasibility are guaranteed, and these guarantees are defined as chance-constraints. For example, a distributor may guarantee that a certain amount of product will be delivered while taking into account chance-constraints such as weather events, supply chain issues, and/or a flat tire on a delivery truck. A system's performance can be optimized under uncertain constraints via the chance-constraint optimization method by accounting for these constraints and ensuring they satisfy well-defined reliability values.[1][2]

Theory/Methodology

General Optimization Problem

A general form of an optimization problem is shown below, where <math>x</math> is the vector of decision variables and <math>\xi</math> is a vector of problem parameters:[3]

<math>\quad \min_{x} f(x) \quad \text{subject to} \quad g(x,\xi) \geq 0</math>

The above general form of the constrained optimization problem has multiple real-world applications. However, in the real world the parameter <math>\xi</math> is often uncertain and introduces a level of randomness to the optimization problem. Sometimes this uncertainty is bypassed by using the expected value of <math>\xi</math>, which can cause issues, as the optimal solution to the above optimization problem may then have a high chance of being infeasible. For example, if a constraint requires <math>7.5x_{1} \leq b</math> with <math>b</math> normally distributed around 600, a solution that makes the constraint tight at <math>b = 600</math> violates the true constraint whenever <math>b < 600</math>, that is, with probability 0.5. Therefore, the random vector needs to be accounted for depending on the optimization application.[3]

The Random Vector

The random vector, <math>\xi</math>, works to factor in the uncertainty of constraints in practical problems and can be either time-dependent or time-independent. The wide-ranging possibilities of the random vector play the largest role in the difficult nature of the chance-constrained optimization problem, though generally speaking, the random vector is considered to have a known distribution.[2] For example, the chance-constraints can take a linear form in the random vector. This is illustrated in both of the constraint-defining equations below:[1]

<math>\quad h(x) \geq \xi</math>

<math>\quad \Xi x \geq b</math>

where <math>h(x)</math> is a mapping of the decision vector <math>x</math>, <math>\Xi</math> is a matrix of random coefficients built from <math>\xi</math>, and <math>b</math> is a fixed right-hand side.

While both instances of these constraint definitions are linear, the first shows the random vector <math>\xi</math> distinguished from the decision vector <math>x</math>, and the second has the random coefficients <math>\Xi</math> directly associated with the decision vector. Each of these types of random vector within the constraint mapping has real-world optimization applications, most commonly in engineering and energy management.[1]

General Chance-constrained Optimization Problem

When the random vector is factored into the general optimization problem, it becomes a chance-constrained optimization problem.[3] The chance-constrained optimization method was first created by Charnes, Cooper, and Symonds in 1958-1959 in relation to financial planning optimization. Over the years, improvements have been made to chance-constrained optimization theory and computational methods, most notably by András Prékopa.[2]

A general form of a chance-constraint is shown below, where <math>p \in [0,1]</math> is the required probability level:

<math>\quad \mathbb{P}\left(g(x,\xi) \geq 0\right) \geq p</math>

The chance-constraint formulated above is considered to be a constraint within an optimization problem in which some objective function <math>f(x)</math> is meant to be minimized. Therefore, the general optimization problem now becomes the following chance-constrained optimization problem:[3]

<math>\quad \min_{x} f(x) \quad \text{subject to} \quad \mathbb{P}\left(g(x,\xi) \geq 0\right) \geq p</math>

Compared to a conventional optimization problem, the chance-constrained optimization problem differs in that the probability on the left-hand side of the inequality above is rarely available in explicit form. Therefore, there is no general solution approach for chance-constrained optimization problems. The structure of the random and decision vectors in the above equation can guide the path to a solution via linear programming, but not always.[1]

Joint vs. Individual Chance-constraints

The chance-constraint above can be written more explicitly, for a system of <math>m</math> constraint mappings <math>g_{1},\ldots,g_{m}</math>, as:

<math>\quad \mathbb{P}\left(g_{i}(x,\xi) \geq 0, \; i = 1,\ldots,m\right) \geq p</math>

This is called a joint chance-constraint (or sometimes a separate chance-constraint[2]) because a single probability requirement covers the entire system of inequalities. In simpler terms, the entire set of constraints is considered as a whole. The pros and cons of working with joint chance-constraints are listed below:[1]

Joint Chance-constraint Pros:

  • Multiple inequalities used to define a system can result in a more robust optimization solution.
  • Helpful in modeling specific constraints of a system to ensure these meet a certain satisfaction level with an anticipated probability.

Joint Chance-constraint Cons:

  • For a large system with multiple constraints, and therefore multiple inequalities under a single probability, the overall system of inequalities may be difficult to solve.

On the other hand, if each of the constraints in the system needs to be considered individually, the chance-constraint becomes a single constraint for each inequality, which is formulated below:

<math>\quad \mathbb{P}\left(g_{i}(x,\xi) \geq 0\right) \geq p_{i}, \quad i = 1,\ldots,m</math>

This type of constraint is known as an individual chance-constraint (and sometimes as a single chance-constraint[2]). The pros and cons of working with individual chance-constraints are listed below:[1]

Individual Chance-constraint Pros:

  • There is only one inequality needed to model the entire system.
  • Depending on the modeled system, it may help to simplify the problem statement.

Individual Chance-constraint Cons:

  • At times, the single inequality may be computationally difficult or even impossible to solve.
  • Modeling the system using only one inequality may oversimplify the problem and lead to a partial, incomplete solution.

The joint and individual chance-constraint models can be used to support one another if properly utilized. For instance, a system may be modeled by being broken down into joint chance-constraints that are then used to calculate the minimum and maximum optimal value bounds for the individual chance-constraints of the overall system. So if a decision <math>x</math> is permissible under a joint chance-constraint of a system at probability level <math>p</math>, then <math>x</math> would also be allowable within each individual chance-constraint of that same system at that level.[1]

Caution should be taken when utilizing solely individual chance-constraints in an attempt to oversimplify a problem. It can be appealing to model an entire system as a single inequality, especially if a certain probability level of satisfaction is required over a specific amount of time. However, this modeling may fail in robustness when considering the system throughout the entire timeframe. As a brief example, if a system is required to reach a certain probability level over a lengthy timeframe, it may be appealing to model the system using individual chance-constraints over multiple shorter time slots. The system is then modeled solely over those separate time slots to ensure the required probability level is satisfied in piecemeal fashion. While this may result in an easier-to-solve problem, this individual chance-constraint modeling setup may break down when expanding over the entire timeframe instead of focusing on the separate time slots. So in instances such as these, joint chance-constraints may lead to a lengthier problem and solution, but will result in a more robust answer.[4]
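
The relationship between the two constraint types can also be sketched numerically. The following is a minimal sketch, not part of the original example, assuming two hypothetical linear constraints whose normally distributed right-hand sides are independent (the means and standard deviations are borrowed from the numerical example below, the decision values are made up for illustration); under independence the joint satisfaction probability is the product of the individual ones, so a decision meeting a joint requirement automatically meets each individual requirement at the same level:

from scipy.stats import norm

# Hypothetical decision values, used only for this illustration.
x1, x2 = 70.0, 70.0

# Each tuple: (coefficient, decision value, mean, standard deviation) for a
# constraint of the form coefficient * decision <= b, with b ~ N(mean, sd^2).
constraints = [
    (7.5, x1, 600.0, 60.0),
    (5.0, x2, 400.0, 40.0),
]

# Individual chance-constraints: P(a*x <= b) = 1 - Phi((a*x - mu) / sigma).
individual = [1.0 - norm.cdf((a * x - mu) / sigma) for a, x, mu, sigma in constraints]

# Joint chance-constraint under the independence assumption: the product of
# the individual satisfaction probabilities, never larger than any single one.
joint = individual[0] * individual[1]

print("individual:", [round(p, 3) for p in individual])   # about 0.894 each
print("joint     :", round(joint, 3))                      # about 0.80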

Numerical Example

This example shows how to convert a linear programming optimization problem to account for chance-constraints applied to each uncertain constraint individually.

Given an optimization problem in which the first five constraints, written here with the expected values of their uncertain right-hand-side parameters, are:

<math>\quad 7.5x_{1} \leq 600</math>

<math>\quad 5x_{2} \leq 400</math>

<math>\quad 3x_{1} \geq 300</math>

<math>\quad 7.5x_{1} + 5x_{2} + 3x_{3} \leq 879</math>

<math>\quad 200x_{1} + 200x_{2} + 200x_{3} \leq 6120</math>

The parameters to the right of the inequality in the first five constraints have normal distributions. The table below identifies the mean (<math>\mu</math>), standard deviation (<math>\sigma</math>), probability of guarantee (<math>\alpha</math>), and standard normal distribution constant (<math>K_{\alpha}</math>) of each parameter.

| Parameter | Mean (<math>\mu</math>) | Standard Deviation (<math>\sigma</math>) | Probability of Guarantee (<math>\alpha</math>) | Normal Distribution Constant (<math>K_{\alpha}</math>) |
|---|---|---|---|---|
| Constraint 1 | 600 | 60 | 0.80 | -0.84 |
| Constraint 2 | 400 | 40 | 0.80 | -0.84 |
| Constraint 3 | 300 | 30 | 0.90 | -1.28 |
| Constraint 4 | 879 | 0.5 | 0.99 | -2.33 |
| Constraint 5 | 6120 | 0.5 | 0.99 | -2.33 |
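
The tabulated constants are consistent with the standard normal quantile <math>K_{\alpha} = \Phi^{-1}(1-\alpha)</math>. As a quick check (assuming SciPy is available):

from scipy.stats import norm

# Standard normal quantiles for the guarantee levels used in the table above.
for alpha in (0.80, 0.90, 0.99):
    print(alpha, round(norm.ppf(1.0 - alpha), 2))   # -0.84, -1.28, -2.33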

To apply chance-constraint, the parameter in each constraint is replaced with <math>\mu + K\sigma</math> when the constraint is defining an upper limit, and <math>\mu - K\sigma</math> when the constraint is defining a lower limit:

<math>\quad 7.5x_{1} \leq \mu_{1} + K_{\alpha, 1}\sigma_{1}</math>

<math>\quad 5x_{2} \leq \mu_{2} + K_{\alpha, 2}\sigma_{2}</math>

<math>\quad 3x_{1} \geq \mu_{3} - K_{\alpha, 3}\sigma_{3}</math>

<math>\quad 7.5x_{1} + 5x_{2} + 3x_{3} \leq \mu_{4} + K_{\alpha, 4}\sigma_{4}</math>

<math>\quad 200x_{1} + 200x_{2} + 200x_{3} \leq \mu_{5} + K_{\alpha, 5}\sigma_{5}</math>

Now populate the formula with the values identified in the table earlier:

<math>\quad 7.5x_{1} \leq 600 + (-0.84\times60)</math>

<math>\quad 5x_{2} \leq 400 + (-0.84\times40)</math>

<math>\quad 3x_{1} \geq 300 - (-1.28\times30)</math>

<math>\quad 7.5x_{1} + 5x_{2} + 3x_{3} \leq 879 + (-2.33\times0.5)</math>

<math>\quad 200x_{1} + 200x_{2} + 200x_{3} \leq 6120 + (-2.33\times0.5)</math>

These are then solved to identify the new parameters for the constraints:

<math>\quad 7.5x_{1} \leq 549.6</math>

<math>\quad 5x_{2} \leq 366.4</math>

<math>\quad 3x_{1} \geq 338.4</math>

<math>\quad 7.5x_{1} + 5x_{2} + 3x_{3} \leq 877.835</math>

<math>\quad 200x_{1} + 200x_{2} + 200x_{3} \leq 6118.835</math>

Substituting these deterministic-equivalent constraints into the original problem yields a new optimization problem that can now be solved using linear programming.
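
The arithmetic behind these deterministic-equivalent bounds can be reproduced with a short, plain-Python sketch using the rounded <math>K_{\alpha}</math> values from the table:

# Each row: (mean, standard deviation, K_alpha, constraint sense), taken from
# the table of parameters above.
rows = [
    (600.0,  60.0, -0.84, "upper"),
    (400.0,  40.0, -0.84, "upper"),
    (300.0,  30.0, -1.28, "lower"),
    (879.0,   0.5, -2.33, "upper"),
    (6120.0,  0.5, -2.33, "upper"),
]

for mu, sigma, k, sense in rows:
    # Upper limits are replaced by mu + K*sigma; lower limits by mu - K*sigma.
    bound = mu + k * sigma if sense == "upper" else mu - k * sigma
    print(f"{sense:>5} bound: {bound:.3f}")

# Prints 549.600, 366.400, 338.400, 877.835, and 6118.835, matching the
# constraint bounds derived above.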

Applications

The chance-constraint optimization approach can be extremely beneficial in any application where there are unknown variables, but with probabilities that can be associated with each variable. Some examples of real-world applications are as follows:

Stock Portfolio

Chance-constrained methods are often used as a risk-aversion technique for designing a stock portfolio. Within this method, there is a predicted value for the final wealth and an associated probability of survival.[5]

Utility Demand (Power/Gas/Water)

Energy generation, particularly from renewable sources such as solar and wind, has inherently high variability. It is no longer possible to fully predict energy generation as in the past. Therefore, there is a need for a probabilistic model to provide and distribute energy to all homes with sufficient certainty. Hu et al. analyze the economic dispatch of wind energy to meet dispatch needs in very large instances using the chance-constraint approach.[6]

Transportation

Intermodal Freight Transportation (IFT) is the transportation of material from one place to another involving at least two modes of transportation (e.g., sea, air, road). Each of the shipping modalities has its own stochastic factors such as priority, travel time, and availability. IFT provides the opportunity for faster shipment times, lower costs, improved throughput, and broader market availability. Zhao et al. modeled a two-stage chance-constrained approach with the stages being sea and rail, respectively. A network of available stations (rail), seaports (sea), and hubs (both) is modeled to create the available paths for goods. Their research shows that the chance-constrained method allows for different optimal service conditions. Factoring in the varying probability of shipment duration allows late-cost considerations to be incorporated and therefore adds more variability to the design.[7]

Water Reservoir Management

Clean, unpolluted water has become a scarcer resource in recent years. To address this issue, storage resources are designed in major cities such as Beijing, China, to distribute water during demand periods. These demand periods vary with uncertainty, and thus the reservoir must be designed to distribute water with a satisfactory probability given all conditions. Such a model may explore impacts such as season, pollution level, and location of storage, and provide insight for future regulation.[8]

Conclusion

The chance-constraint method allows users to identify an optimized solution even when there are outstanding unknown risks within the process. The methodology incorporates variable volatility into the optimization constraints to account for this volatility. The user is able to specify what level of uncertainty is acceptable in the solution, which expands the region of available solutions that could contain the optimum. However, when increasing the level of uncertainty, the user is also increasing the possible variation in the final answer due to the random vectors. Many different applications benefit from the chance-constraint method. One of the most common industrial applications is logistics planning, which can range from power distribution and global transportation networks to natural resource management.[6][7][8] By recognizing and preparing for unknowns in a problem, the chance-constraint method is a popular optimization process when dealing with uncertainties.

Sources

  1. W. Ackooij, R. Zorgati, R. Henrion, and A. Möller, "Chance Constrained Programming and Its Applications to Energy Management," 2011.
  2. A. Geletu, M. Klöppel, H. Zhang, and P. Li, "Advances and applications of chance-constrained approaches to systems optimisation under uncertainty," International Journal of Systems Science, vol. 44, pp. 1209-1232, 2013.
  3. S. Peng, "Chance constrained problem and its applications," Université Paris Saclay; Xi'an Jiaotong University, 2019.
  4. W. Ackooij, R. Henrion, A. Möller, and R. Zorgati, "On probabilistic constraints induced by rectangular sets and multivariate normal distributions," Mathematical Methods of Operations Research, vol. 71(3), pp. 535-549, 2010.
  5. N.H. Agnew, R.A. Agnew, J. Rasmussen, and K.R. Smith, "An Application of Chance-Constrained Programming to Portfolio Selection in a Casualty Insurance Firm," Management Science, vol. 15, no. 10, pp. 512-520, 1969.
  6. Y. Hu, Y. Li, M. Xu, L. Zhou, and M. Cui, "A Chance-Constrained Economic Dispatch Model in Wind-Thermal-Energy Storage System," Energies, vol. 10(3), 2017.
  7. Y. Zhao, Q. Xue, Z. Cao, and X. Zhang, "A Two-Stage Chance Constrained Approach with Application to Stochastic Intermodal Service Network Design Problems," Journal of Advanced Transportation, 2018.
  8. W. Li, K. Jiao, Z. Bao, Y. Xie, J. Zhen, G. Huang, and L. Fu, "Chance-constrained dynamic programming for multiple water resources allocation management associated with risk-aversion analysis: A case study of Beijing, China," Water, vol. 9(8), p. 596, 2017.