Chance-constraint method

From Cornell University Computational Optimization Open Textbook - Optimization Wiki
Authors: Anthony Pacheco, Bryan Bausinger, John Maher, Martin Lis, Trevor Spaulding (SYSEN 5800 Fall 2021)

==Introduction==
The chance-constraint method of optimization programming is a process for working with random parameters within a problem while guaranteeing a certain performance. Uncertain variables in a project lead to questions of reliability and risk, which make it difficult to determine the most likely result. In stochastic optimization, expected or nominal values are used for these uncertainties. While there is some risk in making these stochastic optimization decisions, they are usually considered standard since they were already accounted for in advance; a known "penalty". In practice, however, these penalties are not always known entities, such as extreme weather events. In these unknown cases, it is still important to plan for such events and make certain decisions around them. A common line of thinking here is to allow certain unexpected events to violate specific constraints as long as overall constraint satisfaction is maintained with a given level of probability. In other words, certain levels of feasibility are guaranteed, and these guarantees are defined as chance-constraints. For example, a distributor may guarantee a certain amount of product will be delivered while taking into account chance-constraints such as weather events, supply chain issues, and/or a flat tire on a delivery truck. A system's performance can be optimized with uncertain constraints via the chance-constraint optimization method by accounting for these constraints and ensuring they satisfy some well-defined reliability values.<ref name=":0">W. Ackooij, R. Zorgati, R. Henrion, and A. Möller, ''Chance Constrained Programming and Its Applications to Energy Management'', 2011.</ref><ref name=":1">A. Geletu, M. Klöppel, H. Zhang, and P. Li, "Advances and applications of chance-constrained approaches to systems optimisation under uncertainty," ''International Journal of Systems Science'', vol. 44, pp. 1209-1232, 2013.</ref>


==Theory/Methodology==
=== General Optimization Problem ===
A general form of an optimization problem is shown below:<ref name=":5">S. Peng, "Chance constrained problem and its applications," Université Paris Saclay; Xi’an Jiaotong University, 2019</ref>


<math>min \quad f(x)</math>


<math>s.t. \quad h_1(x,\xi) \leq 0, ... \, , h_d(x,\xi) \leq 0, x \in X</math>


where <math>f</math> is the objective function, <math>h_1, \ldots, h_d</math> are inequality constraints, <math>X \subseteq \mathbb{R}^m</math> is a deterministic set, <math>x \in X</math> is the decision vector, and <math>\xi</math> is a parameter vector.


The above general form of the constrained optimization problem has multiple real-world applications. However, in the real world the parameter <math>\xi</math> is often uncertain and introduces a level of randomness into the optimization problem. This uncertainty is sometimes bypassed by using the expected value of <math>\xi</math>, which can cause issues because the resulting optimal solution may have a high chance of being infeasible. Therefore, the random vector needs to be accounted for depending on the optimization application.<ref name=":5" />
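The risk of substituting the expected value can be checked numerically. In this hypothetical one-dimensional sketch (all numbers are assumptions chosen for illustration, not from the text), maximizing <math>x</math> subject to <math>x \leq \xi</math> with <math>\xi \sim N(10, 2^2)</math> and substituting <math>E[\xi] = 10</math> gives the "optimal" point <math>x^* = 10</math>, which a Monte Carlo estimate shows is infeasible about half the time:

```python
import random

random.seed(0)

# Hypothetical problem: maximize x subject to x <= xi, with xi ~ N(10, 2^2).
# Substituting the expected value E[xi] = 10 for xi gives the "optimal" x = 10.
x_star = 10.0

# Monte Carlo estimate of the probability that x_star is actually feasible.
n = 100_000
feasible = sum(x_star <= random.gauss(10.0, 2.0) for _ in range(n))
print(round(feasible / n, 2))  # roughly 0.5: infeasible about half the time
```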


=== The Random Vector ===
The random vector, <math>\xi</math>, factors in the uncertainty in the constraints of practical problems and can be either time-dependent or time-independent. The wide-ranging possibilities of the random vector play the largest role in the difficult nature of the chance-constrained optimization problem, though generally speaking the random vector is considered to have a known distribution.<ref name=":1" /> For example, the chance-constraints can be linear in the random vector. This is illustrated in both of the constraint-defining equations below:<ref name=":0" />


<math>h(x,\xi) = g(x) - A\xi \quad or \quad h(x,\xi) = A(\xi)g(x) - b</math>


where <math>A</math> is a deterministic matrix, <math>A(\xi)</math> is a stochastic matrix, <math>g</math> is a mapping on <math>x</math>, <math>b</math> is a vector of appropriate size, and <math>\xi</math> is now considered the random vector.


While both of these constraint definitions are linear, the first shows the random vector as distinguished from the decision vector, while the second has the random vector directly associated with the decision vector. Each of these forms of the random vector within the constraint mapping has real-world optimization applications, most commonly in engineering and energy management.<ref name=":0" />


=== General Chance-constrained Optimization Problem ===
When factoring the random vector into the general optimization problem, it becomes a chance-constrained optimization problem.<ref name=":5" /> The chance-constrained optimization method was first created by Charnes, Cooper, and Symonds in 1958-1959 in relation to financial planning optimization. Over the years, improvements have been made to chance-constrained optimization theory and computational processing methods, most notably by András Prékopa.<ref name=":1" />
 
A general form of a chance-constraint is shown below:
 
<math>\mathbb{P}(h(x,\xi) \geq 0) \geq p</math>
 
where <math>h(x,\xi) \geq 0</math> maps the constraints as a finite system of inequalities and <math>p \in [0,1]</math> is the tolerance probability.
 
The chance-constraint formulated above is considered to be a constraint within an optimization problem where there is some objective function that is meant to be minimized. Therefore, the general optimization problem now becomes the following chance-constrained optimization problem:<ref name=":5" />
 
<math>min \quad f(x)</math>
 
<math>s.t. \quad \mathbb{P}(h_1(x,\xi) \leq 0, ... \, , h_d(x,\xi) \leq 0) \geq p, x \in X</math>
 
Compared to a conventional optimization problem, the chance-constrained optimization problem differs due to the left-hand side inequality above because it is rarely provided explicitly. Therefore, there is not a general solution approach for chance-constrained optimization problems. The random and decision vectors in the above equation can guide the path to solution via a linear programming solution, but not always.<ref name=":0" />
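One well-known tractable special case makes this concrete: if a single constraint has the form <math>\mathbb{P}(a^{T}x \leq \xi) \geq p</math> with normally distributed <math>\xi \sim N(\mu, \sigma^2)</math>, it is equivalent to the deterministic constraint <math>a^{T}x \leq \mu + \sigma\Phi^{-1}(1-p)</math>. A minimal sketch using Python's standard library (the example numbers are taken from the first row of the table in the numerical example below):

```python
from statistics import NormalDist

def deterministic_rhs(mu, sigma, p):
    # P(a'x <= xi) >= p with xi ~ N(mu, sigma^2) is equivalent to
    # a'x <= mu + sigma * Phi^{-1}(1 - p), where Phi^{-1} is the
    # standard normal quantile (inverse CDF).
    k = NormalDist().inv_cdf(1.0 - p)
    return mu + sigma * k

# xi ~ N(600, 60^2) with required probability p = 0.80:
print(round(deterministic_rhs(600.0, 60.0, 0.80), 1))  # 549.5
```

This matches the <math>\mu + K_{\alpha}\sigma</math> conversion used in the numerical example (which uses the rounded constant <math>K_{0.80} = -0.84</math>).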
 
=== Joint vs. Individual Chance-constraints ===
The chance-constraint above can be written more explicitly as:
 
<math>\mathbb{P}(h_{j} (x,\xi) \geq 0  (j = 1,...,m)) \geq p</math>
 
This is called a joint chance-constraint (or sometimes a separate chance-constraint<ref name=":1" />) because the probability covers the entirety of the inequality. In simpler terms, the entire constraint is considered as a whole. The pros and cons of working with joint chance-constraints are listed below:<ref name=":0" />
 
'''Joint Chance-constraint Pros''':
 
* Multiple inequalities used to define a system can result in a more robust optimization solution.
* Helpful in modeling specific constraints of a system to ensure these meet a certain satisfaction level with an anticipated probability.
 
'''Joint Chance-constraint Cons''':
 
* For a large system with multiple constraints, which indicates multiple inequalities, the overall system of inequalities may be difficult to solve.
 
On the other hand, if each of the constraints in the system needs to be considered individually, the chance-constraint becomes a single constraint, which is formulated below:
 
<math>\mathbb{P}(h_{j} (x,\xi) \geq 0) \geq p  (j = 1,...,m)</math>
 
This type of constraint is known as an individual chance-constraint (and sometimes as a single chance-constraint<ref name=":1" />). The pros and cons of working with individual chance-constraints are listed below:<ref name=":0" />
 
'''Individual Chance-constraint Pros''':
 
* There is only one inequality needed to model the entire system.
* Depending on the modeled system, an individual chance-constraint may help to simplify the problem statement.
 
'''Individual Chance-constraint Cons''':
 
* At times, the singular inequality may be computationally difficult or even impossible to solve.
* Modeling the system using only one inequality may oversimplify the problem and lead to a partial, incomplete solution.
 
The joint and individual chance-constraint models can be used to support one another if properly utilized. For instance, a system may be modeled by being broken down into joint chance-constraints that are then used to calculate the minimum and maximum optimal value bounds for the individual chance-constraint of the overall system. So if <math>x</math> is permissible in a joint chance-constraint of a system, then <math>x</math> would also be allowable within the individual chance-constraint of that same system.<ref name=":0" />
 
Caution should be taken when utilizing solely individual chance-constraints in an attempt to overly simplify a problem. It can be appealing to model an entire system as a single inequality, especially if a certain probability level of satisfaction is warranted over a specific amount of time. However, this modeling may lack robustness when considering the system throughout the entire timeframe. As a brief example, if a system is required to reach a certain probability level over a lengthy timeframe, it may be appealing to model the system using individual chance-constraints over multiple shorter time slots. The system is then modeled solely over those separate time slots to ensure the certain probability level is satisfied in piecemeal fashion. While this may result in an easier-to-solve problem, this individual chance-constraint modeling setup may break down when expanding over the entire timeframe instead of focusing on the separate time slots. In instances such as these, joint chance-constraints may make for a lengthier problem and solution, but will result in a more robust answer.<ref>W. Ackooij, R. Henrion, A. Möller, and R. Zorgati, "On probabilistic constraints induced by rectangular sets and multivariate normal distributions," ''Mathematical Methods of Operations Research'', vol. 71(3), pp. 535-549, 2010.</ref>
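The relationship between joint and individual chance-constraints can be checked numerically. In this hypothetical sketch (two toy constraints <math>h_j(x,\xi) = x - \xi_j \geq 0</math> with independent standard normal <math>\xi_j</math>; all numbers are assumptions), a Monte Carlo estimate confirms that the joint satisfaction probability never exceeds either individual probability, so an <math>x</math> permissible under the joint chance-constraint is also permissible under each individual one:

```python
import random

random.seed(42)

# Hypothetical system with two constraints h_j(x, xi) = x - xi_j >= 0,
# where xi_1, xi_2 are independent standard normal random variables.
def estimate(x, n=100_000):
    # Monte Carlo estimates of the joint and individual satisfaction
    # probabilities for a fixed decision x.
    joint = ind1 = ind2 = 0
    for _ in range(n):
        xi1, xi2 = random.gauss(0, 1), random.gauss(0, 1)
        ok1, ok2 = x - xi1 >= 0, x - xi2 >= 0
        joint += ok1 and ok2
        ind1 += ok1
        ind2 += ok2
    return joint / n, ind1 / n, ind2 / n

joint, ind1, ind2 = estimate(1.0)
# A sample satisfying both constraints satisfies each one separately, so
# the joint probability can never exceed either individual probability.
print(joint <= min(ind1, ind2))  # True
```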


==Numerical Example==
This example shows how to convert a linear programming optimization problem into a deterministic equivalent by applying a chance-constraint to each constraint individually.


Given the following optimization problem:


<math>max \quad P = 3x_{1} + x_{2} + 5x_{3}</math>
<math>s.t. \quad 7.5x_{1} \leq 600</math>
<math> \qquad \quad 5x_{2} \leq 400</math>
<math> \qquad \quad 3x_{1} \geq 300</math>
<math>  \qquad \quad 7.5x_{1} + 5x_{2} + 3x_{3} \leq 879 </math>
<math>  \qquad \quad 200x_{1} + 200x_{2} + 200x_{3} \leq 6120</math>
<math>  \qquad \quad x_{1}, x_{2}, x_{3} \geq 0</math>
The parameters to the right of the inequality in the first five constraints have normal distributions. The table below identifies the mean (<math> \mu</math>), standard deviation (<math> \sigma</math>), probability of guarantee (<math> \alpha</math>), and the standard normal distribution constant of <math> 1-\alpha</math> (<math> K_{\alpha}</math>) for each parameter.
{| class="wikitable"
!Parameter
!Mean (<math>\mu</math>)
!Standard Deviation (<math> \sigma</math>)
!Probability of Guarantee (<math> \alpha</math>)
!Normal Distribution Constant (<math> K_{\alpha}</math>)
|-
|<math>c_{1}</math>
|600
|60
|0.80
| -0.84
|-
|<math>c_{2}</math>
|400
|40
|0.80
| -0.84
|-
|<math>c_{3}</math>
|300
|30
|0.90
| -1.28
|-
|<math>c_{4}</math>
|879
|0.5
|0.99
| -2.33
|-
|<math>c_{5}</math>
|6120
|0.5
|0.99
| -2.33
|}
To apply the chance-constraint method, the parameter in each constraint is replaced with <math>\mu + K_{\alpha}\sigma</math> when the constraint defines an upper limit, and <math> \mu - K_{\alpha}\sigma</math> when the constraint defines a lower limit:
 
<math>\quad 7.5x_{1} \leq \mu_{1} + K_{\alpha, 1}\sigma_{1}</math>
 
<math> \quad 5x_{2} \leq \mu_{2} + K_{\alpha, 2}\sigma_{2}</math>
 
<math> \quad 3x_{1} \geq \mu_{3} - K_{\alpha, 3}\sigma_{3}</math>
 
<math>  \quad 7.5x_{1} + 5x_{2} + 3x_{3} \leq \mu_{4} + K_{\alpha, 4}\sigma_{4} </math>
 
<math>  \quad 200x_{1} + 200x_{2} + 200x_{3} \leq \mu_{5} + K_{\alpha, 5}\sigma_{5}</math>
 
Now populate the formula with the values identified in the table earlier:
 
<math>\quad 7.5x_{1} \leq 600 + (-0.84\times60)</math>
 
<math> \quad 5x_{2} \leq 400 + (-0.84\times40)</math>
 
<math> \quad 3x_{1} \geq 300 - (-1.28\times30)</math>
 
<math>  \quad 7.5x_{1} + 5x_{2} + 3x_{3} \leq 879 + (-2.33\times0.5) </math>
 
<math>  \quad 200x_{1} + 200x_{2} + 200x_{3} \leq 6120 + (-2.33\times0.5)</math>
 
These are then solved to identify the new parameters for the constraints:


<math>\quad 7.5x_{1} \leq 549.6</math>


<math> \quad 5x_{2} \leq 366.4</math>


<math> \quad 3x_{1} \geq 338.4</math>


<math> \quad 7.5x_{1} + 5x_{2} + 3x_{3} \leq 877.84 </math>


<math>  \quad 200x_{1} + 200x_{2} + 200x_{3} \leq 6118.84</math>


This yields the new optimization problem, which can now be solved using linear programming:


<math>max \quad P = 3x_{1} + x_{2} + 5x_{3}</math>


<math>s.t. \quad 7.5x_{1} \leq 549.6</math>


<math> \qquad \quad 5x_{2} \leq 366.4</math>


<math> \qquad \quad 3x_{1} \geq 338.4</math>


<math> \qquad \quad 7.5x_{1} + 5x_{2} + 3x_{3} \leq 877.84 </math>


<math> \qquad \quad 200x_{1} + 200x_{2} + 200x_{3} \leq 6118.84</math>


<math>  \qquad \quad x_{1}, x_{2}, x_{3} \geq 0</math>
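The <math>\mu + K_{\alpha}\sigma</math> conversions above can be reproduced with plain arithmetic on the table values (the dictionary keys below are just labels for the five constraints):

```python
# mu, sigma, K, and whether each constraint is an upper or lower limit,
# taken from the table in this example.
rows = {
    "c1": (600.0,  60.0, -0.84, "upper"),
    "c2": (400.0,  40.0, -0.84, "upper"),
    "c3": (300.0,  30.0, -1.28, "lower"),
    "c4": (879.0,   0.5, -2.33, "upper"),
    "c5": (6120.0,  0.5, -2.33, "upper"),
}

# Upper limits become mu + K*sigma; the lower limit becomes mu - K*sigma.
new_rhs = {name: (mu + k * sigma if kind == "upper" else mu - k * sigma)
           for name, (mu, sigma, k, kind) in rows.items()}

print({name: round(v, 3) for name, v in new_rhs.items()})
# {'c1': 549.6, 'c2': 366.4, 'c3': 338.4, 'c4': 877.835, 'c5': 6118.835}
```

Note that 877.835 and 6118.835 are rounded to 877.84 and 6118.84 in the constraints above.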
==Applications==
The chance-constraint optimization approach can be extremely beneficial in any application with unknown variables to which probabilities can be assigned. Some examples of real-world applications are as follows:
 
=== Stock Portfolio ===
Chance-constrained methods are often used as a risk-aversion technique for designing a stock portfolio. Within this method, there is a general predicted value for the final wealth and an associated probability of survival.<ref>N.H. Agnew, R.A. Agnew, J. Rasmussen, and K. R. Smith, "An Application of Chance-Constrained Programming to Portfolio Selection in a Casualty Insurance Firm," ''Management Science'', vol. 15, no. 10, pp. 512-520, 1969. </ref>
 
=== Utility Demand (Power/Gas/Water) ===
Energy generation, particularly from renewable sources such as solar or wind, has inherently high variability. It is no longer possible to fully predict energy generation as in the past. Therefore, there is a need for a probabilistic model to provide and distribute energy to all homes with sufficient certainty. Hu et al. analyze the economic dispatch of wind energy to meet dispatch needs in very large instances using the chance-constraint approach.<ref name=":2">Y. Hu, Y. Li, M. Xu, L. Zhou, and M. Cui, "A Chance-Constrained Economic Dispatch Model in Wind-Thermal-Energy Storage System," ''Energies'', vol. 10(3), 2017. </ref>
 
=== Transportation ===
Intermodal Freight Transportation (IFT) is the transportation of a material from one place to another involving at least two modes of transportation (e.g., sea, air, road). Each of the shipping modalities contains its own stochastic factors such as priority, travel time, and availability. IFT provides the opportunity for faster shipment times, lower costs, improved throughput, and broader market availability. Zhao et al. modeled a two-stage chance-constrained approach with the stages being sea and rail, respectively. A network of available stations (rail), seaports (sea), and hubs (both) is modeled to create the available paths for goods. Their research shows that the chance-constrained method allows for different optimal service conditions. Factoring in the varying probability of shipment duration allows late-cost considerations to be incorporated and therefore adds more variability into the design.<ref name=":3">Y. Zhao, Q. Xue, Z. Cao, and X. Zhang, "A Two-Stage Chance Constrained Approach with Application to Stochastic Intermodal Service Network Design Problems," ''Journal of Advanced Transportation'', 2018. </ref>
 
=== Water Reservoir Management ===
Clean, unpolluted water has become an increasingly scarce resource in recent years. To address this issue, storage resources are designed in major cities such as Beijing, China to distribute water during demand periods. These demand periods vary with uncertainty, and thus the reservoir must be designed to distribute water to a satisfactory probability given all conditions. This model may explore impacts such as the season, pollution level, and location of storage, and provide insight for future regulation.<ref name=":4">W. Li, K. Jiao, Z. Bao, Y. Xie, J. Zhen, G. Huang, and L. Fu, "Chance-constrained dynamic programming for multiple water resources allocation management associated with risk-aversion analysis: A case study of Beijing, China," ''Water'', vol. 9(8):596, 2017. </ref>
 
==Conclusion==
The chance-constraint method allows users to identify an optimized solution even when there are outstanding unknown risks within the process. The methodology incorporates variable volatility into the optimization constraints to account for this volatility. The user is able to identify what level of uncertainty is acceptable in the solution, which expands the region of available solutions that could contain the optimum. However, when increasing the level of uncertainty, the user is also increasing the possible variation in the final answer due to the random vectors. There are many different applications which benefit from the chance-constraint method. One of the most common industrial applications is in logistics planning, which can range from power distribution to global transportation networks and natural resource management.<ref name=":2" /><ref name=":3" /><ref name=":4" /> By recognizing and preparing for unknowns in a problem, the chance-constraint method is a popular optimization process when dealing with uncertainties.
 
==Sources==
<references />

Latest revision as of 23:43, 15 December 2021

Authors: Anthony Pacheco, Bryan Bausinger, John Maher, Martin Lis, Trevor Spaulding (SYSEN 5800 Fall 2021)

Introduction

The chance-constraint method of optimization programming is a process for working with random parameters within a problem while guaranteeing a certain performance. Uncertain variables in a project lead to questions regarding reliability and risk which make for difficulties in determining the most likely result. In stochastic optimization, expected or nominal values are used for these uncertainties. While there is some risk in making these stochastic optimization decisions, they are usually considered standard since they were already previously accounted for; a known "penalty". In practice, however, these penalties are not always known entities, such as extreme weather events. In these unknown cases, it is still important to plan for these events to occur and make certain decisions around them. A common line of thinking here is to allow for certain unexpected events that violate specific constraints as long as the overall constraint satisfaction is maintained with a given level of probability. In other words, certain levels of feasibility are guaranteed and these are defined as chance-constraints. For example, a distributor may guarantee a certain amount of product will be delivered while taking into account certain chance-constraints such as: weather events, supply chain issues, and/or a flat tire on a delivery truck. A system's performance can be optimized with uncertain constraints via the chance-constraint optimization method by accounting for these constraints and ensuring they satisfy some well-defined reliability values.[1][2]

Theory/Methodology

General Optimization Problem

A general form of an optimization problem is shown below:[3]

The above general form of the constrained optimization problem has multiple real world applications. However, in the real world the parameter is often uncertain and introduces a level of randomness to the optimization problem. Sometimes this uncertainty is bypassed by using the expected value of which can cause issues as the optimal solution to the above optimization problem may have a high chance of being infeasible. Therefore, the random vector needs to be accounted for depending on the optimization application.[3]

The Random Vector

The random vector, , works to factor in the uncertainty of constraints of practical problems and can both be dependent on time or time-independent. The wide ranging possibilities of the random vector play the largest role in the difficult nature of the chance-constrained optimization problem. Though generally speaking, the random vector is considered to have a known distribution.[2] For example, the chance-constraints can take the linear form in the random vector. This is illustrated in both of the constraint defining equations below [1]:

While both instances of these constraint definitions linear, the first instance shows the random vector as distinguished from the decision vector and the second instance has the random vector directly associated with the decision vector. Each of these types of the random vector within the constraint mapping have real-world optimization applications - most commonly in engineering and energy management.[1]

General Chance-constrained Optimization Problem

When factoring in the random vector to the general optimization problem, it now becomes a chance-constrained optimization problem.[3] The chance-constrained optimization method was first created by Charnes, Cooper, and Symmonds in 1958-1959 relating to financial planning optimization. Over the years, improvements have been made to the chance-constrained optimization theory and computation processing methods, most notably by András Prékopa.[2]

A general form of a chance-constraints is shown below:

This chance-constraint formulated above is considered to be a constraint within an optimization problem where this is some objective function that is meant to be minimized. Therefore, the general optimization problem now becomes the following chance-constrained optimization problem:[3]

Compared to a conventional optimization problem, the chance-constrained optimization problem differs due to the left-hand side inequality above because it is rarely provided explicitly. Therefore, there is not a general solution approach for chance-constrained optimization problems. The random and decision vectors in the above equation can guide the path to solution via a linear programming solution, but not always.[1]

Joint vs. Individual Chance-constraints

The chance-constraint above can be written more explicitly as:

This is called a joint chance-constraint (or sometimes a separate chance-constraint,[2]) because the entire probability covers the entirety of the inequality. In simpler terms, the entire constraint is considered as a whole. The pros and cons of working with joint chance-constraints are listed below:[1]

Joint Chance-constraint Pros:

  • Multiple inequalities used to define a system can result in a more robust optimization solution.
  • Helpful in modeling specific constraints of a system to ensure these meet a certain satisfaction level with an anticipated probability.

Joint Chance-constraint Cons:

  • For a large system with multiple constraints, which indicates multiple inequalities, the overall system of inequalities may be difficult to solve.

On the other hand, if each of the constraints in the system need to be considered individually, the chance-constraint becomes a single constraint, which is formulated below:

This type of constraint is known as an individual chance-constraint (and sometimes as a single chance-constraint,[2]). The pros and cons of working with joint chance-constraints are listed below:[1]

Individual Chance-constraint Pros:

  • There is only one inequality needed to model the entire system.
  • Depending on the modeled system, may help to simplify the problem statement.

Individual Chance-constraint Cons:

  • At times, the singular inequality may be computationally difficult or even impossible to solve.
  • Modeling the system using only one inequality may oversimply the problem and lead to a partial, incomplete solution.

The joint and individual chance-constraint models can be used to support one another if properly utilized. For instance, a system may be modeled by being broken down into joint chance-constraints that are then used to calculate the minimum and maximum optimal value bounds for the individual chance-constraint of the overall system. So if is permissible in a joint chance-constraint of a system, then would also be allowable within the individual chance-constraint of that same system.[1]

Caution should be taken when utilizing solely individual chance-constraints in an attempt to overly simplify a problem. It can be appealing to attempt to model an entire system as a single inequality, especially if a certain probability level of satisfaction is warranted over a specific amount of time. However, this modeling may fail in robustness when considering the system throughout the entire timeframe. As a brief example, if a system is required to reach a certain probability level over a lengthy timeframe, it may be appealing to model the system using individual chance-constraints over multiple shorter time slots. The system is then modeled solely over those separate time slots to ensure the certain probability level is satisfied in piecemeal fashion. While this may result in a more easy-to-solve problem, this individual chance-constraint modeling setup may breakdown when expanding over the entire timeframe instead of focusing in on the separate time slots. So in instances such as these, joint chance-constraints may cause a lengthier problem and solution, but will result in a more robust answer.[4]

Numerical Example

This example will show how to convert a linear programming optimization problem to account for joint chance-constraints.

Given the following optimization problem:

The parameters to the right of the inequality in the first 5 constraints has a normal distribution. The table below identifies mean (), standard deviation (), probability of guarantee (), and the standard normal distribution constant of () of each parameter.

{| class="wikitable"
! Parameter !! Mean (μ) !! Standard Deviation (σ) !! Probability of Guarantee (p) !! Normal Distribution Constant (K)
|-
| || 600 || 60 || 0.80 || -0.84
|-
| || 400 || 40 || 0.80 || -0.84
|-
| || 300 || 30 || 0.90 || -1.28
|-
| || 879 || 0.5 || 0.99 || -2.33
|-
| || 6120 || 0.5 || 0.99 || -2.33
|}
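The normal distribution constants in the table are the standard normal quantiles K = Φ⁻¹(1 − p). As a quick check (a sketch, not part of the original example), Python's standard library reproduces the tabulated values:

```python
from statistics import NormalDist

std_normal = NormalDist()  # standard normal: mean 0, std dev 1

# Probability of guarantee p -> constant K = inverse CDF evaluated at 1 - p
for p in (0.80, 0.90, 0.99):
    k = std_normal.inv_cdf(1 - p)
    print(f"p = {p:.2f} -> K = {k:.2f}")
# p = 0.80 -> K = -0.84
# p = 0.90 -> K = -1.28
# p = 0.99 -> K = -2.33
```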

To apply the chance-constraint, the random parameter in each constraint is replaced with its deterministic equivalent μ + Kσ when the constraint defines an upper limit, and with μ − Kσ when the constraint defines a lower limit, where K = Φ⁻¹(1 − p):

Now populate the formula with the values identified in the table earlier:

These are then solved to identify the new parameters for the constraints:
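Assuming for illustration that all five constraints define upper limits (so each right-hand side becomes μ + Kσ, with the negative K tightening the limit), the arithmetic can be sketched as:

```python
# (mean, std dev, K) triples taken from the table above
rows = [
    (600, 60, -0.84),
    (400, 40, -0.84),
    (300, 30, -1.28),
    (879, 0.5, -2.33),
    (6120, 0.5, -2.33),
]

# Upper-limit deterministic equivalent: replace the random right-hand
# side with mu + K * sigma (K is negative, so the limit tightens).
adjusted = [mu + k * sigma for mu, sigma, k in rows]
print([round(a, 3) for a in adjusted])
# -> [549.6, 366.4, 261.6, 877.835, 6118.835]
```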

This produces the new optimization problem, which can now be solved using linear programming:
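The original example's objective and left-hand-side coefficients were lost in this revision, so the sketch below solves a stand-in two-variable problem against the adjusted right-hand sides: the objective 3x₁ + 2x₂ and the constraint matrix are hypothetical, and the optimum is found by simple vertex enumeration rather than a full LP solver.

```python
from itertools import combinations

# Hypothetical LP: maximize 3*x1 + 2*x2 subject to A x <= b, x >= 0.
# The right-hand sides b are the chance-adjusted parameters computed
# above; the objective c and matrix A are stand-ins, not the original's.
c = (3.0, 2.0)
A = [(1.0, 1.0), (1.0, 0.0), (0.0, 1.0), (2.0, 1.0), (10.0, 10.0)]
b = [549.6, 366.4, 261.6, 877.835, 6118.835]
# Fold the non-negativity bounds in as ordinary constraints.
A += [(-1.0, 0.0), (0.0, -1.0)]
b += [0.0, 0.0]

def corner(i, j):
    """Corner point where constraints i and j both bind (2x2 Cramer's rule)."""
    (p, q), (r, s) = A[i], A[j]
    det = p * s - r * q
    if abs(det) < 1e-9:
        return None  # parallel constraints: no unique corner
    x = (b[i] * s - b[j] * q) / det
    y = (p * b[j] - r * b[i]) / det
    return (x, y)

best = None
for i, j in combinations(range(len(A)), 2):
    pt = corner(i, j)
    if pt is None:
        continue
    # Keep the corner only if it satisfies every constraint.
    if all(u * pt[0] + v * pt[1] <= rhs + 1e-9 for (u, v), rhs in zip(A, b)):
        val = c[0] * pt[0] + c[1] * pt[1]
        if best is None or val > best:
            best = val
print(round(best, 3))  # -> 1427.435
```

Enumerating constraint intersections works here only because the problem has two variables; a real model of any size would use a proper LP solver instead.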

==Applications==

The chance-constraint optimization approach can be extremely beneficial in any application where there are unknown variables whose probabilities can nevertheless be estimated. Some examples of real-world applications are as follows:

===Stock Portfolio===

Chance-constrained methods are often used as a risk aversion technique for designing a stock portfolio. Within this method, the portfolio is built around a predicted value for the final wealth and an associated probability of survival.[5]

===Utility Demand (Power/Gas/Water)===

Energy creation, particularly from renewable sources, has inherently high variability, as in the examples of solar or wind. It is no longer possible to fully predict energy generation as in the past. Therefore, there is a need for a probabilistic model to provide and distribute energy to all homes with sufficient certainty. Hu et al. analyze the economic dispatch of wind energy to meet dispatch needs in very large instances using the chance-constraint approach.[6]

===Transportation===

Intermodal Freight Transportation (IFT) is the transportation of a material from one place to another involving at least two modes of transportation (e.g., sea, air, road). Each of the shipping modalities contains its own stochastic factors such as priority, travel time, and availability. IFT provides the opportunity for faster shipment times, lower costs, improved throughput, and broader market availability. Zhao et al. modeled a two-stage chance-constrained approach with the stages being sea and rail, respectively. A network of available stations (rail), seaports (sea), and hubs (both) is modeled to create the available paths for goods. Their research shows the chance-constrained method allows for different optimal service conditions. Factoring in the varying probability of shipment duration allows late-cost considerations to be incorporated and therefore adds more variability into the design.[7]

===Water Reservoir Management===

Clean, unpolluted water has become a scarcer resource in recent years. To address this issue, storage resources are designed in major cities such as Beijing, China, to distribute water during demand periods. These demand periods vary with uncertainty, and thus the reservoir must be designed to distribute water to a satisfactory probability given all conditions. Such a model can explore impacts such as season, pollution level, and storage location, and provide insight for future regulation.[8]

==Conclusion==

The chance-constraint method allows users to identify an optimized solution even when there are outstanding unknown risks within the process. The methodology incorporates variable volatility into the optimization constraints to account for this volatility. The user is able to identify what level of uncertainty is acceptable in the solution, which expands the region of available solutions that could contain the optimum. However, when increasing the level of uncertainty, the user also increases the possible variation in the final answer due to the random vectors. There are many different applications which benefit from the chance-constraint method. One of the most common industrial applications is logistics planning, ranging from power distribution to global transportation networks and natural resource management.[6][7][8] By recognizing and preparing for unknowns in a problem, the chance-constraint method is a popular optimization process when dealing with uncertainties.

==Sources==

  1. W. van Ackooij, R. Zorgati, R. Henrion, and A. Möller, "Chance Constrained Programming and Its Applications to Energy Management," 2011.
  2. A. Geletu, M. Klöppel, H. Zhang, and P. Li, "Advances and applications of chance-constrained approaches to systems optimisation under uncertainty," International Journal of Systems Science, vol. 44, pp. 1209-1232, 2013.
  3. S. Peng, "Chance constrained problem and its applications," Université Paris Saclay; Xi'an Jiaotong University, 2019.
  4. W. van Ackooij, R. Henrion, A. Möller, and R. Zorgati, "On probabilistic constraints induced by rectangular sets and multivariate normal distributions," Mathematical Methods of Operations Research, vol. 71(3), pp. 535-549, 2010.
  5. N.H. Agnew, R.A. Agnew, J. Rasmussen, and K.R. Smith, "An Application of Chance-Constrained Programming to Portfolio Selection in a Casualty Insurance Firm," Management Science, vol. 15, no. 10, pp. 512-520, 1969.
  6. Y. Hu, Y. Li, M. Xu, L. Zhou, and M. Cui, "A Chance-Constrained Economic Dispatch Model in Wind-Thermal-Energy Storage System," Energies, vol. 10(3), 2017.
  7. Y. Zhao, Q. Xue, Z. Cao, and X. Zhang, "A Two-Stage Chance Constrained Approach with Application to Stochastic Intermodal Service Network Design Problems," Journal of Advanced Transportation, 2018.
  8. W. Li, K. Jiao, Z. Bao, Y. Xie, J. Zhen, G. Huang, and L. Fu, "Chance-constrained dynamic programming for multiple water resources allocation management associated with risk-aversion analysis: A case study of Beijing, China," Water, vol. 9(8):596, 2017.