# Chance-constraint method

Authors: Anthony Pacheco, Bryan Bausinger, John Maher, Martin Lis, Trevor Spaulding (SYSEN 5800 Fall 2021)

## Introduction

The chance-constraint method of optimization is a process for working with random parameters within a problem while guaranteeing a certain level of performance. Uncertain variables in a project raise questions of reliability and risk, which make it difficult to determine the most likely result. In stochastic optimization, expected or nominal values are used for these uncertainties. While there is some risk in making these stochastic optimization decisions, they are usually considered a "cost of doing business" since they were already accounted for; a known "penalty". In practice, however, not all risks are known and easily pre-planned for, such as extreme weather events, "acts of God", etc. In these latter cases, it is still important to plan for such events to occur and to make decisions around them. A common line of thinking is to allow unexpected events to violate certain constraints, as long as overall constraint satisfaction is maintained with a given level of probability. In other words, a certain level of feasibility is guaranteed in what are defined as chance constraints. For example, a distributor may guarantee that a certain amount of product will be delivered while taking into account chance constraints such as weather events, supply chain issues, or a flat tire on a delivery truck. The performance of a system with uncertain constraints can be optimized via the chance-constraint method by accounting for these constraints and ensuring that they satisfy well-defined reliability values.

## Theory/Methodology

The chance-constrained optimization method was first introduced by Charnes, Cooper, and Symonds in 1958-1959 in the context of financial planning optimization. Over the years, improvements have been made to chance-constrained optimization theory and computational methods, most notably by András Prékopa.

A general form of a chance constraint is shown below:

$P(h(x,\xi )\geq 0)\geq p$

where $x$ is a decision variable, $\xi$ is a random variable, and $p\in [0,1]$.

The chance constraint formulated above is a constraint within an optimization problem in which there is some objective function, usually denoted by $f(x)$, that is to be minimized.

Compared to a conventional optimization problem, chance-constrained optimization differs in that the probability on the left-hand side of the inequality above is rarely available explicitly. As a result, there is no general solution approach for chance-constrained optimization problems. The structure of the random and decision variables in the above equation can sometimes allow a linear programming solution, but not always. The wide range of possible values of the random variable plays the largest role in the difficulty of chance-constrained optimization problems.

The chance constraint above can be written more explicitly as:

$P(h_{j}(x,\xi )\geq 0\;(j=1,...,m))\geq p$

This is called a joint chance constraint because a single probability requirement covers the entire system of inequalities; in simpler terms, the constraint system is considered as a whole. On the other hand, if each constraint in the system must hold individually with probability $p$, the chance constraint becomes a set of individual chance constraints, formulated below:

$P(h_{j}(x,\xi )\geq 0)\geq p\quad (j=1,...,m)$

These individual chance constraints generally lead to more inequalities than the joint constraint but are also considered easier to solve, since each constraint is handled on its own. The joint chance constraint is more difficult to solve numerically because the system has to be treated as a whole rather than piecewise.
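For the common special case of an individual chance constraint that is linear with normally distributed right-hand side noise, a closed-form deterministic equivalent exists: $P(a^{T}x\geq \xi )\geq p$ with $\xi \sim N(\mu ,\sigma ^{2})$ holds exactly when $a^{T}x\geq \mu +\sigma \Phi ^{-1}(p)$, where $\Phi ^{-1}$ is the standard normal quantile function. A minimal sketch, in which the values of $\mu$, $\sigma$, and $p$ are illustrative assumptions rather than data from the text:

```python
from statistics import NormalDist

# Deterministic equivalent of an individual chance constraint with normal noise:
# P(a.x >= xi) >= p, with xi ~ N(mu, sigma^2), is equivalent to
#   a.x >= mu + sigma * Phi^{-1}(p)
mu, sigma, p = 180.0, 10.0, 0.95   # assumed values, for illustration only

threshold = mu + sigma * NormalDist().inv_cdf(p)
print(f"Enforce a.x >= {threshold:.2f} instead of a.x >= {mu}")
```

The chance constraint is thus replaced by an ordinary linear constraint with a tightened right-hand side, which is why individual constraints with known distributions are often tractable.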

## Numerical Example

To explain chance constraints through a simplified example, consider a refinery that can take two raw materials ($raw_{1}$, $raw_{2}$) and simultaneously produce two different goods (prod1, prod2). To understand the production costs ($\gamma$), we must relate the generated outputs to the amounts of raw material required to produce them ($c_{raw1}$, $c_{raw2}$). Additionally, we need to know the demand for each product ($h_{prod1}$, $h_{prod2}$) and the production capacity ($h_{total}$) based on the constraints of the refinery. Below is a condensed table with the production constraints and costs, followed by a formulation of these constraints as a linear program.

| Raws     | prod1 | prod2 | c | b   |
|----------|-------|-------|---|-----|
| raw1     | 2     | 3     | 2 | 1   |
| raw2     | 6     | 3     | 3 | 1   |
| relation | ≥     | ≥     | = | ≤   |
| h        | 180   | 162   | γ | 100 |

min: $2x_{raw1}+3x_{raw2}$

s.t.: $x_{raw1}+x_{raw2}\leq 100$,

$2x_{raw1}+6x_{raw2}\geq 180$ ,

$3x_{raw1}+3x_{raw2}\geq 162$ ,

$x_{raw1}\geq 0$ ,

$x_{raw2}\geq 0$ .
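As a sketch, the deterministic LP above can be solved numerically, for example with SciPy's `linprog` (the coefficients below mirror the formulation above; SciPy expects all inequality constraints in the form $A_{ub}x\leq b_{ub}$, so the $\geq$ demand constraints are negated):

```python
from scipy.optimize import linprog

c = [2, 3]                    # cost: 2*x_raw1 + 3*x_raw2
A_ub = [
    [1, 1],                   # capacity: x_raw1 + x_raw2 <= 100
    [-2, -6],                 # demand prod1: 2*x_raw1 + 6*x_raw2 >= 180
    [-3, -3],                 # demand prod2: 3*x_raw1 + 3*x_raw2 >= 162
]
b_ub = [100, -180, -162]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)         # optimal raw purchases and minimal cost
```

Running this recovers the optimum found graphically below.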

Graphing the objective function and constraints allows us to immediately see the feasible region where all constraints are satisfied.

[Chart to be added]

Visually analyzing the feasible region, we can conclude that the following is the optimal solution to the problem:

${\hat {x}}_{raw1}=36$, ${\hat {x}}_{raw2}=18$, $\gamma ({\hat {x}})=126$

This is an oversimplification of the problem, as all the variables are assumed to be fixed. For example, although the unit cost of the raw materials may be well understood, these costs can be affected by natural disasters (fires or hurricanes destroying materials, supply chain disruptions), politics (sanctions), and other factors. Although these macro events may have little impact on immediate material costs, they lower the probability that the assumption of a fixed price will be accurate. More impactfully, demand for the various products is more difficult to predict and therefore has higher uncertainty. Additionally, the production capability at any given time is not fixed; for example, there may be issues with machines (repairs, calibrations, etc.) or with the machine operators (illness or vacation leaving too few operators to run all machines at full capacity). To make this problem more realistic, let's assume:

• Productivities of prod1 and prod2 vary randomly
• Weekly demands $h_{prod1}$, $h_{prod2}$ vary randomly
• The weekly production plan must be fixed before manufacturing, and actual production of prod1 and prod2 can only be measured during production
• Clients expect actual demand to be satisfied

The more simplistic linear program defined above now turns into a more complex stochastic linear program:

min: $2x_{raw1}+3x_{raw2}$

s.t.: $x_{raw1}+x_{raw2}\leq 100$,

$(2+{\tilde {\eta }}_{1})x_{raw1}+6x_{raw2}\geq 180+{\tilde {\zeta }}_{1}$ ,

$3x_{raw1}+(3.4-{\tilde {\eta }}_{2})x_{raw2}\geq 162+{\tilde {\zeta }}_{2}$ ,

$x_{raw1}\geq 0$ ,

$x_{raw2}\geq 0$ .

where the ${\tilde {\zeta }}_{j}$ are random variables modelled with normal distributions and ${\tilde {\eta }}_{1}$, ${\tilde {\eta }}_{2}$ follow uniform and exponential distributions, respectively. Adjusting the values of ${\tilde {\eta }}_{1}$ and ${\tilde {\eta }}_{2}$ based on a desired confidence level, we can see the effect that this has on the feasible region.

[Chart to be added]
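One common way to assess a candidate plan against such a stochastic program is Monte Carlo sampling: draw the random variables many times and count how often the constraints hold. The sketch below evaluates the deterministic optimum $(36, 18)$; the distribution parameters are illustrative assumptions, since the text does not specify them:

```python
import random

random.seed(42)

def constraints_hold(x1, x2):
    """One Monte Carlo draw of the random constraint system.
    All distribution parameters are assumptions for illustration."""
    eta1 = random.uniform(-0.8, 0.8)     # productivity noise on prod1
    eta2 = random.expovariate(1 / 0.4)   # productivity noise on prod2
    zeta1 = random.gauss(0, 12)          # demand noise on prod1
    zeta2 = random.gauss(0, 12)          # demand noise on prod2
    return ((2 + eta1) * x1 + 6 * x2 >= 180 + zeta1
            and 3 * x1 + (3.4 - eta2) * x2 >= 162 + zeta2)

n = 100_000
rate = sum(constraints_hold(36, 18) for _ in range(n)) / n
print(f"Estimated joint satisfaction probability at (36, 18): {rate:.3f}")
```

Because the deterministic optimum satisfies both demand constraints with equality, noise violates them a large fraction of the time; a joint chance constraint with $p$ close to 1 would force a more conservative, more expensive plan.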

The difficulty in implementing a problem like this is that a production plan must be determined based on the uncertainty of the situation and on how much risk one is willing to accept. There does exist a “safe” solution, in which the feasible region is determined by the requirement to satisfy every constraint, even at different probabilities.

[Chart to be added]

Although this may be the safest solution, it may not be optimal, as you may not procure enough raw materials to meet demand that exceeds the “safe” solution. Additional complexities and optimizations can be considered to improve the solution but are omitted from this example.


## Conclusion

The chance-constraint method allows users to identify an optimized solution even when there are outstanding unknown risks within the process. The methodology incorporates variable volatility into the optimization constraints to account for that volatility. The user is able to specify what level of uncertainty is acceptable in the solution, which expands the region of available solutions that could contain the optimum. However, when increasing the level of uncertainty, the user also increases the possible variation in the final answer due to the random variables. Since each problem can involve different types and magnitudes of risk and variability, there is no universal approach to solving these problems, which makes each application unique.

There are many different applications that benefit from the chance-constraint method. One of the most common industrial applications is logistics planning, which ranges from power distribution to global transportation networks and natural resource management. The need to account for unknown risk in these fields is easily seen in current challenges such as increasing variations in weather impacting rainfall, and the COVID-19 pandemic changing travel requirements and consumer spending.

## Sources

1. Wim van Ackooij, Riadh Zorgati, René Henrion and Andris Möller (2011). Chance Constrained Programming and Its Applications to Energy Management. In: Stochastic Optimization - Seeing the Optimal for the Uncertain, Ioannis Dritsas (Ed.), ISBN 978-953-307-829-8, InTech. Available from: http://www.intechopen.com/books/stochastic-optimization-seeing-the-optimal-for-the-uncertain/chanceconstrained-programming-and-its-applications-to-energy-management
2. Abebe Geletu, Michael Klöppel, Hui Zhang & Pu Li (2013). Advances and applications of chance-constrained approaches to systems optimisation under uncertainty. International Journal of Systems Science, 44:7, 1209-1232.
3. Kall, P., & Wallace, S. W. (1994). Stochastic Programming. John Wiley & Sons. Retrieved November 29, 2021, from https://www.researchgate.net/publication/200035327_Stochastic_Programming.