# Chance-constraint method

Authors: Anthony Pacheco, Bryan Bausinger, John Maher, Martin Lis, Trevor Spaulding (SYSEN 5800 Fall 2021)

## Introduction

The chance-constraint method of optimization programming is a process for working with random parameters within a problem while guaranteeing a certain level of performance. Uncertain variables in a project lead to questions of reliability and risk, which make it difficult to determine the most likely result. In stochastic optimization, expected or nominal values are used for these uncertainties. While there is some risk in making these stochastic optimization decisions, they are usually considered a "cost of doing business" since they were already accounted for; a known "penalty". In practice, however, not all risks are known and easily pre-planned for, such as extreme weather events, "acts of god", etc. In these latter cases, it is still important to plan for such events and make certain decisions around them. A common approach is to allow certain unexpected events to violate certain constraints, as long as overall constraint satisfaction is maintained with a given level of probability. In other words, a certain level of feasibility is guaranteed through what are defined as chance constraints. For example, a distributor may guarantee that a certain amount of product will be delivered while taking into account chance constraints such as weather events, supply chain issues, and/or a flat tire on a delivery truck. The performance of a system with uncertain constraints can be optimized via the chance-constraint method by accounting for these constraints and ensuring that they satisfy well-defined reliability values.[1][2]

## Theory/Methodology

The chance-constrained optimization method was first developed by Charnes, Cooper, and Symonds in 1958-1959 in the context of financial planning optimization. Over the years, improvements have been made to chance-constrained optimization theory and its computational methods, most notably by András Prékopa.

A general form of a chance constraint is shown below:

${\displaystyle P(h(x,\xi )\geq 0)\geq p}$

where x is the decision variable, ξ is a random variable, and p ∈ [0,1] is the required probability (reliability) level.

The chance constraint formulated above is considered to be a constraint within an optimization problem in which there is some objective function, usually denoted by f(x), that is to be minimized.

Compared to a conventional optimization problem, the chance-constrained optimization problem differs in that the probability on the left-hand side of the inequality above is rarely available in explicit form. As a result, there is no general solution approach for chance-constrained optimization problems. In favorable cases, the structure of the random and decision variables allows the problem to be reformulated and solved as a linear program, but this is not always possible. The wide range of possible realizations of the random variable is the largest source of difficulty in chance-constrained optimization.

The chance constraint above can be written more explicitly as:

${\displaystyle P(h_{j}(x,\xi )\geq 0(j=1,...,m))\geq p}$

This is called a joint chance constraint because a single probability covers the entire system of inequalities; in other words, the constraints are considered as a whole. If, on the other hand, each constraint in the system is to be satisfied individually with probability p, the joint constraint is replaced by individual (single) chance constraints, formulated below:

${\displaystyle P(h_{j}(x,\xi )\geq 0)\geq p\quad (j=1,...,m)}$

These latter individual chance constraints generally lead to more inequalities than the joint constraint, but they are also considered easier to solve since each constraint is accounted for on its own. The joint chance constraint is more difficult to solve numerically because the system must be considered as a whole rather than piecewise.[1]
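The gap between joint and individual chance constraints can be seen numerically. The sketch below estimates both probabilities by Monte Carlo sampling for a hypothetical two-row linear system h_j(x, ξ) = a_j(ξ)·x − b_j; the coefficient matrix, the Gaussian noise on it, and the fixed decision x are all illustrative assumptions, not data from this article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical system with m = 2 rows: h_j(x, xi) = a_j(xi) . x - b_j,
# where xi perturbs the coefficients with Gaussian noise (illustrative).
x = np.array([1.0, 2.0])
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])

n_samples = 100_000
noise = rng.normal(0.0, 0.3, size=(n_samples, 2, 2))
h = (A + noise) @ x - b          # shape (n_samples, 2): h_j per scenario

satisfied = h >= 0
joint_prob = satisfied.all(axis=1).mean()   # P(h_1 >= 0 and h_2 >= 0)
individual_probs = satisfied.mean(axis=0)   # P(h_j >= 0) for each j

print(f"joint: {joint_prob:.3f}, individual: {individual_probs.round(3)}")
```

As expected, the joint satisfaction probability never exceeds the smallest individual one, which is why a joint constraint at level p is more demanding than the corresponding individual constraints at the same level.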

## Numerical Example

Consider a simple production planning problem. Two raw materials, raw1 and raw2, can each be refined into two products, prod1 and prod2. The table below lists the product yields per unit of each raw material, the unit cost c of each raw material, and the capacity usage b, together with the minimum required outputs h and a total capacity of 100:

| Raws | prod1 | prod2 | c | b |
|------|-------|-------|---|---|
| raw1 | 2 | 3 | 2 | 1 |
| raw2 | 6 | 3 | 3 | 1 |
| relation | ≥ | ≥ |  | ≤ |
| h | 180 | 162 | γ | 100 |

The corresponding deterministic linear program is:

min ${\displaystyle 2x_{raw1}+3x_{raw2}}$

s.t. ${\displaystyle x_{raw1}+x_{raw2}\leq 100}$,

${\displaystyle 2x_{raw1}+6x_{raw2}\geq 180}$,

${\displaystyle 3x_{raw1}+3x_{raw2}\geq 162}$,

${\displaystyle x_{raw1}\geq 0}$,

${\displaystyle x_{raw2}\geq 0}$.
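The deterministic LP above can be solved with an off-the-shelf solver. The sketch below uses `scipy.optimize.linprog`, negating the ≥ rows to fit its `A_ub @ x <= b_ub` convention:

```python
from scipy.optimize import linprog

# Objective: minimize 2*x_raw1 + 3*x_raw2
c = [2, 3]
# linprog expects A_ub @ x <= b_ub, so the >= demand rows are negated.
A_ub = [[1, 1],      # x_raw1 + x_raw2 <= 100
        [-2, -6],    # 2*x_raw1 + 6*x_raw2 >= 180
        [-3, -3]]    # 3*x_raw1 + 3*x_raw2 >= 162
b_ub = [100, -180, -162]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("optimum:", res.x, "cost:", res.fun)
```

The optimum is x_raw1 = 36, x_raw2 = 18 with cost 126, at the vertex where both demand constraints are active.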

Now suppose that some of the yields and requirements are uncertain. Introducing random perturbations ${\displaystyle {\tilde {\eta }}_{1},{\tilde {\eta }}_{2}}$ and ${\displaystyle {\tilde {\zeta }}_{1},{\tilde {\zeta }}_{2}}$, the problem becomes:

min ${\displaystyle 2x_{raw1}+3x_{raw2}}$

s.t. ${\displaystyle x_{raw1}+x_{raw2}\leq 100}$,

${\displaystyle (2+{\tilde {\eta }}_{1})x_{raw1}+6x_{raw2}\geq 180+{\tilde {\zeta }}_{1}}$,

${\displaystyle 3x_{raw1}+(3.4-{\tilde {\eta }}_{2})x_{raw2}\geq 162+{\tilde {\zeta }}_{2}}$,

${\displaystyle x_{raw1}\geq 0}$,

${\displaystyle x_{raw2}\geq 0}$.
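Because the perturbed constraints contain random variables, a fixed decision satisfies them only with some probability. The sketch below estimates that probability by Monte Carlo sampling at the deterministic optimum (x_raw1, x_raw2) = (36, 18); the normal distributions chosen for the perturbations are illustrative assumptions, since the example does not specify them:

```python
import numpy as np

rng = np.random.default_rng(1)
x1, x2 = 36.0, 18.0   # optimum of the nominal (deterministic) LP

n = 200_000
# Assumed distributions for the random perturbations (illustrative only):
eta1 = rng.normal(0.0, 0.2, n)
eta2 = rng.normal(0.0, 0.2, n)
zeta1 = rng.normal(0.0, 5.0, n)
zeta2 = rng.normal(0.0, 5.0, n)

ok1 = (2 + eta1) * x1 + 6 * x2 >= 180 + zeta1
ok2 = 3 * x1 + (3.4 - eta2) * x2 >= 162 + zeta2

joint = (ok1 & ok2).mean()
print("joint satisfaction probability:", joint)
```

Under these assumed distributions the nominal optimum satisfies the perturbed constraints only part of the time, which motivates re-optimizing with the constraints imposed as chance constraints.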

Since the perturbed constraints can no longer be required to hold for every realization of the random variables, they are instead imposed as chance constraints. Requiring both production constraints to hold jointly with probability at least p gives the joint chance constraint:

${\displaystyle P\left((2+{\tilde {\eta }}_{1})x_{raw1}+6x_{raw2}\geq 180+{\tilde {\zeta }}_{1},\;3x_{raw1}+(3.4-{\tilde {\eta }}_{2})x_{raw2}\geq 162+{\tilde {\zeta }}_{2}\right)\geq p}$

Alternatively, each constraint can be required to hold individually with probability at least p, which yields two single chance constraints that are generally easier to handle numerically.
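One common way to approximate a chance constraint numerically is the scenario approach: sample N realizations of the random variables and require the constraints to hold for every sample, which turns the problem back into an ordinary (larger) LP. The sketch below applies this idea to the example; the perturbation distributions and the sample size N are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)

# Scenario approximation: enforce the random constraints for N sampled
# realizations of (eta1, eta2, zeta1, zeta2). Distributions are assumed.
N = 500
eta1 = rng.normal(0.0, 0.2, N)
eta2 = rng.normal(0.0, 0.2, N)
zeta1 = rng.normal(0.0, 5.0, N)
zeta2 = rng.normal(0.0, 5.0, N)

c = [2, 3]
# Capacity row plus 2N sampled demand rows, all in A_ub @ x <= b_ub form.
A_ub = [[1, 1]]
b_ub = [100]
for k in range(N):
    A_ub.append([-(2 + eta1[k]), -6])    # (2+eta1)x1 + 6x2 >= 180+zeta1
    b_ub.append(-(180 + zeta1[k]))
    A_ub.append([-3, -(3.4 - eta2[k])])  # 3x1 + (3.4-eta2)x2 >= 162+zeta2
    b_ub.append(-(162 + zeta2[k]))

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("scenario solution:", res.x, "cost:", res.fun)
```

The scenario solution costs more than the nominal optimum of 126 because it must survive the worst of the sampled realizations; larger N pushes the empirical constraint-satisfaction probability higher at the price of a more conservative (costlier) decision.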

## Sources

1. W. van Ackooij, R. Zorgati, R. Henrion and A. Möller (2011). "Chance Constrained Programming and Its Applications to Energy Management", in Stochastic Optimization - Seeing the Optimal for the Uncertain, I. Dritsas (Ed.), ISBN 978-953-307-829-8, InTech. Available from: http://www.intechopen.com/books/stochastic-optimization-seeing-the-optimal-for-the-uncertain/chanceconstrained-programming-and-its-applications-to-energy-management
2. A. Geletu, M. Klöppel, H. Zhang and P. Li (2013). "Advances and applications of chance-constrained approaches to systems optimisation under uncertainty", International Journal of Systems Science, 44:7, 1209-1232.