<div>Author: Nicholas Kincaid (ChemE 6800 Fall 2020)<br />
<br />
== Introduction ==<br />
Adam <ref name="adam"> Kingma, Diederik P., and Jimmy Lei Ba. Adam: A Method for Stochastic Optimization. 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings, 2015, pp. 1–15.</ref> is a variant of gradient descent that has become widely popular in the machine learning community. Presented in 2015, the Adam algorithm is often recommended as the default algorithm for training neural networks, as it has shown improved performance over other gradient descent variants on a wide range of problems. Adam's name is derived from adaptive moment estimation: it uses estimates of the first and second moments of the gradient to perform updates, which can be seen as combining gradient descent with momentum (the first-order moment) and the [https://optimization.cbe.cornell.edu/index.php?title=RMSProp RMSProp] algorithm<ref>Tieleman, Tijmen, and Hinton, Geoffrey. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude, COURSERA: Neural Networks for Machine Learning, 2012.</ref> (the second-order moment).<br />
<br />
== Background ==<br />
=== Batch Gradient Descent ===<br />
In standard batch gradient descent, the parameters, <math>\theta</math>, of the objective function <math>f(\theta)</math>, are updated based on the gradient of <math>f</math> with respect to <br />
<math>\theta</math> for the entire training dataset, as<br />
<br />
<math> g_t =\nabla_{\theta_{t-1}} f \big(\theta_{t-1} \big) </math> <br/><br />
<math> \theta_t = \theta_{t-1} - \alpha g_t , </math> <br/><br />
<br />
where <math>\alpha</math> is the learning rate, a hyper-parameter of the optimization algorithm, and <math>t</math> is the iteration number. Key challenges of standard gradient descent are its tendency to get stuck in local minima and/or saddle points of the objective function, as well as the difficulty of choosing a proper learning rate, <math>\alpha</math>, which can lead to poor convergence.<ref>Ruder, Sebastian. An Overview of Gradient Descent Optimization Algorithms, 2016, pp. 1–14, http://arxiv.org/abs/1609.04747.</ref><br />
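The batch update above can be sketched in a few lines of Python (the function and variable names here are illustrative, not from any particular library):

```python
import numpy as np

def batch_gradient_descent(grad_f, theta0, alpha=0.1, iters=100):
    """Plain batch gradient descent: theta_t = theta_{t-1} - alpha * g_t."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        g = grad_f(theta)          # gradient over the full dataset
        theta = theta - alpha * g  # step in the negative gradient direction
    return theta

# Minimize f(theta) = theta^2, whose gradient is 2*theta.
theta_min = batch_gradient_descent(lambda th: 2 * th, theta0=[5.0])
```

With a well-chosen learning rate the iterates contract toward the minimizer at zero; too large an <math>\alpha</math> would make the same loop diverge.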
<br />
=== Stochastic Gradient Descent ===<br />
Another variant of gradient descent is [https://optimization.cbe.cornell.edu/index.php?title=Stochastic_gradient_descent stochastic gradient descent (SGD)], in which the gradient is computed and the parameters are updated as above, but using a single training sample at a time. <br />
=== Mini-Batch Gradient Descent ===<br />
In between batch gradient descent and stochastic gradient descent, mini-batch gradient descent computes parameter updates from the gradient of a subset of the training set, where the size of the subset is referred to as the batch size.<br />
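All three variants can be expressed with a single batch-size knob: a batch size of 1 gives SGD, the full dataset gives batch gradient descent, and anything in between gives mini-batch gradient descent. A minimal sketch on a synthetic least-squares problem (the data and names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -3.0])            # noiseless linear data

def grad(theta, Xb, yb):
    # Gradient of the mean squared error over the batch (Xb, yb).
    return 2 * Xb.T @ (Xb @ theta - yb) / len(yb)

theta, alpha, batch_size = np.zeros(2), 0.1, 10
for epoch in range(200):
    idx = rng.permutation(len(y))        # reshuffle each epoch
    for start in range(0, len(y), batch_size):
        b = idx[start:start + batch_size]  # batch_size=1 -> SGD; =100 -> batch GD
        theta -= alpha * grad(theta, X[b], y[b])
```

On this noiseless problem the loop recovers the true coefficients; only the inner slice changes between the three variants.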
<br />
== Adam Algorithm ==<br />
The Adam algorithm first computes the gradient, <math>g_t</math>, of the objective function with respect to the parameters <math>\theta</math>, and then computes and stores first- and second-order moments of the gradient, <math>m_t</math> and <math>v_t</math><br />
respectively, as<br />
<br />
<math> m_t = \beta_1 \cdot m_{t-1} + (1-\beta_1) \cdot g_t </math> <br/><br />
<math> v_t = \beta_2 \cdot v_{t-1} + (1-\beta_2) \cdot g_t^2, </math> <br/><br />
<br />
where <math>\beta_1</math> and <math>\beta_2</math> are hyper-parameters in the interval <math>[0,1]</math>. These parameters can be seen as exponential decay rates of the estimated moments, as the previous value is successively multiplied by a value less than 1 in each iteration. The authors of the original paper suggest <math>\beta_1 = 0.9</math> and <math>\beta_2 = 0.999</math>. In the current notation, the first iteration of the algorithm is at <math>t=1</math>, and both <math>m_0</math> and <math>v_0</math> are initialized to zero. Since both moments are initialized to zero, these estimates are biased towards zero at early time steps. To counter this, the authors proposed bias-corrected versions of <math>m_t</math> and <math>v_t</math>:<br />
<br />
<math> \hat{m}_t = m_t / (1-\beta_1 ^t) </math> <br/><br />
<math> \hat{v}_t = v_t / (1-\beta_2 ^t). </math> <br/><br />
Finally, the parameter update is computed as<br />
<br />
<math> \theta_t = \theta_{t-1} - \alpha \cdot \hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon), </math> <br/><br />
<br />
where <math>\epsilon</math> is a small constant for stability. The authors recommend a value of <math>\epsilon=10^{-8}</math>. <br />
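Putting the update equations together, a minimal Adam step might look like the following sketch (not the reference implementation from the paper; the function name is illustrative, and alpha = 0.1 is chosen only to make the first step visible):

```python
import numpy as np

def adam_step(theta, g, m, v, t, alpha=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; returns the new parameters and moment estimates."""
    m = beta1 * m + (1 - beta1) * g           # first-moment estimate
    v = beta2 * v + (1 - beta2) * g ** 2      # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias corrections
    v_hat = v / (1 - beta2 ** t)
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# One step on f(theta) = theta^2 (gradient 2*theta) from theta = 1.
theta, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
theta, m, v = adam_step(theta, 2 * theta, m, v, t=1)  # theta ≈ 0.9
```

Because of the bias correction, the first step has magnitude close to alpha regardless of the gradient's scale, which is one reason Adam is relatively robust to the choice of learning rate.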
<br />
== Numerical Example ==<br />
<br />
[[File:Contour.png|thumb|Contour plot of the loss function showing the trajectory of Adam algorithm from the initial point]]<br />
<br />
[[File:Model fit .png|thumb|Plot showing original data points and resulting model fit from the Adam algorithm]]<br />
<br />
<br />
To illustrate how updates occur in the Adam algorithm, consider a linear least-squares regression problem. The table below shows a sample dataset of student exam grades and the number of hours spent studying for the exam. The goal of this example is to fit a linear model that predicts exam grade as a function of time spent studying.<br />
<br />
{| class="wikitable"<br />
|-<br />
| Hours Studying || 9.0 || 4.9 || 1.6 || 1.9 || 7.9 || 2.0 || 11.5 || 3.9 || 1.1 || 1.6 || 5.1 || 8.2 || 7.3 || 10.4 || 11.2<br />
|-<br />
| Exam Grade || 88.0 || 72.3 || 66.5 || 65.1 || 79.5 || 60.8 || 94.3 || 66.7 || 65.4 || 63.8 || 68.4 || 82.5 || 75.9 || 87.8 || 85.2<br />
|}<br />
<br />
The hypothesized model function will be<br />
<br />
<math>f_\theta(x) = \theta_0 + \theta_1 x.</math><br />
<br />
The cost function is defined as<br />
<br />
<math> J({\theta}) = \frac{1}{2}\sum_i^n \big(f_\theta(x_i) - y_i \big)^2, </math><br />
<br />
where the <math>1/2</math> coefficient is used only to make the derivatives cleaner. The optimization problem is then to find the values of <math>\theta</math> that minimize the squared residuals between <math>f_\theta(x)</math> and <math>y</math>: <br />
<br />
<math> \mathrm{argmin}_{\theta} \quad \frac{1}{2}\sum_{i}^n \big(f_\theta(x_i) - y_i \big) ^2 </math><br />
<br />
For simplicity, parameters will be updated after every data point, i.e., with a batch size of 1. For a single data point, the derivatives of the cost function with respect to <math>\theta_0</math> and <math>\theta_1</math> are<br />
<br />
<math> \frac{\partial J(\theta)}{\partial \theta_0} = \big(f_\theta(x) - y \big) </math><br/><br />
<math> \frac{\partial J(\theta)}{\partial \theta_1} = \big(f_\theta(x) - y \big) x </math><br />
<br />
The initial values of <math>{\theta}</math> are set to <math>[50, 1]</math>, the learning rate <math>\alpha</math> is set to 0.1, and the suggested values for <math>\beta_1</math>, <math>\beta_2</math>, and <math>\epsilon</math> are used. With the first data sample of <math> (x,y)=(9.0, 88.0)</math>, the computed gradients are<br />
<br />
<math> \frac{\partial J(\theta)}{\partial \theta_0} = \big(50 + 1\cdot 9.0 - 88.0 \big) = -29.0 </math><br/><br />
<math> \frac{\partial J(\theta)}{\partial \theta_1} = \big(50 + 1\cdot 9.0 - 88.0 \big)\cdot 9.0 = -261.0 </math><br/><br />
<br />
With <math>m_0</math> and <math>v_0</math> being initialized to zero, the calculations of <math>m_1</math> and <math>v_1</math> are<br />
<br />
<math> m_1 = 0.9 \cdot 0 + (1-0.9) \cdot \begin{bmatrix} -29\\ -261 \end{bmatrix} = \begin{bmatrix} -2.9\\ -26.1\end{bmatrix} </math> <br/><br />
<math> v_1 = 0.999\cdot 0 + (1-0.999) \cdot \begin{bmatrix} (-29)^2\\ (-261)^2 \end{bmatrix} = \begin{bmatrix} 0.841\\ 68.121\end{bmatrix} . </math> <br/><br />
<br />
The bias-corrected terms are computed as<br />
<br />
<math> \hat{m}_1 = \begin{bmatrix} -2.9\\ -26.1\end{bmatrix} \frac{1}{ (1-0.9^1)} = \begin{bmatrix} -29.0\\-261.0\end{bmatrix}</math> <br/><br />
<math> \hat{v}_1 = \begin{bmatrix} 0.841\\ 68.121\end{bmatrix} \frac{1} {(1-0.999^1)} = \begin{bmatrix} 841\\68121\end{bmatrix}. </math> <br/><br />
<br />
Finally, the parameter update is<br />
<br />
<math> \theta_0 = 50 - 0.1 \cdot (-29.0) / (\sqrt{841} + 10^{-8}) = 50.1 </math> <br/><br />
<math> \theta_1 = 1 - 0.1 \cdot (-261.0) / (\sqrt{68121} + 10^{-8}) = 1.1 </math> <br/><br />
<br />
This procedure is repeated until the parameters converge, giving final <math>\theta</math> values of <math>[58.98, 2.72]</math>. The figures to the right show the trajectory of the Adam algorithm over a contour plot of the objective function, along with the resulting model fit. Note that stochastic gradient descent with a learning rate of 0.1 diverges on this problem, while with a rate of 0.01 it oscillates around the global minimum due to the large magnitude of the gradient in the <math>\theta_1</math> direction.<br />
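The first update of this example can be checked with a short script, a direct transcription of the equations above using the first sample (x, y) = (9.0, 88.0) from the table:

```python
import math

# First Adam step for the regression example: theta = [50, 1], alpha = 0.1.
alpha, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8
theta = [50.0, 1.0]
x, y = 9.0, 88.0

residual = theta[0] + theta[1] * x - y      # f_theta(x) - y
g = [residual, residual * x]                # gradients wrt theta_0, theta_1

m, v, t = [0.0, 0.0], [0.0, 0.0], 1
for i in range(2):
    m[i] = beta1 * m[i] + (1 - beta1) * g[i]
    v[i] = beta2 * v[i] + (1 - beta2) * g[i] ** 2
    m_hat = m[i] / (1 - beta1 ** t)         # bias correction
    v_hat = v[i] / (1 - beta2 ** t)
    theta[i] -= alpha * m_hat / (math.sqrt(v_hat) + eps)
```

Since the bias-corrected ratio m_hat / sqrt(v_hat) equals ±1 on the first step, both parameters move by exactly the learning rate, giving 50.1 and 1.1 as in the hand calculation.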
<br />
<br />
== Applications ==<br />
[[File:Adam training.png|thumb|Comparison of training a multilayer neural network on MNIST images for different gradient descent algorithms published in the original Adam paper (Kingma, 2015)<ref name="adam" />.]]<br />
<br />
The Adam optimization algorithm is widely used in machine learning applications to train model parameters. When used with backpropagation, Adam has been shown to be a robust and efficient method for training artificial neural networks and works well across a variety of architectures and applications. In their original paper, the authors present three training examples: logistic regression, a multi-layer neural network for classification of MNIST images, and a convolutional neural network (CNN). The training results from the original Adam paper, showing the objective function cost versus the number of passes over the entire data set for the multi-layer neural network, are shown to the right.<br />
<br />
== Variants of Adam ==<br />
=== AdaMax ===<br />
AdaMax<ref name="adam" /> is a variant of the Adam algorithm, proposed in the original Adam paper, that uses an exponentially weighted infinity norm instead of the second-order moment estimate. The weighted infinity norm update, <math>u_t</math>, is computed as<br />
<br />
<math> u_t = \max(\beta_2 \cdot u_{t-1}, |g_t|). </math><br />
<br />
The parameter update then becomes<br />
<br />
<math> \theta_t = \theta_{t-1} - (\alpha / (1-\beta_1^t)) \cdot m_t / u_t. </math><br />
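A sketch of the AdaMax update (illustrative names and learning rate; note that <math>u_t</math> starts at zero, so the first gradient must be nonzero to avoid division by zero):

```python
import numpy as np

def adamax_step(theta, g, m, u, t, alpha=0.1, beta1=0.9, beta2=0.999):
    """One AdaMax update using the exponentially weighted infinity norm u_t."""
    m = beta1 * m + (1 - beta1) * g
    u = np.maximum(beta2 * u, np.abs(g))   # infinity-norm update replaces v_t
    theta = theta - (alpha / (1 - beta1 ** t)) * m / u
    return theta, m, u

# One step on f(theta) = theta^2 (gradient 2*theta) from theta = 1.
theta, m, u = np.array([1.0]), np.zeros(1), np.zeros(1)
theta, m, u = adamax_step(theta, 2 * theta, m, u, t=1)
```

As with Adam, the bias-corrected first step has magnitude equal to the learning rate, moving theta from 1.0 to 0.9 here.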
<br />
=== Nadam ===<br />
The Nadam algorithm<ref>Dozat, Timothy. Incorporating Nesterov Momentum into Adam. ICLR Workshop, no. 1, 2016, pp. 2013–16. </ref> was proposed in 2016 and incorporates the Nesterov Accelerated Gradient (NAG)<ref>Nesterov, Yuri. A method of solving a convex programming problem with convergence rate O(1/k^2). In Soviet Mathematics Doklady, 1983, pp. 372-376.</ref>, a popular momentum-based variant of SGD, into the first-order moment term. <br />
<br />
== Conclusion ==<br />
Adam is a variant of the gradient descent algorithm that has been widely adopted in the machine learning community. Adam can be seen as the combination of two other variants of gradient descent, SGD with momentum and RMSProp. Adam uses estimates of the first- and second-order moments of the gradient to adapt the parameter update. These moment estimates are computed via moving averages, <math>m_t</math> and <math>v_t</math>, of the gradient and the squared gradient, respectively. In a variety of neural network training applications, Adam has shown faster convergence and greater robustness than other gradient descent algorithms and is often recommended as the default optimizer for training.<ref> "Neural Networks Part 3: Learning and Evaluation," CS231n: Convolutional Neural Networks for Visual Recognition, Stanford University, 2020</ref><br />
<br />
== References ==<br />
<references/></div>
<hr />
<div>Authors: Jonathon Price, Alfred Wong, Tiancheng Yuan, Joshua Mathews, Taiwo Olorunniwo (SysEn 5800 Fall 2020)<br />
<br />
== Introduction ==<br />
'''Stochastic gradient descent''' (abbreviated as '''SGD''') is an iterative method often used in [https://en.wikipedia.org/wiki/Machine_learning machine learning] that approximates [https://en.wikipedia.org/wiki/Gradient_descent gradient descent] by computing each update from a randomly chosen data point. Gradient descent is a strategy for searching a large or infinite hypothesis space whenever 1) the hypotheses are continuously parameterized and 2) the errors are differentiable with respect to the parameters. Problems with gradient descent are that [https://en.wikipedia.org/wiki/Convergence_(logic) converging] to a [https://en.wikipedia.org/wiki/Maxima_and_minima local minimum] can take extensive time and that finding a global minimum is not guaranteed.<ref name=McGrawHill2003>Mitchell, T. M. (1997). Machine Learning (1st ed.). McGraw-Hill Education. Page 92. ISBN 0070428077.</ref> In SGD, the user initializes the weights and the process updates the weight vector using one data point at a time<ref name="bishop" />. The weight vector is updated incrementally after each error calculation to improve convergence.<ref name="Needell=">Needell, D., Srebro, N., & Ward, R. (2015, January). Stochastic gradient descent weighted sampling, and the randomized Kaczmarz algorithm. https://arxiv.org/pdf/1310.5715.pdf</ref> The method follows the direction of steepest descent and reduces the number of [https://en.wikipedia.org/wiki/Iteration iterations] and the time taken to search large quantities of data points. In recent years, data sizes have grown so much that current processing capabilities are often insufficient.<ref name=Bottou1991>Bottou, L. (1991) Stochastic gradient learning in neural networks. Proceedings of Neuro-Nımes, 91. https://leon.bottou.org/publications/pdf/nimes-1991.pdf</ref> Stochastic gradient descent is used in [https://en.wikipedia.org/wiki/Neural_network neural networks], decreasing machine computation time while scaling to complex, large-scale problems.<ref name=bottou2012>Bottou, L. (2012) Stochastic gradient descent tricks. In Neural Networks: Tricks of the Trade, 421– 436. Springer.</ref><br />
<br />
== Theory ==<br />
[[File:Gradient Descent Visualization.png|alt=Visualization of the gradient descent algorithm|thumb|Visualization of the gradient descent algorithm<ref name=":0">Lau, S., Gonzalez, J., Nolan, D. (2020) <nowiki>https://www.textbook.ds100.org/ch/11/gradient_stochastic.html</nowiki></ref>]]<br />
SGD is a variation on gradient descent, also called batch gradient descent. As a review, gradient descent seeks to minimize an objective function <math>J(\theta)</math> by iteratively updating each parameter <math>\theta</math> by a small amount based on the negative gradient of a given data set. The steps for performing gradient descent are as follows:<blockquote>Step 1: Select a learning rate <math>\alpha</math><br />
<br />
Step 2: Select initial parameter values <math>\theta</math> as the starting point<br />
<br />
Step 3: Update all parameters from the gradient of the training data set, i.e. compute <math>\theta_{i+1}=\theta_i-\alpha\times{\nabla_\theta}J(\theta)</math><br />
<br />
Step 4: Repeat Step 3 until a local minimum is reached</blockquote><br />
<br />
Under batch gradient descent, the gradient, <math>{\nabla_\theta}J(\theta)</math>, is calculated at every step against a full [[wikipedia:Data_set|data set]]. When the training data is large, [[wikipedia:Computation|computation]] may be slow or require large amounts of [[wikipedia:Computer_memory|computer memory]].<ref name="bishop">Bishop, C. M. (2006). Pattern Recognition and Machine Learning (Information Science and Statistics). Springer.</ref><br />
[[File:Visualization of stochastic gradient descent.png|alt=Visualization of the stochastic gradient descent algorithm|thumb|Visualization of the stochastic gradient descent algorithm<ref name=":0" />]]<br />
<br />
===== Stochastic Gradient Descent Algorithm =====<br />
SGD modifies the batch gradient descent [https://en.wikipedia.org/wiki/Algorithm algorithm] by calculating the gradient for only one training example at every iteration.<ref name=ruder>Ruder, S. (2020, March 20). An overview of gradient descent optimization algorithms. Sebastian Ruder. https://ruder.io/optimizing-gradient-descent/index.html#batchgradientdescent</ref> The steps for performing SGD are as follows: <blockquote>Step 1: Randomly shuffle the data set of size m <br />
<br />
Step 2: Select a learning rate <math>\alpha</math> <br />
<br />
Step 3: Select initial parameter values <math>\theta</math> as the starting point <br />
<br />
Step 4: Update all parameters from the gradient of a single training example <math>x^j, y^j</math>, i.e. compute <math>\theta_{i+1}=\theta_i-\alpha\times{\nabla_\theta}J(\theta;x^j;y^j)</math> <br />
<br />
Step 5: Repeat Step 4 until a local minimum is reached </blockquote>By calculating the gradient for one data point per iteration, SGD takes a less direct route towards the local minimum. However, SGD has the advantage of being able to [https://en.wikipedia.org/wiki/Increment_and_decrement_operators incrementally] update an objective function <math>J(\theta)</math> when new training data is available at minimum cost.<br />
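The five steps above can be sketched as follows, on a small synthetic least-squares problem (the data and learning rate are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(50, 2))
y = X @ np.array([1.5, -0.5])            # noiseless linear data

theta, alpha = np.zeros(2), 0.02         # Steps 2-3: learning rate, start point
for epoch in range(300):
    for j in rng.permutation(len(y)):    # Step 1: shuffle the data set
        grad = 2 * (X[j] @ theta - y[j]) * X[j]  # gradient of one example
        theta -= alpha * grad            # Step 4: update from one example
```

Each update uses a single example, so individual steps are noisy, but on this noiseless problem the iterates still settle on the true coefficients.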
<br />
===== Learning Rate =====<br />
The [https://en.wikipedia.org/wiki/Learning_rate learning rate] is used to calculate the step size at every iteration. Too large a learning rate and the step sizes may overstep too far past the optimum value. Too small a learning rate may require many iterations to reach a [https://en.wikipedia.org/wiki/Maxima_and_minima local minimum]. A good starting point for the learning rate is 0.1 and adjust as necessary.<ref>Srinivasan, A. (2019, September) Stochastic Gradient Descent — Clearly Explained. https://towardsdatascience.com/stochastic-gradient-descent-clearly-explained-53d239905d31</ref><br />
===== Mini-Batch Gradient Descent =====<br />
A variation on stochastic gradient descent is the mini-batch gradient descent. In SGD, the gradient is computed on only one training example and may result in a large number of iterations required to converge on a local minimum. Mini-batch gradient descent offers a compromise between batch gradient descent and SGD by splitting the training data into smaller batches. The steps for performing mini-batch gradient descent are identical to SGD with one exception - when updating the parameters from the gradient, rather than calculating the gradient of a single training example, the gradient is calculated against a batch size of <math>n</math> training examples, i.e. compute <math>\theta_{i+1}=\theta_i-\alpha\times{\nabla_\theta}J(\theta;x^{j:j+n};y^{j:j+n})</math><br />
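A minimal mini-batch sketch, differing from single-example SGD only in how many examples feed each gradient (the batch size and data here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 2))
y = X @ np.array([0.5, 2.0])             # noiseless linear data

theta, alpha, n = np.zeros(2), 0.1, 10   # n is the batch size
for epoch in range(200):
    idx = rng.permutation(len(y))
    for start in range(0, len(y), n):
        b = idx[start:start + n]         # one mini-batch of n examples
        grad = 2 * X[b].T @ (X[b] @ theta - y[b]) / len(b)
        theta -= alpha * grad
```

Averaging the gradient over n examples reduces the variance of each step relative to SGD while keeping updates much cheaper than a full-batch pass.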
<br />
== Numerical Example ==<br />
===== Data preparation =====<br />
Consider a simple 2-D data set with only 6 data points (each point has features <math>x_1, x_2</math>), where each data point has a label value <math>y</math> assigned to it.<br />
===== Model overview =====<br />
For the purpose of demonstrating the computation of the SGD process, simply employ a linear regression model: <math>y = w_1\ x_1 + w_2\ x_2 + b </math>, where <math>w_1</math> and <math>w_2</math> are weights and <math>b</math> is the constant term. In this case, the goal of this model is to find the best values for <math>w_1</math>, <math>w_2</math>, and <math>b</math> based on the dataset.<br />
===== Definition of loss function =====<br />
In this example, the loss function is the squared error (the squared l2 norm), that is <math>L = (\widehat{y} - y)^2 </math>.<br />
===== Forward =====<br />
<blockquote>'''Initial Weights:'''<br />
The linear regression model starts by [https://en.wikipedia.org/wiki/Initialization_(programming) initializing] the weights <math>w_1, w_2</math> and setting the bias term <math>b</math> to 0. In this case, initialize [<math>w_1, w_2</math>] = [-0.044, -0.042].<br />
<br />
'''Dataset:'''<br />
<br />
For this problem, the batch size is set to 1 and the entire dataset of [ <math>x_1</math>, <math>x_2</math>, <math>y</math>] is given by:<br />
{| class="wikitable"<br />
! <math>x_1</math> !! <math>x_2</math> !! <math>y</math><br />
|-<br />
| 4 || 1 || 2<br />
|-<br />
| 2 || 8 || -14<br />
|-<br />
| 1 || 0 || 1<br />
|-<br />
| 3 || 2 || -1<br />
|-<br />
| 1 || 4 || -7<br />
|-<br />
| 6 || 7 || -8<br />
|}<br />
<br />
===== Gradient Computation and Parameter Update =====<br />
The purpose of backpropagation is to obtain the gradient of the loss with respect to the weights and the bias term for the entire model. The update of the model is entirely dependent on these gradient values: to minimize the loss, each parameter is moved in the direction of descending gradient so that the model can eventually converge to an optimal point. The three partial derivatives are:<br />
<br />
<math>\omega_1^' = \omega_1 - \eta\ {\partial L\over\partial \omega_1} = \omega_1 - \eta\ {\partial L\over\partial \widehat{y}}\cdot {\partial \widehat{y}\over\partial \omega_1} = \omega_1 - \eta\ [2(\widehat{y} - y)\cdot x_1] </math><br />
<br />
<math>\omega_2^' = \omega_2 - \eta\ {\partial L\over\partial \omega_2} = \omega_2 - \eta\ {\partial L\over\partial \widehat{y}}\cdot {\partial \widehat{y}\over\partial \omega_2} = \omega_2 - \eta\ [2(\widehat{y} - y)\cdot x_2]</math><br />
<br />
<math>b^' = b - \eta\ {\partial L\over\partial b} = b - \eta\ {\partial L\over\partial \widehat{y}}\cdot {\partial \widehat{y}\over\partial b} = b - \eta\ [2(\widehat{y} - y)\cdot 1]</math><br />
<br />
Where <math>\eta</math> stands for the learning rate, which in this model is set to 0.05. To update each parameter, simply substitute the resulting value of <math>\widehat{y}</math>.<br />
<br />
Use the first data point [<math>x_1, x_2</math>] = [4, 1] with the corresponding <math>y</math> being 2. The model's prediction <math>\widehat{y}</math> is -0.218 ≈ -0.2. Now with the <math>\widehat{y}</math> and <math>y</math> values, update the new parameters as [<math>w'_1, w'_2, b'</math>] = [0.843, 0.179, 0.222]. That marks the end of iteration 1.<br />
<br />
Now, iteration 2 begins with the next data point [2, 8] and the label -14. The estimate <math>\widehat{y}</math> is now 3.3. With the new <math>\widehat{y}</math> and <math>y</math> values, once again, update the weights and bias to [-2.625, -13.696, -1.513]. That marks the end of iteration 2.<br />
<br />
Keep updating the model through additional iterations to obtain [<math>w_1, w_2, b</math>] = [-19.021, -35.812, -1.232].<br />
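The first two iterations above can be reproduced in a few lines of Python; carrying full precision rather than rounding at each step, iteration 2 ends at approximately [-2.626, -13.697, -1.513]:<br />

```python
def sgd_step(w1, w2, b, x1, x2, y, lr=0.05):
    """One SGD update of y_hat = w1*x1 + w2*x2 + b under L = (y_hat - y)**2."""
    y_hat = w1 * x1 + w2 * x2 + b
    e = y_hat - y                         # dL/dy_hat = 2*e
    return (w1 - lr * 2 * e * x1,         # w1' = w1 - eta * 2*e*x1
            w2 - lr * 2 * e * x2,         # w2' = w2 - eta * 2*e*x2
            b - lr * 2 * e)               # b'  = b  - eta * 2*e

params = (-0.044, -0.042, 0.0)            # initial [w1, w2] and b
params = sgd_step(*params, 4, 1, 2)       # iteration 1 -> (0.8432, 0.1798, 0.2218)
params = sgd_step(*params, 2, 8, -14)     # iteration 2 -> (-2.626, -13.697, -1.513)
```

Looping over the remaining data points in the table continues the epoch in the same way.<br />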
<br />
This is just a simple demonstration of the SGD process. In actual practice, more epochs can be utilized to run through the entire dataset enough times to ensure the best learning results based on the training dataset<ref name=":1">Lawrence, S., & Giles, C. L. (2000). Overfitting and neural networks: conjugate gradient and backpropagation. Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium, 1, 114–119. https://doi.org/10.1109/ijcnn.2000.857823</ref>. But fitting the training dataset too closely can also expose the model to the risk of overfitting<ref name=":1" />. Therefore, tuning such parameters is quite tricky, and finding the best results often takes days or even weeks.<br />
<br />
==Application==<br />
SGD, often referred to as a cornerstone of deep learning, is an algorithm for training a wide range of models in machine learning. [[wikipedia:Deep_learning|Deep learning]] is a machine learning technique that teaches computers to do what comes naturally to humans: a computer model learns to perform classification tasks directly from images, text, or sound. Models are trained using large sets of labeled data and neural network architectures that contain many layers. Neural networks make up the backbone of deep learning algorithms; a neural network with more than three layers (inclusive of the input and output layers) can be considered a deep learning model. Due to SGD’s efficiency in dealing with large-scale datasets, it is the most common method for training [https://en.wikipedia.org/wiki/Deep_learning#Deep_neural_networks deep neural networks]. Furthermore, SGD has received considerable attention and is applied to text classification and [https://en.wikipedia.org/wiki/Natural_language_processing natural language processing]. It is best suited for unconstrained optimization problems and is the main way to train large linear models on very large data sets. Implementations of stochastic gradient descent include [https://en.wikipedia.org/wiki/Tikhonov_regularization ridge regression] and regularized [https://en.wikipedia.org/wiki/Logistic_regression logistic regression]. Other problems, such as the Lasso<ref name="Shwartz">Shalev-Shwartz, S. and Tewari, A. (2011) Stochastic methods for ℓ<math>_1</math>-regularized loss minimization. The Journal of Machine Learning Research, 12, 1865–1892. https://www.jmlr.org/papers/volume12/shalev-shwartz11a/shalev-shwartz11a.pdf</ref> and support vector machines<ref name=Menon>Menon, A. (2009, February). Large-Scale Support Vector Machines: Algorithms and Theory. http://cseweb.ucsd.edu/~akmenon/ResearchExamTalk.pdf</ref>, can be solved by stochastic gradient descent.<br />
<br />
===Support Vector Machine===<br />
SGD is a simple yet very efficient approach to fitting linear classifiers and regressors under convex loss functions such as (linear) [https://en.wikipedia.org/wiki/Support_vector_machine Support Vector Machines] (SVM). A support vector machine is a supervised machine learning model that uses classification algorithms for two-group classification problems. An SVM finds what is known as a separating hyperplane: a hyperplane (a line, in the two-dimensional case) which separates the two classes of points from one another. It is a fast and dependable classification algorithm that performs very well with a limited amount of data to analyze. However, because SVM training is computationally costly, software implementations often cannot meet time requirements for large amounts of data. To improve SVM scalability with respect to the size of the data set, SGD algorithms are used as a simplified procedure for evaluating the gradient of a function.<ref name=lopes>Lopes, F.F.; Ferreira, J.C.; Fernandes, M.A.C. Parallel Implementation on FPGA of Support Vector Machines Using Stochastic Gradient Descent. Electronics 2019, 8, 631.</ref><br />
<br />
===Logistic regression===<br />
Logistic regression models the [https://en.wikipedia.org/wiki/Probability probabilities] for classification problems with two possible outcomes; it is an extension of the linear regression model to classification problems. It is a statistical technique in which the input variables are continuous and the output variable is binary. The objective of training is to minimize the loss, or error, between ground truths and predictions by changing the trainable parameters. Logistic regression has two phases: training and testing. The system, specifically the weights w and b, is trained using stochastic gradient descent and the cross-entropy loss. <br />
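A minimal sketch of the training phase just described, assuming a one-dimensional input and made-up toy data; the per-example gradient of the cross-entropy loss with respect to (w, b) reduces to ((p - y)·x, (p - y)):<br />

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(data, lr=0.1, epochs=100):
    """Train P(y=1|x) = sigmoid(w*x + b) by SGD on the cross-entropy loss,
    whose per-example gradient is ((p - y)*x, (p - y)) with p = sigmoid(w*x + b)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:                 # one SGD update per training example
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

# Illustrative toy data: label 1 for positive x, label 0 for negative x
w, b = train_logreg([(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)])
# After training, sigmoid(w*x + b) > 0.5 for the positive-class points
```

In the testing phase, a new input x is classified as 1 whenever sigmoid(w*x + b) exceeds 0.5.<br />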
<br />
===Full Waveform Inversion (FWI)===<br />
Full Waveform Inversion (FWI) is a [https://en.wikipedia.org/wiki/Geophysical_imaging seismic imaging] process that draws information from the physical parameters of samples. Companies use the process to produce high-resolution velocity depictions of the subsurface. SGD supports the process because it can identify the global minimum among the many local minima in less time.<ref name=witte>Witte, P., Louboutin, M., Lensink, K., Lange, M., Kukreja, N., Luporini, F., Gorman, G., Herrmann, F.J.; Full-waveform inversion, Part 3: Optimization. The Leading Edge ; 37 (2): 142–145. doi: https://doi.org/10.1190/tle37020142.1</ref><br />
<br />
==Conclusion==<br />
SGD is an algorithm that takes a step along an estimate of the steepest descent direction at each iteration. This immensely decreases the time it takes to search large data sets and determine local minima. SGD has many applications in machine learning, geophysics, least mean squares (LMS), and other areas.<br />
<br />
==References==<br />
<br />
<references /></div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Adaptive_robust_optimization&diff=2740Adaptive robust optimization2020-12-21T11:41:15Z<p>Wc593: </p>
<hr />
<div>Author: Ralph Wang (ChemE 6800 Fall 2020)<br />
<br />
== Introduction ==<br />
Adaptive Robust Optimization (ARO), also known as adjustable robust optimization, models situations where decision makers make two types of decisions: here-and-now decisions that must be made immediately, and wait-and-see decisions that can be made at some point in the future.<ref>Yanikognlu, I., Gorissen, B. L., den Hertog, D. (2019) A Survey of Adjustable Robust Optimization. <i>European Journal of Operational Research</i>, (277)3:799-813.</ref> ARO improves on the robust optimization framework by accounting for any information the decision maker does not know now, but may learn before making future decisions. In the real-world, ARO is applicable whenever past decisions and new information together influence future decisions. Common applications include power systems control, inventory management, shift scheduling, and other resource allocation problems.<ref>B. Hu and L. Wu, "Robust SCUC Considering Continuous/Discrete Uncertainties and Quick-Start Units: A Two-Stage Robust Optimization With Mixed-Integer Recourse," in <i>IEEE Transactions on Power Systems</i>, vol. 31, no. 2, pp. 1407-1419, March 2016, doi: 10.1109/TPWRS.2015.2418158.</ref><ref>J. Warrington, C. Hohl, P. J. Goulart and M. Morari, "Rolling Unit Commitment and Dispatch With Multi-Stage Recourse Policies for Heterogeneous Devices," in <i>IEEE Transactions on Power Systems</i>, vol. 31, no. 1, pp. 187-197, Jan. 2016, doi: 10.1109/TPWRS.2015.2391233.</ref><ref>Chuen-Teck See, Melvyn Sim, (2010) Robust Approximation to Multiperiod Inventory Management. <i>Operations Research</i> 58(3):583-594.</ref><ref>Marcus Ang, Yun Fong Lim, Melvyn Sim, (2012) Robust Storage Assignment in Unit-Load Warehouses. <i>Management Science</i> 58(11):2114-2130.</ref><ref>Mattia, S., Rossi, F., Servilio, M., Smriglio, S. (2017). Staffing and Scheduling Flexible Call Centers by Two-Stage Robust Optimization. <i>Omega</i> 72:25-37.</ref><ref>Gong, J. and You, F. 
(2017), Optimal processing network design under uncertainty for producing fuels and value‐added bioproducts from microalgae: Two‐stage adaptive robust mixed integer fractional programming model and computationally efficient solution algorithm. <i>AIChE J.</i>, 63: 582-600.</ref><br />
<br />
Compared to traditional robust optimization models, ARO gives less conservative and more realistic solutions, however, this improvement comes at the cost of computation time. Indeed, even the general linear ARO with linear uncertainty is proven computationally intractable.<ref>Ben-Tal, A., Goryashko, A., Guslitzer, E. et al. Adjustable robust solutions of uncertain linear programs. <i>Math. Program.</i>, Ser. A 99, 351–376 (2004).</ref> However, researchers have developed a wide variety of solution and approximation methods for specific types of industrial ARO problems over the past 15 years and the field continues to grow rapidly.<ref>Ben-Tal, A., Goryashko, A., Guslitzer, E. et al. Adjustable robust solutions of uncertain linear programs. <i>Math. Program.</i>, Ser. A 99, 351–376 (2004).</ref><ref>Zhao, L., & Zeng, B. (2012). An Exact Algorithm for Two-stage Robust Optimization with Mixed Integer Recourse Problems.</ref><ref>Chen, Bokan, "A new trilevel optimization algorithm for the two-stage robust unit commitment problem" (2013). Graduate Theses and Dissertations. 13065.</ref><ref>Shi, H. and You, F. (2016), A computational framework and solution algorithms for two‐stage adaptive robust scheduling of batch manufacturing processes under uncertainty. <i>AIChE J.</i>, 62: 687-703.</ref><ref>Bertsimias, D., Georghiou, A. (2015). Design of Near Optimal Decision Rules in Multistage Adaptive Mixed-Integer Optimization. <i>Operations Research</i> 63(3): 610-627.</ref><br />
<br />
== Problem Formulation ==<br />
Suppose, for an optimization problem of interest, <math>S</math> is the set of allowed decisions and <math>x</math> is a decision in <math>S</math>. Let <math>u</math> be a vector representing the set of parameters of interest in this problem. If the goal is to minimize some function <math>f(u, x)</math>, and we want <math>x</math> to adhere to a set of constraints <math>g(u, x) \leq 0</math>, then the problem may be formulated as:<br />
<br />
<math>\begin{align}\text{minimize, choosing x: } f&(u, x)\\<br />
<br />
\text{under constraints: } g&(u, x) \leq 0\end{align}</math><br />
<br />
Or more simply:<br />
<br />
<math>\begin{align}\text{min}(x) \; &f(u, x)\\<br />
\text{s.t. } \; &g(u, x) \leq 0\end{align}</math><br />
<br />
In this formulation, we call <math>f</math> the objective function and <math>g</math> the constraint function.<br />
If <math>u</math> is known, then the problem can be solved using methods such as branch and cut or Karush-Kuhn-Tucker conditions. However, in many real world scenarios, <math>u</math> is not known. To address this uncertainty, the robust optimization approach generates the set of possible values of <math>u</math>, called the uncertainty set <math>U</math>, and solves for the decision <math>x</math> such that the constraint <math>g</math> is satisfied in all cases and <math>f</math> is optimized for the worst case. The problem can be written as:<br />
<br />
<math>\begin{align}\text{min}(x)\text{ max}(u)\;&f(u, x)\\<br />
\text{s.t.}\;&g(u, x) \leq 0 \end{align}</math><br />
<br />
Adaptive robust optimization expands this robust optimization framework by separating the decision <math>x</math> into multiple stages. For simplicity, assume there are two stages of decisions. In the first stage, only the urgent, here-and-now decisions are made. After these decisions are made, the true values of the parameters <math>u</math> are revealed, then the remaining, wait-and-see decisions are decided. The model is like a game of poker: the player needs to make initial bets based on incomplete information (the cards in his hand), then makes further bets as more and more cards are dealt. Mathematically, let the set of possible decisions in the first stage be <math>S_1</math> and the set of possible decisions in the second stage be <math>S_2</math>, so that the objective and constraint functions become functions of the parameters <math>u</math>, the first stage decision <math>x_1</math> (<math>x_1</math> in <math>S_1</math>), and the second stage decision <math>x_2</math> (<math>x_2</math> in <math>S_2</math>). Then, we can formulate the problem as:<br />
<br />
<math>\begin{align} \text{min}(x_1)\text{ max}(u)\text{ min}(x_2)\;&f(u, x_1, x_2)\\<br />
\text{s.t.}\;\;\;&g(u, x_1, x_2) \leq 0 \; \text{for all } u \text{ in } U\end{align}</math><br />
<br />
The reasoning used in this construction can be extended to multi-stage formulations.<br />
<br />
In the literature, adaptive robust optimization problems are usually formulated differently but equivalently. Note that because <math>x_2</math> is selected only after the uncertain parameter <math>u</math> is revealed, <math>x_2</math> is a function of <math>u</math>. Expressing <math>x_2</math> as a function of <math>u</math> allows us to choose the function <math>x_2(u)</math> before learning <math>u</math>, which allows the problem to be rewritten as:<br />
<br />
<math>\begin{align}\text{min}(x_1, x_2(u))\text{ max}(u) \; &f(u, x_1, x_2(u))\\<br />
\text{s.t.} \; &g(u, x_1, x_2(u)) \leq 0 \; \text{for all } u \text{ in } U \end{align}</math><br />
<br />
And if we introduce a variable <math>t = \text{max}(u)\;f(u, x_1, x_2(u))</math>, then we can rewrite the problem as:<br />
<br />
<math>\begin{align}\text{min}(x_1, x_2(u), t)\;\;&t\\<br />
\text{s.t.} \; &f(u, x_1, x_2(u)) \leq t \text{ for all }u\text{ in }U\\<br />
&g(u, x_1, x_2(u)) \leq 0 \text{ for all }u\text{ in }U\end{align}</math><br />
<br />
Which allows us to remove <math>u</math> from the objective function. Since <math>x_1</math> represents all the variables decided immediately, <math>t</math> can be absorbed into <math>x_1</math>; similarly, the first constraint can be absorbed into the second. This yields the formulation most commonly seen in the literature (up to a change of variable names and of the functions <math>f</math> and <math>g</math>):<br />
<br />
<math>\begin{align}\text{min}(x_1, x_2(u)) \;&f(x_1)\\<br />
\text{s.t.}\; &g(u, x_1, x_2(u)) \leq 0 \text{ for all }u\text{ in }U\end{align}</math><br />
<br />
Where <math>f(x_1)</math> is redefined to pick out the component of <math>x_1</math> representing <math>t</math>.<br />
<br />
For many problems of interest, the functions <math>f</math> and <math>g</math> vary linearly with <math>x_1</math> and <math>x_2</math>, that is, they are affine functions of <math>x_1</math> and <math>x_2</math>.<ref>B. Hu and L. Wu, "Robust SCUC Considering Continuous/Discrete Uncertainties and Quick-Start Units: A Two-Stage Robust Optimization With Mixed-Integer Recourse," in <i>IEEE Transactions on Power Systems</i>, vol. 31, no. 2, pp. 1407-1419, March 2016, doi: 10.1109/TPWRS.2015.2418158.</ref><ref>J. Warrington, C. Hohl, P. J. Goulart and M. Morari, "Rolling Unit Commitment and Dispatch With Multi-Stage Recourse Policies for Heterogeneous Devices," in <i>IEEE Transactions on Power Systems</i>, vol. 31, no. 1, pp. 187-197, Jan. 2016, doi: 10.1109/TPWRS.2015.2391233.</ref><ref>Chuen-Teck See, Melvyn Sim, (2010) Robust Approximation to Multiperiod Inventory Management. <i>Operations Research</i> 58(3):583-594.</ref><ref>Marcus Ang, Yun Fong Lim, Melvyn Sim, (2012) Robust Storage Assignment in Unit-Load Warehouses. <i>Management Science</i> 58(11):2114-2130.</ref><ref>Mattia, S., Rossi, F., Servilio, M., Smriglio, S. (2017). Staffing and Scheduling Flexible Call Centers by Two-Stage Robust Optimization. <i>Omega</i> 72:25-37.</ref><ref>Gong, J. and You, F. (2017), Optimal processing network design under uncertainty for producing fuels and value‐added bioproducts from microalgae: Two‐stage adaptive robust mixed integer fractional programming model and computationally efficient solution algorithm. <i>AIChE J.</i>, 63: 582-600.</ref> In such cases, if <math>x_1</math> and <math>x_2</math> are treated as vectors, then we can write:<br />
<br />
<math>f(x_1) = c^Tx_1</math><br />
<br />
Where <math>c</math> is some vector, and<br />
<br />
<math>g(u, x_1, x_2) = A_1(u)x_1 + A_2(u)x_2(u) - b(u)</math><br />
<br />
Where the <math>A(u)</math>'s are matrices and <math>b(u)</math> is a vector, to give the linear, two-stage ARO (<i>L2ARO</i>):<br />
<br />
<math>\begin{align}\text{min}(x_1, x_2(u))\;&c^Tx_1\\<br />
\text{s.t.}\;&A_1(u)x_1 + A_2(u)x_2(u) \leq b(u)\;\text{ for all }u\text{ in }U\end{align}</math><br />
<br />
This L2ARO will be the primary focus of the Algorithms section.<br />
<br />
==Algorithms and Methodology==<br />
General ARO problems are computationally intractable.<ref>Guslitser, E. (2002). Uncertainty-Immunized Solutions in Linear Programming (Master’s Thesis, Technion-Israel Institute of Technology).</ref> Taking the L2ARO for example, deriving the optimal function <math>x_2(u)</math> poses a tremendous challenge for many choices of uncertainty set <math>U</math>. If <math>U</math> is large, infinite, or nonconvex, deciding what <math>x_2</math> should be for each <math>u</math> in <math>U</math> may take a long time. In real-world applications, then, the uncertainty set <math>U</math> must be chosen carefully to include a representative set of possible parameter values for <math>u</math>, but it must not be so large or complex as to render the problem intractable. <br />
The L2ARO model has been proven tractable only for simple uncertainty sets <math>U</math> or with restrictions imposed on the function <math>x_2(u)</math>.<ref>Yanikognlu, I., Gorissen, B. L., den Hertog, D. (2019) A Survey of Adjustable Robust Optimization. <i>European Journal of Operational Research</i>, (277)3:799-813.</ref><ref>Ben-Tal, A., Goryashko, A., Guslitzer, E. et al. Adjustable robust solutions of uncertain linear programs. <i>Math. Program.</i>, Ser. A 99, 351–376 (2004).</ref> Therefore, ARO problems are usually solved on a case by case basis, using methods such as multi-level optimization, branch-and-cut, and decomposition.<ref>Ben-Tal, A., Goryashko, A., Guslitzer, E. et al. Adjustable robust solutions of uncertain linear programs. <i>Math. Program.</i>, Ser. A 99, 351–376 (2004).</ref><ref>Zhao, L., & Zeng, B. (2012). An Exact Algorithm for Two-stage Robust Optimization with Mixed Integer Recourse Problems.</ref><ref>Chen, Bokan, "A new trilevel optimization algorithm for the two-stage robust unit commitment problem" (2013). Graduate Theses and Dissertations. 13065.</ref><ref>Shi, H. and You, F. (2016), A computational framework and solution algorithms for two‐stage adaptive robust scheduling of batch manufacturing processes under uncertainty. <i>AIChE J.</i>, 62: 687-703.</ref><ref>Bertsimias, D., Georghiou, A. (2015). Design of Near Optimal Decision Rules in Multistage Adaptive Mixed-Integer Optimization. <i>Operations Research</i> 63(3): 610-627.</ref> This section will first present the L2ARO solution method using the <i>affine decision rule</i> approximation under <i>fixed recourse</i> conditions from Ben-Tal's 2004 paper<ref>Ben-Tal, A., Goryashko, A., Guslitzer, E. et al. Adjustable robust solutions of uncertain linear programs. <i>Math. Program.</i>, Ser. A 99, 351–376 (2004).</ref>, then discuss how this method might be extended to other L2ARO problems.<br />
<br />
General L2ARO problems were first proven intractable by Guslitser, in his master's thesis.<ref>Guslitser, E. (2002). Uncertainty-Immunized Solutions in Linear Programming (Master’s Thesis, Technion-Israel Institute of Technology).</ref> Ben-Tal took this result and suggested simplifying the problem by restricting <math>x_2(u)</math> to vary linearly with <math>u</math>, that is, <br />
<br />
<math>x_2(u) = w + Wu</math><br />
<br />
Where <math>w</math> is a vector and <math>W</math> is a matrix, both variable. This simplification is known as the <i>affine decision rule</i> (ADR). To further simplify the problem, Ben-Tal proposed that the matrix <math>A_2(u)</math> be fixed to some matrix <math>V</math> (<i>fixed recourse conditions</i>), and make <math>A_1(u)</math> and <math>b(u)</math> affine functions of <math>u</math>:<br />
<br />
<math>A_1(u) = m + Mu</math><br />
<br />
<math>b(u) = b + Bu</math><br />
<br />
Where <math>m</math> and <math>b</math> are fixed vectors and <math>M</math> and <math>B</math> are fixed matrices. Then, the overall problem can be rewritten:<br />
<br />
<math>\begin{align}\text{min}(x_1, w, W) \; &c^Tx_1\\<br />
\text{s.t.}\;&(m + Mu)x_1 + V(w + Wu) \leq b + Bu \; \text{ for all }u\text{ in }U\end{align}</math><br />
<br />
Now, both the objective function and constraint function are affine functions of <math>x_1</math>, <math>w</math>, and <math>W</math>, so the problem has been reduced to a simple robust linear program, for which solution methods already exist.<br />
<br />
The above solution method, although simple and tractable, suffers from the potential suboptimality of the ADR. Indeed, Ben-Tal motivates this assumption by citing only the tractability of the result. In real-world scenarios, this suboptimality can be mitigated by using the ADR to make the initial decision, then re-solving the problem after <math>u</math> is revealed. That is, if solving the L2ARO gives <math>x_1^*</math> as the optimal <math>x_1</math> and <math>x_2^*(u)</math> as the optimal <math>x_2(u)</math>, decision <math>x_1^*</math> is implemented immediately; when <math>u</math> is revealed (to be, say, <math>u^*</math>), decision <math>x_2</math> is decided not by computing <math>x_2^*(u^*)</math>, but by re-solving the whole problem fixing <math>x_1</math> to <math>x_1^*</math> and fixing <math>u</math> to <math>u^*</math>. This method reflects the wait-and-see nature of the decision <math>x_2</math>: the ADR is used to find a pretty-good <math>x_1</math>, then <math>u</math> is revealed, then the information is used to solve for the optimal <math>x_2</math> in that circumstance.<ref>Ben-Tal, A., Golany, B., Nemirovski, A., Vial, J. (2005). Retailer-Supplier Flexible Commitments Contracts: A Robust Optimization Approach. <i>Manufacturing and Service Operations Management</i> 7(3):248-271.</ref> This iterative, stage-by-stage solution performs better than using only the ADR, but is feasible only when there is enough time between stages to re-solve the problem. Further, numerical experiments indicate that classical robust optimization models yield equally good, if not better, initial decisions than the ADR on L2ARO, limiting the ADR on L2ARO to situations where the problem cannot be feasibly re-solved, or to the special cases where the ADR approximation actually generates the optimal solution.<ref>Gorissen, B., Yanikognlu, I., den Hertog, D. (2015). A Practical Guide to Robust Optimization. <i>Omega</i> 53:124-137.</ref><br />
<br />
This leads to the natural question: under what conditions are ADRs optimal? Bertsimas and Goyal showed in 2012 that if both <math>A(u)</math> matrices are independent of <math>u</math>, <math>x_1</math> and <math>x_2</math> are restricted to vectors with nonnegative entries, and <math>b(u)</math> is restricted to vectors with nonpositive entries, then ADRs are optimal if <math>b(u)</math> is restricted to a polyhedral set with a number of vertices one more than <math>b(u)</math>'s dimension.<ref>Bertsimas, D., Goyal, V. On the power and limitations of affine policies in two-stage adaptive optimization. <i>Math. Program.</i> 134, 491–531 (2012).</ref> In a 2016 paper, Ben-Tal and colleagues noted that whenever the <math>A(u)</math> matrices are independent of <math>u</math>, a <i>piecewise</i> ADR can be optimal, albeit one with a large number of pieces.<ref>Ben-Tal, A., El Housni, O. & Goyal, V. A tractable approach for designing piecewise affine policies in two-stage adjustable robust optimization. <i>Math. Program.</i> 182, 57–102 (2020).</ref> ADRs can be optimal in other, more specific cases, but these cases will not be discussed here.<ref>Iancu, D.A., Parrilo, P.A.(2010). Optimality of Affine Policies in Multistage Robust Optimization. <i>Mathematics of Operations Research</i> 35(2):363-394</ref><ref>Dan A. Iancu, Mayank Sharma, Maxim Sviridenko (2013) Supermodularity and Affine Policies in Dynamic Robust Optimization. <i>Operations Research</i> 61(4):941-956</ref><br />
<br />
In most cases, however, ADRs are suboptimal, and it becomes useful to characterize their degree of suboptimality. The most common approach is to generate upper and lower bounds on the optimal value of the objective function. If the goal is to minimize the objective function, then any valid solution (via ADRs or some other method) gives an upper bound, so the problem reduces to computing lower bounds. A simple approach is to approximate the uncertainty set using a small number of well-chosen points (“sampling” the uncertainty set), solve the model at each of these points, and take the worst case among these sampled solutions. Since the true worst case must be at least as bad as the worst sampled point, the optimal value over the sampled scenarios is no larger than the true optimal value, i.e. a lower bound on the objective.<ref>M. J. Hadjiyiannis, P. J. Goulart and D. Kuhn, "A scenario approach for estimating the suboptimality of linear decision rules in two-stage robust optimization," 2011 50th IEEE Conference on Decision and Control and European Control Conference, Orlando, FL, 2011, pp. 7386-7391.</ref> This method, although simple, generates excessively optimistic lower bounds unless a large number of points are sampled, but solving the model at many such points can take a long time. Thus, authors have investigated methods for choosing fewer points that better represent the whole uncertainty set, improving both the lower bound quality and the computation time.<ref>Ayoub, J., Poss, M. Decomposition for adjustable robust linear optimization subject to uncertainty polytope. <i>Comput Manag Sci</i> 13, 219–239 (2016).</ref> For example, Bertsimas and De Ruiter discovered that constructing the dual and sampling the dual uncertainty set gives better bounds and faster computation time.<ref>Bertsimias, D., deRuiter, F. J. C. T. (2016). Duality in Two-Stage Adaptive Linear Optimization: Faster Computation and Stronger Bounds. 
<i>INFORMS Journal on Computing</i> 28(3):500-511.</ref><br />
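The sampling bound is easy to illustrate on the two-period inventory instance from the Numerical Example below (a sketch; brute-force integer grid search stands in for an LP solver): solve the perfect-information problem at each sampled demand, then take the worst of those optimal costs as a lower bound on the adaptive optimum.<br />

```python
def best_cost_knowing(d):
    """Minimal cost when the first-period demand d is known in advance
    (second-period demand is 150 - d; storage $10/unit/period, prices $40, $55)."""
    best = None
    for n1 in range(d, 201):            # n1 >= d so first-period demand is met
        n2 = max(0, 150 - n1)           # smallest feasible second-period order
        cost = 10*(n1 - d) + 10*(n1 + n2 - 150) + 40*n1 + 55*n2
        best = cost if best is None else min(best, cost)
    return best

# Worst case over a few sampled demand scenarios: a lower bound on the ARO optimum
lower_bound = max(best_cost_knowing(d) for d in (50, 75, 100))
```

Any feasible first-stage decision then supplies a matching upper bound, sandwiching the adaptive optimum.<br />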
<br />
The other important assumption in the given solution methodology is the <i>fixed recourse condition</i>, that <math>A_2(u)</math> is fixed to some matrix <math>V</math>. If this is not true, that <math>A_2(u)</math> is instead some affine function of <math>u</math>, then even under the ADR assumption, the problem is intractable.<ref>Guslitser, E. (2002). Uncertainty-Immunized Solutions in Linear Programming (Master’s Thesis, Technion-Israel Institute of Technology).</ref> However, Ben-Tal has proposed a tight approximation method for cases where the uncertainty set <math>U</math> is the intersection of ellipsoidal sets, an approximation that becomes exact if <math>U</math> itself is an ellipsoidal set.<ref>Ben-Tal, A., Goryashko, A., Guslitzer, E. et al. Adjustable robust solutions of uncertain linear programs. <i>Math. Program.</i>, Ser. A 99, 351–376 (2004).</ref><br />
<br />
==Numerical Example==<br />
Consider a simple inventory management problem over two business periods involving one product that loses all value at the end of the second period. Let the storage cost of every unused unit of product be <math>$10</math> per business period. Let the unit price of the product be <math>$40</math> in the first business period and <math>$55</math> in the second period. Let the demand in each period be uncertain, but given the following information:<br />
#Both demands are between <math>50</math> and <math>100</math> units.<br />
#The total demand over the two periods is <math>150</math> units.<br />
The manager must decide the quantity of product to purchase at the start of each business period, minimizing storage and purchasing costs. If we denote the demand in the first business period <math>d</math> (so the demand in the second period is <math>150 - d</math>) and the quantity purchased in the <math>i</math>th period <math>n_i</math>, then we can formulate this as an L2ARO as follows:<br />
<br />
<math>\begin{align} \text{min} \; &cost\\<br />
\text{s.t.} \; &cost \geq 10(n_1-d) + 10(n_1+n_2-150) + 40n_1 + 55n_2 \;\text{ for all }d\\<br />
&n_1 - d \geq 0 \;\text{ for all }d\\<br />
&n_1 + n_2 \geq 150\\<br />
&n_1 \geq 0\\<br />
&n_2 \geq 0\\<br />
&50 \leq d \leq 100<br />
\end{align}</math><br />
<br />
The uncertain parameter is <math>d</math>, and the uncertainty set is the closed interval from <math>50</math> to <math>100</math>. The first stage decision is for <math>n_1</math> and <math>cost</math>; the second stage decision is <math>n_2</math>. Rewriting <math>n_2</math> as a function of <math>d</math> and rearranging into matrix form:<br />
<br />
<math>\text{min}(n_1, cost, n_2(d)) \; cost</math><br />
<br />
<math>\text{s.t.}\;\begin{bmatrix} -60 & 1 \\ 1 & 0 \\ 1 & 0 \\ 1 & 0 \\ 0 & 0 <br />
\end{bmatrix}\begin{bmatrix}n_1 \\ cost\end{bmatrix} + <br />
\begin{bmatrix} -65 \\ 0 \\ 1 \\ 0 \\ 1\end{bmatrix}n_2(d) \geq<br />
\begin{bmatrix} -1500 \\ 0 \\ 150 \\ 0 \\ 0\end{bmatrix} + <br />
\begin{bmatrix} -10 \\ 1 \\ 0 \\ 0 \\ 0\end{bmatrix}d\;\text{ for all }d<br />
</math><br />
<br />
Applying the affine decision rule <math>n_2(d) = w + Wd</math>, noting that <math>w</math> and <math>W</math> are <math>1\times1</math> matrices, gives:<br />
<br />
<math>\text{min}(n_1, cost, w, W) \; cost</math><br />
<br />
<math>\text{s.t.}\;\begin{bmatrix} -60 & 1 \\ 1 & 0 \\ 1 & 0 \\ 1 & 0 \\ 0 & 0 <br />
\end{bmatrix}\begin{bmatrix}n_1 \\ cost\end{bmatrix} + <br />
\begin{bmatrix} -65 \\ 0 \\ 1 \\ 0 \\ 1\end{bmatrix}(w + Wd) \geq<br />
\begin{bmatrix} -1500 \\ 0 \\ 150 \\ 0 \\ 0\end{bmatrix} + <br />
\begin{bmatrix} -10 \\ 1 \\ 0 \\ 0 \\ 0\end{bmatrix}d\;\text{ for all }d<br />
</math><br />
<br />
This rearranges to:<br />
<br />
<math>\text{min}(n_1, cost, w, W) \; cost</math><br />
<br />
<math>\text{s.t.}\;<br />
\begin{bmatrix} -60 & 1 & -65 & -65d \\ 1 & 0 & 0 & 0 \\ 1 & 0 & 1 & d \\ <br />
1 & 0 & 0 & 0 \\ 0 & 0 & 1 & d\end{bmatrix}<br />
\begin{bmatrix}n_1 \\ cost \\ w \\ W\end{bmatrix} \geq<br />
\begin{bmatrix}-1500 \\ 0 \\ 150 \\ 0 \\ 0\end{bmatrix} +<br />
\begin{bmatrix}-10 \\ 1 \\ 0 \\ 0 \\ 0\end{bmatrix}d\;<br />
\text{ for all }d<br />
</math><br />
<br />
This is a robust linear program. Since the constraints are linear inequalities in <math>d</math> and <math>d</math> is bounded between <math>50</math> and <math>100</math>, it suffices to check each constraint only at the endpoints <math>d = 50</math> and <math>d = 100</math>. Writing down the constraints for both values gives a deterministic linear program:<br />
<br />
<math>\text{min}(n_1, cost, w, W) \; cost</math><br />
<br />
<math>\text{s.t.}\;<br />
\begin{bmatrix} -60 & 1 & -65 & -3250 \\ 1 & 0 & 0 & 0 \\ 1 & 0 & 1 & 50 \\ <br />
1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 50 \\ -60 & 1 & -65 & -6500 \\ 1 & 0 & 0 & 0 \\ 1 & 0 & 1 & 100 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 100\end{bmatrix}<br />
\begin{bmatrix}n_1 \\ cost \\ w \\ W\end{bmatrix} \geq<br />
\begin{bmatrix}-2000 \\ 50 \\ 150 \\ 0 \\ 0 \\ -2500 \\ 100 \\ 150 \\ 0 \\ 0\end{bmatrix}<br />
</math><br />
<br />
This linear program can be solved using the simplex algorithm, yielding the solution <math>(n_1, cost, w, W) = (150, 7000, 0, 0)</math>, which corresponds to a worst-case cost of <math>7000</math>. The solution corresponds to buying all 150 demand units for the two periods at the start of the first period, with the worst case being a demand of only 50 units in the first business period. This solution makes intuitive sense: the purchase price in the second period is <math>$15</math> more than in the first period, while storing an extra unit from the first period costs only <math>$10</math>, so in any case the price increase outweighs the storage cost. This is indeed the optimal solution to the problem.<br />
<br />
Note that the ADR approximation found the optimal solution. This is not surprising because the optimal strategy as described above does not depend on the first period demand.<br />
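The reported worst-case cost can be sanity-checked numerically. The sketch below (an illustrative brute-force check we add here, not part of the original solution method) evaluates affine policies <math>n_2(d) = w + Wd</math> only at the endpoint demands <math>d = 50</math> and <math>d = 100</math>, confirms that the policy <math>(n_1, w, W) = (150, 0, 0)</math> is feasible with worst-case cost <math>7000</math>, and verifies that no policy on a coarse grid does better:<br />
<br />
```python
# Endpoint check of the two-stage inventory example under the affine
# decision rule n2(d) = w + W*d. All constraints are linear in d, so it
# suffices to verify feasibility and cost at d = 50 and d = 100.

def worst_case_cost(n1, w, W):
    """Worst-case cost of policy (n1, w, W), or None if infeasible."""
    worst = 0.0
    for d in (50, 100):
        n2 = w + W * d
        # Feasibility: cover first-period demand, meet total demand,
        # and keep both order quantities nonnegative.
        if n1 < d or n1 + n2 < 150 or n2 < 0 or n1 < 0:
            return None
        cost = 10 * (n1 - d) + 10 * (n1 + n2 - 150) + 40 * n1 + 55 * n2
        worst = max(worst, cost)
    return worst

# The solution reported above: buy all 150 units up front.
assert worst_case_cost(150, 0, 0) == 7000

# A coarse grid of alternative affine policies does no better.
best = min(
    c
    for n1 in range(0, 201)
    for W in (0.0, 0.5, 1.0)
    for w in range(-100, 201, 10)
    if (c := worst_case_cost(n1, w, W)) is not None
)
assert best == 7000
```
<br />
Alternate optima exist here (the pair <math>(w, W)</math> at the optimum is not unique), but the worst-case cost of <math>7000</math> and the first-stage order <math>n_1 = 150</math> are.<br />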
<br />
==Applications==<br />
Applications of adaptive robust optimization typically involve multi-stage allocation of resources under uncertain supply or demand, including problems in energy systems, inventory management, and shift scheduling.<ref>B. Hu and L. Wu, "Robust SCUC Considering Continuous/Discrete Uncertainties and Quick-Start Units: A Two-Stage Robust Optimization With Mixed-Integer Recourse," in <i>IEEE Transactions on Power Systems</i>, vol. 31, no. 2, pp. 1407-1419, March 2016, doi: 10.1109/TPWRS.2015.2418158.</ref><ref>J. Warrington, C. Hohl, P. J. Goulart and M. Morari, "Rolling Unit Commitment and Dispatch With Multi-Stage Recourse Policies for Heterogeneous Devices," in <i>IEEE Transactions on Power Systems</i>, vol. 31, no. 1, pp. 187-197, Jan. 2016, doi: 10.1109/TPWRS.2015.2391233.</ref><ref>Chuen-Teck See, Melvyn Sim, (2010) Robust Approximation to Multiperiod Inventory Management. <i>Operations Research</i> 58(3):583-594.</ref><ref>Marcus Ang, Yun Fong Lim, Melvyn Sim, (2012) Robust Storage Assignment in Unit-Load Warehouses. <i>Management Science</i> 58(11):2114-2130.</ref><ref>Mattia, S., Rossi, F., Servilio, M., Smriglio, S. (2017). Staffing and Scheduling Flexible Call Centers by Two-Stage Robust Optimization. <i>Omega</i> 72:25-37.</ref>.<br />
<br />
===Energy Systems===<br />
Energy systems aim to meet energy demand while minimizing costs. An energy system may involve multiple units that can each be turned on or off, with corresponding startup, shutdown, and operation costs. A coal plant, for example, may be expensive to start up and shut down but cheap to operate, while a solar farm may be easier to start up and shut down but more difficult to maintain. Let each day be partitioned into <math>n</math> blocks of time, and suppose the problem is to determine the optimal combination of units to run during each time block on a given day. The unit combination for the first block of time must be decided immediately, but decisions for the subsequent time blocks can wait until the energy demand in preceding time blocks is known. Past decisions still influence future decisions: starting up the coal plant in the first time block allows it to produce cheap energy all day, potentially reducing reliance on other energy sources. Such a decision structure, where past decisions and new information guide future decisions, lends itself naturally to an ARO model. The decisions are the combinations of units to run in each time block, the uncertainty set <math>U</math> is the set of possible energy demands for each time block, the constraint is that the power produced meets demand in each time block, and the objective is to minimize the total startup, shutdown, and operation costs for the day. For more detailed treatments of ARO applied to energy systems, we refer the reader to the references.<ref>B. Hu and L. Wu, "Robust SCUC Considering Continuous/Discrete Uncertainties and Quick-Start Units: A Two-Stage Robust Optimization With Mixed-Integer Recourse," in <i>IEEE Transactions on Power Systems</i>, vol. 31, no. 2, pp. 1407-1419, March 2016, doi: 10.1109/TPWRS.2015.2418158.</ref><ref>J. Warrington, C. Hohl, P. J. Goulart and M. Morari, "Rolling Unit Commitment and Dispatch With Multi-Stage Recourse Policies for Heterogeneous Devices," in <i>IEEE Transactions on Power Systems</i>, vol. 31, no. 1, pp. 187-197, Jan. 2016, doi: 10.1109/TPWRS.2015.2391233.</ref><br />
<br />
===Inventory Management===<br />
The inventory management problem seeks to purchase goods at regular intervals and store them such that there is always enough product on hand to satisfy demand, while minimizing purchase and storage costs. Purchasing large amounts when prices are low saves on purchasing costs later on, but incurs large storage costs. At the other extreme, keeping too little inventory risks running out of stock or requiring large purchases at inconvenient times. If an inventory planner wants to plan purchases for the next <math>n</math> time blocks (assuming for simplicity that purchases take place only at the start of each time block), then he must immediately decide how much to purchase for the first time block, and then use each past time block's prices and demands to decide how much to buy in future time blocks. As with energy systems, the inventory management problem has a staggered decision structure in which past decisions and new information inform future decisions, and it translates naturally into an ARO model. The decisions are the quantities of product to purchase at the start of each time block, the uncertainty set is the set of possible prices and demands for each of the <math>n</math> time blocks, the constraint is that the inventory has enough stock to meet demand in every time block, and the objective is to minimize purchasing and storage costs. For more detailed analyses of the inventory management problem, we again refer readers to the references.<ref>Chuen-Teck See, Melvyn Sim, (2010) Robust Approximation to Multiperiod Inventory Management. <i>Operations Research</i> 58(3):583-594.</ref><ref>Marcus Ang, Yun Fong Lim, Melvyn Sim, (2012) Robust Storage Assignment in Unit-Load Warehouses. <i>Management Science</i> 58(11):2114-2130.</ref><br />
<br />
===Shift Scheduling===<br />
The shift scheduling problem involves carefully choosing a shift for each employee such that the operation center has enough staff at all times. For example, a customer service line would ideally like to predict the frequency and length of calls at every hour of the day so that it employs exactly enough operators to handle the calls. However, call volume is hard to predict, and call centers often end up overstaffed or understaffed, so the company may overspend on wages or deliver unsatisfactory customer service. Mattia and colleagues note that consecutive periods of high call volume are not independent, so past call volumes help predict future call volumes, and they accordingly formulated the shift scheduling problem as a two-stage ARO. In the first stage (over the weekend), employee shifts are laid out for the workweek; in the second stage (during the week), employees are allocated to different jobs in the office. The uncertainty set is the set of possible call volume distributions through the week (represented as deviations in the number of staff available for handling all other jobs), the constraint is that enough staff are present to handle the various office jobs, and the objective is to minimize the total cost of hiring the employees for the hours they work. For more detail on this problem, we refer readers to the paper by Mattia and colleagues.<ref>Mattia, S., Rossi, F., Servilio, M., Smriglio, S. (2017). Staffing and Scheduling Flexible Call Centers by Two-Stage Robust Optimization. <i>Omega</i> 72:25-37.</ref><br />
<br />
==Conclusion==<br />
Adaptive robust optimization models multi-stage decision making in which past decisions affect future decisions and new information is learned between decision stages. It finds less conservative solutions than traditional robust optimization without sacrificing robustness, at the expense of simplicity and computation time. Many ARO problems are computationally intractable in general, but ARO formulations have been solved for many specific problems in the field, and the field will continue to grow in the coming decades.<br />
<br />
== References ==<br />
<br />
<references /></div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Convex_generalized_disjunctive_programming_(GDP)&diff=2739Convex generalized disjunctive programming (GDP)2020-12-21T11:40:30Z<p>Wc593: </p>
<hr />
<div>Authors: Nicholas Schafhauser, Blerand Qeriqi, Ryan Cuppernull (SysEn 5800 Fall 2020)<br />
<br />
== Introduction ==<br />
Generalized disjunctive programming (GDP) involves logic propositions (Boolean variables) and sets of constraints that are chained together using the logical OR operator (<math>\vee</math>). GDP is an extension of linear disjunctive programming<ref>Balas, Egon. "Disjunctive Programming." Annals of Discrete Mathematics, 1979.</ref> that can be applied to Mixed Integer Non-Linear Programming (MINLP). GDP<ref>Raman and Grossman. "Modelling and Computational Techniques for Logic Based Integer Programming." Computers & Chemical Engineering, 1994.</ref> is a generalization of disjunctive convex programming in the sense that it also allows the use of logic propositions expressed in terms of Boolean variables. In order to take advantage of current mixed-integer nonlinear programming solvers (e.g. DICOPT<ref name=":3">GAMS. DICOPT, https://www.gams.com/latest/docs/S_DICOPT.html</ref>, SBB<ref name=":4" />, α-ECP<ref name=":5">GAMS. AlphaECP, 1995, https://www.gams.com/latest/docs/S_ALPHAECP.html</ref>, BARON<ref name=":6">BARON, 1996, https://minlp.com/baron</ref>, Couenne<ref name=":7">Couenne, 2006, https://projects.coin-or.org/Couenne</ref> etc.), GDPs are often reformulated as MINLPs.<ref name=":0">P. Ruiz, Juan; Grossmann, Ignacio E. (2012): A hierarchy of relaxations for nonlinear convex generalized disjunctive programming. Carnegie Mellon University. Journal contribution. <nowiki>https://doi.org/10.1184/R1/6466535.v1</nowiki> </ref><br />
[[File:GDP Intro.jpg|none|thumb|523x523px|Figure 1: Generalized Disjunctive Programming Methods<ref>Grossman, Ignacio E: Overview of Generalized Disjunctive Programming. Carnegie Mellon University.https://www.minlp.org/pdf/GBDEWOGrossmann.pdf</ref>]]<br />
<br />
== Theory ==<br />
The general form of an MINLP model is as follows<br />
<br />
<math>\begin{align} \min z=f(x,y)\\<br />
<br />
\text{s.t.}\; g(x,y) \leq 0\\<br />
x \in X\\<br />
y \in Y\\ <br />
<br />
\end{align}</math><br />
<br />
where <math>f</math> and <math>g</math> are twice differentiable functions, <math>x</math> are the continuous variables, and <math>y</math> are the discrete variables. Three main types of subproblems arise from the MINLP: the continuous relaxation, the NLP subproblem for a fixed <math>Y_p</math>, and the feasibility problem.<br />
<br />
==== Continuous Relaxation ====<br />
The continuous relaxation subproblem takes the form<br />
<br />
<math>\begin{align} \min z=f(x,y)\\<br />
<br />
\text{s.t.}\; g(x,y) \leq 0\\<br />
x \in X\\<br />
y \in Y_R\\ <br />
<br />
\end{align}</math><br />
<br />
where <math>Y_R</math> is the continuous relaxation of <math>Y</math>. Note that in this subproblem all of the integer variables <math>y</math> are treated as continuous. When feasible, this relaxation provides a lower bound on the MINLP optimum.<ref name=":2">Grossmann, Ignacio. Review of Mixed-Integer Nonlinear and Generalized Disjunctive Programming Applications in Process Systems Engineering.</ref><br />
<br />
==== NLP Subproblem for a fixed <math>Y_p</math> ====<br />
The subproblem for a fixed <math>Y_p</math> is shown in the form below<br />
<br />
<math>\begin{align} \min z=f(x,y^p)\\<br />
<br />
\text{s.t.}\; g(x,y^p) \leq 0\\<br />
x \in \Re^n\\<br />
<br />
\end{align}</math><br />
<br />
When this subproblem has a feasible solution, it provides an upper bound on the MINLP optimum. One can therefore fix the integer variables and continuously relax the others in order to obtain a range of feasible values.<ref name=":2" /><br />
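The bounding roles of these two subproblems can be illustrated on a toy problem (our own hypothetical example, not from the text): the continuous relaxation yields a lower bound, while solving the NLP with the integer variable fixed yields feasible upper bounds.<br />
<br />
```python
# Toy illustration of relaxation/fixing bounds for
#   min f(x, y) = (x - 1.3)**2 + (y - 2.6)**2
# with x in [0, 3] continuous and y in {0, 1, 2, 3} integer.

def f(x, y):
    return (x - 1.3) ** 2 + (y - 2.6) ** 2

# Continuous relaxation: treat y as continuous -> minimum 0 at (1.3, 2.6).
lower_bound = f(1.3, 2.6)

# NLP subproblem for each fixed y: x* = 1.3, each gives a feasible upper bound.
upper_bounds = {y: f(1.3, y) for y in range(4)}
upper_bound = min(upper_bounds.values())

assert lower_bound == 0.0
assert lower_bound <= upper_bound
assert abs(upper_bound - 0.16) < 1e-9   # best fixed choice is y = 3
```
<br />
The true MINLP optimum is bracketed between the two bounds, which is exactly how branch-and-bound style methods use these subproblems.<br />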
<br />
==== Feasibility Problem ====<br />
<br />
When the subproblem for a fixed <math>Y_p</math> is infeasible, the following feasibility problem is considered instead.<br />
<br />
<math>\begin{align} \min\; u\\<br />
<br />
\text{s.t.}\; g_j(x,y^p) \leq u \quad j \in J\\<br />
x \in \Re^n\\<br />
u \in \Re\\ <br />
<br />
\end{align}</math><br />
<br />
where <math>J</math> is the index set for the inequalities; the feasibility problem minimizes the infeasibility <math>u</math> of the most violated constraint.<ref name=":2" /><br />
<br />
==== GDP ====<br />
GDP provides a high-level framework for solving mixed-integer nonlinear programs. By providing a methodology for converting disjunctive problems into MINLPs, the problem becomes simpler and easier to solve using current processing and algorithmic capabilities. These methodologies can handle both convex and non-convex problems. A convex GDP is one in which both f(x) and g(x) are convex functions; a function is convex if any line segment connecting two points of its graph lies on or above the graph. Convexity allows simple relaxations/approximations, which lead to faster solution methods.<ref>Grossmann, Ignacio. Review of Mixed-Integer Nonlinear and Generalized Disjunctive Programming Applications in Process Systems Engineering.</ref><br />
<br />
== Methodology ==<br />
<br />
Below is a GDP problem that will be used for demonstration purposes in this section. <br />
<br />
<math>\begin{align} \min z=f(x)\\<br />
\text{s.t.}\; g(x) \leq 0\\<br />
\bigvee_ {i \in D_k} \begin{bmatrix} Y_{ki} \\<br />
r_{ki}(x) \leq 0 <br />
\end{bmatrix} \quad k \in K \\<br />
\underline{\bigvee}_ {i \in D_k} Y_{ki} \quad k \in K\\<br />
\Omega(Y)=True\\<br />
x^{lo} \leq x \leq x^{up}\\<br />
x \in \Re^n\\<br />
Y_{ki} \in \{\text{True},\text{False}\}<br />
\quad k \in K, i \in D_k \end{align}</math><br />
<br />
<br />
The two most common ways of reformulating a GDP problem into an MINLP are through Big-M (BM) and Hull Reformulation (HR). BM is the simpler of the two, while HR results in tighter relaxation (smaller feasible region) and faster solution times.<ref>Trespalacios, Francisco; Grossmann, Ignacio E. (2018): Improved Big-M Reformulation for Generalized Disjunctive Programs. Carnegie Mellon University. Journal contribution. <nowiki>https://doi.org/10.1184/R1/6467063.v1</nowiki> </ref><br />
<br />
Below is an example of the GDP problem from above reformulated into an MINLP using the BM method.<br />
<br />
<math>\begin{align} \min z=f(x)\\<br />
<br />
\text{s.t.}\; g(x) \leq 0\\<br />
r_{ki}(x) \leq M^{ki}(1-y_{ki})\quad k \in K,i \in D_k\\ <br />
<br />
\sum_{i \in D_k} y_{ki} = 1\quad k \in K\\<br />
Hy \geq h\\<br />
x^{lo} \leq x \leq x^{up}\\<br />
x \in \Re^n\\<br />
<br />
y_{ki} \in \{0,1\} \quad k \in K, i \in D_k \end{align}</math><br />
<br />
<br />
<br />
Notice that the Boolean variables from the original GDP have been converted into binary variables <math>y_{ki} \in \{0,1\}</math>. The logic relations have also been converted into linear integer constraints (<math>Hy \geq h</math>)<ref name=":0" />.<br />
<br />
This MINLP reformulation can now be used in well-known solvers to calculate a solution. <br />
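To make the BM reformulation concrete, the sketch below applies it to a small hypothetical one-disjunction GDP (minimize <math>(x-4)^2</math> subject to <math>[x \leq 1] \vee [x \geq 5]</math> with <math>0 \leq x \leq 10</math>); the example and the constant <math>M = 10</math> are our own illustrative choices, not from the text, and a brute-force enumeration stands in for an MINLP solver:<br />
<br />
```python
# Big-M reformulation of a toy GDP:
#   min (x - 4)^2
#   s.t. [Y1: x <= 1]  OR  [Y2: x >= 5],   0 <= x <= 10
# Each disjunct constraint r_ki(x) <= 0 becomes r_ki(x) <= M*(1 - y_ki),
# with exactly one y_ki equal to 1.

M = 10.0  # valid big-M constant given the bounds 0 <= x <= 10

def feasible(x, y1, y2):
    return (
        y1 + y2 == 1                # exactly one disjunct is enforced
        and x - 1 <= M * (1 - y1)   # Y1: x <= 1, relaxed when y1 = 0
        and 5 - x <= M * (1 - y2)   # Y2: x >= 5, relaxed when y2 = 0
        and 0 <= x <= 10
    )

# Enumerate the binary choices and grid-search x (a solver would branch
# properly; the enumeration just illustrates the reformulation).
candidates = [
    (x / 100, y1, y2)
    for y1, y2 in ((1, 0), (0, 1))
    for x in range(0, 1001)
    if feasible(x / 100, y1, y2)
]
best = min(candidates, key=lambda t: (t[0] - 4) ** 2)
print(best)  # x = 5.0 with the second disjunct active: (5.0, 0, 1)
```
<br />
A real solver would branch on the binary variables rather than enumerate; the point is that the disjunction has become ordinary mixed-integer constraints.<br />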
<br />
The same GDP form will now be reformulated into an MINLP by using the HR method. <br />
<br />
<math>\begin{align} \min z=f(x)\\<br />
\text{s.t.}\; g(x) \leq 0\\<br />
x = \sum_{i \in D_k} v^{ki}\quad k \in K\\<br />
y_{ki}r_{ki}(v^{ki}/y_{ki}) \leq 0\quad k \in K, i \in D_k\\<br />
\sum_{i \in D_k} y_{ki} = 1\quad k \in K\\<br />
Hy \geq h\\<br />
x^{lo}y_{ki} \leq v^{ki} \leq x^{up}y_{ki}\quad k \in K, i \in D_k\\<br />
x \in \Re^n\\<br />
y_{ki} \in \{0,1\} \quad k \in K, i \in D_k\\<br />
\end{align}</math> <br />
<br />
HR requires significantly more variables than the corresponding BM reformulation. However, the resulting decrease in solution time can well be argued to be worth giving up the simplicity of BM.<ref>Trespalacios, Francisco; Grossmann, Ignacio E. (2015): Algorithmic Approach for Improved Mixed-Integer Reformulations of Convex Generalized Disjunctive Programs. Carnegie Mellon University. Journal contribution. <nowiki>https://doi.org/10.1184/R1/6466700.v1</nowiki> </ref><br />
<br />
==== Solvers ====<br />
<br />
* DICOPT<ref name=":3" /><br />
* SBB<ref name=":4">GAMS. ''SBB'', 2020, www.gams.com/latest/docs/S_SBB.html.</ref><br />
* BARON<ref name=":6" /><br />
* Couenne<ref name=":7" /><br />
<br />
== Numerical Example ==<br />
The following example was taken from the paper titled ''Generalized Disjunctive Programming: A Framework For Formulation and Alternative Algorithms For MINLP Optimization''.''<ref name=":1">P. Ruize, Juan; Grossmann, Ignacio E.: Generalized Disjunctive Programming: A Framework For Formulation And Alternative Algorithms For MINLP Optimization. Carnegie Mellon University. http://egon.cheme.cmu.edu/Papers/IMAGrossmannRuiz.pdf</ref>''<br />
<br />
[[File:GDP numeric example 3.png|frameless|600x600px]]<br />
<br />
[[File:GDP numeric example 4.png|alt=http://egon.cheme.cmu.edu/Papers/IMAGrossmannRuiz.pdf|frameless|661x661px]]<br />
<br />
[[File:GDP numeric example 5.png|alt=http://egon.cheme.cmu.edu/Papers/IMAGrossmannRuiz.pdf|frameless|600x600px]]<br />
<br />
== Applications ==<br />
GDP formulations are useful for real-world applications where multiple branches are available when making decisions. Solving the GDP in these instances allows the user to calculate which decision should be made at each branching point in order to obtain the optimal solution. This disjunctive formulation is common in complex chemical reactions and production planning.<br />
[[File:Process network example.png|none|thumb|600x600px|Figure 2: Process Network Example. Each decision point represents another disjunctive set. <ref name=":1" />]]<br />
The process network in Figure 2 depicts multiple decision paths that all end at the goal (B) in a chemical reaction. This problem can be formulated as a GDP in order to determine which route should be taken to maximize the profit. <br />
[[File:GDP numeric example 1.png|none|thumb|600x600px|Figure 3: A more complex process network.<ref name=":1" />]]<br />
This same idea can be scaled to larger problems with more complex branching. Figure 3 illustrates a larger process network and all of its decision points. This problem can likewise be formulated as a GDP so that the optimal route through the network can be calculated.<br />
== Conclusion ==<br />
GDP is a programming method that applies disjunctive programming to MINLP problems. This method facilitates modeling discrete or continuous optimization problems by combining algebraic constraints with logic expressions. The formulation of a GDP consists of Boolean and continuous variables, disjunctions, and logic propositions. In the case of convex functions, GDPs can be reformulated using the BM and HR methods; solution methods also include logic-based approaches such as disjunctive branch and bound and decomposition. Once reformulated into a standard MINLP, standard MINLP solvers, such as DICOPT<ref name=":3" />, SBB<ref name=":4" />, α-ECP<ref name=":5" /> and BARON<ref name=":6" />, can be used to determine optimal solutions<ref name=":0" />. The GDP method has important applications that include the optimization of complex chemical reactions and process planning. <br />
<br />
== References ==<br />
<references /></div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Fuzzy_programming&diff=2738Fuzzy programming2020-12-21T11:40:08Z<p>Wc593: </p>
<hr />
<div>Authors: Kyle Clark, Matt Schweider, Tommy Sheehan, Jarred Melancon (SysEn 5800 Fall 2020)<br />
<br />
== Introduction ==<br />
Fuzzy Programming is an optimization technique that deals with performing optimization in the presence of uncertainty. It is used when it is not possible to determine exact values for a system's performance criteria/parameters and decision variables. Specifically, the truth values associated with the system can be completely false (0), completely true (1), or some value between the two extremes, capturing the concept of partial truth. One approach to accounting for uncertainty in a system is to model it using probability distributions, also known as statistical analysis. However, uncertainty is sometimes described using qualitative adjectives, or 'Fuzzy' statements, such as young or old and hot or cold, because exact boundaries do not necessarily exist [1]. <br />
<br />
Fuzzy Programming is built on the concept of Fuzzy Logic. The motivation for Fuzzy Logic, or more precisely Fuzzy Set Theory, is to accurately model and represent real world data which is often 'Fuzzy' due to uncertainty. This uncertainty can be introduced into a system by a number of factors such as imprecision in measurement tools or due to the use of vague language [2]. <br />
<br />
== Fuzzy Logic ==<br />
While Boolean Logic is used to describe situations as completely true or completely false, Fuzzy Logic allows for a mathematical representation of partial truth or partial falsehood. Rather than having strict criteria for defining what is part of a set and what is not (e.g. hot or cold, young or old), we allow data to have a degree of membership (u) in each set. A membership function defines how each input value is mapped to a degree of membership (u) between the two extremes, 0 and 1. Membership functions can take several different forms, but they are often piecewise linear functions [3]. Below is an example of an L-Function. <br />
<br />
<math>u_A(x) = \begin{cases} 0,\qquad x\leq a \\ \frac{x-a}{b-a},\quad a\leq x \leq b \\ 1,\qquad x>b \end{cases}</math> <br />
<br />
For instance, let's say that we have a set of values that describe temperatures over the course of a week. In Boolean logic, we could create two sets, a cold set and a hot set. We could say that temperatures [0°F, 60°F) belong to the cold set and temperatures [60°F, 100°F] belong to the hot set. However, it is not very accurate to say that 60°F is cold, but 60.1°F is hot. Instead, we could use Fuzzy Logic to describe temperatures 0°F - 50°F as not hot (u=0). As temperatures increase from 50°F, they are given a higher degree of membership (u > 0) to describe that they are "hotter" or warmer. Lastly, temperatures above 70°F are definitely hot (u=1) [3]. <br />
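The temperature example above can be written as a short sketch (the 50°F and 70°F ramp endpoints follow the example; the function name is our own):<br />
<br />
```python
# Ramp (L-shaped) membership function from the temperature example:
# up to 50F is "not hot" (u = 0), 70F and above is fully "hot" (u = 1),
# with a linear ramp in between.

def membership(x, a=50.0, b=70.0):
    """Degree of membership of x in the fuzzy set 'hot'."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

assert membership(40) == 0.0   # definitely not hot
assert membership(60) == 0.5   # halfway between cold and hot
assert membership(85) == 1.0   # definitely hot
```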
<br />
== Flexible Mathematical Programming Method ==<br />
An optimization technique used to implement Fuzzy Programming is Flexible Mathematical Programming. This kind of problem takes on the form of<br />
<br />
<math>\begin{cases} \tilde{min} f(x) \\ s.t. \ g_i(x) \leq \sim b_i; i=1,...,m \\ x \in X = \{ x \in \reals^n | x \geq 0 \} \end{cases}</math><br />
<br />
where the "~" conveys the concept that the objective statement and constraints have some freedom in how they are satisfied. This approach is useful when strict satisfaction of the constraints creates an empty feasible set. Relaxing the constraints with the "~" allows for maneuverability within the potential solutions. <br />
<br />
An easier way to represent the constraints is through the use of membership functions which are fuzzy sets of <math>\reals</math>. <br />
<br />
<math>u_i(x) = 0 \qquad \ \ \ if \ g_i(x) > b_i + d_i</math> <br />
<br />
<math>u_i(x) \in (0,1) \quad if \ b_i < g_i(x) \leq b_i + d_i</math> <br />
<br />
<math>u_i(x) = 1 \qquad \ \ if \ g_i(x) \leq b_i</math> <br />
<br />
where <math>d_i(i = 1,...,m)</math> represents the thresholds by which the constraints may be violated. The above membership functions determine the degree of membership, i.e. how violated a certain constraint is. If <math>u_i(x) = 1</math>, then the constraint is not violated. If <math>u_i(x) = 0</math>, then the constraint is fully violated. The in-between case of <math>u_i(x) \in (0,1)</math> allows for partial violation of a constraint. The values of <math>d_i(i = 1,...,m)</math> can be carefully selected to create constraints that allow for the desired amount of flexibility. <br />
<br />
The above membership functions can be combined into a single piecewise function like the function shown within the Fuzzy Logic section of this page.<br />
<br />
<math>u_i(x) = \begin{cases} 1, \qquad \qquad \quad \ if \ g_i(x) \leq b_i \\ 1- \frac{g_i(x)-b_i}{d_i},\quad if \ b_i < g_i(x) \leq b_i + d_i \\ 0,\qquad \qquad \quad \ if \ g_i(x) > b_i + d_i \end{cases}</math><br />
<br />
The optimal solution is then the value of x that provides the highest degree of membership while satisfying all constraints expressed by the above fuzzy sets, i.e. <math>\max_x \ u_D(x) = \min_i \ u_i(x)</math> [4]. <br />
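A minimal sketch of these membership functions in code (the constraint values, thresholds <math>b_i</math>, and tolerances <math>d_i</math> below are hypothetical numbers chosen only for illustration):<br />
<br />
```python
# Flexible-programming membership for one constraint g_i(x) <=~ b_i with
# tolerance d_i, and the max-min aggregation u_D(x) = min_i u_i(x).

def constraint_membership(g, b, d):
    """Degree to which constraint value g satisfies g <=~ b with slack d."""
    if g <= b:
        return 1.0
    if g > b + d:
        return 0.0
    return 1.0 - (g - b) / d

def u_D(gs, bs, ds):
    """Overall degree of satisfaction: the least-satisfied constraint."""
    return min(constraint_membership(g, b, d) for g, b, d in zip(gs, bs, ds))

# Hypothetical example: two constraints with thresholds 10 and 20,
# tolerances 4 and 5; candidate x yields constraint values 12 and 19.
assert constraint_membership(12, 10, 4) == 0.5   # halfway violated
assert u_D([12, 19], [10, 20], [4, 5]) == 0.5    # limited by the first
```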
<br />
== Applications ==<br />
Fuzzy Programming can be applied in a number of fields including media selection in advertising, automated braking in cars, water resource management, and control systems in HVAC systems. HVAC (Heating, Ventilation, and Air Conditioning) systems are used to maintain a comfortable environment within a building, such as an office or school. The system works to maintain a certain temperature/humidity according to a set schedule, monitoring the environment closely to determine when it has reached a certain set point. Fuzzy Programming is applied to the control systems to make them more cost efficient, as HVAC systems can be expensive to run. A controller aiming for one specific temperature may repeatedly turn the heating or air conditioning on and off to reach and maintain that exact degree, wasting energy. Dealing with a range of temperatures instead gives the system more flexibility as to when a certain subsystem needs to turn on. Compared to the traditional PID (Proportional, Integral, Derivative) controller, the application of Fuzzy Programming has been shown to be a more efficient way to run these systems [4]. <br />
<br />
== Example ==<br />
<br />
An example that showcases fuzzy logic is a simple water allocation problem [1]. Suppose we have a scenario in which 3 firms each wish to receive a certain amount of water from the flow of a river. Each firm derives its own benefit from the water allocated to it, and the amount of water allocated to all of the firms cannot exceed the amount available in the river, the flow Q. <br />
[[File:Tps94 water allocation.png|thumb|712x712px|Water Allocation Scenario|alt=|center]]<br />
<br />
Our goal in this problem is to maximize the total benefit of allocating water to the three separate firms from a single source, in this case a river. Therefore we get this optimization problem:<br />
<br />
<math display="inline"> max \ \ TB(X)=(6x_1-x_1^2)+(7x_2-1.5x_2^2)+(8x_3-0.5x_3^2)</math><br />
<br />
<math>s.t. \ x_1+x_2+x_3\leq K</math><br />
<br />
<math display="inline">x_i \geq 0 \ \ i=1,2,3 </math><br />
<br />
As we mentioned before, the total allocation of water for these three firms cannot exceed the total amount of water available, represented by the variable Q. Deducting from this total the amount of water that must remain in the river, R, gives the amount available for allocation: <math>Q-R=K </math>. For our case, we will assume that <math>K = 6</math>. Thus our new optimization function becomes:<br />
<br />
<math display="inline"> max \ \ TB(X)=(6x_1-x_1^2)+(7x_2-1.5x_2^2)+(8x_3-0.5x_3^2)</math><br />
<br />
<math display="inline">s.t. \ x_1+x_2+x_3\leq 6</math>. <br />
<br />
<math display="inline">x_i \geq 0 \ \ i=1,2,3 </math><br />
<br />
With that constraint, the optimal solution will be <math>x_1=1,x_2=1, x_3=4, </math> giving a value of <math>TB(X)=34.5 </math>. <br />
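This crisp optimum can be verified with a simple grid search (a numerical check we add for illustration; it fixes <math>x_3 = 6 - x_1 - x_2</math>, since the marginal benefits are positive at the optimum and the budget constraint therefore binds):<br />
<br />
```python
# Grid-search check of the crisp water-allocation problem:
#   max TB(x) = (6x1 - x1^2) + (7x2 - 1.5x2^2) + (8x3 - 0.5x3^2)
#   s.t. x1 + x2 + x3 <= 6,  x_i >= 0
# The budget binds at the optimum, so search x1, x2 with x3 = 6 - x1 - x2.

def total_benefit(x1, x2, x3):
    return (6*x1 - x1**2) + (7*x2 - 1.5*x2**2) + (8*x3 - 0.5*x3**2)

best_tb, best_x = float("-inf"), None
for i in range(601):                 # x1 = 0.00, 0.01, ..., 6.00
    for j in range(601 - i):         # keep x1 + x2 <= 6
        x1, x2 = i / 100, j / 100
        x3 = 6 - x1 - x2
        tb = total_benefit(x1, x2, x3)
        if tb > best_tb:
            best_tb, best_x = tb, (x1, x2, x3)

print(best_x, best_tb)  # (1.0, 1.0, 4.0) 34.5
```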
<br />
The problem depicted above was an example of a crisp problem where we knew the exact value for the limit of water to allocate. However, in the real world, we don't always have exact values; therefore, we can apply fuzzy logic to make the problem more realistic. <br />
<br />
A fuzzy variant of this model arises when the firms' benefits are to be maximized as fully as possible rather than exactly. The first step for this fuzzy variant is adding a membership function for the combined benefit of the firms, normalized by its maximum possible value (49.17). The membership function can be summed up in the equation below:<br />
<br />
<math>m(X)= [(6x_1 - x_1^2) +(7x_2- 1.5x_2^2)+ (8x_3- 0.5x_3^2)]/ 49.17</math><br />
<br />
This has a constraint similar to the crisp version with regard to the total water, <math display="inline">x_1+x_2+x_3\leq 6</math>. The optimal solution of this function is thus the same as the crisp variant, and the degree of satisfaction is <math>m(X)=0.7</math>. However, things begin to change when the total amount of water becomes '''more or less 6 units''' instead of a crisp 6. The phrase "more or less 6" is where we start to apply fuzzy logic, implying that the value will be around 6. Therefore we can build a membership function around the values (5, 6, 7), adjusting the membership value between 0 and 1.<br />
<br />
Adjusting the membership with these values yields the membership function:<br />
<br />
<math>m_c(x) = \begin{cases} 1, \qquad \qquad \quad \ if \ x_{1}+x_{2}+x_{3}\leq 5\\ \frac{7-(x_{1}+x_{2}+x_{3})}{2},\quad\ if \ 5 < x_{1}+x_{2}+x_{3}\leq7 \\ 0,\qquad \qquad \quad \ if \ x_{1}+x_{2}+x_{3} > 7 \end{cases}</math><br />
<br />
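This piecewise membership can be written directly as a small function. A minimal sketch, using the breakpoints 5, 6, and 7 from above:<br />

```python
def m_c(x1, x2, x3):
    """Membership of the fuzzy constraint 'total water is more or less 6 units'."""
    total = x1 + x2 + x3
    if total <= 5:
        return 1.0                 # fully satisfied
    elif total <= 7:
        return (7 - total) / 2     # satisfaction decreases linearly from 5 to 7
    else:
        return 0.0                 # fully violated

print(m_c(1, 1, 3))   # 1.0  (total is 5)
print(m_c(1, 1, 4))   # 0.5  (total is 6)
print(m_c(2, 2, 4))   # 0.0  (total is 8)
```
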
Thus the overall optimization problem becomes a max-min problem in which we maximize the smaller of the two membership values <math>m_G(X)</math> and <math>m_C(X)</math>:<br />
<br />
<math>m_G(X)=[(6x_1-x_1^2)+(7x_2-1.5x_2^2)+(8x_3-0.5x_3^2)]/49.17<br />
</math><br />
<br />
<math>m_C(X)=[7-(x_1+x_2+x_3)]/2</math><br />
<br />
This results in <math>x_1=0.91, x_2=0.94, x_3=3.81, m(X)=0.67</math>, and the total benefit being <math>TB(X)=33.1</math>.<br />
<br />
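The max-min solution quoted above can be reproduced numerically. For a satisfaction level <math>t</math>, the constraint membership requires <math>x_1+x_2+x_3\leq 7-2t</math>, and the goal membership is the maximal total benefit under that budget divided by 49.17; the optimum is the largest <math>t</math> that the goal membership can still match. A pure-Python sketch using bisection (the closed-form allocation reuses the equal-marginal-benefit condition from the crisp problem):<br />

```python
def tb_max(K):
    # Maximal total benefit when at most K units of water may be allocated;
    # equal marginal benefits give lam = (80 - 6*K)/11 (valid for the K used here).
    lam = max((80 - 6 * K) / 11, 0.0)
    x = ((6 - lam) / 2, (7 - lam) / 3, 8 - lam)
    tb = (6*x[0] - x[0]**2) + (7*x[1] - 1.5*x[1]**2) + (8*x[2] - 0.5*x[2]**2)
    return x, tb

def solve_maxmin(tol=1e-9):
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        t = (lo + hi) / 2
        _, tb = tb_max(7 - 2 * t)      # budget allowed at satisfaction level t
        if tb / 49.17 >= t:            # goal membership can still reach t
            lo = t
        else:
            hi = t
    x, tb = tb_max(7 - 2 * lo)
    return x, tb, lo

x, tb, t = solve_maxmin()
print(x, tb, t)  # roughly (0.91, 0.94, 3.81), TB about 33.1, m about 0.67
```
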
This result accounts for some uncertainty in our assumptions, which is common in real-world problems. This ability to handle imprecision is why fuzzy programming appears in many real-world applications and in controllers for various systems.<br />
<br />
== Conclusion ==<br />
The optimization technique of Fuzzy Programming is useful when qualitative adjectives are the only available descriptors for a system's performance criteria, parameters, and decision variables. It is a vital tool for characterizing and solving optimization models in the presence of uncertainty, which is very common in the real world. The premise of Fuzzy Programming centers on Fuzzy Logic, which allows a mathematical representation of partial truth or partial falsehood rather than the strict true/false values of Boolean logic. Incorporating this partial state introduces a flexibility, or "fuzziness", that lets a model better interpret the imprecision and unknowns encountered in real-world data. Instead of fixed categories, we define degrees of membership through membership functions within a Fuzzy Set, each applying to a certain range or criterion. Therefore, instead of a basic black-and-white scenario, we also consider the gray area between the two sets. Strict sets and precise measurements are nearly impossible to find in the real world, so Fuzzy Programming is essential for obtaining optimal solutions that accurately reflect real-world situations. This versatility is precisely why it is so widely used in controllers across industries, from HVAC systems to automated braking systems. <br />
<br />
== References ==<br />
[1] Daniel P. Loucks, Eelco van Beek, Jery R. Stedinger, Jozef P.M. Dijkman, Monique T. Villars, [https://ecommons.cornell.edu/bitstream/handle/1813/2804/05_chapter05.pdf?sequence=16&isAllowed=y ''Water Resources Systems Planning and Management: An Introduction to Methods, Models, and Applications''], UNESCO, p.135-142, 2005. <br />
<br />
[2] Nitin A. Bansod, Vaishali Kulkerni and S.H. Paul, [https://books.google.com/books?id=IkajJC9iGxMC&pg=PA73#v=onepage&q&f=false ''Soft Computing-A Fuzzy Logic Approach''], Bharati Vidyapeeth College of Engineering, p.73-74, 2005. <br />
<br />
[3] MathWorks, (2020). ''Foundations of Fuzzy Logic'', Retrieved November 6th, 2020 from https://www.mathworks.com/help/fuzzy/foundations-of-fuzzy-logic.html#:~:text=northern%20hemisphere%20climates).-,Membership%20Functions,name%20for%20a%20simple%20concept <br />
<br />
[4] M.K. Luhandjula, [http://www.worldacademicunion.com/journal/jus/jusVol01No2paper03.pdf ''Fuzzy Mathematical Programming: Theory, Applications, and Extension''], University of South Africa Department of Decision Sciences, Journal of Uncertain Systems, Vol.1, No.2, p.124-136, 2007.</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Convex_generalized_disjunctive_programming_(GDP)&diff=2737Convex generalized disjunctive programming (GDP)2020-12-21T11:39:42Z<p>Wc593: </p>
<hr />
<div>Author: Nicholas Schafhauser, Blerand Qeriqi, Ryan Cuppernull (SysEn 5800 Fall 2020)<br />
<br />
== Introduction ==<br />
Generalized disjunctive programming (GDP) involves logic propositions (Boolean variables) and sets of constraints that are chained together using the logical OR operator (<math>\vee</math>). GDP is an extension of linear disjunctive programming<ref>Balas, Egon. "Disjunctive Programming." Annals of Discrete Mathematics, 1979.</ref> that can be applied to Mixed Integer Non-Linear Programming (MINLP). GDP<ref>Raman and Grossmann. "Modelling and Computational Techniques for Logic Based Integer Programming." Computers & Chemical Engineering, 1994.</ref> is a generalization of disjunctive convex programming in the sense that it also allows the use of logic propositions that are expressed in terms of Boolean variables. In order to take advantage of current mixed-integer nonlinear programming solvers (e.g. DICOPT<ref name=":3">GAMS. DICOPT, https://www.gams.com/latest/docs/S_DICOPT.html</ref>, SBB<ref name=":4" />, α-ECP<ref name=":5">GAMS. AlphaECP, 1995, https://www.gams.com/latest/docs/S_ALPHAECP.html</ref>, BARON<ref name=":6">BARON, 1996, https://minlp.com/baron</ref>, Couenne<ref name=":7">Couenne, 2006, https://projects.coin-or.org/Couenne</ref> etc.), GDPs are often reformulated as MINLPs.<ref name=":0">P. Ruiz, Juan; Grossmann, Ignacio E. (2012): A hierarchy of relaxations for nonlinear convex generalized disjunctive programming. Carnegie Mellon University. Journal contribution. <nowiki>https://doi.org/10.1184/R1/6466535.v1</nowiki> </ref><br />
[[File:GDP Intro.jpg|none|thumb|523x523px|Figure 1: Generalized Disjunctive Programming Methods<ref>Grossman, Ignacio E: Overview of Generalized Disjunctive Programming. Carnegie Mellon University.https://www.minlp.org/pdf/GBDEWOGrossmann.pdf</ref>]]<br />
<br />
== Theory ==<br />
The general form of an MINLP model is as follows:<br />
<br />
<math>\begin{align} \min z=f(x,y)\\<br />
<br />
s.t.\quad g(x,y) \leq 0\\<br />
x \in X\\<br />
y \in Y\\ <br />
<br />
\end{align}</math><br />
<br />
where <math>f(x,y)</math> and <math>g(x,y)</math> are twice-differentiable functions, <math>x</math> are the continuous variables, and <math>y</math> are the discrete variables. Three main types of subproblems arise from the MINLP: the continuous relaxation, the NLP subproblem for a fixed <math>Y_p</math>, and the feasibility problem.<br />
<br />
==== Continuous Relaxation ====<br />
The continuous relaxation subproblem takes the form:<br />
<br />
<math>\begin{align} \min z=f(x,y)\\<br />
<br />
s.t.\quad g(x,y) \leq 0\\<br />
x \in X\\<br />
y \in Y_R\\ <br />
<br />
\end{align}</math><br />
<br />
where <math>Y_R</math> is the continuous relaxation of <math>Y</math>. Note that in this subproblem all of the integer variables <math>y</math> are treated as continuous. A feasible solution of this subproblem provides a lower bound for the MINLP.<ref name=":2">Grossmann, Ignacio. Review of Mixed-Integer Nonlinear and Generalized Disjunctive Programming Applications in Process Systems Engineering.</ref><br />
<br />
==== NLP Subproblem for a fixed <math>Y_p</math> ====<br />
The subproblem for a fixed <math>Y_p</math> is shown in the form below<br />
<br />
<math>\begin{align} \min z=f(x,y^p)\\<br />
<br />
s.t. g(x,y^p) \leq 0\\<br />
x \in \Re^n\\<br />
<br />
\end{align}</math><br />
<br />
A feasible solution of this subproblem provides an upper bound for the MINLP. By fixing the integer variables at different values and solving the resulting NLPs, one obtains a set of candidate solutions and upper bounds.<ref name=":2" /><br />
<br />
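The bounding roles of the two subproblems can be illustrated on a toy problem (the objective and the integer set below are invented for illustration): minimize <math>(x-1.3)^2+(y-1.3)^2</math> with <math>y \in \{0,1,2\}</math>. The continuous relaxation lets <math>y=1.3</math> and gives a lower bound; fixing <math>y=1</math> and minimizing over <math>x</math> alone gives an upper bound, which here happens to be the true optimum:<br />

```python
def objective(x, y):
    return (x - 1.3) ** 2 + (y - 1.3) ** 2

# Continuous relaxation: treat y as continuous -> minimized at x = y = 1.3.
lower_bound = objective(1.3, 1.3)       # 0.0, a valid lower bound

# NLP subproblem for fixed y = 1: minimize over x alone -> x = 1.3.
upper_bound = objective(1.3, 1)         # about 0.09, a valid upper bound

# Brute force over the integer values confirms the bounds bracket the optimum.
optimum = min(objective(1.3, y) for y in (0, 1, 2))
print(lower_bound, optimum, upper_bound)
assert lower_bound <= optimum <= upper_bound
```
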
==== Feasibility Problem ====<br />
<br />
When the NLP subproblem for a fixed <math>y^p</math> is infeasible, the following feasibility problem is considered.<br />
<br />
<math>\begin{align} \min u\\<br />
<br />
s.t.\quad g_j(x,y^p) \leq u,\quad j \in J\\<br />
x \in X\\<br />
u \in \Re^1\\ <br />
<br />
\end{align}</math><br />
<br />
where <math>J</math> is the index set of the inequalities. The feasibility problem minimizes the infeasibility of the solution with respect to the most violated constraint.<ref name=":2" /><br />
<br />
==== GDP ====<br />
GDP provides a high-level framework for modeling mixed-integer nonlinear programs. By providing a methodology for converting disjunctive problems into MINLPs, it simplifies problems so that they can be solved with current solvers and algorithmic capabilities. These methodologies can address both convex and non-convex problems. A GDP is convex when both f(x) and g(x) are convex functions, i.e., when any line segment connecting two points on the graph of the function lies on or above the graph. Convexity allows simple relaxations and approximations, which lead to faster solution methods.<ref name=":2" /><br />
<br />
== Methodology ==<br />
<br />
Below is a GDP problem that will be used for demonstration purposes in this section. <br />
<br />
<math>\begin{align} \min z=f(x)\\<br />
s.t. g(x) \leq 0\\<br />
\bigvee_ {i \in D_k} \begin{bmatrix} Y_{ki} \\<br />
r_{ki}(x) \leq 0 <br />
\end{bmatrix} \quad k \in K \\<br />
\underline{\bigvee}_ {i \in D_k} Y_{ki} \quad k \in K\\<br />
\Omega(Y)=True\\<br />
x^{lo} \leq x \leq x^{up}\\<br />
x \in \Re^n\\<br />
y_{ki} \in \{True,False\}<br />
\quad k \in K, i \in D_k \end{align}</math><br />
<br />
<br />
The two most common ways of reformulating a GDP problem into an MINLP are Big-M (BM) and Hull Reformulation (HR). BM is the simpler of the two, while HR results in a tighter relaxation (a smaller relaxed feasible region) and faster solution times.<ref>Trespalacios, Francisco; Grossmann, Ignacio E. (2018): Improved Big-M Reformulation for Generalized Disjunctive Programs. Carnegie Mellon University. Journal contribution. <nowiki>https://doi.org/10.1184/R1/6467063.v1</nowiki> </ref><br />
<br />
Below is an example of the GDP problem from above reformulated into an MINLP using the BM method.<br />
<br />
<math>\begin{align} \min z=f(x)\\<br />
<br />
s.t.\quad g(x) \leq 0\\<br />
r_{ki}(x) \leq M^{ki}(1-y_{ki})\quad k \in K,i \in D_k\\ <br />
<br />
\sum_{i \in D_k} y_{ki} = 1\quad k \in K\\<br />
Hy \geq h\\<br />
x^{lo} \leq x \leq x^{up}\\<br />
x \in \Re^n\\<br />
<br />
y_{ki} \in \{0,1\} \quad k \in K, i \in D_k \end{align}</math><br />
<br />
<br />
<br />
Notice that the Boolean variables from the original GDP have been converted into binary variables <math>y_{ki} \in \{0,1\}</math>. The logic relations have also been converted into linear integer constraints (<math>Hy \geq h</math>).<ref name=":0" /><br />
<br />
This MINLP reformulation can now be used in well-known solvers to calculate a solution. <br />
<br />
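To see how the Big-M rows act, consider a toy disjunction with a single continuous variable <math>x \in [0,10]</math>: either <math>[Y_1:\ x \leq 2]</math> or <math>[Y_2:\ x \geq 6]</math>. With <math>M=10</math>, the constraints <math>x-2 \leq M(1-y_1)</math> and <math>6-x \leq M(1-y_2)</math> enforce the chosen disjunct and leave the other one slack. A small sketch (the numbers are invented for illustration):<br />

```python
M = 10  # any constant at least as large as the worst-case constraint violation

def feasible(x, y1, y2):
    """Big-M reformulation of [x <= 2] v [x >= 6] over x in [0, 10]."""
    return (0 <= x <= 10
            and y1 + y2 == 1                # exactly one disjunct is selected
            and x - 2 <= M * (1 - y1)       # active when y1 = 1, slack otherwise
            and 6 - x <= M * (1 - y2))      # active when y2 = 1, slack otherwise

print(feasible(1, 1, 0))   # True:  first disjunct chosen, x <= 2 holds
print(feasible(5, 1, 0))   # False: x = 5 violates x <= 2
print(feasible(7, 0, 1))   # True:  second disjunct chosen, x >= 6 holds
print(feasible(5, 0, 1))   # False: x = 5 violates x >= 6
```
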
The same GDP form will now be reformulated into an MINLP by using the HR method. <br />
<br />
<math>\begin{align} \min z=f(x)\\<br />
s.t. g(x) \leq 0\\<br />
x = \sum_{i \in D_k} v^{ki}\quad k \in K\\<br />
y_{ki}r_{ki}(v^{ki}/y_{ki}) \leq 0\quad k \in K, i \in D_k\\<br />
\sum_{i \in D_k} y_{ki} = 1\quad k \in K\\<br />
Hy \geq h\\<br />
x^{lo}y_{ki} \leq v^{ki} \leq x^{up}y_{ki}\quad k \in K, i \in D_k\\<br />
x \in \Re^n\\<br />
y_{ki} \in \{0,1\} \quad k \in K, i \in D_k\\<br />
\end{align}</math> <br />
<br />
HR requires significantly more variables and constraints than the corresponding BM reformulation. However, the resulting decrease in solution time can be well worth the loss of the simplicity that BM offers.<ref>Trespalacios, Francisco; Grossmann, Ignacio E. (2015): Algorithmic Approach for Improved Mixed-Integer Reformulations of Convex Generalized Disjunctive Programs. Carnegie Mellon University. Journal contribution. <nowiki>https://doi.org/10.1184/R1/6466700.v1</nowiki> </ref><br />
<br />
==== Solvers ====<br />
<br />
* DICOPT<ref name=":3" /><br />
* SBB<ref name=":4">GAMS. ''SBB'', 2020, www.gams.com/latest/docs/S_SBB.html.</ref><br />
* BARON<ref name=":6" /><br />
* Couenne<ref name=":7" /><br />
<br />
== Numerical Example ==<br />
The following example was taken from the paper titled ''Generalized Disjunctive Programming: A Framework For Formulation and Alternative Algorithms For MINLP Optimization''.<ref name=":1">P. Ruiz, Juan; Grossmann, Ignacio E.: Generalized Disjunctive Programming: A Framework For Formulation And Alternative Algorithms For MINLP Optimization. Carnegie Mellon University. http://egon.cheme.cmu.edu/Papers/IMAGrossmannRuiz.pdf</ref><br />
<br />
[[File:GDP numeric example 3.png|frameless|600x600px]]<br />
<br />
[[File:GDP numeric example 4.png|alt=http://egon.cheme.cmu.edu/Papers/IMAGrossmannRuiz.pdf|frameless|661x661px]]<br />
<br />
[[File:GDP numeric example 5.png|alt=http://egon.cheme.cmu.edu/Papers/IMAGrossmannRuiz.pdf|frameless|600x600px]]<br />
<br />
== Applications ==<br />
GDP formulations are useful for real-world applications where multiple branches are available when making decisions. Solving the GDP in these instances will allow the user to calculate which decisions should be made at each branching point in order to get the optimal solution. This disjunctive formulation is common in complex chemical reactions and production planning.<br />
[[File:Process network example.png|none|thumb|600x600px|Figure 2: Process Network Example. Each decision point represents another disjunctive set. <ref name=":1" />]]<br />
The process network in Figure 2 depicts multiple decisions that can be made to reach the goal product (B) in a chemical process. This problem can be formulated as a GDP in order to determine which route should be taken to maximize profit. <br />
[[File:GDP numeric example 1.png|none|thumb|600x600px|Figure 3: A more complex process network.<ref name=":1" />]]<br />
This same idea scales to larger problems with more complex branching. Figure 3 illustrates a larger process network and all of its decision points. This problem can likewise be formulated as a GDP so that the optimal route through the network can be calculated.<br />
== Conclusion ==<br />
GDP is a programming method that applies disjunctive programming to MINLP problems. This method facilitates modeling discrete or continuous optimization problems by combining algebraic constraints and logic expressions. The formulation of a GDP consists of Boolean and continuous variables, disjunctions, and logic propositions. In the case of convex functions, GDPs can be reformulated using the BM and HR methods. Solution methods also include logic-based methods such as disjunctive branch-and-bound and decomposition. Once reformulated into a standard MINLP, standard MINLP solvers, such as DICOPT<ref name=":3" />, SBB<ref name=":4" />, α-ECP<ref name=":5" /> and BARON<ref name=":6" />, can be used to determine optimal solutions<ref name=":0" />. The GDP method has important applications that include the optimization of complex chemical reactions and process planning. <br />
<br />
== References ==<br />
<references /></div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Mixed-integer_linear_fractional_programming_(MILFP)&diff=2736Mixed-integer linear fractional programming (MILFP)2020-12-21T11:38:35Z<p>Wc593: </p>
<hr />
<div>Author: Xiang Zhao (SysEn 6800 Fall 2020)<br />
<br />
==Introduction==<br />
Mixed-integer linear fractional programming (MILFP) is a class of mixed-integer nonlinear programming (MINLP) that is widely applied in chemical engineering,<sup>[https://aiche.onlinelibrary.wiley.com/doi/full/10.1002/btpr.2479]</sup> environmental engineering,<sup>[http://ourspace.uregina.ca/handle/10294/5449]</sup> and their hybrid fields, ranging from cyclic-scheduling problems to life cycle optimization (LCO).<sup>[https://pubs.acs.org/doi/abs/10.1021/acssuschemeng.7b00002?casa_token=hJNBUOc-zyIAAAAA:8gqZM144_Hjovhq_fLHXRQT66FGp0tf6oZ3rWiuRJLD4YKp4f1S44UkUspsNZuCCrcCFIWYME1v0dGPYLA]</sup> Specifically, the objective function of an MILFP is a ratio of two linear functions formed by various continuous and discrete variables. However, the pseudo-convexity and the combinatorial nature of the fractional objective function can cause computational challenges for general-purpose global optimizers, such as [[wikipedia:BARON|BARON]], when solving MILFP problems.<sup>[https://www.sciencedirect.com/science/article/pii/S0098135413003396?casa_token=Y6pefF84TQAAAAAA:ALrnGQIOGXr3SA-oqbD3FmlFsMyjp_z4zgmY8LkWscSWtbO8pMjFGix35FsroEVxI9ut0mWjffZc]</sup> In this regard, we introduce the basic ideas and solution steps of three algorithms, namely the Parametric Algorithm, the Reformulation-Linearization method, and Branch-and-Bound with the Charnes-Cooper Transformation, to tackle this computational challenge efficiently and effectively.<br />
<br />
==Standard Form and Properties==<br />
Consider such standard form of the MILFP:<br />
<br />
<math>\begin{align} \max T(x,y)={c_0+\sum_{i}c_{1,i}m_i+\sum_{j}c_{2,j}y_j \over d_0+\sum_{i}d_{1,i}m_i+\sum_{j}d_{2,j}y_j}\\<br />
<br />
s.t.\quad\ a_{0,k}+\sum_{i}a_{1,i}m_i+\sum_{j}a_{2,j}y_j=0,\quad \forall k \in K\\<br />
<br />
m_i\ge0,\quad \forall i \in I\\<br />
<br />
y_j\in \{0,1\},\quad \forall j \in J \end{align}</math><br />
<br />
The properties of the objective function <math>T(x,y)</math>, where <math>x</math> denotes the vector of continuous variables <math>m_i</math> and <math>y</math> the vector of binary variables <math>y_j</math>, are as follows:<sup>[https://www.sciencedirect.com/science/article/pii/S0098135409001367?casa_token=Sj60B1tEjccAAAAA:kMeO3BLDWNBd7jkBDqcpR5nTrB3yryQ8_CNqyN1mMooiuZxSiLfoVwtkDuU3cTWu4e0FsmeWN_uw]</sup> <br />
# <math>T(x,y)</math> is (strictly) pseudoconcave and pseudoconvex over its domain.<br />
# The local optimality of <math>T(x,y)</math> is the same as its global optimality.<br />
<br />
Notably, several nonlinear solvers that can handle pseudoconvexity, such as the spatial branch-and-bound (SBB) solver,<sup>[https://link.springer.com/article/10.1007/BF01106605]</sup> are capable of solving MILFPs. However, the memory usage of these solvers becomes enormous when solving the large-scale problems that arise in industrial scheduling or [[wikipedia:supply chain|supply chain]] optimization projects. Hence, we introduce the parametric algorithm and the reformulation-linearization method, which reformulate the MILFP into a mixed-integer linear programming (MILP) problem, to reduce memory usage and enhance solution efficiency.<br />
<br />
==Parametric Algorithm==<br />
One way to iteratively reformulate and solve the MILFP is to apply the parametric algorithm, which finds the global optimum within a finite number of iterations. The linearly [[wikipedia:parametric form|parametric form]] of the reformulated objective function has the advantage of leading directly to the [[wikipedia:global optimum|global optimum]], while the size of each subproblem remains the same as that of the original problem. The reformulation approach is as follows:<br />
<br />
The original form of the objective function is:<br />
<math>T(x,y)={c_0+\sum_{i}c_{1,i}m_i+\sum_{j}c_{2,j}y_j \over d_0+\sum_{i}d_{1,i}m_i+\sum_{j}d_{2,j}y_j}</math><br />
<br />
We use a parametric parameter <math>q</math> to reformulate the objective function <math>T(x,y)</math> into <math>M(x,y,q)</math>:<br />
<br />
<math>\max T(x,y)={A(x,y) \over B(x,y)}</math><br />
<br />
is reformulated into <br />
<br />
<math>\max M(x,y,q)=A(x,y)-q*B(x,y)</math><br />
<br />
<math>A(x,y)={c_0+\sum_{i}c_{1,i}m_i+\sum_{j}c_{2,j}y_j}</math><br />
<br />
<math>B(x,y)={d_0+\sum_{i}d_{1,i}m_i+\sum_{j}d_{2,j}y_j}</math><br />
<br />
Notably, the optimal value of the parametric problem <math>M(x,y,q)</math>, viewed as a function of <math>q</math>, has exactly one zero-point, and that zero is attained at the global optimal solution of the original problem. Hence, we find the zero-point iteratively following the steps below:<sup>[https://ieeexplore.ieee.org/abstract/document/6858622?casa_token=jvj28BEMe0cAAAAA:utpZe4zST7nz0SVcdNUoX-CjmqmtU_v3CZnU-oTAxvR8B7ZV2iBjyhqDy3s-228w7Aw4_lcJjFw]</sup><br />
<br />
# Initialize the parametric parameter <math>q=0</math> and set the tolerance <math>tol=10^{-6}</math>.<br />
# Solve the subproblem, whose objective function is <math>M(x,y,q)</math> subject to the original constraints, using [[wikipedia:CPLEX|CPLEX]]. Denote the optimal solution by <math>x^{*},y^{*}</math>. <br />
# Calculate the value of the parametric objective function <math>M(x^{*},y^{*},q)=A(x^{*},y^{*})-q*B(x^{*},y^{*})</math>. If this value is within the tolerance <math>tol</math>, the optimal solution <math>(x^{*},y^{*})</math> has been found; stop.<br />
# Otherwise, update the parametric parameter <math>q={A(x^{*},y^{*}) \over B(x^{*},y^{*})}</math> and return to step 2.<br />
<br />
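The iteration above can be sketched in pure Python on a small instance. Here a brute-force enumeration over the binary variables stands in for CPLEX, and the coefficients are invented for illustration:<br />

```python
from itertools import product

# Toy MILFP: maximize (1 + 3*y1 + 5*y2 + 4*y3) / (2 + 2*y1 + y2 + 3*y3)
# subject to y1 + y2 + y3 <= 2, y binary.
c0, c = 1, (3, 5, 4)
d0, d = 2, (2, 1, 3)

def A(y): return c0 + sum(ci * yi for ci, yi in zip(c, y))
def B(y): return d0 + sum(di * yi for di, yi in zip(d, y))
def feasible(): return [y for y in product((0, 1), repeat=3) if sum(y) <= 2]

def parametric(tol=1e-6):
    q = 0.0                                                  # step 1: initialize
    while True:
        y = max(feasible(), key=lambda v: A(v) - q * B(v))   # step 2: subproblem
        if abs(A(y) - q * B(y)) <= tol:                      # step 3: convergence
            return y, q
        q = A(y) / B(y)                                      # step 4: update q

y, q = parametric()
print(y, q)   # (0, 1, 0) with optimal ratio q = 2.0
```

A brute-force check over all feasible binary vectors confirms that the ratio 2.0 is indeed the maximum.<br />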
==Reformulation-Linearization Method==<br />
The reformulation-linearization method, which incorporates Glover's linearization into the Charnes-Cooper transformation,<sup>[https://onlinelibrary.wiley.com/doi/abs/10.1002/nav.3800200308?casa_token=BsaOkI0dilIAAAAA:wPELPH83o1FuB9xHW8rRDhwInT3xsGjqqqk6LWID7WYpLexkAhgiymU-4-ew7c0nEoC3wM49-oFHa1m5]</sup> introduces auxiliary variables to reformulate the MILFP into an equivalent MINLP. The resulting MINLP is subsequently transformed into an MILP via Glover's linearization, which can be efficiently solved by typical MILP solvers like [[wikipedia:CPLEX|CPLEX]]. The reformulation approach is as follows:<br />
<br />
The original form of the optimization model is:<br />
<br />
<math>\begin{align} \max T(x,y)={c_0+\sum_{i}c_{1,i}m_i+\sum_{j}c_{2,j}y_j \over d_0+\sum_{i}d_{1,i}m_i+\sum_{j}d_{2,j}y_j}\\<br />
<br />
s.t.\quad\ a_{0,k}+\sum_{i}a_{1,i}m_i+\sum_{j}a_{2,j}y_j=0,\quad \forall k \in K\\<br />
<br />
m_i\ge0,\quad \forall i \in I\\<br />
<br />
y_j\in \{0,1\},\quad \forall j \in J \end{align}</math><br />
<br />
First, we convert the fractional objective function into bilinear form by introducing the auxiliary variable <math>u</math> and the substitution terms <math>g_i</math> and <math>h_j</math>:<br />
<br />
<math>u={1\over d_0+\sum_{i}d_{1,i}m_i+\sum_{j}d_{2,j}y_j}</math><br />
<br />
<math>g_i={m_i*u}</math><br />
<br />
<math>h_j={y_j*u}</math><br />
<br />
To obtain the equivalent MILP model, we use Glover's linearization to transform the bilinear constraint (<math>h_j={y_j*u}</math>):<br />
<br />
<math>h_j={y_j*u}</math><br />
<br />
is equivalent to<br />
<br />
<math>h_j\leq u,\quad \forall j \in J</math><br />
<br />
<math>h_j\leq M*y_j,\quad \forall j \in J</math><br />
<br />
<math>h_j\geq u-M*(1-y_j),\quad \forall j \in J</math><br />
<br />
<math>h_j\geq 0,\quad \forall j \in J</math><br />
<br />
<math>u\geq 0</math><br />
<br />
<math>g_i\geq 0,\quad \forall i \in I</math><br />
<br />
<math>y_j\in \{0,1\},\quad \forall j \in J</math><br />
<br />
where <math>M</math> is a sufficiently large constant (a valid upper bound on <math>u</math>).<br />
<br />
In this regard, we reformulate the original MILFP model into an MILP model, which can be effectively solved by a typical [[wikipedia:branch and cut|branch-and-cut]] solver like [[wikipedia:CPLEX|CPLEX]]. To summarize, the reformulated MILP model is shown below:<br />
<br />
<math>\begin{align}\max W(u,g,h)={c_0*u+\sum_{i}c_{1,i}g_i+\sum_{j}c_{2,j}h_j}\\<br />
<br />
s.t.\quad\ a_{0,k}*u+\sum_{i}a_{1,i}g_i+\sum_{j}a_{2,j}h_j=0,\quad \forall k \in K\\<br />
<br />
d_0*u+\sum_{i}d_{1,i}g_i+\sum_{j}d_{2,j}h_j=1\\<br />
<br />
h_j\leq u,\quad \forall j \in J\\<br />
<br />
h_j\leq M*y_j,\quad \forall j \in J\\<br />
<br />
h_j\geq u-M*(1-y_j),\quad \forall j \in J\\<br />
<br />
h_j\geq 0,\quad \forall j \in J\\<br />
<br />
u\geq 0\\<br />
<br />
g_i\geq 0,\quad \forall i \in I\\<br />
<br />
y_j\in \{0,1\},\quad \forall j \in J\end{align}</math><br />
<br />
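A quick numerical check confirms that the four Glover rows pin <math>h_j</math> to exactly <math>y_j*u</math>. The sketch below computes, for a given <math>(y,u)</math>, the interval of <math>h</math> values the constraints allow (with an illustrative bound <math>M=10</math>) and verifies that it collapses to the single point <math>y*u</math>:<br />

```python
M = 10.0  # valid upper bound on u (illustrative)

def h_interval(y, u):
    """Interval of h permitted by Glover's linearization, for y in {0,1}, u in [0,M]."""
    upper = min(u, M * y)                 # from h <= u and h <= M*y
    lower = max(0.0, u - M * (1 - y))     # from h >= 0 and h >= u - M*(1-y)
    return lower, upper

for y in (0, 1):
    for u in (0.0, 2.5, 10.0):
        lo, hi = h_interval(y, u)
        assert lo == hi == y * u          # the constraints force h = y*u
print("Glover linearization verified")
```
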
==Branch-and-Bound with Charnes-Cooper Transformation Method==<br />
The integration of the Charnes-Cooper transformation with the [[wikipedia:Branch and Bound|Branch-and-Bound (B&B)]] algorithm reformulates the relaxation of the fractional problem at each node into an LP subproblem, which can be solved to global optimality using solvers like [[wikipedia:CPLEX|CPLEX]]. Since the solution steps are similar to those of B&B, and the reformulation step is shown in the Reformulation-Linearization Method section, we refer readers to the B&B algorithm and to the paper by Gao et al.<sup>[https://aiche.onlinelibrary.wiley.com/doi/full/10.1002/aic.14705]</sup><br />
<br />
==Application and Modeling for Numerical Examples==<br />
<br />
=== Applications of MILFP ===<br />
<br />
Two typical applications are introduced in this section, namely [[wikipedia:cyclic scheduling|cyclic scheduling]] and life-cycle optimization.<sup>[https://pubs.acs.org/doi/abs/10.1021/acssuschemeng.7b00631#:~:text=Life%20cycle%20optimization%20(LCO)%20enables,and%20optimization%20of%20process%20alternatives.]</sup> A typical cyclic scheduling problem was illustrated in Yue et al.,<sup>[https://www.sciencedirect.com/science/article/pii/S0098135413000781]</sup> in which a fractional objective was optimized to reflect both absolute profit (in the numerator) and the scheduling aspect (in the denominator). The combination of the reformulation-linearization method and [[wikipedia:CPLEX|CPLEX]] served as the solution algorithm, and the optimization framework was applied in a case study of a multiproduct batch plant that used 14 processing stages to produce three acrylic fiber formulations over a time horizon of 100 h. <br />
<br />
In the life-cycle optimization of a processing system, optimizing a fractional objective avoids the unreasonably large or small treatment amounts that can result from optimizing linear objective functions, and thus balanced processing amounts can be obtained to address the sustainable design and synthesis of the system.<sup>[https://pubs.acs.org/doi/abs/10.1021/acssuschemeng.7b03198?casa_token=dokyfl7kzigAAAAA:Z6riBbyfYZgakqA0Qw6du37ClfOFBuBKQxcuExnuWUvniwFWEjx17ivfLo4uvTgsl4eMBukRxfLYXW6dsA]</sup> Notably, the functional unit appears in the denominator, while the total economic and environmental performances appear in the numerators of the fractional objective functions. As illustrated in Gong et al.,<sup>[https://aiche.onlinelibrary.wiley.com/doi/full/10.1002/aic.15882]</sup> the sustainable design and synthesis of a shale gas processing system was obtained by simultaneously optimizing the unit net present value (NPV), unit global warming potential (GWP), and unit freshwater consumption. The optimization framework was applied to a Marcellus Shale gas site. <br />
<br />
In the next two numerical examples, we present a “simple form” of MILFP that can be used for selecting optimal processing pathways via maximizing the unit NPV or minimizing the unit GHG emissions, respectively.<br />
<br />
=== Numerical Examples of MILFP ===<br />
<br />
==== Introduction of Numerical Examples ====<br />
<br />
[[File:Opti wiki.jpg|thumb|right|Figure 1. Superstructure of the chemical processing system]]<br />
Let’s consider a simple chemical plant, whose superstructure is shown on the right side. The superstructure denotes all technology options, and only one of them in each level can be chosen simultaneously. To find the optimal processing pathway on the basis of economic and environmental aspects, we consider maximizing the [[wikipedia:net present value|net present value (NPV)]] or minimizing unit greenhouse gas (GHG) emissions, respectively. Notably, the unit NPV equals the ratio of the NPV with the total mass flow rate of product I within the project lifespan of ten years. The discount rate is 10%.<br />
<br />
==== Input Parameters of Numerical Examples ====<br />
<br />
{| class="wikitable"<br />
|+ Conversion Rate of each Chemical<br />
|-<br />
! Processing Level !! Conversion Rate !! Conversion Rate !! Conversion Rate<br />
|-<br />
| Level 1 || D to E: 0.8 || D to F: 0.9<br />
|-<br />
| Level 2 || E to G: 0.7 || E to H: 0.8 || F to H: 0.4<br />
|-<br />
| Level 3 || G to I: 0.5 || H to I: 0.6<br />
|}<br />
<br />
{| class="wikitable"<br />
|+ Fixed Capital Cost for each Technology Alternative ($)<br />
|-<br />
! A1 !! A2 !! A3 !! B1 !! B2 !! B3 !! C1 !! C2 !! C3<br />
|-<br />
| 6,000,000 || 7,000,000 || 7,500,000 || 5,000,000 || 6,000,000 || 7,500,000 || 11,000,000 || 10,000,000 || 10,500,000<br />
|}<br />
<br />
{| class="wikitable"<br />
|+ Variable Capital Cost for each Technology Alternative ($/(ton/yr))<br />
|-<br />
! A1 !! A2 !! A3 !! B1 !! B2 !! B3 !! C1 !! C2 !! C3<br />
|-<br />
| 50 || 40 || 35 || 60 || 55 || 45 || 30 || 35 || 33<br />
|}<br />
<br />
{| class="wikitable"<br />
|+ Operating Cost for each Technology Alternative ($/(ton/yr))<br />
|-<br />
! A1 !! A2 !! A3 !! B1 !! B2 !! B3 !! C1 !! C2 !! C3<br />
|-<br />
| 25 || 30 || 20 || 30 || 28 || 50 || 27 || 25 || 15<br />
|}<br />
<br />
{| class="wikitable"<br />
|+ Feedstock Supply and Demand of Product (ton/yr)/Feedstock and Product Price ($/(ton/yr))<br />
|-<br />
! Item !! Supply/Demand !! Feedstock/Product Price<br />
|-<br />
| D || 2,000,000 || 100<br />
|-<br />
| I || 200,000 || 2000<br />
|}<br />
<br />
{| class="wikitable"<br />
|+ GHG Emissions from each Technology Option (ton CO<sub>2</sub>-eq/ton inlet chemicals)<br />
|-<br />
! A1 !! A2 !! A3 !! B1 !! B2 !! B3 !! C1 !! C2 !! C3<br />
|-<br />
| 1.2 || 0.9 || 0.7 || 1.4 || 1.6 || 1.3 || 2.1 || 2.4 || 2.7<br />
|}<br />
<br />
==== Nomenclatures for the Mathematical Model of the Numerical Examples ====<br />
<br />
{| class="wikitable"<br />
|+ Nomenclature<br />
|-<br />
! Nomenclature !! Meaning<br />
|-<br />
| ''<math>I</math>'' || Set of production stages indexed by <math>i</math>.<br />
|-<br />
| ''<math>J</math>'' || Set of process alternatives <math>j</math>.<br />
|-<br />
| ''<math>D</math>'' || Demand of product I.<br />
|-<br />
| ''<math>CAV_{i,j}</math>'' || Unit variable capital cost in the process alternative <math>j</math> at the production stage <math>i</math>.<br />
|-<br />
| ''<math>CV_{i,j}</math>'' || Conversion rate from input flow to output flow in the process alternative <math>j</math> at the production stage <math>i</math>.<br />
|-<br />
| ''<math>FIXI_{i,j}</math>'' || Fixed capital cost in the process alternative <math>j</math> at the production stage <math>i</math>.<br />
|-<br />
| ''<math>GHG_{i,j}</math>'' || Unit GHG emissions from the process alternative <math>j</math> at the production stage <math>i</math>.<br />
|-<br />
| ''<math>OPERI_{i,j}</math>'' || Unit operating cost in the process alternative <math>j</math> at the production stage <math>i</math>.<br />
|-<br />
| ''<math>PRI</math>'' || Price of product I.<br />
|-<br />
| ''<math>PRID</math>'' || Price of chemical D.<br />
|-<br />
| ''<math>S</math>'' || Supply of chemical D.<br />
|-<br />
| ''<math>y_{i,j}</math>'' || 0-1 variable. Equals to one if the process alternative <math>j</math> at the production stage <math>i</math> is selected. <br />
|-<br />
| ''<math>ca_{i,j}</math>'' || Capacity of process alternative <math>j</math> at the production stage <math>i</math>. <br />
|-<br />
| ''<math>fec</math>'' || Total feedstock cost. <br />
|-<br />
| ''<math>fix</math>'' || Total fixed capital cost. <br />
|-<br />
| ''<math>ghgt</math>'' || Total GHG emissions.<br />
|-<br />
| ''<math>mi_{i,j}</math>'' || Mass flow rate of the feedstock flow to process alternative <math>j</math> at the production stage <math>i</math>.<br />
|-<br />
| ''<math>mo_{i,j}</math>'' || Mass flow rate of the output flow to process alternative <math>j</math> at the production stage <math>i</math>.<br />
|-<br />
| ''<math>oper</math>'' || Total operating cost. <br />
|-<br />
| ''<math>objc</math>'' || Unit net present value (NPV).<br />
|-<br />
| ''<math>obje</math>'' || Unit GHG emissions (within one operating year).<br />
|-<br />
| ''<math>npv</math>'' || Net present value.<br />
|-<br />
| ''<math>sale</math>'' || Total sales.<br />
|-<br />
| ''<math>vai</math>'' || Total variable capital cost. <br />
|}<br />
<br />
<br />
===== Mass Balance Constraints =====<br />
<br />
<math>mi_{i,j} \leq ca_{i,j},\quad \forall i \in I,\forall j \in J</math><br />
<br />
This constraint states that the mass flow rate of the inlet flow cannot exceed the treatment capacity.<br />
<br />
<math>ca_{i,j} \leq M*y_{i,j},\quad \forall i \in I,\forall j \in J</math><br />
<br />
This constraint forces the treatment capacity to zero if the corresponding technology option is not selected, where <math>M</math> is a sufficiently large constant.<br />
<br />
<math>mo_{i,j} = mi_{i,j}*CV_{i,j},\quad \forall i \in I,\forall j \in J</math><br />
<br />
This constraint relates the outlet flow to the inlet flow through the conversion ratio <math>CV_{i,j}</math>.<br />
<br />
<math>\sum_{j}mo_{(i-1),j} = \sum_{j}mi_{i,j},\quad \forall i \geq 2</math><br />
<br />
This constraint ensures that the total mass flow rate of the outlet flows from one processing level equals the total mass flow rate of the inlet flows to the next processing level.<br />
<br />
<math>\sum_{j}mo_{3,j} \geq D</math><br />
<br />
This constraint ensures that the total mass flow rate of the outlet flows in the third (final) processing level meets the demand <math>D</math> for product I.<br />
<br />
<math>\sum_{j}mi_{1,j} \leq S</math><br />
<br />
This constraint ensures that the total mass flow rate of the inlet flows in the first processing level does not exceed the supply <math>S</math> of chemical D.<br />
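As a concrete illustration, a candidate solution can be checked against all of the mass-balance constraints above in a few lines of code. This is only a sketch; the two-stage flow values, conversion ratios, demand, and supply below are hypothetical, not taken from the case study.<br />

```python
EPS = 1e-9

def check_mass_balance(mi, mo, ca, y, CV, D, S, M=1e6):
    """Verify a candidate solution against the mass-balance constraints."""
    stages = sorted({i for (i, j) in mi})
    for (i, j) in mi:
        if mi[i, j] > ca[i, j] + EPS:                  # inlet flow within capacity
            return False
        if ca[i, j] > M * y[i, j] + EPS:               # capacity is zero unless selected
            return False
        if abs(mo[i, j] - mi[i, j] * CV[i, j]) > EPS:  # conversion of inlet to outlet
            return False
    for i in stages[1:]:                               # level-to-level balance
        if abs(sum(mo[i - 1, j] for (k, j) in mo if k == i - 1)
               - sum(mi[i, j] for (k, j) in mi if k == i)) > EPS:
            return False
    last = stages[-1]
    if sum(mo[last, j] for (k, j) in mo if k == last) < D - EPS:  # meet demand
        return False
    if sum(mi[1, j] for (k, j) in mi if k == 1) > S + EPS:        # respect supply
        return False
    return True

# Hypothetical two-stage system, one option per stage: 10 t/h in, 80% then 50% conversion.
mi = {(1, 1): 10.0, (2, 1): 8.0}
mo = {(1, 1): 8.0, (2, 1): 4.0}
ca = {(1, 1): 10.0, (2, 1): 8.0}
y = {(1, 1): 1, (2, 1): 1}
CV = {(1, 1): 0.8, (2, 1): 0.5}
print(check_mass_balance(mi, mo, ca, y, CV, D=4.0, S=10.0))  # True
```

Raising the demand above the achievable 4 t/h (e.g., D=5.0) makes the check fail, as expected.<br />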
<br />
===== Superstructure Configuration Constraints =====<br />
<br />
The superstructure configuration constraints capture the logical relationships among the technology options within the superstructure. If the binary variable <math>y_{i,j}</math> equals 1, then the technology option <math>j</math> in the process level <math>i</math> is selected.<br />
<br />
<math>y_{1,1}+y_{1,2}+y_{1,3} = 1</math><br />
<br />
<math>y_{1,1}+y_{1,2}=y_{2,1}+y_{2,2}</math><br />
<br />
<math>y_{1,3}=y_{2,3}</math><br />
<br />
<math>y_{2,1}=y_{3,1}</math><br />
<br />
<math>y_{2,2}+y_{2,3}=y_{3,2}+y_{3,3}</math><br />
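These logic relations can be verified directly. The sketch below checks the two technology selections reported in the numerical results further down (A1-B2-C3 and A2-B2-C2) against the five equations, plus one invalid pathway for contrast:<br />

```python
def logic_ok(y):
    """Check a binary selection y[(i, j)] against the five logic constraints."""
    return (y[1, 1] + y[1, 2] + y[1, 3] == 1
            and y[1, 1] + y[1, 2] == y[2, 1] + y[2, 2]
            and y[1, 3] == y[2, 3]
            and y[2, 1] == y[3, 1]
            and y[2, 2] + y[2, 3] == y[3, 2] + y[3, 3])

def selection(*chosen):
    """Build y from the chosen (stage, option) pairs; all others are 0."""
    return {(i, j): int((i, j) in chosen) for i in (1, 2, 3) for j in (1, 2, 3)}

print(logic_ok(selection((1, 1), (2, 2), (3, 3))))  # A1-B2-C3 -> True
print(logic_ok(selection((1, 2), (2, 2), (3, 2))))  # A2-B2-C2 -> True
print(logic_ok(selection((1, 1), (2, 3), (3, 1))))  # disconnected pathway -> False
```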
<br />
===== Economic Evaluation Constraints =====<br />
<br />
We consider the fixed capital cost (<math>fix</math>), variable capital cost (<math>vai</math>), operating cost (<math>oper</math>), and feedstock cost (<math>fec</math>) as the expenses for the chemical processing system. <br />
<br />
<math>fix=\sum_{i}{\sum_{j}FIXI_{i,j}*y_{i,j}}</math><br />
<br />
<math>vai=\sum_{i}{\sum_{j}CAV_{i,j}*ca_{i,j}}</math><br />
<br />
<math>oper=\sum_{i}{\sum_{j}OPERI_{i,j}*mo_{i,j}}</math><br />
<br />
<math>fec=PRID*\sum_{j}mi_{1,j}</math><br />
<br />
<math>sale=PRI*\sum_{j}mo_{3,j}</math><br />
<br />
The net present value is calculated in the constraint below (<math>npv</math>), where we account for the total discounted cash flow and <math>SP</math> denotes the lifespan of the project. <br />
<br />
<math>npv={DR*(1+DR)^{SP} \over (1+DR)^{SP}-1}*(sale-(vai+oper+fec))-fix</math><br />
<br />
===== Environmental Evaluation Constraint =====<br />
<br />
The total GHG emissions from the chemical processing system are calculated in the constraint below.<br />
<math>ghgt=\sum_{i}{\sum_{j}GHG_{i,j}*mi_{i,j}}</math><br />
<br />
===== Objective Functions =====<br />
<br />
Two numerical examples are presented, each optimizing one of the fractional objective functions shown below. Since all constraints are linear relationships among the continuous and discrete variables and each objective is a ratio of linear functions, both problems can be regarded as MILFPs. We consider maximizing the unit NPV (<math>obje</math>) or minimizing the unit [[wikipedia:global warming potential|global warming potential (GWP)]] (<math>objc</math>) in the two numerical examples, respectively. <br />
<br />
<math>obje={npv \over \sum_{j}mo_{3,j}}</math><br />
<br />
<math>objc={ghgt \over \sum_{j}mo_{3,j}}</math><br />
<br />
==Solution for Numerical Examples==<br />
<br />
===Maximizing Unit NPV===<br />
<br />
We consider the first objective function (<math>obje</math>) and all constraints in the mathematical model, and reformulate the objective function into a parametric form (<math>obj_1</math>) using the parameter <math>q_1</math>:<br />
<br />
<math>\max \quad\ obj_1={npv-q_1*\sum_{j}mo_{3,j}}</math><br />
<br />
<math>s.t.\quad\ Mass \ \ Balance\ \ Constraints, Superstructure\ \ Configuration\ \ Constraints, Economic\ \ Evaluation\ \ Constraints</math><br />
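The parametric reformulation is solved repeatedly, updating <math>q_1</math> to the ratio attained by the current solution until the optimal parametric objective reaches zero (a Dinkelbach-type iteration). The sketch below illustrates the update rule on a toy discrete design set; the NPV and production numbers are invented for illustration, and in the actual case study the inner maximization is the MILP above, solved by CPLEX.<br />

```python
# Dinkelbach-type parametric iteration on a toy fractional problem:
# maximize npv(x) / production(x) over a small discrete design set.
# All numbers below are hypothetical.
designs = {"design1": (90.0, 10.0),   # (npv, production)
           "design2": (160.0, 20.0),
           "design3": (200.0, 40.0)}

q = 0.0                                # initial guess for the ratio
for _ in range(50):
    # Inner problem: maximize npv - q * production (a MILP in the case study).
    best = max(designs, key=lambda d: designs[d][0] - q * designs[d][1])
    npv, prod = designs[best]
    if abs(npv - q * prod) < 1e-9:     # parametric objective is zero: optimal
        break
    q = npv / prod                     # Dinkelbach update of the parameter

print(best, q)  # design1 9.0 (the design with the largest npv/production ratio)
```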
<br />
This reformulated model can be solved by the [[wikipedia:CPLEX|CPLEX]] iteratively, and the solution is shown as follows:<br />
{| class="wikitable"<br />
|+ Process to be built<br />
|-<br />
! Level 1 !! Level 2 !! Level 3<br />
|-<br />
| A1 || - || -<br />
|-<br />
| - || B2 || -<br />
|-<br />
| - || - || C3<br />
|}<br />
<br />
{| class="wikitable"<br />
|+ Performance<br />
|-<br />
! Production Amount (ton/yr) !! Unit NPV ($/ton) !! Unit GHG Emissions (ton CO<sub>2</sub>-eq/ton products)<br />
|-<br />
| 768,000 || 187.75 || 10.18<br />
|}<br />
<br />
===Minimizing Unit GHG Emissions===<br />
<br />
We consider the second objective function (<math>objc</math>) and all constraints in the mathematical model, and reformulate the objective function into a parametric form (<math>obj_2</math>) using the parameter <math>q_2</math>:<br />
<br />
<math>\min \quad\ obj_2={ghgt-q_2*\sum_{j}mo_{3,j}}</math><br />
<br />
<math>s.t.\quad\ Mass\ \ Balance\ \ Constraints, Superstructure\ \ Configuration\ \ Constraints, Environmental\ \ Evaluation\ \ Constraints</math><br />
<br />
This reformulated model can be solved iteratively with [[wikipedia:CPLEX|CPLEX]], and the solution is shown as follows:<br />
{| class="wikitable"<br />
|+ Process to be built<br />
|-<br />
! Level 1 !! Level 2 !! Level 3<br />
|-<br />
| - || - || -<br />
|-<br />
| A2 || B2 || C2<br />
|-<br />
| - || - || -<br />
|}<br />
<br />
{| class="wikitable"<br />
|+ Performance<br />
|-<br />
! Production Amount (ton/yr) !! Unit NPV ($/ton) !! Unit GHG Emissions (ton CO<sub>2</sub>-eq/ton products)<br />
|-<br />
| 768,000 || 186.23 || 9.68<br />
|}<br />
<br />
===Computational performance===<br />
The computational performance of the branch-and-refine algorithm and of BARON is shown in the table below; the former has a clear advantage over the latter. Both algorithms reach the same optimal solution, which confirms the global optimality of the branch-and-refine solution.<br />
{| class="wikitable"<br />
|+ Computational Performance for maximizing unit NPV<br />
|-<br />
! CPUs for branch-and-refine (s) !! CPUs for BARON (s)<br />
|-<br />
| 0.125 || 98.2 <br />
|}<br />
<br />
{| class="wikitable"<br />
|+ Performance for BARON algorithm<br />
|-<br />
! Production Amount (ton/yr) !! Unit NPV ($/ton) !! Unit GHG Emissions (ton CO<sub>2</sub>-eq/ton products)<br />
|-<br />
| 768,000 || 187.75 || 10.18<br />
|}<br />
<br />
==Conclusion==<br />
Mixed-integer linear fractional programming (MILFP) is a class of mixed-integer nonlinear programming (MINLP) used to optimize the average (per-unit) performance of a system. The parametric algorithm, the reformulation-linearization method, and branch-and-bound with the Charnes-Cooper transformation are three typical algorithms that tackle the computational challenge caused by the fractional objective. The optimization framework can be applied to chemical engineering, environmental engineering, and combined areas such as life-cycle optimization.<br />
<br />
==References==<br />
# Liu, S., Gerontas, S., Gruber, D., Turner, R., Titchener‐Hooker, N. J., & Papageorgiou, L. G. (2017). Optimization‐based framework for resin selection strategies in biopharmaceutical purification process development. Biotechnology Progress, 33(4), 1116-1126.<br />
# Zhu, H. (2014). Inexact fractional optimization for multicriteria resources and environmental management under uncertainty (Doctoral dissertation, Faculty of Graduate Studies and Research, University of Regina).<br />
# Gao, J., & You, F. (2017). Economic and environmental life cycle optimization of noncooperative supply chains and product systems: modeling framework, mixed-integer bilevel fractional programming algorithm, and shale gas application. ACS Sustainable Chemistry & Engineering, 5(4), 3362-3381.<br />
# Zhong, Z., & You, F. (2014). Globally convergent exact and inexact parametric algorithms for solving large-scale mixed-integer fractional programs and applications in process systems engineering. Computers & Chemical Engineering, 61, 90-101.<br />
# You, F., Castro, P. M., & Grossmann, I. E. (2009). Dinkelbach's algorithm as an efficient method to solve a class of MINLP models for large-scale cyclic scheduling problems. Computers & Chemical Engineering, 33(11), 1879-1889.<br />
# Quesada, I., & Grossmann, I. E. (1995). A global optimization algorithm for linear fractional and bilinear programs. Journal of Global Optimization, 6(1), 39-76.<br />
# Zhong, Z., & You, F. (2014, June). Parametric algorithms for global optimization of mixed-integer fractional programming problems in process engineering. In 2014 American Control Conference (pp. 3609-3614). IEEE.<br />
# Charnes, A., & Cooper, W. W. (1973). An explicit general solution in linear fractional programming. Naval Research Logistics Quarterly, 20(3), 449-467.<br />
# Gao, J., & You, F. (2015). Optimal design and operations of supply chain networks for water management in shale gas production: MILFP model and algorithms for the water‐energy nexus. AIChE Journal, 61(4), 1184-1208.<br />
# Gong, J., & You, F. (2017). Consequential life cycle optimization: general conceptual framework and application to algal renewable diesel production. ACS Sustainable Chemistry & Engineering, 5(7), 5887-5911.<br />
# Yue, D., & You, F. (2013). Sustainable scheduling of batch processes under economic and environmental criteria with MINLP models and algorithms. Computers & Chemical Engineering, 54, 44-59.<br />
# Gao, J., & You, F. (2018). Integrated hybrid life cycle assessment and optimization of shale gas. ACS Sustainable Chemistry & Engineering, 6(2), 1803-1824.<br />
# Gong, J., & You, F. (2018). A new superstructure optimization paradigm for process synthesis with product distribution optimization: Application to an integrated shale gas processing and chemical manufacturing process. AIChE Journal, 64(1), 123-143.</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Branch_and_cut&diff=2735Branch and cut2020-12-21T11:38:14Z<p>Wc593: </p>
<hr />
<div>Author: Lindsay Siegmundt, Peter Haddad, Chris Babbington, Jon Boisvert, Haris Shaikh (SysEn 5800 Fall 2020)<br />
<br />
== Introduction ==<br />
The Branch and Cut methodology was developed in the 1990s to solve Mixed-Integer Linear Programs (Karamanov, Miroslav)<ref>Karamanov, Miroslav. “Branch and Cut: An Empirical Study.” ''Carnegie Mellon University'' , Sept. 2006, https://www.cmu.edu/tepper/programs/phd/program/assets/dissertations/2006-operations-research-karamanov-dissertation.pdf.</ref>. It combines two well-known optimization methodologies: Branch and Bound and Cutting Planes. Relaxing the integrality requirements simplifies the problem so that it can be solved more easily, and the resulting relaxation provides a bound on the objective (for a maximization problem, an upper bound on the highest value the objective can attain). The optimal solution is found when a feasible integer solution matches this bound (Luedtke, Jim)<ref>Luedtke, Jim. “The Branch-and-Cut Algorithm for Solving Mixed-Integer Optimization Problems.” ''Institute for Mathematicians and Its Applications'', 10 Aug. 2016, https://www.ima.umn.edu/materials/2015-2016/ND8.1-12.16/25397/Luedtke-mip-bnc-forms.pdf.</ref>. The methodology shows how the key components of different techniques can be combined to reach optimality in a simpler and more direct manner. <br />
<br />
== Methodology & Algorithm ==<br />
<br />
=== Methodology ===<br />
{| class="wikitable"<br />
|+Abbreviation Details<br />
!Acronym<br />
!Expansion<br />
|-<br />
|LP<br />
|Linear Programming<br />
|-<br />
|B&B<br />
|Branch and Bound<br />
|}<br />
<br />
==== Most Infeasible Branching: ====<br />
Most infeasible branching is a popular rule that picks the variable whose fractional part is closest to <math>0.5</math>, i.e., the variable maximizing the score <math>s_i = 0.5-\left|\hat{x}_i-\lfloor \hat{x}_i \rfloor-0.5\right|</math><ref>Achterberg, Tobias, Thorsten Koch, and Alexander Martin. Branching Rules Revisited. https://www-m9.ma.tum.de/downloads/felix-klein/20B/AchterbergKochMartin-BranchingRulesRevisited.pdf</ref>. It thus branches on the variable for which there is the least indication of the side to which it should be rounded. In practice, however, this rule performs no better than selecting a branching variable at random.<br />
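In code, the rule reduces to scoring each variable by how close its fractional part is to 0.5 and branching on the highest score. A minimal sketch (the sample values happen to echo the LP relaxation solution in the numerical example below, but any fractional point works):<br />

```python
import math

def most_infeasible(x):
    """Return the index of the variable whose fractional part is closest to 0.5,
    or None if all entries are (numerically) integral."""
    scores = [0.5 - abs(xi - math.floor(xi) - 0.5) for xi in x]
    best = max(range(len(x)), key=lambda i: scores[i])
    return best if scores[best] > 1e-6 else None

print(most_infeasible([1.88, 1.72]))  # 1 -> branch on x2 (fractional part 0.72)
print(most_infeasible([2.0, 1.0]))    # None -> integral, nothing to branch on
```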
<br />
==== '''Strong Branching:''' ====<br />
For each fractional variable, strong branching estimates the increase in the dual bound by solving the LP relaxations that result from branching on that variable. The variable that leads to the largest increase is selected as the branching variable for the current node. Despite its simplicity, strong branching is so far the most powerful branching technique in terms of the number of nodes in the B&B tree; this effectiveness, however, comes at a substantial computational cost.<ref>A Branch-and-Cut Algorithm for Mixed Integer Bilevel Linear Optimization Problems and Its Implementation<nowiki/>https://coral.ise.lehigh.edu/~ted/files/papers/MIBLP16.pdf</ref><br />
<br />
==== '''Pseudo Cost:''' ====<br />
[[File:Image.png|thumb|Pure pseudo-cost branching]]<br />
<br />
Another way to approximate the effect of branching is the pseudo-cost method. The pseudo-cost of a variable is an estimate of the per-unit change in the objective function from rounding the value of that variable up or down. Among the fractional variables, the one with the largest estimated LP objective gain is chosen for branching<ref>Advances in Mixed Integer Programming http://scip.zib.de/download/slides/SCIP-branching.ppt</ref>. <br />
==='''Algorithm'''===<br />
Branch and Cut is a variation of the Branch and Bound algorithm that incorporates Gomory cuts to tighten the search space of the given problem. The standard simplex algorithm is used to solve each linear programming (LP) relaxation.<br />
<br />
<br />
<math>min: c^tx<br />
</math><br />
<br />
<math>s.t.\ Ax \leq b<br />
</math><br />
<br />
<math>x \geq 0<br />
</math><br />
<br />
<math>x_i \in \mathbb{Z},\ i = 1,2,\dots,n<br />
</math><br />
<br />
Above is a mixed-integer linear programming problem, where <math>x</math> and <math>c</math> are <math>n</math>-vectors. Restricting variables to the values 0 and 1 yields binary variables. The above problem can be denoted as <math>LP_n </math><br />
<br />
Below is an algorithm applying Branch and Cut with Gomory cuts and partitioning:<br />
<br />
'''Step 0. Initialize bounds:'''
 Upper Bound = ∞
 Lower Bound = -∞
'''Step 1. Initialize:'''<br />
<br />
Set the first node as <math>LP_0</math> and initialize the set of active nodes as <math>L</math>. Nodes in the set can be accessed via <math>LP_n </math><br />
<br />
'''Step 2. Terminate:'''<br />
<br />
If <math>L</math> is empty, stop: the incumbent solution with value <math>Z</math> is optimal (if no incumbent exists, the problem is infeasible).<br />
<br />
'''Step 3. Select a node:'''<br />
<br />
While <math>L</math> is not empty (with <math>l</math> the index of the selected node), remove a node <math>LP_l</math> from <math>L</math> and proceed:<br />
<br />
'''Step 3.1. Convert to a relaxation:'''<br />
<br />
Form the LP relaxation of <math>LP_l</math> by dropping the integrality restrictions.<br />
<br />
'''Step 3.2. Solve:'''<br />
<br />
Solve the relaxed problem, obtaining (if it is feasible) the solution <math>x^l</math> with objective value <math>Z^l</math>.<br />
<br />
'''Step 3.3. Check feasibility:'''<br />
 If the relaxation is infeasible:
    Return to step 3.
 else:
    Continue with solution <math>Z^l</math>.
'''Step 4. Cutting planes:'''<br />
 If a violated cutting plane (e.g., a Gomory cut) is found:
    Add it to the LP relaxation (as a constraint) and return to step 3.2.
 Else:
    Continue.
'''Step 5. Pruning and fathoming:'''<br />
<br />
(a) If <math>Z^l \geq Z</math>, prune the node and go to step 3.<br />
<br />
(b) If <math>Z^l < Z</math> and <math>x^l</math> is integral feasible, set the incumbent <math>Z = Z^l</math>, remove from <math>L</math> all nodes whose bound is no better than <math>Z</math>, and go to step 3.<br />
<br />
'''Step 6. Partition:'''<br />
<br />
Let <math>\{D^l_j\}_{j=1}^{k}</math> be a partition of the constraint set <math>D^l</math> of problem <math>LP_l</math>. Add problems <math>\{LP^l_j\}_{j=1}^{k}</math> to <math>L</math>, where <math>LP^l_j</math> is <math>LP_l</math> with its feasible region restricted to <math>D^l_j</math>, and set <math>Z_{lj}</math> for <math>j=1,\dots,k</math> to the value of <math>Z^l</math> of the parent problem <math>l</math>. Go to step 3.<ref name=":0">Benders, J. F. (Sept. 1962), "Partitioning procedures for solving mixed-variables programming problems", Numerische Mathematik 4(3): 238–252.</ref><br />
<br />
==Numerical Example==<br />
First, list out the MILP:<br />
<br />
<math>min \ z=-4x_1-7x_2</math><br />
<br />
<math>6x_1 + x_2 \leq13</math><br />
<br />
<math>-x_1+4x_2\leq5</math><br />
<br />
<math>x_1,x_2\geq0</math><br />
<br />
Solution to original LP<br />
<br />
<math>z =-19.56, x_1=1.88, x_2=1.72 </math><br />
<br />
<br />
Branch on x<sub>1</sub> to generate sub-problems<br />
<br />
<math>min \ z=-4x_1-7x_2</math><br />
<br />
<math>6x_1 + x_2 \leq13</math><br />
<br />
<math>-x_1+4x_2\leq5</math><br />
<br />
<math>x_1\geq2</math><br />
<br />
<math>x_1,x_2\geq0</math><br />
<br />
Solution to first branch sub-problem<br />
<br />
<math>z =-15, x_1=2, x_2=1</math><br />
<br />
<math>min \ z=-4x_1-7x_2</math><br />
<br />
<math>6x_1 + x_2 \leq13</math><br />
<br />
<math>-x_1+4x_2\leq5</math><br />
<br />
<math>x_1\leq1</math><br />
<br />
<math>x_1,x_2\geq0</math><br />
<br />
Solution to second branch sub-problem<br />
<br />
<math>z =-14.5, x_1=1, x_2=1.5</math><br />
<br />
Adding a cut<br />
<br />
<math>min \ z=-4x_1-7x_2</math><br />
<br />
<math>6x_1 + x_2 \leq13</math><br />
<br />
<math>-x_1+4x_2\leq5</math><br />
<br />
<math>2x_1+x_2\leq 3</math><br />
<br />
<math>x_1\leq1</math><br />
<br />
<math>x_1,x_2\geq0</math><br />
<br />
Solution to cut LP<br />
<br />
<math>z=-13.222,\ x_1=0.778,\ x_2=1.444</math><br />
<br />
Since this bound of <math>-13.222</math> is worse than the incumbent integer solution <math>z=-15</math> from the first branch sub-problem, the second branch is pruned, and the optimal solution is <math>z=-15</math> with <math>x_1=2,\ x_2=1</math>.<br />
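Because the example is tiny, the integer optimum found in the first branch can be confirmed by brute-force enumeration of the integer feasible points (practical only at this scale; the point of Branch and Cut is to avoid such enumeration):<br />

```python
# Enumerate integer points satisfying 6*x1 + x2 <= 13 and -x1 + 4*x2 <= 5,
# and pick the one minimizing z = -4*x1 - 7*x2.
best = min(
    ((x1, x2) for x1 in range(3) for x2 in range(14)
     if 6 * x1 + x2 <= 13 and -x1 + 4 * x2 <= 5),
    key=lambda p: -4 * p[0] - 7 * p[1],
)
print(best, -4 * best[0] - 7 * best[1])  # (2, 1) -15
```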
<br />
==Application==<br />
Several applications of Branch and Cut are described below in more detail, along with how they can be used. These applications illustrate how Branch and Cut can optimize various problems efficiently.<br />
<br />
=== '''Combinatorial Optimization''' ===<br />
Combinatorial optimization is a natural application for Branch and Cut. This style of optimization seeks the best solution from a finite set of candidates by exploiting the known structure of that set. The approach was originally applied to maximum-flow and transportation problems (Maltby and Ross), and it has since become an imperative component of artificial intelligence and machine learning algorithms. The finite structures that combinatorial optimization tends to utilize include graphs, partially ordered sets, and structures that capture linear independence, called matroids.<ref>[https://brilliant.org/wiki/combinatorial-optimization/ Maltby, Henry, and Eli Ross. “Combinatorial Optimization.” ''Brilliant Math & Science Wiki'', https://brilliant.org/wiki/combinatorial-optimization/.]</ref><br />
<br />
=== '''Bender’s Decomposition''' ===<br />
Bender’s Decomposition is another application associated with Branch and Cut that is widely used in stochastic programming. In Bender’s Decomposition, the original problem is divided into two distinct subsets so that each can be solved more easily than the original instance (Benders). The first subproblem is solved for the first set of variables, and the second subproblem is then solved given that solution; solving the second subproblem reveals whether the first-stage solution is infeasible (Benders). Bender’s cuts can be added to constrain the problem until a feasible solution is found.<ref name=":0" /><br />
<br />
=== '''Large-Scale Symmetric Traveling Salesmen Problem''' ===<br />
The large-scale symmetric traveling salesman problem seeks the shortest route that visits each city exactly once and returns to the starting city. On a large scale, this problem must be broken down into subsets, or nodes (SIAM). By constraining the problem, as in the methods of combinatorial optimization, the traveling salesman problem can be viewed in terms of partially ordered sets. Doing so at large scale, with a finite set of cities, makes it possible to optimize the shortest path while ensuring each city is visited only once.<ref>Society for Industrial and Applied Mathematics. “SIAM Rev.” ''SIAM Review'', 18 July 2006, https://epubs.siam.org/doi/10.1137/1033004</ref><br />
<br />
=== '''Submodular Function''' ===<br />
The submodular function is another construct used throughout artificial intelligence and machine learning. A submodular function exhibits diminishing returns: as more inputs are added, the marginal gain in the output decreases. This property makes it well suited to the settings above, where the set of inputs continually grows, allowing machine learning and artificial intelligence systems to keep improving as new inputs arrive (Tschiatschek, Iyer, and Bilmes)<ref>S. Tschiatschek, R. Iyer, H. Wei and J. Bilmes, Learning Mixtures of Submodular Functions for Image Collection Summarization, NIPS-2014.</ref>. By feeding new inputs to the system, the system learns more and more, helping ensure it optimizes the decisions to be made.<ref>A. Krause and C. Guestrin, Beyond Convexity: Submodularity in Machine Learning, Tutorial at ICML-2008</ref><br />
<br />
==Conclusion==<br />
Branch and Cut is an optimization algorithm used for integer linear programming. It combines two other optimization algorithms, branch and bound and cutting planes, using the results of each to reach the optimal solution. Three branching rules are commonly used within the method: most infeasible branching, strong branching, and pseudo-cost branching. Furthermore, Branch and Cut can be utilized in multiple scenarios, including submodular function optimization, the large-scale symmetric traveling salesman problem, Bender's decomposition, and combinatorial optimization, which increases the impact of the methodology. <br />
<br />
==References==<br />
<references /></div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Heuristic_algorithms&diff=2734Heuristic algorithms2020-12-21T11:37:33Z<p>Wc593: </p>
<hr />
<div>Author: Anmol Singh (as2753) (ChemE 6800 Fall 2020)<br />
<br />
== Introduction ==<br />
In mathematical programming, a heuristic algorithm is a procedure that determines near-optimal solutions to an optimization problem. This is achieved by trading optimality, completeness, accuracy, or precision for speed.<ref> Eiselt, Horst A et al. Integer Programming and Network Models. Springer, 2011.</ref> Nevertheless, heuristics are widely used for a variety of reasons:<br />
<br />
*Problems that do not have an exact solution or for which the formulation is unknown<br />
*Problems for which computing an exact solution is computationally intensive<br />
*Calculation of bounds on the optimal solution in branch and bound solution processes<br />
==Methodology==<br />
Optimization heuristics can be categorized into two broad classes depending on the way the solution domain is organized:<br />
<br />
===Construction methods (Greedy algorithms)===<br />
The greedy algorithm works in phases, where the algorithm makes the optimal choice at each step as it attempts to find the overall optimal way to solve the entire problem.<ref><br />
''Introduction to Algorithms'' (Cormen, Leiserson, Rivest, and Stein) 2001, Chapter 16 "Greedy Algorithms".</ref> It is a technique used to solve the famous “traveling salesman problem” where the heuristic followed is: "At each step of the journey, visit the nearest unvisited city." <br />
<br />
====Example: Scheduling Problem====<br />
You are given a set of N lecture schedules for a single day at a university. The schedule for a specific lecture is of the form (s_time, f_time), where s_time represents the start time of that lecture and f_time its finishing time. Given the list of N lecture schedules, we need to select a maximum set of lectures to be held during the day such that none of the lectures overlap, i.e., if lectures L<sub>i</sub> and L<sub>j</sub> are both included in our selection, then the start time of L<sub>j</sub> ≥ the finish time of L<sub>i</sub>, or vice versa. The optimal approach is to consider the earliest finishing time first: sort the intervals in increasing order of their finishing times and then select compatible intervals from the very beginning. <br />
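The earliest-finish-time strategy can be sketched as follows; the lecture times are made up for illustration:<br />

```python
# Greedy interval scheduling: sort by finish time, then take each lecture
# that starts no earlier than the previously selected lecture ends.
def max_lectures(schedules):
    selected, last_finish = [], float("-inf")
    for start, finish in sorted(schedules, key=lambda s: s[1]):
        if start >= last_finish:
            selected.append((start, finish))
            last_finish = finish
    return selected

lectures = [(1, 3), (2, 5), (4, 7), (6, 8)]  # hypothetical (s_time, f_time) pairs
print(max_lectures(lectures))  # [(1, 3), (4, 7)]
```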
<br />
===Local Search methods===<br />
The Local Search method follows an iterative approach where we start with some initial solution, explore the neighborhood of the current solution, and then replace the current solution with a better solution.<ref> Eiselt, Horst A et al. Integer Programming and Network Models. Springer, 2011.</ref> For this method, the “traveling salesman problem” would follow the heuristic in which a solution is a cycle containing all nodes of the graph and the target is to minimize the total length of the cycle.<br />
<br />
==== Example Problem ====<br />
Suppose that the problem P is to find an optimal ordering of N jobs in a manufacturing system. A solution to this problem can be described as an N-vector of job numbers, in which the position of each job in the vector defines the order in which the job will be processed. For example, [3, 4, 1, 6, 5, 2] is a possible ordering of 6 jobs, where job 3 is processed first, followed by job 4, then job 1, and so on, finishing with job 2. Define now M as the set of moves that produce new orderings by the swapping of any two jobs. For example, [3, 1, 4, 6, 5, 2] is obtained by swapping the positions of jobs 4 and 1.<br />
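Continuing the job-ordering example, a steepest-descent local search over the swap neighborhood M can be sketched as follows. The cost function (total weighted completion time) and the job data are hypothetical additions so the search has something to minimize:<br />

```python
from itertools import combinations

# Hypothetical job data: processing times p and weights w.
p = [3, 1, 2]
w = [1, 4, 2]

def cost(order):
    """Total weighted completion time of an ordering of job indices."""
    t, total = 0, 0
    for j in order:
        t += p[j]
        total += w[j] * t
    return total

order = [0, 1, 2]                      # initial solution
while True:
    # Neighborhood M: all orderings obtained by swapping two jobs.
    neighbors = []
    for a, b in combinations(range(len(order)), 2):
        n = order[:]
        n[a], n[b] = n[b], n[a]
        neighbors.append(n)
    best = min(neighbors, key=cost)
    if cost(best) >= cost(order):      # no improving neighbor: local optimum
        break
    order = best

print(order, cost(order))  # [1, 2, 0] 16
```

For this instance the swap-based descent reaches the globally optimal ordering (jobs sorted by weight-to-processing-time ratio); in general, local search may stop at a local optimum.<br />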
==Popular Heuristic Algorithms==<br />
<br />
===Genetic Algorithm===<br />
The term Genetic Algorithm was first used by John Holland.<ref>J.H. Holland (1975) ''Adaptation in Natural and Artificial Systems,'' University of Michigan Press, Ann Arbor, Michigan; re-issued by MIT Press (1992).</ref> Genetic algorithms are designed to mimic the Darwinian theory of evolution, which states that populations of species evolve to produce organisms that are more complex and fitter for survival on Earth. Genetic algorithms operate on string structures, like biological structures, which evolve in time according to the rule of survival of the fittest through a randomized yet structured information exchange. Thus, in every generation, a new set of strings is created using parts of the fittest members of the old set.<ref>Optimal design of heat exchanger networks, Editor(s): Wilfried Roetzel, Xing Luo, Dezhen Chen, Design and Operation of Heat Exchangers and their Networks, Academic Press, 2020, Pages 231-317, <nowiki>ISBN 9780128178942</nowiki>, https://doi.org/10.1016/B978-0-12-817894-2.00006-6.</ref> The algorithm terminates when a satisfactory fitness level has been reached for the population or the maximum number of generations has been reached. The typical steps are<ref>Wang FS., Chen LH. (2013) Genetic Algorithms. In: Dubitzky W., Wolkenhauer O., Cho KH., Yokota H. (eds) Encyclopedia of Systems Biology. Springer, New York, NY. https://doi.org/10.1007/978-1-4419-9863-7_412 </ref>:<br />
<br />
1. Choose an initial population of candidate solutions<br />
<br />
2. Calculate the fitness (how good the solution is) of each individual<br />
<br />
3. Perform crossover on the population: randomly choose pairs of individuals as parents and exchange parts between the parents to generate new individuals<br />
<br />
4. Perform mutation: randomly change some individuals to create other new individuals<br />
<br />
5. Evaluate the fitness of the offspring<br />
<br />
6. Select the surviving individuals<br />
<br />
7. Return to step 3 if the termination criteria have not been reached<br />
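The steps above can be sketched for a toy problem, maximizing the number of ones in a bit string ("one-max"). The population size, mutation rate, crossover scheme, and generation count are illustrative choices, not prescribed by the method:<br />

```python
import random

random.seed(0)
N, POP, GENS = 12, 20, 60

fitness = sum                          # one-max fitness: count of 1-bits

def crossover(a, b):
    cut = random.randrange(1, N)       # single-point crossover
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.05):
    return [1 - g if random.random() < rate else g for g in ind]

# Step 1: initial population of random bit strings.
pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
for _ in range(GENS):
    # Steps 3-5: crossover of random parents, mutation, evaluation.
    children = [mutate(crossover(*random.sample(pop, 2))) for _ in range(POP)]
    # Step 6: survival of the fittest among parents and offspring.
    pop = sorted(pop + children, key=fitness, reverse=True)[:POP]

best = max(pop, key=fitness)
print(fitness(best))  # best fitness found (at most 12)
```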
<br />
===Tabu Search Algorithm===<br />
Tabu search (TS) is a heuristic algorithm created by Fred Glover<ref>Fred Glover (1986). "Future Paths for Integer Programming and Links to Artificial Intelligence". Computers and Operations Research. '''13''' (5): 533–549,https://doi.org/10.1016/0305-0548(86)90048-1</ref> that combines a descent-based local search with memory structures to avoid cycling while determining an optimal solution. It does so by forbidding or penalizing moves that take the solution, in the next iteration, to points in the solution space previously visited. The algorithm uses some memory to keep a tabu list of forbidden moves, which are the moves of the previous iterations or moves that might be considered unwanted. A general algorithm is as follows<ref>Optimization of Preventive Maintenance Program for Imaging Equipment in Hospitals, Editor(s): Zdravko Kravanja, Miloš Bogataj, Computer-Aided Chemical Engineering, Elsevier, Volume 38, 2016, Pages 1833-1838, ISSN 1570-7946, <nowiki>ISBN 9780444634283</nowiki>, https://doi.org/10.1016/B978-0-444-63428-3.50310-6.</ref>: <br />
<br />
1. Select an initial solution ''s''<sub>0</sub> ∈ ''S''. Initialize the tabu list ''L''<sub>0</sub> = ∅ and choose a tabu list size. Set ''k'' = 0.<br />
<br />
2. Determine the feasible neighborhood ''N''(''s<sub>k</sub>''), excluding the members of the tabu list ''L<sub>k</sub>''.<br />
<br />
3. Select the next solution ''s<sub>k</sub>'' <sub>+ 1</sub> from ''N''(''s<sub>k</sub>'') (or from ''L<sub>k</sub>'' if it yields a better solution than any found so far) and update the tabu list ''L<sub>k</sub>'' <sub>+ 1</sub>.<br />
<br />
4. Stop if a termination condition is reached; otherwise, set ''k'' = ''k'' + 1 and return to 2.<br />
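A minimal, deterministic sketch of these steps, minimizing the toy function f(x) = (x − 3)² over the integers with moves x ± 1; the objective, move set, and tabu tenure are illustrative choices:<br />

```python
def f(x):
    return (x - 3) ** 2

current, best = 0, 0
tabu, tenure = [0], 3                  # tabu list: recently visited points
for _ in range(10):
    # neighborhood N(s_k), excluding members of the tabu list
    candidates = [n for n in (current - 1, current + 1) if n not in tabu]
    if not candidates:
        break
    current = min(candidates, key=f)   # best admissible move (may be uphill)
    if f(current) < f(best):
        best = current                 # update the incumbent
    tabu.append(current)
    tabu = tabu[-tenure:]              # keep only the most recent points

print(best, f(best))  # 3 0
```

Note that after reaching the minimizer the search keeps moving (the tabu list forbids going straight back), which is exactly the mechanism that lets TS escape local optima; the incumbent records the best point found.<br />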
<br />
==== Example: The Classical Vehicle Routing Problem ====<br />
''Vehicle Routing Problems'' have very important applications in distribution management and have become some of the most studied problems in the combinatorial optimization literature. These include several Tabu Search implementations that currently rank among the most effective. The ''Classical Vehicle Routing Problem'' (CVRP) is the basic variant in that class of problems. It can formally be defined as follows. Let ''G'' = (''V, A'') be a graph, where ''V'' is the vertex set and ''A'' is the arc set. One of the vertices represents the ''depot'', at which a fleet of identical vehicles of capacity ''Q'' is based, and the other vertices represent customers that need to be serviced. With each customer vertex v<sub>i</sub> are associated a demand q<sub>i</sub> and a service time t<sub>i</sub>. With each arc (v<sub>i</sub>, v<sub>j</sub>) of ''A'' are associated a cost c<sub>ij</sub> and a travel time t<sub>ij</sub>.<ref>Glover, Fred, and Gary A Kochenberger. Handbook Of Metaheuristics. Kluwer Academic Publishers, 2003.</ref> The CVRP consists of finding a set of routes such that:<br />
<br />
1. Each route begins and ends at the depot<br />
<br />
2. Each customer is visited exactly once by exactly one route<br />
<br />
3. The total demand of the customers assigned to each route does not exceed ''Q''<br />
<br />
4. The total duration of each route (including travel and service times) does not exceed a specified value ''L''<br />
<br />
5. The total cost of the routes is minimized<br />
<br />
A feasible solution for the problem thus consists of a partition of the customers into m groups, each of total demand no larger than ''Q'', that are sequenced to yield routes (starting and ending at the depot) of duration no larger than ''L''.<br />
<br />
===Simulated Annealing Algorithm===<br />
The Simulated Annealing Algorithm was developed by Kirkpatrick et al. in 1983<ref>Kirkpatrick, S., Gelatt, C., & Vecchi, M. (1983). Optimization by Simulated Annealing. ''Science,'' ''220''(4598), 671-680. Retrieved November 25, 2020, from http://www.jstor.org/stable/1690046</ref> and is based on the analogy of ideal crystals in thermodynamics. The annealing process in metallurgy can make particles arrange themselves in positions of minimal potential energy as the temperature is slowly decreased. The Simulated Annealing algorithm mimics this mechanism and uses the objective function of an optimization problem instead of the energy of a material to arrive at a solution. A general algorithm is as follows<ref>Brief review of static optimization methods, Editor(s): Stanisław Sieniutycz, Jacek Jeżowski, Energy Optimization in Process Systems and Fuel Cells (Third Edition), Elsevier, 2018, Pages 1-41, <nowiki>ISBN 9780081025574</nowiki>, https://doi.org/10.1016/B978-0-08-102557-4.00001-3.</ref> :<br />
<br />
1. Fix initial temperature (''T''<sup>0</sup>)<br />
<br />
2. Generate starting point '''x'''<sup>0</sup> (this is the best point '''''X'''''<sup>*</sup> at present)<br />
<br />
3. Generate randomly point '''''X<sup>S</sup>''''' (neighboring point)<br />
<br />
4. Accept '''''X<sup>S</sup>''''' as '''''X'''''<sup>*</sup> (currently best solution) if an acceptance criterion is met. This must be such a condition that the probability of accepting a worse point is greater than zero, particularly at higher temperatures<br />
<br />
5. If an equilibrium condition is satisfied, go to (6), otherwise jump back to (3).<br />
<br />
6. If termination conditions are not met, decrease the temperature according to a certain cooling scheme and jump back to (3). If the termination conditions are satisfied, stop the calculations, accepting the current best value '''''X'''''<sup>*</sup> as the final (‘optimal’) solution. <br />
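The six steps can be sketched as follows. This is a minimal version assuming a Metropolis acceptance criterion and a geometric cooling scheme, which are common but unstated choices; the parameter values are illustrative.<br />

```python
import math
import random

def simulated_annealing(x0, neighbor, f, T0=10.0, cooling=0.95,
                        inner=50, T_min=1e-3):
    """Sketch of steps 1-6 above for minimizing f."""
    x_best = x_cur = x0
    T = T0                                     # (1) fix initial temperature
    while T > T_min:                           # (6) termination condition
        for _ in range(inner):                 # (5) equilibrium at fixed T
            x_new = neighbor(x_cur)            # (3) generate neighboring point
            delta = f(x_new) - f(x_cur)
            # (4) accept better points always; worse ones with prob e^(-delta/T),
            # which is larger at higher temperatures
            if delta <= 0 or random.random() < math.exp(-delta / T):
                x_cur = x_new
            if f(x_cur) < f(x_best):
                x_best = x_cur                 # track the best point X*
        T *= cooling                           # decrease T by the cooling scheme
    return x_best

# toy problem: minimize (x - 2)^2 over the reals with uniform step proposals
random.seed(1)
best = simulated_annealing(0.0, lambda x: x + random.uniform(-1, 1),
                           lambda x: (x - 2.0) ** 2)
```

At high temperature almost any move is accepted (broad exploration); as the temperature drops, the acceptance rule becomes nearly greedy, so the search settles into a low-cost region.<br />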
<br />
== Numerical Example: Knapsack Problem ==<br />
One of the most common applications of heuristic algorithms is the Knapsack Problem, in which a given set of items (each with a mass and a value) must be selected so that total value is maximized while staying under a certain mass limit. The Greedy Approximation Algorithm sorts the items by their value per unit mass and then includes the items with the highest value per unit mass for as long as there is space remaining.<br />
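This greedy rule can be sketched as follows, run on the data of the example below (weights 7, 5, 4, 3, 1; values 9, 4, 3, 2, 0.5; capacity 13, with unlimited copies of each item available):<br />

```python
def greedy_unbounded_knapsack(weights, values, capacity):
    """Greedy approximation for the unbounded knapsack: take as many units
    as fit of the item with the highest value per unit mass, then move on
    to the next best item, and so on."""
    order = sorted(range(len(weights)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    counts = [0] * len(weights)
    total = 0.0
    for i in order:
        n = int(capacity // weights[i])   # how many units still fit
        counts[i] = n
        capacity -= n * weights[i]
        total += n * values[i]
    return counts, total

counts, total = greedy_unbounded_knapsack([7, 5, 4, 3, 1],
                                          [9, 4, 3, 2, 0.5], 13)
print(counts, total)  # [1, 1, 0, 0, 1] 13.5
```

On this particular instance the greedy value happens to coincide with the optimum found by dynamic programming later in the example; in general the greedy rule only gives an approximation.<br />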
<br />
'''<big>Example</big>'''<br />
<br />
The following table specifies the weights and values per unit of five different products held in storage. The quantity of each product is unlimited. A plane with a weight capacity of 13 is to be used, for one trip only, to transport the products. We would like to know how many units of each product should be loaded onto the plane to maximize the value of goods shipped. <br />
{| class="wikitable"<br />
|+<br />
!<br />
Product (i) <br />
!Weight per unit (w<sub>i</sub>)<br />
!Value per unit (v<sub>i</sub>)<br />
|-<br />
|1<br />
|7<br />
|9<br />
|-<br />
|2<br />
|5<br />
|4<br />
|-<br />
|3<br />
|4<br />
|3<br />
|-<br />
|4<br />
|3<br />
|2<br />
|-<br />
|5<br />
|1<br />
|0.5<br />
|}<br />
'''<big>Solution:</big>'''<br />
<br />
'''(a) Stages:'''<br />
<br />
We view each type of product as a stage, so there are 5 stages. We can also add a sixth stage representing the endpoint after all decisions have been made.<br />
<br />
'''(b) States:'''<br />
<br />
We can view the remaining capacity as states, so there are 14 states in each stage: 0,1, 2, 3, …13<br />
<br />
'''(c) Possible decisions at each stage:'''<br />
<br />
Suppose we are in state s in stage n (n < 6), so s units of capacity remain. Then the possible number of items we can pack is:<br />
<br />
j = 0, 1, …[s/w<sub>n</sub>]<br />
<br />
For each such action j, we can have an arc going from state s in stage n to state s – j*w<sub>n</sub> in stage n + 1. For each arc in the graph, there is a corresponding benefit j*v<sub>n</sub>. We are trying to find a maximum-benefit path from state 13 in stage 1 to stage 6.<br />
<br />
'''(d) Optimization function:'''<br />
<br />
Let f<sub>n</sub>(s) be the value of the maximum benefit possible with items of type n or greater using total capacity at most s<br />
<br />
'''(e) Boundary conditions:'''<br />
<br />
The sixth stage should have all zeros, that is, f<sub>6</sub>(s) = 0 for each s = 0,1, … 13<br />
<br />
'''(f) Recurrence relation:'''<br />
<br />
f<sub>n</sub>(s) = max {j*v<sub>n</sub> + f<sub>n+1</sub>(s – j*w<sub>n</sub>)}, j = 0, 1, …, [s/w<sub>n</sub>]<br />
<br />
'''(g) Compute:'''<br />
<br />
The solution will not show all the computations steps. Instead, only a few cases are given below to illustrate the idea.<br />
<br />
* For stage 5, f<sub>5</sub>(s) = max<sub>j=0, 1, …[s/1]</sub> {j*0.5 + 0} = 0.5s, because given the all-zero values in stage 6, the best we can do is use up all of the remaining capacity s.<br />
* For stage 4, state 7,<br />
<br />
f<sub>4</sub>(7) = max<sub>j=0,1, …, [7/w<sub>4</sub>]</sub> {j*v<sub>4</sub> + f<sub>5</sub>(7 - j*w<sub>4</sub>)}<br />
<br />
= max {0 + 3.5; 2 + 2; 4 + 0.5}<br />
<br />
= 4.5<br />
<br />
Using the recurrence relation above, we get the following table:<br />
{| class="wikitable"<br />
|+<br />
!Unused Capacity<br />
s<br />
!f<sub>1</sub>(s)<br />
!Type 1 <br />
opt<br />
!f<sub>2</sub>(s)<br />
!Type 2 <br />
opt<br />
!f<sub>3</sub>(s)<br />
!Type 3 <br />
opt<br />
!f<sub>4</sub>(s)<br />
!Type 4 <br />
opt<br />
!f<sub>5</sub>(s)<br />
!Type 5 <br />
opt<br />
!f<sub>6</sub>(s)<br />
|-<br />
|13<br />
|13.5<br />
|1<br />
|10<br />
|2<br />
|9.5<br />
|3<br />
|8.5<br />
|4<br />
|6.5<br />
|13<br />
|0<br />
|-<br />
|12<br />
|13<br />
|1<br />
|9<br />
|2<br />
|9<br />
|3<br />
|8<br />
|4<br />
|6<br />
|12<br />
|0<br />
|-<br />
|11<br />
|12<br />
|1<br />
|8.5<br />
|2<br />
|8<br />
|2<br />
|7<br />
|3<br />
|5.5<br />
|11<br />
|0<br />
|-<br />
|10<br />
|11<br />
|1<br />
|8<br />
|2<br />
|7<br />
|2<br />
|6.5<br />
|3<br />
|5<br />
|10<br />
|0<br />
|-<br />
|9<br />
|10<br />
|1<br />
|7<br />
|1<br />
|6.5<br />
|2<br />
|6<br />
|3<br />
|4.5<br />
|9<br />
|0<br />
|-<br />
|8<br />
|9.5<br />
|1<br />
|6<br />
|1<br />
|6<br />
|2<br />
|5<br />
|2<br />
|4<br />
|8<br />
|0<br />
|-<br />
|7<br />
|9<br />
|1<br />
|5<br />
|1<br />
|5<br />
|1<br />
|4.5<br />
|2<br />
|3.5<br />
|7<br />
|0<br />
|-<br />
|6<br />
|4.5<br />
|0<br />
|4.5<br />
|1<br />
|4<br />
|1<br />
|4<br />
|2<br />
|3<br />
|6<br />
|0<br />
|-<br />
|5<br />
|4<br />
|0<br />
|4<br />
|1<br />
|3.5<br />
|1<br />
|3<br />
|1<br />
|2.5<br />
|5<br />
|0<br />
|-<br />
|4<br />
|3<br />
|0<br />
|3<br />
|0<br />
|3<br />
|1<br />
|2.5<br />
|1<br />
|2<br />
|4<br />
|0<br />
|-<br />
|3<br />
|2<br />
|0<br />
|2<br />
|0<br />
|2<br />
|0<br />
|2<br />
|1<br />
|1.5<br />
|3<br />
|0<br />
|-<br />
|2<br />
|1<br />
|0<br />
|1<br />
|0<br />
|1<br />
|0<br />
|1<br />
|0<br />
|1<br />
|2<br />
|0<br />
|-<br />
|1<br />
|0.5<br />
|0<br />
|0.5<br />
|0<br />
|0.5<br />
|0<br />
|0.5<br />
|0<br />
|0.5<br />
|1<br />
|0<br />
|-<br />
|0<br />
|0<br />
|0<br />
|0<br />
|0<br />
|0<br />
|0<br />
|0<br />
|0<br />
|0<br />
|0<br />
|0<br />
|}<br />
'''Optimal solution:''' The maximum benefit possible is 13.5. Tracing forward to get the optimal solution: the optimal decision corresponding to the entry 13.5 for f<sub>1</sub>(13) is 1, therefore we should pack 1 unit of type 1. After that we have 6 capacity remaining, so look at f<sub>2</sub>(6), which is 4.5, corresponding to the optimal decision of packing 1 unit of type 2. After this, we have 6 - 5 = 1 capacity remaining, and the optimal decisions at f<sub>3</sub>(1) and f<sub>4</sub>(1) are both 0, which means we cannot pack any type 3 or type 4 items. Hence we go to stage 5 and find that the optimal decision at f<sub>5</sub>(1) is 1, so we should pack 1 unit of type 5. This gives the entire optimal solution, as can be seen in the table below:<br />
{| class="wikitable"<br />
|+<br />
! colspan="2" |Optimal solution<br />
|-<br />
!Product (i)<br />
!Number of units<br />
|-<br />
|1<br />
|1<br />
|-<br />
|2<br />
|1<br />
|-<br />
|5<br />
|1<br />
|}<br />
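The recurrence and the forward trace can be checked with a short dynamic programming script. This is a sketch following the stage/state formulation above; the function and variable names are illustrative.<br />

```python
def knapsack_dp(weights, values, capacity):
    """Backward DP over stages: f[n][s] is the best value achievable with
    item types n..end and capacity s, following the recurrence
    f_n(s) = max_j { j*v_n + f_{n+1}(s - j*w_n) }."""
    n = len(weights)
    f = [[0.0] * (capacity + 1) for _ in range(n + 1)]  # f[n] = stage 6: zeros
    best_j = [[0] * (capacity + 1) for _ in range(n)]
    for stage in range(n - 1, -1, -1):
        w, v = weights[stage], values[stage]
        for s in range(capacity + 1):
            for j in range(s // w + 1):
                cand = j * v + f[stage + 1][s - j * w]
                if cand > f[stage][s]:
                    f[stage][s] = cand
                    best_j[stage][s] = j
    plan, s = [], capacity          # trace forward to recover the decisions
    for stage in range(n):
        j = best_j[stage][s]
        plan.append(j)
        s -= j * weights[stage]
    return f[0][capacity], plan

value, plan = knapsack_dp([7, 5, 4, 3, 1], [9, 4, 3, 2, 0.5], 13)
print(value, plan)  # 13.5 [1, 1, 0, 0, 1]
```

The returned plan (one unit each of types 1, 2 and 5) matches the traced solution in the table above.<br />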
<br />
==Applications==<br />
Heuristic algorithms have become an important technique for solving current real-world problems. Their applications range from optimizing the power flow in modern power systems<ref> NIU, M., WAN, C. & Xu, Z. A review on applications of heuristic optimization algorithms for optimal power flow in modern power systems. J. Mod. Power Syst. Clean Energy 2, 289–297 (2014), https://doi.org/10.1007/s40565-014-0089-4</ref> to groundwater pumping simulation models<ref> J. L. Wang, Y. H. Lin and M. D. Lin, "Application of heuristic algorithms on groundwater pumping source identification problems," 2015 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), Singapore, 2015, pp. 858-862, https://doi.org/10.1109/IEEM.2015.7385770.</ref>. Heuristic optimization techniques are increasingly applied in environmental engineering as well, such as in the design of a multilayer sorptive barrier system for landfill liners.<ref>Matott, L. Shawn, et al. “Application of Heuristic Optimization Techniques and Algorithm Tuning to Multilayered Sorptive Barrier Design.” Environmental Science &amp; Technology, vol. 40, no. 20, 2006, pp. 6354–6360., https://doi.org/10.1021/es052560+.</ref> Heuristic algorithms have also been applied in the fields of bioinformatics, computational biology, and systems biology.<ref>Larranaga P, Calvo B, Santana R, Bielza C, Galdiano J, Inza I, Lozano JA, Armananzas R, Santafe G, Perez A, Robles V (2006) Machine learning in bioinformatics. Brief Bioinform 7(1):86–112 </ref><br />
<br />
==Conclusion==<br />
Heuristic algorithms are not a panacea, but they are handy tools to use when exact methods cannot be applied. Heuristics provide flexible techniques for solving hard problems, with the advantages of simple implementation and low computational cost. Over the years, we have seen a progression in heuristics with the development of hybrid systems that combine selected features from various types of heuristic algorithms, such as tabu search, simulated annealing, and genetic or evolutionary computing. Future research will continue to expand the capabilities of existing heuristics to solve complex real-world problems.<br />
<br />
==References==<br />
<references /></div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Column_generation_algorithms&diff=2733Column generation algorithms2020-12-21T11:37:13Z<p>Wc593: </p>
<hr />
<div>Author: Lorena Garcia Fernandez (lgf572) (SysEn 5800 Fall 2020)<br />
<br />
== Introduction ==<br />
Column generation techniques are used to solve large linear optimization problems by generating only the variables that will influence the objective function. This is important for big problems with many variables, where these techniques simplify the problem formulation, since not all of the possibilities need to be explicitly listed.<ref>Desrosiers, Jacques & Lübbecke, Marco. (2006). A Primer in Column Generation.p7-p14 10.1007/0-387-25486-2_1. </ref><br />
<br />
== Theory, methodology and algorithmic discussions ==<br />
'''''Theory'''''<br />
<br />
This method works as follows. First, the original problem that is being solved needs to be split into two problems: the master problem and the sub-problem.<br />
<br />
* The master problem is the original column-wise (i.e: one column at a time) formulation of the problem with only a subset of variables being considered.<ref><br />
AlainChabrier, Column Generation techniques, 2019 URL: https://medium.com/@AlainChabrier/column-generation-techniques-6a414d723a64<br />
</ref><br />
<br />
* The sub-problem is a new problem created to identify a new promising variable. The objective function of the sub-problem is the reduced cost of the new variable with respect to the current dual variables, and the constraints require that the variable obey the naturally occurring constraints. The subproblem is also referred to as the pricing problem, while the master problem restricted to the currently generated columns is called the “restricted master problem” (RMP). From this we can infer that this method will be a good fit for problems whose constraint set admits a natural breakdown (i.e., decomposition) into sub-systems representing a well-understood combinatorial structure.<ref><br />
AlainChabrier, Column Generation techniques, 2019 URL: https://medium.com/@AlainChabrier/column-generation-techniques-6a414d723a64<br />
</ref><br />
<br />
To execute that decomposition from the original problem into Master and subproblems there are different techniques. The theory behind this method relies on the Dantzig-Wolfe decomposition.<ref>Dantzig-Wolfe decomposition. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Dantzig-Wolfe_decomposition&oldid=50750</ref><br />
<br />
In summary, when the master problem is solved, we are able to obtain dual prices for each of the constraints in the master problem. This information is then utilized in the objective function of the subproblem. The subproblem is then solved. If the objective value of the subproblem is negative, a variable with negative reduced cost has been identified. This variable is then added to the master problem, and the master problem is re-solved. Re-solving the master problem will generate a new set of dual values, and the process is repeated until no negative reduced cost variables are identified. When the subproblem returns a solution with non-negative reduced cost, we can conclude that the solution to the master problem is optimal.<ref>Wikipedia, the free encyclopedia. Column Generation. URL: https://en.wikipedia.org/wiki/Column_generation</ref><br />
<br />
'''''Methodology'''''<ref>L.A. Wolsey, Integer programming. Wiley,Column Generation Algorithms p185-p189,1998</ref><br />
[[File:Column Generation.png|thumb|468x468px|Column generation schematics<ref name=":4">GERARD. (2005). Personnel and Vehicle scheduling, Column Generation, slide 12. URL: https://slideplayer.com/slide/6574/</ref>]]<br />
Consider the problem in the form:<br />
<br />
(IP) <br />
<math>z=max\left \{\sum_{k=1}^{K}c^{k}x^{k}:\sum_{k=1}^{K}A^{k}x^{k}=b,x^{k}\epsilon X^{k}\; \; \; for\; \; \; k=1,...,K \right \}</math><br />
<br />
<br />
Where <math>X^{k}=\left \{x^{k}\epsilon Z_{+}^{n_{k}}: D^{k}x^{k}\leq d^{k} \right \}</math> for <math>k=1,...,K</math>. Assuming that each set <math>X^{k}</math> contains a large but finite set of points <math>\left \{ x^{k,t} \right \}_{t=1}^{T_{k}}</math>, we have that <math>X^{k}=</math>:<br />
<br />
<math>\left \{ x^{k}\epsilon R^{n_{k}}:x^{k}=\sum_{t=1}^{T_{k}}\lambda _{k,t}x^{k,t},\sum_{t=1}^{T_{k}}\lambda _{k,t}=1,\lambda _{k,t}\epsilon \left \{ 0,1 \right \}for \; \; k=1,...,K \right \}</math><br />
<br />
Note that, on the assumption that each of the sets <math>X^{k}</math> is bounded for <math>k=1,...,K</math>, the approach will involve solving an equivalent problem of the form below:<br />
<br />
<math>max\left \{ \sum_{k=1}^{K}\gamma ^{k}\lambda ^{k}: \sum_{k=1}^{K}B^{k}\lambda ^{k}=\beta ,\lambda ^{k}\geq 0\; \; integer\; \; for\; \; k=1,...,K \right \}</math><br />
<br />
where each matrix <math>B^{k}</math> has a very large number of columns, one for each of the feasible points in <math>X^{k}</math>, and each vector <math>\lambda ^{k}</math> contains the corresponding variables.<br />
<br />
<br />
Now, substituting for <math>x^{k}</math> leads to an equivalent ''IP Master Problem (IPM)'':<br />
<br />
(IPM)<br />
<math>\begin{matrix}<br />
z=max\sum_{k=1}^{K}\sum_{t=1}^{T_{k}}\left(c^{k}x^{k,t}\right )\lambda _{k,t} \\ \sum_{k=1}^{K}\sum_{t=1}^{T_{k}}\left ( A^{k}x^{k,t} \right )\lambda _{k,t}=b\\<br />
\sum_{t=1}^{T_{k}}\lambda _{k,t}=1\; \; for\; \; k=1,...,K \\<br />
\lambda _{k,t}\epsilon \left \{ 0,1 \right \}\; \; for\; \; t=1,...,T_{k}\; \; and\; \; k=1,...,K.<br />
\end{matrix}</math><br />
<br />
To solve the Master Linear Program, we use a column generation algorithm. This is in order to solve the linear programming relaxation of the Integer Programming Master Problem, called the ''Linear Programming Master Problem (LPM)'':<br />
<br />
(LPM)<br />
<math>\begin{matrix}<br />
z^{LPM}=max\sum_{k=1}^{K}\sum_{t=1}^{T_{k}}\left ( c^{k}x^{k,t} \right )\lambda _{k,t}\\<br />
\sum_{k=1}^{K}\sum_{t=1}^{T_{k}}\left ( A^{k}x^{k,t} \right )\lambda _{k,t}=b \\<br />
\sum_{t=1}^{T_{k}}\lambda _{k,t}=1\; \;for\; \; k=1,...,K \\<br />
\lambda _{k,t} \geq 0\; \; for\; \; t=1,...,T_{k},\; k=1,...,K<br />
\end{matrix}</math><br />
<br />
Where there is a column <math>\begin{pmatrix}<br />
c^{k}x\\ <br />
A^{k}x\\ <br />
e_{k}<br />
\end{pmatrix}</math> for each ''<math>x</math>'' ''<math display="inline">\in</math> <math display="inline">X^{k}</math>''. In the next steps of this method, we will use <math>\left \{ \pi _{i} \right \}_{i=1}^{m}</math> as the dual variables associated with the joint constraints, and <math>\left \{ \mu_{k} \right \}_{k=1}^{K}</math> as the dual variables for the second set of constraints. The latter are also known as convexity constraints.<br />
The idea is to solve the linear program by the primal simplex algorithm. However, the pricing step of choosing a column to enter the basis must be modified because of the very large number of columns in play. Instead of pricing the columns one at a time, the problem of finding a column with the largest reduced cost is itself a set of <math>K</math> optimization problems.<br />
<br />
<br />
''Initialization:'' we suppose that a subset of columns (at least one for each <math>k</math>) is available, providing a feasible ''Restricted Linear Programming Master Problem'':<br />
<br />
(RLPM)<br />
<math>\begin{matrix}<br />
\tilde{z}^{LPM}=max\,\tilde{c}\tilde{\lambda} \\<br />
\tilde{A}\tilde{\lambda }=\tilde{b} \\<br />
\tilde{\lambda }\geq 0 <br />
\end{matrix}</math><br />
<br />
<br />
where <math>\tilde{b}=\begin{pmatrix}<br />
b\\ <br />
1\\ <br />
\end{pmatrix}</math>, <math>\tilde{A}</math> is generated by the available set of columns and <math>\tilde{c}\tilde{\lambda }</math> are the corresponding costs and variables. Solving the RLPM gives an optimal primal solution <math>\tilde{\lambda ^{*}}</math> and an optimal dual solution <math>\left ( \pi ,\mu \right )\epsilon\; R^{m}\times R^{k}</math><br />
<br />
<br />
''Primal feasibility:'' Any feasible solution of ''RLMP'' is feasible for ''LPM''. More precisely, <math>\tilde{\lambda^{*} }</math> is a feasible solution of ''LPM'', and hence <math>\tilde{z}^{LPM}=\tilde{c}\tilde{\lambda ^{*}}=\sum_{i=1}^{m}\pi _{i}b_{i}+\sum_{k=1}^{K}\mu _{k}\leq z^{LPM}</math> <br />
<br />
''Optimality check for LPM:'' It is required to check whether <math>\left ( \pi ,\mu \right )</math> is dual feasible for ''LPM''. This means checking for each column, that is for each <math>k</math>, and for each <math>x\; \epsilon \; X^{k}</math> if the reduced price <math>c^{k}x-\pi A^{k}x-\mu _{k}\leq 0</math>. Rather than examining each point separately, we treat all points in <math>X^{k}</math> implicitly, by solving an optimization subproblem:<br />
<br />
<math>\zeta _{k}=max\left \{ \left (c^{k}-\pi A^{k} \right )x-\mu _{k}\; :\; x\; \epsilon \; X^{k}\right \}.</math> <br />
<br />
<br />
''Stopping criteria:'' If <math>\zeta _{k}\leq 0</math> for <math>k=1,...,K</math>, the solution <math>\left ( \pi ,\mu \right )</math> is dual feasible for ''LPM'', and hence <math>z^{LPM}\leq \sum_{i=1}^{m}\pi _{i}b_{i}+\sum_{k=1}^{K}\mu _{k}</math>. As the value of the primal feasible solution <math>\tilde{\lambda }</math> equals that of this upper bound, <math>\tilde{\lambda }</math> is optimal for ''LPM''. <br />
<br />
<br />
''Generating a new column:'' If <math>\zeta _{k}> 0</math> for some <math>k</math>, the column corresponding to the optimal solution <math>\tilde{x}^{k}</math> of the subproblem has a positive reduced price. Introducing the column <math>\begin{pmatrix}<br />
c^{k}x\\ <br />
A^{k}x\\ <br />
e_{k}<br />
\end{pmatrix}</math> leads then to a Restricted Linear Programming Master Problem that can be easily reoptimized (e.g., by the primal simplex algorithm)<br />
<br />
== Numerical example: The Cutting Stock problem<ref>L.A. Wolsey, Integer Programming. Wiley, Column Generation Algorithms, p185-p189, 1998.</ref> ==<br />
<br />
Suppose we want to solve a numerical example of the cutting stock problem, specifically a one-dimensional cutting stock problem. <br />
<br />
''<u>Problem Overview</u>''<br />
<br />
A company produces steel bars with diameter <math>45</math> millimeters and length <math>33</math> meters. The company also takes care of cutting the bars for their different customers, who each require different lengths. At the moment, the following demand forecast is expected and must be satisfied: <br />
{| class="wikitable"<br />
|+<br />
|Pieces needed<br />
|Piece length(m)<br />
|Type of item<br />
|-<br />
|144<br />
|6<br />
|1<br />
|-<br />
|105<br />
|13.5<br />
|2<br />
|-<br />
|72<br />
|15<br />
|3<br />
|-<br />
|30<br />
|16.5<br />
|4<br />
|-<br />
|24<br />
|22.5<br />
|5<br />
|}<br />
The objective is to establish what is the minimum number of steel bars that should be used to satisfy the total demand.<br />
<br />
A possible model for the problem, proposed by Gilmore and Gomory in the 1960s, is the one below:<br />
<br />
'''Sets'''<br />
<br />
<math>K=\left \{ 1,2,3,4,5 \right \}</math>: set of item types;<br />
<br />
''<math display="inline">S</math>:'' set of patterns (i.e., possible ways) that can be adopted to cut a given bar into portions of the needed lengths.<br />
<br />
'''Parameters'''<br />
<br />
<math display="inline">M</math>: bar length (before the cutting process);<br />
<br />
<math display="inline">L_k</math>'':'' length of item ''<math display="inline">k</math>'' ''<math display="inline">\in</math> <math display="inline">K</math>'';<br />
<br />
<math display="inline">R_k</math> : number of pieces of type ''<math display="inline">k</math>'' ''<math display="inline">\in</math> <math display="inline">K</math>'' required;<br />
<br />
<math display="inline">N_{k,s}</math> : number of pieces of type ''<math display="inline">k</math>'' ''<math display="inline">\in</math> <math display="inline">K</math>'' in pattern ''<math display="inline">s</math>'' ''<math display="inline">\in</math> <math display="inline">S</math>''.<br />
<br />
'''Decision variables'''<br />
<br />
<math display="inline">Y_s</math> : number of bars that should be portioned using pattern ''<math display="inline">s</math>'' ''<math display="inline">\in</math> <math display="inline">S</math>''. <br />
<br />
'''Model''' <br />
<br />
<math>\begin{matrix}\min_{y} \sum_{s\in S}y_s \\ s.t.\ \sum_{s\in S}N_{k,s}y_s\geq R_k\ \forall k\in K \\ y_s\in \mathbb{Z}_+\ \forall s\in S \end{matrix}</math><br />
<br />
''<u>Solving the problem</u>''<br />
<br />
The model assumes the availability of the set ''<math display="inline">S</math>'' and the parameters <math display="inline">N_{k,s}</math>. To generate this data, one would have to list all possible cutting patterns. However, the number of possible cutting patterns is very large, which is why a direct implementation of the model above is not practical in real-world problems. This is when it makes sense to solve the continuous relaxation of the above model. In reality, the demand figures are so high that the number of bars to cut is also a large number, and therefore a good solution can be determined by rounding up to the next integer each variable <math>y_s</math> found by solving the continuous relaxation. In addition to that, the solution of the relaxed problem becomes the starting point for the application of an exact solution method (for instance, Branch-and-Bound).<blockquote><u>''Key take-away: In the next steps of this example we will analyze how to solve the continuous relaxation of the model.''</u></blockquote>As a starting point, we need any feasible solution. Such a solution can be constructed as follows:<br />
<br />
# We consider the <math>\left | K \right |</math> single-item cutting patterns, each containing <math display="inline">N_{k,k}=\left \lfloor M/L_{k} \right \rfloor</math> pieces of type <math>k</math>;<br />
# Set <math display="inline">y_{k}=R_{k}/N_{k,k}</math> for pattern <math>k</math> (where pattern <math>k</math> is the pattern containing only pieces of type <math>k</math>).<br />
<br />
This solution could also be arrived at by applying the simplex method to the model (without integrality constraints), considering only the decision variables that correspond to the above single-item patterns: <br />
<br />
<math>\begin{align}<br />
\text{min} & ~~ y_{1}+y_{2}+y_{3}+y_{4}+y_{5}\\<br />
\text{s.t} & ~~ 15y_{1} \ge 144\\<br />
\ & ~~ 6y_{2} \ge 105\\<br />
\ & ~~ 6y_{3} \ge 72\\<br />
\ & ~~ 6y_{4} \ge 30\\<br />
\ & ~~ 3y_{5} \ge 24\\<br />
\ & ~~ y_{1},y_{2},y_{3},y_{4},y_{5} \ge 0\\<br />
\end{align}</math><br />
<br />
In fact, if we solve this problem (for example, using the CPLEX solver in GAMS), the solution is as below: <br />
{| class="wikitable"<br />
|Y1<br />
|28.8<br />
|-<br />
|Y2<br />
|52.5<br />
|-<br />
|Y3<br />
|24<br />
|-<br />
|Y4<br />
|15<br />
|-<br />
|Y5<br />
|24<br />
|}<br />
Next, a new possible pattern (number <math>6</math>) will be considered. This pattern contains one piece of item type <math>1</math> and one piece of item type <math>5</math>. So the question is whether the new solution would remain optimal if this new pattern was allowed. Duality helps answer this question. At every iteration of the simplex method, the outcome is a feasible basic solution (corresponding to some basis <math>B</math>) for the primal problem and a dual solution (the multipliers <math>u^{t}=c_{B}^{t}B^{-1}</math>) that satisfy the complementary slackness conditions. (Note: the dual solution will be feasible only when the last iteration is reached) <br />
<br />
The inclusion of new pattern <math>6</math> corresponds to including a new variable in the primal problem, with objective cost <math>1</math> (as each time pattern <math>6</math> is chosen, one bar is cut) and corresponding to the following column in the constraint matrix: <br />
<br />
<math>D_6= \begin{bmatrix}<br />
\ 1 \\ <br />
\ 0 \\ <br />
\ 0 \\ <br />
\ 0 \\ <br />
\ 1 \\ <br />
\end{bmatrix}</math><br />
<br />
<br />
This new variable creates a new dual constraint. We then have to check whether this constraint is violated by the current dual solution (or, in other words, whether ''the reduced cost of the new variable with respect to basis <math>B</math> is negative'')<br />
<br />
The new dual constraint is:<math>1\times u_{1}+0\times u_{2}+0\times u_{3}+0\times u_{4}+1\times u_{5}\leq 1</math><br />
<br />
The solution of the dual problem can be computed with different software packages, or by hand. The table below shows the solution obtained with GAMS for this example:<br />
<br />
(Note: the solution of the dual problem is <math>u^{T}=c_{B}^{T}B^{-1}</math>)<br />
<br />
<br />
{| class="wikitable"<br />
|Dual variable<br />
|Variable value<br />
|-<br />
|D1<br />
|0.067<br />
|-<br />
|D2<br />
|0.167<br />
|-<br />
|D3<br />
|0.167<br />
|-<br />
|D4<br />
|0.167<br />
|-<br />
|D5<br />
|0.333<br />
|}<br />
Since <math>0.2+1=1.2> 1</math>, the new constraint is violated.<br />
<br />
This means that the current primal solution (in which the new variable is <math>y_{6}=0</math>) may not be optimal anymore (although it is still feasible). The fact that the dual constraint is violated means the associated primal variable has negative reduced cost: <br />
<br />
<math>\bar{c}_6 = c_6-u^{T}D_6=1-0.4=0.6</math> <br />
<br />
To help us solve the problem, the next step is to let <math>y_{6}</math> enter the basis. To do so, we modify the problem by inserting the new variable as below:<br />
<br />
<math>\begin{align}<br />
\text{min} & ~~ y_{1}+y_{2}+y_{3}+y_{4}+y_{5}+y_{6}\\<br />
\text{s.t} & ~~ 15y_{1} +y_{6}\ge 144\\<br />
\ & ~~ 6y_{2} \ge 105\\<br />
\ & ~~ 6y_{3} \ge 72\\<br />
\ & ~~ 6y_{4} \ge 30\\<br />
\ & ~~ 3y_{5}+y_{6} \ge 24\\<br />
\ & ~~ y_{1},y_{2},y_{3},y_{4},y_{5},y_{6} \ge 0\\<br />
\end{align}</math><br />
<br />
<br />
If this problem is solved with the simplex method, the optimal solution is found, but restricted only to patterns <math>1</math> to <math>6</math>. If a new pattern is available, a decision should be made whether this new pattern should be used or not by proceeding as above. However, the problem is how to find a pattern (i.e., a variable; i.e., a column of the matrix) whose reduced cost is negative (which would mean it is convenient to include it in the formulation). At this point one can notice that the number of possible patterns is exponentially large, and the patterns are not even known explicitly. The question then is:<br />
<br />
''Given a basic optimal solution for the problem in which only some variables are included, how can we find (if any exists) a variable with negative reduced cost (i.e., a constraint violated by the current dual solution)?'' <br />
<br />
This question can be transformed into an optimization problem: in order to see whether a variable with negative reduced cost exists, we can look for the minimum of the reduced costs of all possible variables and check whether this minimum is negative:<br />
<br />
<math>\bar{c}=1-u^Tz</math><br />
<br />
Every column of the constraint matrix corresponds to a cutting pattern, and every entry of the column says how many pieces of a certain type are in that pattern. In order for <math>z</math> to be a possible column of the constraint matrix, the following conditions must be satisfied:<br />
<br />
<math display="inline">\begin{matrix}z_k\in \mathbb{Z}_+\ \forall k\in K \\ \sum_{k\in K}L_kz_k \leq M \end{matrix}</math><br />
<br />
And by doing so, we convert the problem of finding a variable with negative reduced cost into the integer linear programming problem below:<br />
<br />
<math>\begin{matrix}\min\ \bar{c} = 1 - \sum_{k=1}^K u_k z_k \\ s.t.\ \sum_{k\in K}L_kz_k \leq M \\ z_k\in \mathbb{Z}_+\ \forall k\in K \end{matrix}</math><br />
<br />
which, in turn, would be equivalent to the below formulation (we just write the objective in maximization form and ignore the additive constant <math>1</math>):<br />
<br />
<math>\begin{matrix} \max\sum_{k=1}^K u_k z_k \\ s.t.\ \sum_{k\in K}L_kz_k \leq M \\ z_k\in \mathbb{Z}_+\ \forall k\in K \end{matrix}</math><br />
<br />
<br />
<br />
The coefficients <math>z_k</math> of a column with negative reduced cost can be found by solving the above integer [[wikipedia:Knapsack_problem|"knapsack"]] problem (which is a classical type of problem in integer programming).<br />
<br />
In our example, if we start from the problem restricted to the five single-item patterns, the above problem reads as:<br />
<br />
<math>\begin{align}<br />
\text{max} & ~~ 0.067z_{1}+0.167z_{2}+0.167z_{3}+0.167z_{4}+0.333z_{5}\\<br />
\text{s.t} & ~~ 6z_{1} +13.5z_{2}+15z_{3}+16.5z_{4}+22.5z_{5}\le 33\\<br />
\ & ~~ z_{1},z_{2},z_{3},z_{4},z_{5}\in \mathbb{Z}_{+}\\<br />
\end{align}</math><br />
<br />
<br />
which has the following optimal solution: <math>z^T= [1 \quad 0\quad 0\quad 0\quad 1]</math><br />
<br />
This matches the pattern we called <math>D6</math>, earlier on in this page.<br />
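The pricing step can be checked numerically. The sketch below solves the knapsack subproblem by dynamic programming, scaling all lengths by 2 so they become integers and taking the dual prices from the table above as exact fractions (these inputs are assumptions of this sketch; the GAMS run itself is not reproduced here).<br />

```python
from fractions import Fraction as F

# piece lengths 6, 13.5, 15, 16.5, 22.5 and bar length 33, all scaled by 2
lengths = [12, 27, 30, 33, 45]
cap = 66
u = [F(1, 15), F(1, 6), F(1, 6), F(1, 6), F(1, 3)]  # dual prices u_k

# best[c] = max total dual price of a pattern of total length <= c
# (an unbounded knapsack, solved over increasing capacities)
best = [F(0)] * (cap + 1)
for c in range(1, cap + 1):
    for k, L in enumerate(lengths):
        if L <= c and u[k] + best[c - L] > best[c]:
            best[c] = u[k] + best[c - L]

print(best[cap])  # 2/5, attained e.g. by the pattern z = (1, 0, 0, 0, 1)
```

The optimal value 0.4 is not larger than 1, consistent with the optimality test that follows.<br />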
<br />
<br />
<u>Optimality test</u><br />
<br />
If : <math display="inline">\sum_{k=1}^{K}z_{k}^{*}u_{k}^{*}\leq 1</math><br />
<br />
then <math>y^*</math> is an optimal solution of the full continuous relaxed problem (that is, including all patterns in ''<math display="inline">S</math>'')<br />
<br />
If this condition is not true, we update the master problem by including in ''<math display="inline">S'</math>'' the pattern <math>\lambda</math> defined by <math>N_{s,\lambda}</math> (in practical terms, this means that the column '''<math>y^*</math>''' needs to be included in the constraint matrix).<br />
<br />
For this example we find that the optimality test is met, as <math>\sum_{k=1}^{K}z_{k}^{*}u_{k}^{*}=0.4 \leq 1</math>, so we have found an optimal solution of the relaxed continuous problem (if this were not the case, we would have had to go back to reformulating and solving the master problem, as discussed in the methodology section of this page). <br />
<br />
<br />
<br />
<br />
'''''Algorithm discussion'''''<br />
<br />
The critical part of the method is the column generation subproblem, which generates the new columns. It is not reasonable to compute the reduced costs of all variables <math>y_s</math> for <math>s=1,...,S</math>; otherwise this procedure would reduce to the simplex method. In fact, <math>n</math> can be very large (as in the cutting-stock problem) or, for some reason, it might not be possible or convenient to enumerate all decision variables. This is why a specific column generation algorithm must be studied for each problem; ''only if such an algorithm exists (and is practical)'' can the method be fully applied. In the one-dimensional cutting stock problem, we transformed the column generation subproblem into an easily solvable integer linear programming problem. In other cases, the computational effort required to solve the subproblem is too high, and applying the full procedure becomes inefficient.<br />
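The full loop (restricted master, pricing, optimality test) can be sketched compactly. The snippet below is a rough sketch, not the article's implementation: the demand vector is a made-up assumption (demands are not restated in this section), the pricing step reuses brute-force enumeration, and the duals are read from SciPy's HiGHS-based <math>linprog</math> (SciPy 1.7+).<br />

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

# Hypothetical data: item lengths from the example, made-up demands.
lengths = [6, 13.5, 15, 16.5, 22.5]
demand = np.array([20, 10, 15, 8, 5])  # assumed demands, for illustration
M = 33

# Restricted set S': start from the five single-item patterns.
cols = []
for k, L in enumerate(lengths):
    col = [0] * len(lengths)
    col[k] = int(M // L)
    cols.append(col)

while True:
    A = np.array(cols, dtype=float).T  # rows: items, columns: patterns
    # Restricted master LP: min sum(y) s.t. A y >= demand, y >= 0.
    res = linprog(c=np.ones(A.shape[1]), A_ub=-A, b_ub=-demand,
                  bounds=[(0, None)] * A.shape[1])
    u = -res.ineqlin.marginals  # duals of the covering constraints (SciPy >= 1.7)
    # Pricing: maximize u.z over integer patterns with length <= M (brute force).
    best_val, best_z = 0.0, None
    for z in product(*(range(int(M // L) + 1) for L in lengths)):
        if sum(L * zk for L, zk in zip(lengths, z)) <= M:
            val = sum(uk * zk for uk, zk in zip(u, z))
            if val > best_val:
                best_val, best_z = val, z
    if best_val <= 1 + 1e-9:   # optimality test: no column has negative reduced cost
        break
    cols.append(list(best_z))  # add the priced-out column to the master

print(round(res.fun, 3))  # optimal value of the relaxed master problem
```

Each iteration adds a pattern with negative reduced cost, so the loop terminates after finitely many iterations with an optimal solution of the continuous relaxation.<br />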
<br />
== Applications ==<br />
As previously mentioned, column generation techniques are most relevant when the problem that we are trying to solve has a high ratio of variables to constraints. As such, some common applications are:<br />
<br />
* Bandwidth packing<br />
* Bus driver scheduling<br />
* Generally, column generation algorithms are used for large delivery networks, often in combination with other methods, helping to implement real-time solutions for on-demand logistics. We discuss a supply chain scheduling application below. <br />
<br />
'''''Bandwidth packing''''' <br />
<br />
The objective of this problem is to allocate bandwidth in a telecommunications network to maximize total revenue. The routing of a set of traffic demands between different users is to be decided, taking into account the capacity of the network arcs and the fact that the traffic between each pair of users cannot be split. The problem can be formulated as an integer programming problem, and the linear programming relaxation can be solved using column generation and the simplex algorithm. A branch and bound procedure that branches upon a particular path is used in one paper on bandwidth routing<ref name=":3">Parker, Mark & Ryan, Jennifer. (1993). A column generation algorithm for bandwidth packing. Telecommunication Systems. 2. 185-195. 10.1007/BF02109857. </ref> to solve the IP. The column generation algorithm greatly reduces the complexity of this problem. <br />
<br />
'''''Bus driver scheduling'''''<br />
<br />
Bus driver scheduling aims to find the minimum number of bus drivers to cover a published timetable of a bus company. When scheduling bus drivers, contractual working rules must be enforced, thus complicating the problem. A column generation algorithm can decompose this complicated problem into a master problem and a series of pricing subproblems. The master problem would select optimal duties from a set of known feasible duties, and the pricing subproblem would augment the feasible duty set to improve the solution obtained in the master problem.<ref name=":2">Dung‐Ying Lin, Ching‐Lan Hsu. Journal of Advanced Transportation. Volume50, Issue8, December 2016, Pages 1598-1615. URL: https://onlinelibrary.wiley.com/doi/abs/10.1002/atr.1417</ref><br />
<br />
'''''Supply Chain scheduling problem'''''<br />
<br />
A typical application is where we consider the problem of scheduling a set of shipments between different nodes of a supply chain network. Each shipment has a fixed departure time, as well as an origin and a destination node, which, combined, determine the duration of the associated trip. The aim is to schedule as many shipments as possible, while also minimizing the number of vehicles utilized for this purpose. This problem can be formulated as an integer programming model with an associated branch and price solution algorithm. The optimal solution to the LP relaxation of the problem can be obtained through column generation, solving a linear program with a huge number of variables without explicitly considering all of them. In the context of this application, the master problem schedules the maximum possible number of shipments using only a small set of vehicle-routes, and a column generation sub-problem generates cost-effective vehicle-routes to be fed into the master problem. After finding the optimal solution to the LP relaxation of the problem, the algorithm branches on the fractional decision variables (vehicle-routes) in order to reach the optimal integer solution.<ref name=":1">Kozanidis, George. (2014). Column generation for scheduling shipments within a supply chain network with the minimum number of vehicles. OPT-i 2014 - 1st International Conference on Engineering and Applied Sciences Optimization, Proceedings. 888-898</ref><br />
<br />
== Conclusions ==<br />
Column generation is a way of starting with a small, manageable part of a problem (specifically, with some of the variables), solving that part, analyzing that interim solution to find the next part of the problem (specifically, one or more variables) to add to the model, and then solving the full or extended model. In the column generation method, the algorithm steps are repeated until an optimal solution to the entire problem is achieved.<ref> ILOG CPLEX 11.0 User's Manual > Discrete Optimization > Using Column Generation: a Cutting Stock Example > What Is Column Generation? 1997-2007. URL:http://www-eio.upc.es/lceio/manuals/cplex-11/html/usrcplex/usingColumnGen2.html#:~:text=In%20formal%20terms%2C%20column%20generation,method%20of%20solving%20the%20problem.&text=By%201960%2C%20Dantzig%20and%20Wolfe,problems%20with%20a%20decomposable%20structure</ref><br />
<br />
This algorithm provides a way of solving a linear programming problem by adding columns (corresponding to constrained variables) during the pricing phase of the solution process, which would otherwise be very tedious to formulate and compute. Generating a column in the primal formulation of a linear programming problem corresponds to adding a constraint in its dual formulation.<br />
<br />
== References ==</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Newsvendor_problem&diff=2732Newsvendor problem2020-12-21T11:36:48Z<p>Wc593: </p>
<hr />
<div>Authors: Morgan McCormick (mm3237), Brittany Yesner (by286), Daniel Aronson (da523), John Bednarek (jwb389) (SysEn 5800 Fall 2020)<br />
<br />
== Introduction ==<br />
The mathematical application for the Newsvendor Problem dates back to 1888, when Francis Ysidro Edgeworth used the central limit theorem to find the optimal cash reserves needed to satisfy various withdrawals from depositors.<ref>F. Y. Edgeworth (1888). "The Mathematical Theory of Banking". Journal of the Royal Statistical Society.</ref> The namesake for the problem comes from Morse and Kimball's book from 1951, where they used the term “newsboy” to describe this specific problem.<ref>R. R. Chen; T.C.E. Cheng; T.M. Choi; Y. Wang (2016). "Novel Advances in Applications of the Newsvendor Model". Decision Sciences.</ref> Also referred to as “newsboy problem”, it is named by analogy with the situation faced by a newspaper vendor who must decide how many copies of the day's paper to stock in the face of uncertain demand and knowing that unsold copies will be worthless at the end of the day. <br />
<br />
T.M Whitin in 1955 was the first to consider not only the cost minimization portion of the problem, but also the profit maximization.<ref>Whitin, T. M. “Inventory Control and Price Theory.” Management Science, vol. 2, no. 1, 1955, pp. 61–68.</ref> To do so he formulated a newsvendor model with price effects, where the selling price and stocking quantity are set simultaneously. He then adapted his model to include a probability distribution for demand as a function of the selling price, therefore making the price of the product a decision variable rather than an assigned variable. <br />
<br />
In general, this model can be used in any application with a perishable good and unknown, randomized demand. <br />
<br />
== Description ==<br />
The newsvendor model is a model used to determine optimal inventory levels in operations management and applied economic applications. The assumptions for this problem usually include fixed prices and uncertain demand for perishable products with limited availability. In this model, any unit of demand, ''R'', over the current inventory level, ''x'', is identified as a lost sale.<br />
<br />
== Formulation ==<br />
<br />
=== Overview ===<br />
To formulate a standard newsvendor problem to determine profit, the function is <math display="inline">E[profit] = E[s * min(x, R)] - wx </math>. In the formulation, ''s'' represents the price a unit is sold for, ''x'' represents the number of units in inventory that the vendor ordered, ''R'' is a random variable representing the probability distribution of demand on a given day, and ''w'' is the wholesale cost for the vendor to purchase materials. The goal is to maximize profit. This is achieved by keeping enough inventory on hand to capture sales while minimizing the amount of unsold inventory that is void or considered perishable at the end of the day. The salvage value for any unsold inventory at the end of the sales period is represented by ''v''.<br />
<br />
The balance between being understocked and losing potential sales, and the potential loss from being overstocked, can be represented by the critical fractile. This is illustrated by the formula <math>n=F^{-1} ({s-w \over s})</math> where ''F<sup>-1</sup>'' is the inverse of the cumulative distribution function of ''R''.<ref name=":0">"Newsvendor Model.” Wikipedia, Wikimedia Foundation, 12 Nov. 2020, en.wikipedia.org/wiki/Newsvendor_model.<br />
</ref><sup>,</sup><ref>Yan Qin, Ruoxuan Wang, Asoo J. Vakharia, Yuwen Chen, Michelle M.H. Seref, “The newsvendor problem: Review and directions for future research.” European Journal of Operational Research. Volume 213, Issue 2. 2011. Pages 361-374, ISSN 0377-2217. <nowiki>https://doi.org/10.1016/j.ejor.2010.11.024</nowiki>.</ref><br />
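The critical fractile can be computed directly once a demand distribution is chosen. A minimal sketch, assuming a normally distributed demand and hypothetical prices (the mean, standard deviation, and prices below are made up for illustration):<br />

```python
from statistics import NormalDist

# Critical fractile n = F^{-1}((s - w) / s), with assumed numbers.
s, w = 0.08, 0.05                       # sell price and wholesale cost (hypothetical)
ratio = (s - w) / s                     # critical ratio = 0.375
demand = NormalDist(mu=800, sigma=100)  # assumed demand distribution
n = demand.inv_cdf(ratio)               # optimal stocking quantity
print(round(n))  # below the mean of 800, since the ratio is under 0.5
```

Because the critical ratio is below 0.5 here, the optimal order quantity falls below mean demand; a higher margin would push it above the mean.<br />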
<br />
=== Detailed Solution Steps ===<br />
In formulation, a newsboy could purchase a given number of newspapers x one morning for a given wholesale bulk cost, ''b''. The selling price and salvage values are known constants ''s'' and ''v,'' respectively, and the demand is given by ''D''. The overage cost is c<sup>o</sup> for the cost of ordering one unit too many. The cost of ordering one unit too few is the cost of underage, c<sup>u</sup>. <br />
<br />
The activity variables are ''D(ω)'', the realization of random demand which is assumed to be continuous; ''p(ω)'', the probability of outcome ω; ''S<sup>0</sup>(ω)'', the overage which is equal to <math>[x - D(\omega)]^+</math>; and the underage ''S<sup>u</sup>(ω)'' which is equal to <math>[D(\omega)-x]^+</math>. <br />
<br />
To calculate the '''wholesale cost per newspaper''', ''w,'' the formula <math display="inline">w = b/x</math> is used. <br />
<br />
The '''marginal profit''', or net profit for the newsvendor per unit, ''m'' is found by the formula <math>m = s - w</math>. <br />
<br />
The '''marginal loss''', or loss for each unsold unit, ''l'' is found using the formula <math>l = w - v</math>. <br />
<br />
The '''profit''', ''P'', is calculated by <math>P = m * x</math> if every item in inventory was sold. <br />
<br />
The '''expected profit''', ''E'', taking into account a given demand probability is calculated by <math>E = x * D * m</math> if every item in inventory is sold. <br />
<br />
The objective function can be represented as <math>F(x,\omega)=c^o S^o (x,\omega) + c^u S^u (x,\omega)<br />
= c^o [x-D(\omega)]^+ + c^u [D(\omega)-x]^+ </math> and its expectation as <math>F(x) = E[F(x,\omega)]<br />
= \int (c^o [x-D(\omega)]^+ + c^u [D(\omega) - x]^+ ) p(\omega)d\omega</math><br />
<br />
where the goal is to solve for <math>min_x F(x) = E [F(x,\omega)]</math>.<br />
<br />
== Numerical Example ==<br />
A historically relevant example of the newsvendor problem would be the working conditions that led to the newsboy strike of 1899 and subsequent labor movements. <br />
<br />
In the late nineteenth century and before the Spanish-American War, newsboys in New York City could purchase 100 newspapers for 50 cents and sell the newspapers for 8 cents each. If a paper didn’t sell, assume the publisher would buy the newspaper back at 60% cost.<ref name=":1">“Labor History Lesson: The ‘Newsies’ Strike.” Labor History Lesson: The "Newsies" Strike | AFT Connecticut, 25 May 2016, aftct.org/story/labor-history-lesson-newsies-strike</ref> <br />
<br />
Assume the newspaper sales in New York City followed the following demand schedule: <br />
{| class="wikitable"<br />
|+Table 1: Demand in New York City<br />
!Quantity<br />
!Probability of Demand<br />
|-<br />
|700<br />
|0.450<br />
|-<br />
|800<br />
|0.300<br />
|-<br />
|900<br />
|0.220<br />
|-<br />
|1000<br />
|0.015<br />
|-<br />
|1100<br />
|0.010<br />
|}<br />
The '''wholesale cost''' of the newspapers is $0.50/100 = $0.005 per newspaper.<br />
<br />
The '''selling price''' of the newspapers is $0.08 per newspaper. <br />
<br />
The '''salvage value''' of the newspapers is $0.003 per newspaper.<br />
<br />
The '''marginal profit''' is equal to $0.08 - $0.005 = $0.075 per additional newspaper sold.<br />
<br />
The '''marginal loss''' is equal to $0.005 - $0.003 = $0.002 per unsold newspaper. <br />
<br />
<math>c^o = $0.005 - $0.005(0.6) = $0.002</math> per unit<br />
<br />
<math>c^u = $0.08</math> per unit<br />
<br />
x = purchase quantity, where <math>x \in \{700, 800, 900, 1000, 1100\}</math><br />
<br />
<math>S^o (\omega) = x - \omega, x > \omega</math><br />
<br />
<math>S^u (\omega) = \omega - x, x< \omega</math><br />
<br />
<math>S^o (\omega) = S^u (\omega), x = \omega</math><br />
<br />
<math>F(x,\omega)</math> is the loss function:<br />
<br />
<math>F(x,\omega) = c^o S^o (x, \omega) + c^u S^u (x, \omega)</math><br />
<br />
<math>F(x,\omega) = c^o (x-\omega)^+ + c^u (\omega -x)^+</math><br />
<br />
<math>F(x,\omega) = (0.002)(x- \omega)^+ + (0.08)(\omega-x)^+</math><br />
<br />
<math>R(x,\omega) = 0.08\omega - F(x,\omega)</math><br />
{| class="wikitable"<br />
|+Table 2: Tabulated Values<br />
!Purchase Quantity (x)<br />
!Units Sold (ω)<br />
!Loss (F(x, ω))<br />
!Probability of Demand (p(ω))<br />
!Profit (ω*0.08)<br />
!Revenue (Profit - Loss)<br />
!Probability of Revenue<br />
!Expected Revenue for Purchasing x<br />
|-<br />
| rowspan="5" |700<br />
|700<br />
|0<br />
|0.45<br />
|56<br />
|56<br />
|25.2<br />
| rowspan="5" |55.72<br />
|-<br />
|800<br />
|8<br />
|0.3<br />
|64<br />
|56<br />
|16.8<br />
|-<br />
|900<br />
|16<br />
|0.22<br />
|72<br />
|56<br />
|12.32<br />
|-<br />
|1000<br />
|24<br />
|0.015<br />
|80<br />
|56<br />
|0.84<br />
|-<br />
|1100<br />
|32<br />
|0.01<br />
|88<br />
|56<br />
|0.56<br />
|-<br />
| rowspan="5" |800<br />
|700<br />
|0.2<br />
|0.45<br />
|56<br />
|55.8<br />
|25.11<br />
| rowspan="5" |59.99<br />
|-<br />
|800<br />
|0<br />
|0.3<br />
|64<br />
|64<br />
|19.2<br />
|-<br />
|900<br />
|8<br />
|0.22<br />
|72<br />
|64<br />
|14.08<br />
|-<br />
|1000<br />
|16<br />
|0.015<br />
|80<br />
|64<br />
|0.96<br />
|-<br />
|1100<br />
|24<br />
|0.01<br />
|88<br />
|64<br />
|0.64<br />
|-<br />
| rowspan="5" |900<br />
|700<br />
|0.4<br />
|0.45<br />
|56<br />
|55.6<br />
|25.02<br />
| rowspan="5" |61.80<br />
|-<br />
|800<br />
|0.2<br />
|0.3<br />
|64<br />
|63.8<br />
|19.14<br />
|-<br />
|900<br />
|0<br />
|0.22<br />
|72<br />
|72<br />
|15.84<br />
|-<br />
|1000<br />
|8<br />
|0.015<br />
|80<br />
|72<br />
|1.08<br />
|-<br />
|1100<br />
|16<br />
|0.01<br />
|88<br />
|72<br />
|0.72<br />
|-<br />
| rowspan="5" |1000<br />
|700<br />
|0.6<br />
|0.45<br />
|56<br />
|55.4<br />
|24.93<br />
| rowspan="5" |61.806<br />
|-<br />
|800<br />
|0.4<br />
|0.3<br />
|64<br />
|63.6<br />
|19.08<br />
|-<br />
|900<br />
|0.2<br />
|0.22<br />
|72<br />
|71.8<br />
|15.796<br />
|-<br />
|1000<br />
|0<br />
|0.015<br />
|80<br />
|80<br />
|1.2<br />
|-<br />
|1100<br />
|8<br />
|0.01<br />
|88<br />
|80<br />
|0.8<br />
|-<br />
| rowspan="5" |1100<br />
|700<br />
|0.8<br />
|0.45<br />
|56<br />
|55.2<br />
|24.84<br />
| rowspan="5" |61.689<br />
|-<br />
|800<br />
|0.6<br />
|0.3<br />
|64<br />
|63.4<br />
|19.02<br />
|-<br />
|900<br />
|0.4<br />
|0.22<br />
|72<br />
|71.6<br />
|15.752<br />
|-<br />
|1000<br />
|0.2<br />
|0.015<br />
|80<br />
|79.8<br />
|1.197<br />
|-<br />
|1100<br />
|0<br />
|0.01<br />
|88<br />
|88<br />
|0.88<br />
|}<br />
<br />
<br />
The optimal quantity to purchase is 1000 in order to minimize expected loss and maximize expected revenue.<br />
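The expected revenues in Table 2 can be recomputed directly from the loss function. The sketch below reproduces the calculation from the unit costs above; values are computed from scratch, so they may differ slightly from any rounding in the table.<br />

```python
# Recompute Table 2 from the loss function F(x, w) = co*(x - w)+ + cu*(w - x)+.
co, cu, price = 0.002, 0.08, 0.08
demand = {700: 0.45, 800: 0.30, 900: 0.22, 1000: 0.015, 1100: 0.01}

def expected_revenue(x):
    """Expected revenue for purchase quantity x over the demand schedule."""
    total = 0.0
    for omega, p in demand.items():
        loss = co * max(x - omega, 0) + cu * max(omega - x, 0)
        total += p * (price * omega - loss)
    return total

# The candidate purchase quantities coincide with the demand levels.
best = max(demand, key=expected_revenue)
print(best, round(expected_revenue(best), 3))  # 1000 61.806
```

Running this confirms that ordering 1000 newspapers maximizes expected revenue for the assumed demand schedule.<br />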
<br />
== Demand Distributions ==<br />
The newsvendor problem can be solved in a multitude of ways; the one uncertainty that always exists is the number of papers needed to fully maximize profit. Demand can be estimated in a variety of ways, most commonly with uniform, normal, or lognormal distributions.<br />
<br />
The uniform distribution assumes the probability of each demand level is the same. In the case of the newspaper problem, this would mean that the demand for a newspaper does not vary from day to day. This method can pose issues, as the demand for papers can vary between days like Monday or Tuesday and days like Sunday, which historically has been recognized as a high-demand day for papers.<br />
<br />
The next method that can be used to estimate the demand for a paper is a normal distribution. The mean and standard deviation of a normal distribution shape a demand curve that can be used to calculate the different demands that a salesman may face. The normal distribution allows for variation, enabling the salesman to take calculated risks based on historical norms. These norms provide contextual evidence to account accurately for the demand that the seller may see.<br />
<br />
While a normal distribution can provide estimates of how many papers may need to be printed for the public, it does not take into account the potential profit or loss that the vendor may undertake. The lognormal method shows at what point the salesman's peak profit will occur. The lognormal curve is skewed and will ultimately determine the peak profit and printing point at which the business will succeed. This solution is meant to determine the optimal choice from a profit standpoint.<ref name=":0" /><sup>,</sup><ref name=":1" /><br />
<br />
== Applications ==<br />
Beyond the namesake example of the newsvendor problem, the newsvendor problem model can be applied to a variety of other discrete optimization problems. <br />
<br />
=== Personal Investments ===<br />
The tradeoff between tying funds up in a stock and holding cash reserves follows the model of the newsvendor problem: putting too much of your money in stocks could lead to having to sell stocks below value to free up cash, while holding too much money in cash reserves could leave money underperforming. The newsvendor problem can help investors find an optimal allocation that minimizes risk while allowing enough opportunity to create a large gain. With recent trends of market volatility, evaluating cash positions and market exposure has become ever more important.<ref name=":2">Birge, J. and Louveaux, F. Introduction to Stochastic Programming, Springer, 2011.</ref><br />
<br />
=== Emergency Resources ===<br />
The amount of emergency resources to hold on hand follows the model of the newsvendor problem because holding too many emergency resources could mean throwing out expensive inventory if there is no emergency while not having enough emergency resources could be disastrous in times of peril. Emergencies have the same tendencies of an unknown market. The first responders need to have an optimal amount of supplies to maximize their effectiveness. If items that are perishable are sent in mass quantities, it can bog down the supply lines and lead to important resources becoming expired.<ref name=":2" /> <br />
<br />
=== Manufacturing ===<br />
The amount of units of a good to manufacture follows the model of the newsvendor problem because, while overproduction would always meet demand, production costs increase and storage costs are introduced for the excess inventory. Manufacturers and wholesalers often rely on razor-thin margins. By understanding how to limit excess storage and the money tied up in the materials themselves, a business can find an accurate way of maximizing cash flow. Inventory is often one of the crippling factors of a business. Businesses can often save money on individual units by producing larger quantities, but this ultimately eats away at the strong cash position needed to address the concerns of a changing market.<ref name=":2" /> <br />
<br />
=== Real Estate ===<br />
House pricing in the real estate market follows the model of the newsvendor problem because if a house is priced too high it will take too long to sell, and if it is priced too low it will sell quickly but at a lower price. The housing market is another investment that is exposed to a great deal of volatility and market risk. Markets can change rapidly, driven by economic conditions as well as the crime, schools, and locations around a property. By understanding market norms, one can find an adequate pricing method for a home using the newsvendor problem algorithm. Appraisers and realtors must focus on understanding these metrics to ensure their estimates are accurate.<ref name=":2" /><br />
<br />
== Conclusion ==<br />
The newsboy formulation is used to optimize the amount of profit while minimizing the excess materials that hold no value after a given period of time. This formulation can be adapted for different probabilities and distributions of expected sales. Additionally, nuances such as accounting for a salvage price for unsold perishable goods can also be added to the problem for added complexity to mimic a given situation. From that, the salesperson can determine how many of a perishable product should be purchased for resale at a given time in order to optimize their profits.<br />
<br />
== References ==</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Set_covering_problem&diff=2731Set covering problem2020-12-21T11:36:18Z<p>Wc593: </p>
<hr />
<div>Authors: Sherry Liang, Khalid Alanazi, Kumail Al Hamoud (ChemE 6800 Fall 2020)<br />
<br />
== Introduction ==<br />
<br />
The set covering problem is a significant NP-hard problem in combinatorial optimization. Given a collection of elements, the set covering problem aims to find the minimum number of sets that incorporate (cover) all of these elements. <ref name="one"> T. Grossman and A. Wool, [https://www.sciencedirect.com/science/article/abs/pii/S0377221796001610 "Computational experience with approximation algorithms for the set covering problem]," ''European Journal of Operational Research'', vol. 101, pp. 81-92, 1997. </ref><br />
<br />
The importance of the set covering problem has two main aspects: one is pedagogical, and the other is practical. <br />
<br />
First, because many greedy approximation methods have been proposed for this combinatorial problem, studying it gives insight into the use of approximation algorithms in solving NP-hard problems. Thus, it is a prime example in teaching computational algorithms. We present a preview of these methods in a later section, and we refer the interested reader to these references for a deeper discussion. <ref name="one" /> <ref name="seven"> P. Slavı́k, [https://www.sciencedirect.com/science/article/abs/pii/S0196677497908877 "A Tight Analysis of the Greedy Algorithm for Set Cover]," ''Journal of Algorithms'', vol. 25, pp. 237-245, 1997. </ref> <ref name="nine"> T. Grossman and A. Wool, [https://www.sciencedirect.com/science/article/abs/pii/S0377221796001610 "What Is the Best Greedy-like Heuristic for the Weighted Set Covering Problem?]," ''Operations Research Letters'', vol. 44, pp. 366-369, 2016. </ref><br />
<br />
Second, many problems in different industries can be formulated as set covering problems. For example, scheduling machines to perform certain jobs can be thought of as covering the jobs. Picking the optimal location for a cell tower so that it covers the maximum number of customers is another set covering application. Moreover, this problem has many applications in the airline industry, and it was explored on an industrial scale as early as the 1970s. <ref name="two"> J. Rubin, [https://www.jstor.org/stable/25767684?seq=1 "A Technique for the Solution of Massive Set Covering Problems, with Application to Airline Crew Scheduling]," ''Transportation Science'', vol. 7, pp. 34-48, 1973. </ref><br />
<br />
== Problem formulation ==<br />
In the set covering problem, two sets are given: a set <math> U </math> of elements and a set <math> S </math> of subsets of the set <math> U </math>. Each subset in <math> S </math> is associated with a predetermined cost, and the union of all the subsets covers the set <math> U </math>. This combinatorial problem then concerns finding the optimal number of subsets whose union covers the universal set while minimizing the total cost.<ref name="one"> T. Grossman and A. Wool, [https://www.sciencedirect.com/science/article/abs/pii/S0377221796001610 "Computational experience with approximation algorithms for the set covering problem]," ''European Journal of Operational Research'', vol. 101, pp. 81-92, 1997. </ref> <ref name="twelve"> Williamson, David P., and David B. Shmoys. “The Design of Approximation Algorithms” [https://www.designofapproxalgs.com/book.pdf]. “Cambridge University Press”, 2011. </ref><br />
<br />
The mathematical formulation of the set covering problem is defined as follows. We define <math> U </math> = { <math> u_1,..., u_m </math>} as the universe of elements and <math> S </math> = { <math> s_1,..., s_n </math>} as a collection of subsets such that <math> s_i \subset U </math> and the union of <math> s_i</math> covers all elements in <math> U </math> (i.e. <math>\cup</math><math> s_i</math> = <math> U </math> ). Additionally, each set <math> s_i</math> must cover at least one element of <math> U </math> and has an associated cost <math> c_i</math> such that <math> c_i > 0</math>. The objective is to find the minimum-cost sub-collection of sets <math> X </math> <math>\subset</math> <math> S </math> that covers all the elements in the universe <math> U </math>.<br />
<br />
== Integer linear program formulation ==<br />
An integer linear program (ILP) model can be formulated for the minimum set covering problem as follows:<ref name="one"> T. Grossman and A. Wool, [https://www.sciencedirect.com/science/article/abs/pii/S0377221796001610 "Computational experience with approximation algorithms for the set covering problem]," ''European Journal of Operational Research'', vol. 101, pp. 81-92, 1997. </ref><br />
<br />
'''Decision variables'''<br />
<br />
<math> y_i = \begin{cases} 1, & \text{if subset }i\text{ is selected} \\ 0, & \text{otherwise } \end{cases}</math><br />
<br />
'''Objective function'''<br />
<br />
minimize <math>\sum_{i=1}^n c_i y_i</math> <br />
<br />
'''Constraints '''<br />
<br />
<math> \sum_{i:u_j \in s_i} y_i \geq 1, \forall j= 1,....,m</math> <br />
<br />
<math> y_i \in \{0, 1\}, \forall i = 1,....,n</math> <br />
<br />
The objective function <math>\sum_{i=1}^n c_i y_i</math> is defined to minimize the number of subsets <math> s_i</math> that cover all elements in the universe by minimizing their total cost. The first constraint implies that every element in the universe <math> U </math> must be covered, and the second constraint <math> y_i \in \{0, 1\} </math> indicates that the decision variables are binary, which means that every set is either in the set cover or not.<br />
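On very small instances, the ILP can be solved exactly by exhaustive search, which makes the formulation concrete. The universe, subsets, and costs below are made up for this sketch; the enumeration plays the role of an exact solver.<br />

```python
from itertools import combinations

# Made-up instance: universe U, candidate subsets, and their costs.
U = {1, 2, 3, 4, 5}
subsets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}, {1, 5}]
costs = [5, 2, 2, 3, 3]

# Enumerate every sub-collection and keep the cheapest one covering U,
# i.e. solve the ILP exactly by exhaustive search (only viable for tiny n).
best_cost, best_pick = float("inf"), None
for r in range(1, len(subsets) + 1):
    for pick in combinations(range(len(subsets)), r):
        if set().union(*(subsets[i] for i in pick)) == U:
            cost = sum(costs[i] for i in pick)
            if cost < best_cost:
                best_cost, best_pick = cost, pick

print(best_pick, best_cost)  # (1, 2, 4) 7
```

The exhaustive search inspects all <math>2^n - 1</math> sub-collections, which is exactly the exponential blow-up that motivates the approximation algorithms discussed next.<br />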
<br />
Set covering problems are significant NP-hard optimization problems, which implies that as the size of the problem increases, the computational time to solve it increases exponentially. Therefore, there exist approximation algorithms that can solve large-scale problems in polynomial time with optimal or near-optimal solutions. In subsequent sections, we will cover two of the most widely used approximation methods for solving the set cover problem in polynomial time: linear programming relaxation methods and classical greedy algorithms. <ref name="seven" /><br />
<br />
== Approximation via LP relaxation and rounding ==<br />
Set covering is a classical integer programming problem, and solving integer programs in general is NP-hard. Therefore, one approach to achieve an <math> O</math>(log<math>n</math>) approximation to the set covering problem in polynomial time is solving via linear programming (LP) relaxation algorithms <ref name="one"> T. Grossman and A. Wool, [https://www.sciencedirect.com/science/article/abs/pii/S0377221796001610 "Computational experience with approximation algorithms for the set covering problem]," ''European Journal of Operational Research'', vol. 101, pp. 81-92, 1997. </ref> <ref name="twelve"> Williamson, David P., and David B. Shmoys. “The Design of Approximation Algorithms” [https://www.designofapproxalgs.com/book.pdf]. “Cambridge University Press”, 2011. </ref>. In LP relaxation, we relax the integrality requirement into linear constraints. For instance, if we replace the constraints <math> y_i \in \{0, 1\}</math> with the constraints <math> 0 \leq y_i \leq 1 </math>, we obtain the following LP problem that can be solved in polynomial time:<br />
<br />
minimize <math>\sum_{i=1}^n c_i y_i</math> <br />
<br />
subject to <math> \sum_{i:u_j \in s_i} y_i \geq 1, \forall j= 1,....,m</math> <br />
<br />
<math> 0 \leq y_i\leq 1, \forall i = 1,....,n</math><br />
<br />
The above LP formulation is a relaxation of the original ILP set cover problem. This means that every feasible solution of the integer program is also feasible for the LP program, and its objective value is the same in both, since the objective functions of the integer and linear programs coincide. Solving the LP program results in an optimal solution that is a lower bound for the original integer program, since the LP minimization finds a feasible solution of the lowest possible value. Moreover, we can use LP rounding algorithms to directly round the fractional LP solution to an integral combinatorial solution as follows:<br />
<br><br />
<br />
<br />
'''Deterministic rounding algorithm''' <br />
<br><br />
<br />
Suppose we have an optimal solution <math> z^* </math> for the linear programming relaxation of the set cover problem. We round the fractional solution <math> z^* </math> to an integer solution <math> z </math> using LP rounding algorithm. In general, there are two approaches for rounding algorithms, deterministic and randomized rounding algorithm. In this section, we will explain the deterministic algorithms. In this approach, we include subset <math> s_i </math> in our solution if <math> z^* \geq 1/d </math>, where <math> d </math> is the maximum number of sets in which any element appears. In practice, we set <math> z </math> to be as follows:<ref name="twelve"> Williamson, David P., and David B. Shmoys. “The Design of Approximation Algorithms” [https://www.designofapproxalgs.com/book.pdf]. “Cambridge University Press”, 2011. </ref><br />
<br />
<math> z_i = \begin{cases} 1, & \text{if } z_i^*\geq 1/d \\ 0, & \text{otherwise } \end{cases}</math><br />
<br />
The rounding algorithm is an approximation algorithm for the set cover problem: it runs in polynomial time, and <math> z </math> is a feasible solution to the integer program.<br />
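The relaxation-and-rounding procedure can be sketched in a few lines. The instance below is a made-up toy example (the incidence matrix, costs, and variable names are illustrative, not from this article); the LP relaxation is solved with SciPy, and every set whose fractional value is at least <math>1/d</math> is selected.<br />

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance (hypothetical data): A[j][i] = 1 if set s_i covers element j.
A = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 0],
              [0, 1, 1]])
c = np.ones(A.shape[1])          # unit costs c_i

# LP relaxation: min c^T y  s.t.  A y >= 1,  0 <= y_i <= 1.
res = linprog(c, A_ub=-A, b_ub=-np.ones(A.shape[0]), bounds=[(0, 1)] * A.shape[1])

# d = maximum number of sets in which any element appears (max row sum).
d = int(A.sum(axis=1).max())

# Deterministic rounding: keep every set whose fractional value is >= 1/d.
z = (res.x >= 1.0 / d - 1e-9).astype(int)
```

Feasibility of the rounded solution is guaranteed because each element appears in at most <math>d</math> sets, so at least one of them must receive fractional value <math>\geq 1/d</math>.<br />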
<br />
== Greedy approximation algorithm ==<br />
Greedy algorithms can be used to compute near-optimal solutions of large-scale set covering instances in polynomial time. <ref name="seven" /> <ref name="nine" /> The greedy heuristic applies an iterative process that, at each stage, selects the set covering the largest number of uncovered elements in the universe <math> U </math> and marks those elements as covered, until all elements are covered. <ref name="ten"> V. Chvatal, [https://pubsonline.informs.org/doi/abs/10.1287/moor.4.3.233 "Greedy Heuristic for the Set-Covering Problem]," ''Mathematics of Operations Research'', vol. 4, pp. 233-235, 1979. </ref> Let <math> T </math> be the set that contains the covered elements, and <math> U </math> be the set that contains the elements of <math> Y </math> that are still uncovered. At the beginning, <math> T </math> is empty and all elements <math> Y \in U </math>. We iteratively select the set in <math> S </math> that covers the largest number of elements in <math> U </math> and add its elements to <math> T </math>. An example of this algorithm is presented below. <br />
<br />
'''Greedy algorithm for minimum set cover example: '''<br />
<br />
Step 0: <math> \quad </math> <math> T \leftarrow \emptyset </math> <math> \quad \quad \quad \quad \quad </math> { <math> T </math> stores the covered elements }<br />
<br />
Step 1: <math> \quad </math> '''While''' <math> U \neq \emptyset </math> '''do:''' <math> \quad </math> { <math> U </math> stores the uncovered elements <math> Y </math>}<br />
<br />
Step 2: <math> \quad \quad \quad </math> select <math> s_i \in S </math> that covers the highest number of elements in <math> U </math><br />
<br />
Step 3: <math> \quad \quad \quad </math> add the elements of <math> s_i </math> to <math> T </math><br />
<br />
Step 4: <math> \quad \quad \quad </math> remove the elements of <math> s_i </math> from <math> U </math><br />
<br />
Step 5: <math> \quad </math> '''End while''' <br />
<br />
Step 6: <math> \quad </math> '''Return''' the selected sets <math> s_i </math><br />
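The steps above translate directly into a short program. This is a minimal sketch; `greedy_set_cover`, `universe`, and `subsets` are illustrative names, and the small instance at the bottom is made up for demonstration.<br />

```python
def greedy_set_cover(universe, subsets):
    """Repeatedly pick the subset covering the most uncovered elements."""
    uncovered = set(universe)          # U: elements still uncovered
    chosen = []                        # indices of the selected subsets
    while uncovered:                   # Step 1
        # Step 2: the subset covering the largest number of uncovered elements
        best = max(range(len(subsets)), key=lambda i: len(subsets[i] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("the subsets cannot cover the universe")
        chosen.append(best)
        uncovered -= subsets[best]     # Steps 3-4: mark elements covered
    return chosen                      # Step 6

cover = greedy_set_cover({1, 2, 3, 4, 5},
                         [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}])
print(cover)   # [0, 3]
```

On this toy instance, the first pick is the 3-element subset {1, 2, 3}, and the subset {4, 5} then finishes the cover.<br />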
<br />
==Numerical Example==<br />
Let’s consider a simple example where we assign cameras to different locations. Each location covers some areas of stadiums, and our goal is to install the fewest cameras such that all stadium areas are covered. We have stadium areas numbered 1 to 15, and possible camera locations numbered 1 to 8.<br />
<br />
We are given that camera location 1 covers stadium areas {1,3,4,6,7}, camera location 2 covers stadium areas {4,7,8,12}, while the remaining camera locations and the stadium areas that the cameras can cover are given in table 1 below:<br />
{| class="wikitable"<br />
|+Table 1 Camera Location vs Stadium Area<br />
|-<br />
!Camera Location<br />
|1<br />
|2<br />
|3<br />
|4<br />
|5<br />
|6<br />
|7<br />
|8<br />
|-<br />
!Stadium Area<br />
|1,3,4,6,7<br />
|4,7,8,12<br />
|2,5,9,11,13<br />
|1,2,14,15<br />
|3,6,10,12,14<br />
|8,14,15<br />
|1,2,6,11<br />
|1,2,4,6,8,12<br />
|}<br />
<br />
We can then represent the above information using binary values. If stadium area <math>i</math> can be covered with camera location <math>j</math>, then we have <math>y_{ij} = 1</math>. If not, <math>y_{ij} = 0</math>. For instance, stadium area 1 is covered by camera location 1, so <math>y_{11} = 1</math>, while stadium area 1 is not covered by camera location 2, so <math>y_{12} = 0</math>. The values of the binary variables <math>y_{ij}</math> are given in the table below: <br />
{| class="wikitable"<br />
|+Table 2 Binary Table (All Camera Locations and Stadium Areas)<br />
!<br />
!Camera1<br />
!Camera2<br />
!Camera3<br />
!Camera4<br />
!Camera5<br />
!Camera6<br />
!Camera7<br />
!Camera8<br />
|-<br />
!Stadium1<br />
|1<br />
|<br />
|<br />
|1<br />
|<br />
|<br />
|1<br />
|1<br />
|-<br />
!Stadium2<br />
|<br />
|<br />
|1<br />
|1<br />
|<br />
|<br />
|1<br />
|1<br />
|-<br />
!Stadium3<br />
|1<br />
|<br />
|<br />
|<br />
|1<br />
|<br />
|<br />
|<br />
|-<br />
!Stadium4<br />
|1<br />
|1<br />
|<br />
|<br />
|<br />
|<br />
|<br />
|1<br />
|-<br />
!Stadium5<br />
|<br />
|<br />
|1<br />
|<br />
|<br />
|<br />
|<br />
|<br />
|-<br />
!Stadium6<br />
|1<br />
|<br />
|<br />
|<br />
|1<br />
|<br />
|1<br />
|1<br />
|-<br />
!Stadium7<br />
|1<br />
|1<br />
|<br />
|<br />
|<br />
|<br />
|<br />
|<br />
|-<br />
!Stadium8<br />
|<br />
|1<br />
|<br />
|<br />
|<br />
|1<br />
|<br />
|1<br />
|-<br />
!Stadium9<br />
|<br />
|<br />
|1<br />
|<br />
|<br />
|<br />
|<br />
|<br />
|-<br />
!Stadium10<br />
|<br />
|<br />
|<br />
|<br />
|1<br />
|<br />
|<br />
|<br />
|-<br />
!Stadium11<br />
|<br />
|<br />
|1<br />
|<br />
|<br />
|<br />
|1<br />
|<br />
|-<br />
!Stadium12<br />
|<br />
|1<br />
|<br />
|<br />
|1<br />
|<br />
|<br />
|1<br />
|-<br />
!Stadium13<br />
|<br />
|<br />
|1<br />
|<br />
|<br />
|<br />
|<br />
|<br />
|-<br />
!Stadium14<br />
|<br />
|<br />
|<br />
|1<br />
|1<br />
|1<br />
|<br />
|<br />
|-<br />
!Stadium15<br />
|<br />
|<br />
|<br />
|1<br />
|<br />
|1<br />
|<br />
|<br />
|}<br />
<br />
<br />
<br />
We introduce another binary variable <math>z_j</math> to indicate if a camera is installed at location <math>j</math>. <math>z_j = 1</math> if camera is installed at location <math>j</math>, while <math>z_j = 0</math> if not. <br />
<br />
Our objective is to minimize <math>\sum_{j=1}^8 z_j</math>. For each stadium area <math>i</math>, there is a constraint that it must be covered by at least one camera location. For instance, for stadium area 1, we have <math>z_1 + z_4 + z_7 + z_8 \geq 1</math>, while for stadium area 2, we have <math>z_3 + z_4 + z_7 + z_8 \geq 1</math>. All 15 constraints, corresponding to the 15 stadium areas, are listed below:<br />
<br />
<br />
<br />
minimize <math>\sum_{j=1}^8 z_j</math> <br />
<br />
''s.t. Constraints 1 to 15 are satisfied:''<br />
<br />
<math> z_1 + z_4 + z_7 + z_8 \geq 1 \quad (1)</math><br />
<br />
<math> z_3 + z_4 + z_7 + z_8 \geq 1 \quad (2)</math><br />
<br />
<math> z_1 + z_5 \geq 1 \quad (3)</math><br />
<br />
<math> z_1 + z_2 + z_8 \geq 1 \quad (4)</math><br />
<br />
<math> z_3 \geq 1 \quad (5)</math><br />
<br />
<math>z_1 + z_5 + z_7 + z_8 \geq 1 \quad (6)</math><br />
<br />
<math>z_1 + z_2 \geq 1 \quad (7)</math><br />
<br />
<math>z_2 + z_6 + z_8 \geq 1 \quad (8)</math><br />
<br />
<math>z_3 \geq 1 \quad (9)</math><br />
<br />
<math>z_5 \geq 1 \quad (10)</math><br />
<br />
<math>z_3 + z_7 \geq 1 \quad (11)</math><br />
<br />
<math>z_2 + z_5 + z_8 \geq 1 \quad (12)</math><br />
<br />
<math>z_3 \geq 1 \quad (13)</math><br />
<br />
<math>z_4 + z_5 + z_6 \geq 1 \quad (14)</math><br />
<br />
<math>z_4 + z_6 \geq 1 \quad (15)</math><br />
<br />
<br />
From constraints 5, 9, and 13, we obtain <math>z_3 = 1</math>. Thus constraints 2 and 11 are no longer needed, as they are satisfied whenever <math>z_3 = 1</math>. With <math>z_3 = 1</math> determined, the constraints left are:<br />
<br />
<br />
minimize <math>\sum_{j=1}^8 z_j</math>, <br />
<br />
s.t.:<br />
<br />
<math>z_1 + z_4 + z_7 + z_8 \geq 1 \quad (1)</math><br />
<br />
<math>z_1 + z_5 \geq 1 \quad (3)</math><br />
<br />
<math>z_1 + z_2 + z_8 \geq 1 \quad (4)</math><br />
<br />
<math>z_1 + z_5 + z_7 + z_8 \geq 1 \quad (6)</math><br />
<br />
<math>z_1 + z_2 \geq 1 \quad (7)</math><br />
<br />
<math>z_2 + z_6 + z_8 \geq 1 \quad (8)</math><br />
<br />
<math>z_5 \geq 1 \quad (10)</math><br />
<br />
<math>z_2 + z_5 + z_8 \geq 1 \quad (12)</math><br />
<br />
<math>z_4 + z_5 + z_6 \geq 1 \quad (14)</math><br />
<br />
<math>z_4 + z_6 \geq 1 \quad (15)</math><br />
<br />
<br />
Now consider constraint 10, <math>z_5 \geq 1</math>, which forces <math>z_5 = 1</math>. With <math>z_5 = 1</math>, constraints 3, 6, 12, and 14 are satisfied no matter what values the other <math>z</math> variables take. Comparing constraints 4 and 7, constraint 4 is satisfied whenever constraint 7 is, since all <math>z</math> values are nonnegative, so constraint 4 is no longer needed. The remaining constraints are:<br />
<br />
<br />
minimize <math>\sum_{j=1}^8 z_j</math><br />
<br />
s.t.:<br />
<br />
<math>z_1 + z_4 + z_7 + z_8 \geq 1 \quad (1)</math><br />
<br />
<math>z_1 + z_2 \geq 1 \quad (7)</math><br />
<br />
<math>z_2 + z_6 + z_8 \geq 1 \quad (8)</math><br />
<br />
<math>z_4 + z_6 \geq 1 \quad (15)</math><br />
<br />
<br />
The next step is to focus on constraints 7 and 15. Satisfying each of them with a single variable gives four combinations of <math>z_1, z_2, z_4, z_6</math> values:<br />
<br />
<br />
<math>A: z_1 = 1, z_2 = 0, z_4 = 1, z_6 = 0</math><br />
<br />
<math>B: z_1 = 1, z_2 = 0, z_4 = 0, z_6 = 1</math><br />
<br />
<math>C: z_1 = 0, z_2 = 1, z_4 = 1, z_6 = 0</math><br />
<br />
<math>D: z_1 = 0, z_2 = 1, z_4 = 0, z_6 = 1</math><br />
<br />
<br />
We can then examine each combination and determine the <math>z_7, z_8</math> values needed for constraints 1 and 8 to be satisfied.<br />
<br />
<br />
Combination <math>A</math>: constraint 1 is already satisfied; we need <math>z_8 = 1</math> to satisfy constraint 8.<br />
<br />
Combination <math>B</math>: constraints 1 and 8 are already satisfied.<br />
<br />
Combination <math>C</math>: constraints 1 and 8 are already satisfied.<br />
<br />
Combination <math>D</math>: we need <math>z_7 = 1</math> or <math>z_8 = 1</math> to satisfy constraint 1, while constraint 8 is already satisfied.<br />
<br />
Our final step is to compare the four combinations. Since our objective is to minimize <math>\sum_{j=1}^8 z_j</math> and combinations <math>B</math> and <math>C</math> set the fewest variables <math>z_j</math> to 1, they are the optimal solutions.<br />
<br />
To conclude, our two solutions are:<br />
<br />
Solution 1: <math>z_1 = 1, z_3 = 1, z_5 = 1, z_6 = 1</math><br />
<br />
Solution 2: <math>z_2 = 1, z_3 = 1, z_4 = 1, z_5 = 1</math><br />
<br />
The minimum number of cameras that we need to install is 4.<br />
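The optimum of 4 can be double-checked by brute force: with only 8 candidate locations, there are at most <math>2^8 = 256</math> subsets to test. The sketch below (function and variable names are our own) enumerates camera subsets by increasing size and stops at the first full cover.<br />

```python
from itertools import combinations

# Camera coverage from Table 1: camera j -> stadium areas it covers.
cameras = {1: {1, 3, 4, 6, 7},  2: {4, 7, 8, 12},      3: {2, 5, 9, 11, 13},
           4: {1, 2, 14, 15},   5: {3, 6, 10, 12, 14}, 6: {8, 14, 15},
           7: {1, 2, 6, 11},    8: {1, 2, 4, 6, 8, 12}}
areas = set(range(1, 16))

def min_cover(cameras, areas):
    # Try subsets of increasing size; the first full cover found is optimal.
    for k in range(1, len(cameras) + 1):
        for combo in combinations(cameras, k):
            if set().union(*(cameras[j] for j in combo)) >= areas:
                return combo
    return None

best = min_cover(cameras, areas)
print(best, len(best))   # (1, 3, 5, 6) 4
```

The first minimum cover found is cameras {1, 3, 5, 6}, matching Solution 1 above.<br />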
<br />
<br />
<br />
<br />
'''Let's now consider solving the problem using the greedy algorithm.''' <br />
<br />
We have a set <math>U</math> (stadium areas) that needs to be covered with <math>C</math> (camera locations). <br />
<br />
<br />
<math>U = \{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15\}</math><br />
<br />
<math>C = \{C_1,C_2,C_3,C_4,C_5,C_6,C_7,C_8\}</math><br />
<br />
<math>C_1 = \{1,3,4,6,7\} </math><br />
<br />
<math>C_2 = \{4,7,8,12\}</math><br />
<br />
<math>C_3 = \{2,5,9,11,13\}</math><br />
<br />
<math>C_4 = \{1,2,14,15\}</math><br />
<br />
<math>C_5 = \{3,6,10,12,14\}</math><br />
<br />
<math>C_6 = \{8,14,15\}</math><br />
<br />
<math>C_7 = \{1,2,6,11\}</math><br />
<br />
<math>C_8 = \{1,2,4,6,8,12\} </math><br />
<br />
<br />
The cost of each camera location is the same in this case; since we simply want to minimize the total number of cameras used, we can take the cost of each <math>C</math> to be 1.<br />
<br />
Let <math>I</math> represent the set of elements covered so far. Initialize <math>I</math> to be empty.<br />
<br />
First Iteration: <br />
<br />
The per new element cost for <math>C_1 = 1/5</math>, for <math>C_2 = 1/4</math>, for <math>C_3 = 1/5</math>, for <math>C_4 = 1/4</math>, for <math>C_5 = 1/5</math>, for <math>C_6 = 1/3</math>, for <math>C_7 = 1/4</math>, for <math>C_8 = 1/6</math><br />
<br />
Since <math>C_8</math> has minimum value, <math>C_8</math> is added, and <math>I</math> becomes <math>\{1,2,4,6,8,12\}</math>.<br />
<br />
Second Iteration: <br />
<br />
<math>I</math> = <math>\{1,2,4,6,8,12\}</math><br />
<br />
The per new element cost for <math>C_1 = 1/2</math>, for <math>C_2 = 1/1</math>, for <math>C_3 = 1/4</math>, for <math>C_4 = 1/2</math>, for <math>C_5 = 1/3</math>, for <math>C_6 = 1/2</math>, for <math>C_7 = 1/1</math><br />
<br />
Since <math>C_3</math> has minimum value, <math>C_3</math> is added, and <math>I</math> becomes <math>\{1,2,4,5,6,8,9,11,12,13\}</math>.<br />
<br />
Third Iteration:<br />
<br />
<math>I</math> = <math>\{1,2,4,5,6,8,9,11,12,13\}</math><br />
<br />
The per new element cost for <math>C_1 = 1/2</math>, for <math>C_2 = 1/1</math>, for <math>C_4 = 1/2</math>, for <math>C_5 = 1/3</math>, for <math>C_6 = 1/2</math>, for <math>C_7 = 1/1</math><br />
<br />
Since <math>C_5</math> has minimum value, <math>C_5</math> is added, and <math>I</math> becomes <math>\{1,2,3,4,5,6,8,9,10,11,12,13,14\}</math>.<br />
<br />
Fourth Iteration:<br />
<br />
<math>I</math> = <math>\{1,2,3,4,5,6,8,9,10,11,12,13,14\}</math><br />
<br />
The per new element cost for <math>C_1 = 1/1</math>, for <math>C_2 = 1/1</math>, for <math>C_4 = 1/1</math>, for <math>C_6 = 1/1</math>, while <math>C_7</math> covers no new elements.<br />
<br />
Since <math>C_1</math>, <math>C_2</math>, <math>C_4</math>, and <math>C_6</math> all have the same cost, and the two uncovered elements are <math>7</math> and <math>15</math>, we pick one set covering <math>7</math> (<math>C_1</math> or <math>C_2</math>) and one covering <math>15</math> (for example <math>C_6</math>): either both <math>C_1</math> and <math>C_6</math>, or both <math>C_2</math> and <math>C_6</math>.<br />
<br />
<math>I</math> becomes <math>\{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15\}</math>.<br />
<br />
The solution we obtained is: <br />
<br />
Option 1: <math>C_8</math> + <math>C_3</math> + <math>C_5</math> + <math>C_6</math> + <math>C_1</math><br />
<br />
Option 2: <math>C_8</math> + <math>C_3</math> + <math>C_5</math> + <math>C_6</math> + <math>C_2</math><br />
<br />
The greedy algorithm does not provide the optimal solution in this case.<br />
<br />
The elimination approach above shows that the minimum number of cameras we need to install is 4, while the greedy algorithm installs 5.<br />
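The four greedy iterations above can be reproduced in a few lines (a sketch; ties are broken here by lowest camera index, so the exact sets picked may differ from the two options listed, but the count of 5 is the same).<br />

```python
# Greedy heuristic on the camera instance from Table 1.
cameras = {1: {1, 3, 4, 6, 7},  2: {4, 7, 8, 12},      3: {2, 5, 9, 11, 13},
           4: {1, 2, 14, 15},   5: {3, 6, 10, 12, 14}, 6: {8, 14, 15},
           7: {1, 2, 6, 11},    8: {1, 2, 4, 6, 8, 12}}
uncovered = set(range(1, 16))
picked = []
while uncovered:
    # Pick the camera covering the most still-uncovered areas (lowest index on ties).
    j = max(cameras, key=lambda j: len(cameras[j] & uncovered))
    picked.append(j)
    uncovered -= cameras[j]
print(picked, len(picked))   # [8, 3, 5, 1, 4] 5
```

The first three picks (cameras 8, 3, 5) match the iterations worked out above; the last two cover the remaining elements 7 and 15, for a total of 5 cameras versus the optimum of 4.<br />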
<br />
== Applications==<br />
<br />
The set covering problem spans a wide range of applications, and its usefulness is especially evident in industrial and governmental planning. Variations of the set covering problem that are of practical significance include the following.<br />
;The optimal location problem<br />
<br />
This set covering problem is concerned with maximizing the coverage of some public facilities placed at different locations. <ref name="three"> R. Church and C. ReVelle, [https://link.springer.com/article/10.1007/BF01942293 "The maximal covering location problem]," ''Papers of the Regional Science Association'', vol. 32, pp. 101-118, 1974. </ref> Consider the problem of placing fire stations to serve the towns of some city. <ref name="four"> E. Aktaş, Ö. Özaydın, B. Bozkaya, F. Ülengin, and Ş. Önsel, [https://pubsonline.informs.org/doi/10.1287/inte.1120.0671 "Optimizing Fire Station Locations for the Istanbul Metropolitan Municipality]," ''Interfaces'', vol. 43, pp. 240-255, 2013. </ref> If each fire station can serve its town and all adjacent towns, we can formulate a set covering problem where each subset consists of a set of adjacent towns. The problem is then solved to minimize the required number of fire stations to serve the whole city. <br />
<br />
Let <math> y_i </math> be the decision variable corresponding to choosing to build a fire station at town <math> i </math>. Let <math> S_i </math> be a subset of towns including town <math> i </math> and all its neighbors. The problem is then formulated as follows.<br />
<br />
minimize <math>\sum_{i=1}^n y_i</math> <br />
<br />
such that <math> \sum_{j\in S_i} y_j \geq 1, \forall i</math> <br />
<br />
A real-world case study involving optimizing fire station locations in Istanbul is analyzed in this reference. <ref name="four" /> The Istanbul municipality serves 790 subdistricts, which should all be covered by a fire station. Each subdistrict is considered covered if it has a neighboring district (a district at most 5 minutes away) that has a fire station. For detailed computational analysis, we refer the reader to the mentioned academic paper.<br />
; The optimal route selection problem<br />
<br />
Consider the problem of selecting the optimal bus routes to place pothole detectors. Due to the scarcity of the physical sensors, the problem does not allow for placing a detector on every road. The task of finding the maximum coverage using a limited number of detectors could be formulated as a set covering problem. <ref name="five"> J. Ali and V. Dyo, [https://www.scitepress.org/Link.aspx?doi=10.5220/0006469800830088 "Coverage and Mobile Sensor Placement for Vehicles on Predetermined Routes: A Greedy Heuristic Approach]," ''Proceedings of the 14th International Joint Conference on E-Business and Telecommunications'', pp. 83-88, 2017. </ref> <ref name="eleven"> P.H. Cruz Caminha , R. De Souza Couto , L.H. Maciel Kosmalski Costa , A. Fladenmuller , and M. Dias de Amorim, [https://www.mdpi.com/1424-8220/18/6/1976 "On the Coverage of Bus-Based Mobile Sensing]," ''Sensors'', 2018. </ref> Specifically, we are given a collection of bus routes '''''R''''', where each route is divided into segments. Route <math> i </math> is denoted by <math> R_i </math>, and segment <math> j </math> is denoted by <math> S_j </math>. The segments of two different routes can overlap, and each segment is associated with a length <math> a_j </math>. The goal is then to select the routes that maximize the total covered distance.<br />
<br />
This is quite different from other applications because it results in a maximization formulation, rather than a minimization formulation. Suppose we want to use at most <math> k </math> different routes. We want to find <math> k </math> routes that maximize the length of covered segments. Let <math> x_i </math> be the binary decision variable corresponding to selecting route <math> R_i </math>, and let <math> y_j </math> be the decision variable associated with covering segment <math> S_j </math>. Let us also denote the set of routes that cover segment <math> j </math> by <math> C_j </math>. The problem is then formulated as follows.<br />
<br />
<math><br />
\begin{align}<br />
\text{max} & ~~ \sum_{j} a_jy_j\\<br />
\text{s.t} & ~~ \sum_{i\in C_j} x_i \geq y_j \quad \forall j \\<br />
& ~~ \sum_{i} x_i = k \\ <br />
& ~~ x_i,y_{j} \in \{0,1\} \\<br />
\end{align}<br />
</math><br />
<br />
The work by Ali and Dyo explores a greedy approximation algorithm to solve an optimal selection problem including 713 bus routes in Greater London. <ref name="five" /> Using only 14% of the routes (100 routes), the greedy algorithm returns a solution that covers 25% of the segments in Greater London. For details of the approximation algorithm and the real-world case study, we refer the reader to this reference. <ref name="five" /> For a significantly larger case study involving 5747 buses covering 5060km, we refer the reader to this academic article. <ref name="eleven" /><br />
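A tiny instance of the maximization formulation above can be solved by enumerating the <math>\binom{n}{k}</math> route subsets; all data below are hypothetical.<br />

```python
from itertools import combinations

# Hypothetical instance of the route-selection program: pick k routes
# maximizing the total length a_j of covered segments.
routes = [{0, 1}, {1, 2}, {2, 3}, {0, 3}]   # segments covered by each route
length = [5.0, 2.0, 4.0, 3.0]               # a_j for each segment j
k = 2

def covered_length(selection):
    covered = set().union(*(routes[i] for i in selection))
    return sum(length[j] for j in covered)

best = max(combinations(range(len(routes)), k), key=covered_length)
total = covered_length(best)
print(best, total)   # (0, 2) 14.0
```

Routes 0 and 2 together cover all four segments, so no pair can do better; this exhaustive search is only practical for small instances, which is why the cited works use greedy heuristics.<br />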
;The airline crew scheduling problem<br />
<br />
An important application of large-scale set covering is the airline crew scheduling problem, which pertains to assigning airline staff to work shifts. <ref name="two" /> <ref name="six"> E. Marchiori and A. Steenbeek, [https://link.springer.com/chapter/10.1007/3-540-45561-2_36 "An Evolutionary Algorithm for Large Scale Set Covering Problems with Application to Airline Crew Scheduling]," ''Real-World Applications of Evolutionary Computing. EvoWorkshops 2000. Lecture Notes in Computer Science'', 2000. </ref> Thinking of the collection of flights as a universal set to be covered, we can formulate a set covering problem to search for the optimal assignment of employees to flights. Due to the complexity of airline schedules, this problem is usually divided into two subproblems: crew pairing and crew assignment. We refer the interested reader to this survey, which contains several problem instances with the number of flights ranging from 1013 to 7765 flights, for a detailed analysis of the formulation and algorithms that pertain to this significant application. <ref name="two" /> <ref name="eight"> A. Kasirzadeh, M. Saddoune, and F. Soumis [https://www.sciencedirect.com/science/article/pii/S2192437620300820?via%3Dihub "Airline crew scheduling: models, algorithms, and data sets]," ''EURO Journal on Transportation and Logistics'', vol. 6, pp. 111-137, 2017. </ref><br />
<br />
==Conclusion ==<br />
<br />
The set covering problem, which aims to find the least number of subsets that cover some universal set, is a widely known NP-hard combinatorial problem. Due to its applicability to route planning and airline crew scheduling, several methods have been proposed to solve it. Its straightforward formulation allows for the use of off-the-shelf optimizers to solve it. Moreover, heuristic techniques and greedy algorithms can be used to solve large-scale set covering problems for industrial applications. <br />
<br />
== References ==<br />
<references /></div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Facility_location_problem&diff=2730Facility location problem2020-12-21T11:35:38Z<p>Wc593: </p>
<hr />
<div>Authors: Liz Cantlebary, Lawrence Li (ChemE 6800 Fall 2020)<br />
<br />
== Introduction ==<br />
The Facility Location Problem (FLP) is a classic optimization problem that determines the best location for a factory or warehouse to be placed based on geographical demands, facility costs, and transportation distances. These problems generally aim to maximize the supplier's profit based on the given customer demand and location<sup>(1)</sup>. FLP can be further broken down into capacitated and uncapacitated problems, depending on whether the facilities in question have a maximum capacity or not<sup>(2)</sup>. <br />
<br />
== Theory and Formulation ==<br />
<br />
=== Weber Problem and Single Facility FLPs ===<br />
The Weber Problem is a simple FLP that consists of locating the geometric median between three points with different weights. The geometric median is a point between three given points in space such that the sum of the distances between the median and the other three points is minimized. It is based on the premise of minimizing transportation costs from one point to various destinations, where each destination has a different associated cost per unit distance. <br />
<br />
Given <math>N</math> points <math>(a_1,b_1)...(a_N,b_N)</math> on a plane with associated weights <math>w_1...w_N</math>, the 2-dimensional Weber problem to find the geometric median <math>(x,y)</math> is formulated as<sup>(1)</sup><br />
<br />
<math>\min_{x,y}\ W(x,y) = \sum_{i=1}^N w_i d_i(x,y,a_i,b_i)</math><br />
<br />
where<br />
<br />
<math>d_i(x,y,a_i,b_i)=\sqrt{(x-a_i)^2+(y-b_i)^2}</math><br />
<br />
The above formulation serves as a foundation for many basic single facility FLPs. For example, the minisum problem aims to locate a facility at the point that minimizes the sum of the weighted distances to the given set of existing facilities, while the minimax problem consists of placing the facility at the point that minimizes the maximum weighted distance to the existing facilities<sup>(3)</sup>. Additionally, in contrast to the minimax problem, the maximin facility problem maximizes the minimum weighted distance to the given facilities.<br />
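Weiszfeld's iterative scheme is a standard numerical method for the Weber objective above: each step re-weights the points by the inverse of their current distance to the estimate. The three points and unit weights below are made up for illustration.<br />

```python
import numpy as np

# Weiszfeld iteration for the weighted geometric median (Weber problem).
pts = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])   # (a_i, b_i)
w = np.array([1.0, 1.0, 1.0])                           # weights w_i

x = pts.mean(axis=0)                       # start from the centroid
for _ in range(1000):
    d = np.linalg.norm(pts - x, axis=1)    # distances d_i(x, y, a_i, b_i)
    if np.any(d < 1e-12):                  # median coincides with a data point
        break
    coef = w / d
    x = (coef[:, None] * pts).sum(axis=0) / coef.sum()

W = (w * np.linalg.norm(pts - x, axis=1)).sum()   # objective W(x, y)
```

For this triangle the resulting point is interior (all angles are below 120°), and the objective value is lower than placing the facility at any of the three given points.<br />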
<br />
=== Capacitated and Uncapacitated FLPs ===<br />
FLPs can often be formulated as mixed-integer programs (MIPs), with a fixed set of facility and customer locations. Binary variables are used in these problems to represent whether a certain facility is open or closed and whether that facility can supply a certain customer. Capacitated and uncapacitated FLPs can be solved this way by defining them as integer programs. <br />
<br />
A capacitated facility problem applies constraints to the production and transportation capacity of each facility. As a result, customers may not be supplied by the most immediate facility, since this facility may not be able to satisfy the given customer demand. <br />
<br />
In a problem with <math>N</math> facilities and <math>M</math> customers, the capacitated formulation defines a binary variable <math>x_i</math> and a variable <math>y_{ij}</math> for each facility <math>i</math> and each customer <math>j</math>. If facility <math>i</math> is open, <math>x_i=1</math>; otherwise <math>x_i=0</math>. Open facilities have an associated fixed cost <math>f_i</math> and a maximum capacity <math>k_i</math>. <math>y_{ij}</math> is the fraction of the total demand <math>d_j</math> of customer <math>j</math> that facility <math>i</math> has satisfied and the transportation cost between facility <math>i</math> and customer <math>j</math> is represented as <math>t_{ij}</math>. The capacitated FLP is therefore defined as<sup>(2)</sup><br />
<br />
<math>\min\ \sum_{i=1}^N\sum_{j=1}^Md_jt_{ij}y_{ij}+\sum_{i=1}^Nf_ix_i</math><br />
<br />
<math>s.t.\ \sum_{i=1}^Ny_{ij}=1\ \ \forall\, j\in\{1,...,M\}</math><br />
<br />
<math>\quad \quad \sum_{j=1}^Md_jy_{ij}\leq k_ix_i\ \ \forall\, i\in\{1,...,N\}</math><br />
<br />
<math>\quad \quad y_{ij}\geq0\ \ \forall\, i\in\{1,...,N\},\ \forall\, j\in\{1,...,M\}</math><br />
<br />
<math>\quad \quad x_i\in\{0,1\}\ \ \forall\, i\in\{1,...,N\}</math><br />
<br />
In an uncapacitated facility problem, the amount of product each facility can produce and transport is assumed to be unlimited, and the optimal solution results in customers being supplied by the lowest-cost, and usually the nearest, facility. Using the above formulation, the unlimited capacity means <math>k_i</math> can be assumed to be a sufficiently large constant, while <math>y_{ij}</math> is now a binary variable, because the demand of each customer can be fully met with the nearest facility<sup>(2)</sup>. If facility <math>i</math> supplies customer <math>j</math>, then <math>y_{ij}=1</math>; otherwise <math>y_{ij}=0</math>.<br />
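For small instances, the uncapacitated formulation above can be solved exactly by enumerating the open/closed pattern <math>x</math>, since each customer is then served by its cheapest open facility. All numbers below are hypothetical.<br />

```python
from itertools import product

# Tiny uncapacitated FLP solved by enumerating open/closed patterns x_i.
f = [12.0, 10.0]                 # fixed opening cost f_i
t = [[1.0, 4.0, 5.0],            # t[i][j]: unit transport cost, facility i -> customer j
     [6.0, 2.0, 1.0]]
d = [3.0, 2.0, 4.0]              # customer demand d_j

best = None
for x in product([0, 1], repeat=len(f)):
    if not any(x):
        continue                 # at least one facility must be open
    cost = sum(fi for fi, xi in zip(f, x) if xi)
    # Each customer buys its full demand from the cheapest open facility.
    cost += sum(dj * min(t[i][j] for i in range(len(f)) if x[i])
                for j, dj in enumerate(d))
    if best is None or cost < best[0]:
        best = (cost, x)
print(best)   # (33.0, (1, 1)): opening both facilities beats either alone
```

Enumeration is exponential in the number of facilities, which is why realistic instances are handled by the MIP, approximation, and Lagrangian methods discussed below.<br />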
<br />
=== Approximate and Exact Algorithms ===<br />
A variety of approximate algorithms can be used to solve facility location problems. These algorithms terminate after a given number of steps based on the size of the problem, yielding a feasible solution with an error that does not exceed a constant approximation ratio<sup>(4)</sup>. This ratio <math>r</math> indicates that the approximate solution is no greater than the exact solution by a factor of <math>r</math>. <br />
<br />
While greedy algorithms generally do not perform well on FLPs, the primal-dual greedy algorithm presented by Jain and Vazirani tends to be faster in solving the uncapacitated FLP than LP-rounding algorithms, which solve the LP relaxation of the integer formulation and round the fractional results<sup>(4)</sup>. The Jain-Vazirani algorithm computes the primal and the dual to the LP relaxation simultaneously and guarantees a constant approximation ratio of 1.861<sup>(5)</sup>. This solver has a running time complexity of <math>O(m\log m)</math>, where <math>m</math> corresponds to the number of edges between facilities and cities. Improving upon this primal-dual approach, the modified Jain-Mahdian-Saberi algorithm guarantees a better approximation ratio for the uncapacitated problem<sup>(5)</sup>. <br />
<br />
To solve the capacitated FLP, which often contains more complex constraints, many algorithms utilize a Lagrangian decomposition<sup>(6)</sup>, first introduced by Held and Karp in the traveling salesman problem<sup>(7)</sup>. This approach allows constraints to be relaxed by penalizing this relaxation while solving a simplified problem. The capacitated problem has been effectively solved using this Lagrangian relaxation in conjunction with the volume algorithm, which is a variation of subgradient optimization presented by Barahona and Anbil<sup>(8)</sup>.<br />
<br />
Exact methods have also been presented for solving FLPs. To solve the <math>p</math>-median capacitated facility location problem, Ceselli introduces a branch-and-bound method that solves a Lagrangian relaxation with subgradient optimization, as well as a separate branch-and-price algorithm that utilizes column generation<sup>(9)</sup>. Ceselli's work indicates that branch-and-bound works well when the ratio of <math>p</math> sites to <math>N</math> customers is low, but the performance and run-time worsen significantly as this ratio increases. In comparison, the branch-and-price method demonstrates much more stable performance across various problem sizes and is generally faster overall.<br />
<br />
== Numerical Example ==<br />
Suppose a paper products manufacturer has enough capital to build and manage an additional manufacturing plant in the United States in order to meet increased demand in three cities: New York City, NY, Los Angeles, CA, and Topeka, KS. The company already has distribution facilities in Denver, CO, Seattle, WA, and St. Louis, MO, and due to limited capital, cannot build an additional distribution facility. So, they must choose to build their new plant in one of these three locations. Due to geographic constraints, plants in Denver, Seattle, and St. Louis would have a maximum operating capacity of 400 tons/day, 700 tons/day, and 600 tons/day, respectively. The cost of transporting the products from the plant to the city is directly proportional to the amount shipped, and an outline of the supply, demand, and cost of transportation is shown in the figure below. Regardless of where the plant is built, the selling price of the product is $100/ton. <br />
[[File:Example.png|center|780x780px]]<br />
'''Exact Solution''' <br />
<br />
To solve this problem, we will assign the following variables: <br />
<br />
<math>i</math> is the factory location<br />
<br />
<math>j</math> is the city destination<br />
<br />
<math>C_{ij}</math> is the cost of transporting one ton of product from the factory to the city<br />
<br />
<math>x_{ij}</math> is the amount of product transported from the factory to the city in tons<br />
<br />
<math>A_i</math> is the maximum operating capacity at the factory <br />
<br />
<math>D_j</math> is the amount of unmet demand in the city <br />
<br />
<br />
To determine where the company should build the factory, we will carry out the following optimization problem for each location to maximize the profit from each ton sold:<br />
<br />
max <math>\sum_{j\in J}x_{ij}(100-C_{ij}) </math><br />
<br />
subject to<br />
<br />
<math>\sum_{j\in J}x_{ij} \leq A_i </math> <math>\forall i\in I</math><br />
<br />
<math>\sum_{i\in I}x_{ij} \leq D_j</math> <math>\forall j\in J</math><br />
<br />
<math>x_{ij} \geq 0 </math> <math>\forall i \in I,</math> <math>\forall j \in J</math><br />
<br />
<br />
The problem is solved in GAMS (General Algebraic Modeling System).<br />
<br />
If the factory is built in Denver, 300 tons/day of product go to Los Angeles and 100 tons/day go to Topeka, for a total profit of $36,300/day.<br />
<br />
If the factory is built in Seattle, 300 tons/day of product go to Los Angeles, 100 tons/day of product go to Topeka, and 300 tons/day go to New York City, for a total profit of $56,500/day.<br />
<br />
If the factory is built in St. Louis, 100 tons/day of product go to Topeka and 500 tons/day go to New York City, for a total profit of $55,200/day.<br />
<br />
Therefore, to maximize profit, the factory should be built in Seattle.<br />
<br />
<br />
'''Approximate Solution'''<br />
<br />
<br />
This example can also be solved approximately through the branch and bound method. The tree diagram showing the optimization is shown below. <br />
<br />
[[File:Branch and bound.png|center|frame|Branch and bound approach]]<br />
As shown in the tree diagram, building factories in both Denver and St. Louis would yield the highest profit of $82,200/day. Unfortunately, the company only has enough capital to build one facility. As a result, the only acceptable values are those in which one value is "1" and two are "0". Based on this constraint, it is clear that the company should build the factory in Seattle, as shown in the exact solution above. However, this also yields valuable information if the company hopes to expand again in the near future, because building factories in St. Louis and Denver is more profitable than building factories in Seattle and Denver or in Seattle and St. Louis. Depending on company projections, it may be a better decision to build the first factory in St. Louis and aim to build an additional factory in Denver as soon as possible. <br />
<br />
== Applications ==<br />
[[File:BadranElHaggarFacilityLocation.jpg|thumb|321x321px|Map of optimal collection stations in Port Said, Egypt<sup>(12)</sup>.]]<br />
Facility location problems are utilized in many industries to find the optimal placement of various facilities, including warehouses, power plants, public transportation terminals, polling locations, and cell towers, to maximize efficiency, impact, and profit. In more unique applications, extensive research has been done in applying FLPs to humanitarian efforts, such as identifying disaster management sites to maximize accessibility to healthcare and treatment<sup>(10)</sup>. A case study by researchers in Nigeria explored the application of mixed-integer FLPs in optimizing the locations of waste collection centers to provide sanitation services in crucial communities. More effective waste collection systems could combat unsanitary practices and environmental pollution, which are major concerns in many developing nations<sup>(11)</sup>. For example, Badran and El-Haggar proposed a solid waste management system for Port Said, Egypt, implementing a mixed-integer program to optimally place waste collection stations and minimize cost<sup>(12)</sup>. This program was formulated to select collection stations from a set of locations such that the sum of the fixed cost of opening collection stations, the operating costs of the collection stations, and the transportation costs from the collection stations to the composting plants is minimized. <br />
<br />
FLPs have also been used in clustering analysis, which involves partitioning a given set of elements (e.g. data points) into different groups based on the similarity of the elements. The elements can be placed into groups by identifying the locations of center points that effectively partition the set into clusters, based on the distances from the center points to each element<sup>(13)</sup>. For example, the <math>k</math>-median clustering problem can be formulated as a FLP that selects a set of <math>k</math> cluster centers to minimize the cost between each point and its closest center. The cost in this problem is represented as the Euclidean distance <math>d(i,j)</math> between a point <math>i</math> and a proposed cluster center <math>j</math>. The problem can be formulated as the following integer program, which selects <math>k</math> centers from a set of <math>N</math> points<sup>(13)</sup>. <br />
<br />
<math>\min\ \sum_{i=1}^N\sum_{j=1}^N x_{ij}\,d(i,j)</math> <br />
<br />
<math>s.t.\ \sum_{j=1}^Ny_j\leq k</math> <br />
<br />
<math>\quad \quad \sum_{j=1}^Nx_{ij}=1 \quad \forall i</math> <br />
<br />
<math>\quad \quad x_{ij}\leq y_j \quad \forall i,j</math> <br />
<br />
<math>\quad \quad x_{ij}, y_j\in\{0,1\} \quad \forall i,j</math> <br />
<br />
In this formulation, the binary variables <math>y_j</math> and <math>x_{ij}</math> represent whether <math>j</math> is used as a center point and whether <math>j</math> is the optimal center for <math>i</math>, respectively. The <math>k</math>-median problem is NP-hard and is commonly solved using approximation algorithms. One of the most effective algorithms to date, proposed by Byrka et al., has an approximation factor of 2.611<sup>(13)</sup>. <br />
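A brute-force sketch of this <math>k</math>-median formulation is shown below; the points are a small hypothetical data set (not from the cited study), and every subset of <math>k</math> points is tried as the candidate set of centers:

```python
from itertools import combinations
from math import dist

# Hypothetical data: four points forming two obvious clusters.
points = [(0, 0), (0, 1), (5, 5), (5, 6)]
k = 2

def kmedian_cost(centers):
    # Assign each point to its nearest center (x_ij = 1 for the closest j)
    # and sum the Euclidean distances d(i, j).
    return sum(min(dist(p, c) for c in centers) for p in points)

# Enumerate all subsets of k points as candidate centers (exact for small N).
best_centers = min(combinations(points, k), key=kmedian_cost)
print(kmedian_cost(best_centers))  # total distance: 2.0
```

This exhaustive search is exponential in the number of points, which is why the approximation algorithms mentioned above matter for the NP-hard general case.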
<br />
== Conclusion ==<br />
The facility location problem is an important application of computational optimization. The uses of this optimization technique are far-reaching, and can be used to determine anything from where a family should live based on the location of their workplaces and school to where a Fortune 500 company should put a new manufacturing plant or distribution facility to maximize their return on investment. <br />
<br />
== References ==<br />
<br />
# Drezner, Z; Hamacher. H. W. (2004), ''Facility Location Applications and Theory''. New York, NY: Springer.<br />
# Francis, R. L.; Mirchandani, P. B. (1990), ''Discrete Location Theory''. New York, NY: Wiley.<br />
# Hansen, P., et al. (1985), [https://pubsonline.informs.org/doi/abs/10.1287/opre.33.6.1251 The Minisum and Minimax Location Problems Revisited.] ''Operations Research, 33'', 6, 1251-1265.<br />
# Vygen, J. (2005), ''Approximation Algorithms for Facility Location Problems''. Research Institute for Discrete Mathematics, University of Bonn.<br />
# Jain, K., et al. (2003), [https://dl.acm.org/doi/10.1145/950620.950621 A Greedy Facility Location Algorithm Analyzed Using Dual Fitting with Factor-Revealing LP.] ''Journal of the ACM, 50'', 6, 795-824.<br />
# Alenezy, E. J. (2020), [https://www.hindawi.com/journals/aor/2020/5239176/ Solving Capacitated Facility Location Problem Using Lagrangian Decomposition and Volume Algorithm.] ''Advances in Operations Research,'' ''2020'', 5239176, 2020.<br />
# Held, M.; Karp, R. M. (1970), [https://pubsonline.informs.org/doi/abs/10.1287/opre.18.6.1138 The Traveling-Salesman Problem and Minimum Spanning Trees.] ''Operations Research, 18,'' 6, 1138-1162.<br />
# Barahona, F.; Anbil, R. (2000), [https://link.springer.com/article/10.1007%2Fs101070050002 The Volume Algorithm: Producing Primal solutions with a Subgradient Method.] ''Mathematical Programming, 87,'' 3, 385–399.<br />
# Ceselli, A. (2003), [https://link.springer.com/article/10.1007/s10288-003-0023-5 Two Exact Algorithms for the Capacitated p-Median Problem.] ''Quarterly Journal of the Belgian, French and Italian Operations Research Societies, 4'', 1, 319-340.<br />
# Daskin, M. S.; Dean, L. K. (2004), [https://link.springer.com/chapter/10.1007/1-4020-8066-2_3 Location of Health Care Facilities.] ''Handbook of OR/MS in Health Care: A Handbook of Methods and Applications'', 43-76.<br />
# Adeleke, O. J.; Olukanni, D. O. (2020), [https://www.mdpi.com/2313-4321/5/2/10 Facility Location Problems: Models, Techniques, and Applications in Waste Management.] ''Recycling, 5'', 10.<br />
# Badran, M.F.; El-Haggar, S.M. (2006), [https://www.sciencedirect.com/science/article/abs/pii/S0956053X05001534 Optimization of Municipal Solid Waste Management in Port Said – Egypt.] ''Waste Management, 26'', 5, 534-545.<br />
# Meira, L. A. A., et al. (2017), [https://www.sciencedirect.com/science/article/abs/pii/S030439751630514X Clustering through Continuous Facility Location Problems.] ''Theoretical Computer Science, 657'', 137-145.<br />
# Balcik, B.; Beamon, B. M. (2008), [https://www.tandfonline.com/doi/full/10.1080/13675560701561789 Facility Location in Humanitarian Relief.] ''International Journal of Logistics Research and Applications, 11'', 101-121.<br />
# Eiselt, H. A.; Marianov, V. (2019), ''Contributions to Location Analysis''. Cham, Switzerland: Springer.</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Eight_step_procedures&diff=2729Eight step procedures2020-12-21T11:35:03Z<p>Wc593: </p>
<hr />
<div>Author: Eljona Pushaj, Diana Bogdanowich, Stephanie Keomany (SysEn 5800 Fall 2020)<br />
<br />
=Introduction=<br />
The eight-step procedure is a simplified, multi-stage approach for determining optimal solutions in mathematical optimization. Dynamic programming, developed by Richard Bellman in the 1950s<ref>Bellman, Richard. “The Theory of Dynamic Programming.” Bulletin of American Mathematical Society, vol. 60, 1954, pp 503–515, https://www.ams.org/journals/bull/1954-60-06/S0002-9904-1954-09848-8/S0002-9904-1954-09848-8.pdf. 18 Nov 2020.</ref>, is used to maximize or minimize an objective function by breaking the problem into smaller stages, enumerating the possible solutions at each stage, and combining them to find the optimal solution.<br />
<br />
In the eight-step procedure, a problem is broken down into subproblems to solve. By combining the solutions of the subproblems recursively, the overall optimal solution can be determined, which demonstrates the principle of optimality: any optimal policy has the property that, whatever the current state and current decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the current decision.<ref>Bradley, Stephen P. Applied Mathematical Programming. Addison-Wesley. 1 February 1977. 320-342. 18 Nov 2020</ref> Such a standard framework allows dynamic programming to store the values of the subproblems to avoid recomputing them, and thus reduce the time needed to solve the problem.<ref>Gavin-Hughes, Sam. “Dynamic Programming for Interviews.” Byte by Byte. https://www.byte-by-byte.com/dpbook/. 18 Nov 2020</ref><br />
<br />
=Theory, Methodology, and/or Algorithmic Discussion=<br />
<br />
===Methodology===<br />
To solve a problem using the eight-step procedure, one must use the following steps:<br /><br />
<br /><br />
<br />
'''Step 1: Specify the stages of the problem''' <br /><br />
The stages of a dynamic programming problem can be defined as points where decisions are made. Specifying the stages also divides the problem into smaller pieces.<br /><br />
<br /><br />
<br />
'''Step 2: Specify the states for each stage''' <br /><br />
The states of a problem are defined as the knowledge necessary to make a decision. There are multiple states for each stage. In general, the states consist of the information that is needed to solve the smaller problem within each stage.<ref>Chinneck. (2015). Chapter 15 Dynamic Programming. Carleton.Ca. https://www.sce.carleton.ca/faculty/chinneck/po/Chapter15.pdf</ref><br /><br />
<br /><br />
<br />
'''Step 3: Specify the allowable actions for each state in each stage''' <br /><br />
This defines the decision that must be made at each stage. <br /><br />
<br /><br />
<br />
'''Step 4: Describe the optimization function using an English-language description.''' <br /><br />
<br /><br />
<br />
'''Step 5: Define the boundary conditions''' <br /><br />
This can help create a starting point to finding a solution to the problem. <br /><br />
<br /><br />
<br />
'''Step 6: Define the recurrence relation''' <br /><br />
This is often denoted with a function, and shows the relationship between the value of a decision at a particular stage and the values of the optimal decisions made at previous stages. <br /><br />
<br /><br />
<br />
'''Step 7: Compute the optimal value from the bottom-up''' <br /><br />
This step can be done manually or by using programming. Note that for each state, an optimal decision made at the remaining stages of the problem is independent from the decisions of the previous states. <br /><br />
<br /><br />
<br />
'''Step 8: Arrive at the optimal solution''' <br /><br />
This is the final step for solving a problem using the eight step procedure. <br /><br />
<br />
=Numerical Example=<br />
''Suppose we have a knapsack with a weight capacity of C=5 and N=2 types of items. An item of type n weighs w[n], and packing j items of type n into the knapsack generates a benefit of b[n,j]; however, only a[n] units of each type are available.'' <br />
<br />
To solve a Knapsack problem we use the following steps: <br />
<br />
<br />
'''Step 1: Specify the stages of the problem''' <br />
<br />
The item types are the stages: ''n'' = 1, 2<br />
<br />
<br />
'''Step 2: Specify the states for each stage''' <br />
<br />
The states are the remaining knapsack capacity: ''s'' = 0, 1, 2, 3, 4, 5<br />
<br />
<br />
'''Step 3: Specify the allowable actions for each state in each stage'''<br />
<br />
The allowable actions are the number of items of type ''n'' to pack; for example, at stage 2 with remaining capacity 5:<br />
<br />
<math> <br />
U_{2}(5)\, =\, 0,1,...,min\left \{ a[2], \left \lfloor \frac{5}{w[2]}\right \rfloor \right \}<br />
</math>= '''{0,1,2}'''<br />
<br />
'''Step 4: Describe the optimization function using an English-language description.'''<br />
<br />
Let <math>f^{*}_{n}(s)</math> be the maximum benefit obtainable from item types ''n'' through ''N'' when the remaining capacity is ''s''. <br />
<br />
'''Step 5: Define the boundary conditions''' <br />
<br />
Boundary Conditions: <br />
<br />
<math>f^{*}_{3}(s) = 0</math>, ''s=0,1,2,3,4,5'' (''N=2'', ''C=5'')<br />
<br />
<br />
'''Step 6: Define the recurrence relation''' <br />
<br />
<math> f^{*}_{2}(5)= \max_{j \in U_{2}(5)}\left \{ b[2,j]+ f^{*}_{3}(5-j \cdot w[2]) \right \} </math> <br />
<br />
'''Step 7: Compute the optimal value from the bottom-up''' <br />
<br />
<math> f^{*}_{2}(5)= \max_{j \in U_{2}(5)}\left \{ b[2,j]+ f^{*}_{3}(5-j \cdot w[2]) \right \} </math> <br />
<br />
<math>f^{*}_{3}(s) = 0</math>, ''s=0,1,2,3,4,5'' (''C=5'') <br />
{| class="wikitable"<br />
|+<br />
!Unused Capacity s<br />
!<math>f^{*}_{1}(s)</math><br />
!Type 1 opt <math>U^{*}_{1}(s)</math><br />
!<math>f^{*}_{2}(s)</math><br />
!Type 2 opt <math>U^{*}_{2}(s)</math><br />
!<math>f^{*}_{3}(s)</math><br />
|-<br />
|5<br />
|9<br />
|0<br />
|9<br />
|2<br />
|0<br />
|-<br />
|4<br />
|9<br />
|0<br />
|9<br />
|2<br />
|0<br />
|-<br />
|3<br />
|4<br />
|0<br />
|4<br />
|1<br />
|0<br />
|-<br />
|2<br />
|4<br />
|0<br />
|4<br />
|1<br />
|0<br />
|-<br />
|1<br />
|0<br />
|0<br />
|0<br />
|0<br />
|0<br />
|-<br />
|0<br />
|0<br />
|0<br />
|0<br />
|0<br />
|0<br />
|}<br />
<br />
<br />
'''Step 8: Arrive at the optimal solution''' <br />
<br />
Reading the table from the bottom up, the optimal solution is to pack 0 items of type 1 and 2 items of type 2, giving a maximum total benefit of <math>f^{*}_{1}(5)=9</math>. <br />
<br />
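The bottom-up recursion of Step 7 can also be sketched in code. The item weights, availabilities, and benefit tables below are hypothetical values chosen to be consistent with the table's optimum <math>f^{*}_{1}(5)=9</math>:

```python
# Bounded-knapsack dynamic program implementing
# f*_n(s) = max_j { b[n][j] + f*_{n+1}(s - j*w[n]) }, with f*_{N+1}(s) = 0.
N, C = 2, 5
w = {1: 3, 2: 2}                    # weight per item of each type (assumed)
a = {1: 1, 2: 2}                    # available units of each type (assumed)
b = {1: {0: 0, 1: 5},               # b[n][j]: benefit of packing j items of type n
     2: {0: 0, 1: 4, 2: 9}}         # (assumed values)

def f(n, s):
    if n > N:                       # boundary condition: f*_{N+1}(s) = 0
        return 0
    j_max = min(a[n], s // w[n])    # allowable actions U_n(s)
    return max(b[n][j] + f(n + 1, s - j * w[n]) for j in range(j_max + 1))

print(f(1, C))  # 9, matching f*_1(5) in the table
```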
=Applications=<br />
The following are some applications where dynamic programming is used. The criteria for applying dynamic programming to an optimization problem are that the objective function involves maximization, minimization, or counting, and that the optimal solution can be found by systematically enumerating the solutions of subproblems.<br />
<br />
'''Shortest/ Longest Path Problem'''<br />
<br />
In the shortest path problem, the path with the least cost must be determined in a problem with multiple nodes between the beginning node ''s'' and the final node ''e''. Travelling from one node to another incurs a value or cost ''c(p, q)'', and the objective is to reach ''e'' with the smallest cost possible. The eight-step procedure can be used to determine the possible solutions from which the optimal solution can be determined.<ref>Neumann K. “Dynamic Programming Basic Concepts and Applications.” Optimization in Planning and Operations of Electric Power Systems. Physica, Heidelberg, 1993, p 31-56.</ref><br />
<br />
Likewise, as a maximization problem, the longest path can be determined by finding the solution with the highest cost incurred to travel from node ''s'' to node ''e''.<br />
<br />
'''Knapsack Problem'''<br />
<br />
The knapsack problem is an example of determining the distribution of effort when there are limited resources to be shared with competing entities, and the goal is to maximize the benefit of the distribution. Dynamic programming is used when the increase in benefit is not linearly proportional to the increase in the quantity of resources. The volume may also be considered in addition to the weight of the resources. A volume constraint is added to the problem and represented in the state at stage ''n'' by an ordered pair (''s, v'') for remaining weight and volume. By considering ''d'' constraints, the number of states can grow exponentially with a ''d''-dimensional state space even if the value of ''d'' is small. The problem becomes infeasible to solve and is referred to as the curse of dimensionality. However, the curse has faded due to advances in computational power.<ref>Taylor, C. Robert. Applications Of Dynamic Programming To Agricultural Decision Problems. United States, CRC Press, 2019.</ref><br />
<br />
'''Inventory Planning Problem'''<br />
<br />
In inventory management, dynamic programming is used to determine how to meet anticipated and unexpected demand in order to minimize overall costs. Tracking an inventory system involves establishing a set of policies that monitor and control the levels of inventory, determining when a stock must be replenished, and the quantity of parts to order. For example, a production schedule can be computationally solved by knowing the demand, unit production costs, and inventory supply limits in order to keep the production costs below a certain rate.<ref>Bellman, Richard. “Dynamic Programming Approach to Optimal Inventory Processes with Delay in Delivery.” Quarterly of Applied Mathematics, vol 18, 1961, p. 399-403, https://www.ams.org/journals/qam/1961-18-04/S0033-569X-1961-0118516-2/S0033-569X-1961-0118516-2.pdf. 19 Nov 2020</ref><br />
<br />
'''Needleman-Wunsch Algorithm (Global Sequence Alignment)'''<br />
<br />
Developed by Saul B. Needleman and Christian D. Wunsch in 1970, the Needleman-Wunsch algorithm, also known as global sequence alignment, is used to find similarities within protein or nucleotide sequences. This algorithm is an application of dynamic programming used to divide a large problem such as a large sequence into smaller subproblems, and the solutions of the subproblems are used to find the optimal sequences with the highest scores. A matrix is constructed consisting of strings of the protein or nucleotide sequences. A scoring system is determined for each of the nucleotide pairs (adenine, guanine, cytosine, thymine) where there could exist a match (+1), mismatch (-1), or gap (-1). The sum of the scores determines the score of the entire alignment. Then the scores are calculated for the pairs and filled out in the matrix. To find the optimal alignment, one would perform a "traceback" by starting at the upper left of the matrix and moving to the bottom right. The algorithm is limited in that it can align only entire proteins.<ref>Needleman, S. B. and Wunsch, C. D. "A General Method Applicable to the Search for Similarities in the Amino Acid Sequence of Two Proteins." J. Mol. Biol. 48, 1970, p. 443-453.</ref><br />
<br />
=Conclusion=<br />
The eight-step procedure is an approach used in dynamic programming to transform a problem into simpler problems to yield an optimal solution. The recursive nature of the procedure allows for the optimization problems to be solved using computational models that reduce time and effort and can be used in many applications across many industries.<br />
<br />
=References=<br />
<references /></div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Markov_decision_process&diff=2728Markov decision process2020-12-21T11:34:23Z<p>Wc593: </p>
<hr />
<div>Author: Eric Berg (eb645) (SysEn 5800 Fall 2020)<br />
<br />
= Introduction =<br />
A Markov Decision Process (MDP) is a stochastic sequential decision making method.<math>^1</math> Sequential decision making is applicable any time there is a dynamic system that is controlled by a decision maker where decisions are made sequentially over time. MDPs can be used to determine what action the decision maker should take given the current state of the system and its environment. This decision making process takes into account information from the environment, actions performed by the agent, and rewards in order to decide the optimal next action. MDPs can be characterized as finite or infinite and as continuous or discrete depending on the set of actions and states available and the decision making frequency.<math>^1</math> This article will focus on discrete MDPs with finite states and finite actions for the sake of simplified calculations and numerical examples. The name Markov refers to the Russian mathematician Andrey Markov, since the MDP is based on the Markov Property. In the past, MDPs have been used to solve problems like inventory control, queuing optimization, and routing problems.<math>^2</math> Today, MDPs are often used as a method for decision making in reinforcement learning applications, serving as the framework guiding the machine to make decisions and "learn" how to behave in order to achieve its goal.<br />
<br />
= Theory and Methodology =<br />
A MDP makes decisions using information about the system's current state, the actions being performed by the agent and the rewards earned based on states and actions.<br />
<br />
The MDP is made up of multiple fundamental elements: the agent, states, a model, actions, rewards, and a policy.<math>^1</math> The agent is the object or system being controlled that has to make decisions and perform actions. The agent lives in an environment that can be described using states, which contain information about the agent and the environment. The model determines the rules of the world in which the agent lives, in other words, how certain states and actions lead to other states. The agent can perform a fixed set of actions in any given state. The agent receives rewards based on its current state. A policy is a function that determines the agent's next action based on its current state. [[File:Reinforcement Learning.png|thumb|Reinforcement Learning framework used in Markov Decision Processes]]'''MDP Framework:'''<br />
<br />
*<math>S</math> : States (<math>s \epsilon S</math>)<br />
*<math>A</math> : Actions (<math>a \epsilon A</math>)<br />
*<math>P(S_{t+1} | s_t, a_t)</math> : Model determining transition probabilities<br />
*<math>R(s)</math>: Reward<br /><br />
In order to understand how the MDP works, first the Markov Property must be defined. The Markov Property states that the future is independent of the past given the present.<math>^4</math> In other words, only the present is needed to determine the future, since the present contains all necessary information from the past. The Markov Property can be described in mathematical terms below:<br />
<br />
<math display="inline">P[S_{t+1} | S_t] = P[S_{t+1} | S_1, S_2, S_3... S_t]</math><br />
<br />
The above notation conveys that the probability of the next state given the current state is equal to the probability of the next state given all previous states. The Markov Property is relevant to the MDP because only the current state is used to determine the next action; the previous states and actions are not needed. <br />
<br />
'''The Policy and Value Function'''<br />
<br />
The policy, <math>\Pi</math> , is a function that maps states to actions. The policy determines the optimal action to take in the current state in order to achieve the maximum total reward. <br />
<br />
<math>\Pi : S \rightarrow A </math><br />
<br />
Before the best policy can be determined, a goal or return must be defined to quantify rewards at every state. There are various ways to define the return. Each variation of the return function tries to maximize rewards in some way, but differs in which accumulation of rewards should be maximized. The first method is to choose the action that maximizes the expected reward given the current state. This is the myopic method, which weighs each time-step decision equally.<math>^2</math> Next is the finite-horizon method, which tries to maximize the accumulated reward over a fixed number of time steps.<math>^2</math> But because many applications may have infinite horizons, meaning the agent will always have to make decisions and continuously try to maximize its reward, another method is commonly used, known as the infinite-horizon method. In the infinite-horizon method, the goal is to maximize the expected sum of rewards over all steps in the future. <math>^2</math> When performing an infinite sum of rewards that are all weighed equally, the results may not converge and the policy algorithm may get stuck in a loop. In order to avoid this, and to be able to prioritize short-term or long-term rewards, a discount factor, <math>\gamma 
</math>, is added. <math>^3</math> If <math>\gamma <br />
</math> is closer to 0, the policy will choose actions that prioritize more immediate rewards; if <math>\gamma 
</math> is closer to 1, long-term rewards are prioritized.<br />
<br />
Return/Goal Variations:<br />
<br />
* Myopic: Maximize <math>E[ r_t | \Pi , s_t ] <br />
</math> , maximize expected reward for each state<br />
* Finite-horizon: Maximize <math>E[ \textstyle \sum_{t=0}^k \displaystyle r_t | \Pi , s_t ] <br />
</math> , maximize sum of expected reward over finite horizon<br />
* Discounted Infinite-horizon: Maximize <math>E[ \textstyle \sum_{t=0}^\infty \displaystyle \gamma^t r_t | \Pi , s_t ] <br />
</math> <math>\gamma \epsilon [0,1] <br />
</math>, maximize sum of discounted expected reward over infinite horizon<br />
The value function, <math>V(s) <br />
</math>, characterizes the return at a given state. Most commonly, the discounted infinite horizon return method is used to determine the best policy. Below the value function is defined as the expected sum of discounted future rewards.<br />
<br />
<math>V(s) = E[ \sum_{t=0}^\infty \gamma^t r_t | s_t ] <br />
</math><br />
<br />
The value function can be decomposed into two parts: the immediate reward of the current state, and the discounted value of the next state. This decomposition leads to the derivation of the [[Bellman equation|Bellman Equation]], as shown in equations (1) and (2). Because the actions and rewards are dependent on the policy, the value function of an MDP is associated with a given policy.<br />
<br />
<math>V(s) = E[ r_{t+1} + \gamma V(s_{t+1}) | s_t] <br />
</math> , <math>s_{t+1} = s' <br />
</math><br />
<br />
<math>V(s) = R(s) + \gamma \sum_{s' \epsilon S}P_{ss'}V(s') <br />
</math><br />
<br />
<math>V^{\Pi}(s) = R(s,\Pi(s)) + \gamma \sum_{s' \epsilon S}P(s' | s,\Pi(s))V(s') <br />
</math> (1)<br />
<br />
<math>V^{*}(s) = max_a [R(s, a) + \gamma \sum_{s' \epsilon S}P(s' | s, a)V^*(s')] <br />
</math> (2)<br />
<br />
The optimal value function can be solved using iterative methods such as dynamic programming, Monte Carlo evaluation, or temporal-difference learning.<math>^5</math> <br />
<br />
The optimal policy is one that chooses the action with the largest optimal value given the current state:<br />
<br />
<math>\Pi^*(s) = argmax_a [R(s,a) + \gamma \sum_{s' \epsilon S}P_{ss'}^aV(s')] <br />
</math> (3)<br />
<br />
The policy is a function of the current state, meaning at each time step a new policy is calculated considering the present information. The optimal policy function can be solved using methods such as value iteration, policy iteration, Q-learning, or linear programming. <math>^{5,6}</math><br />
<br />
'''Algorithms'''<br />
<br />
The first method for solving the optimality equation (2) is using value iteration, also known as successive approximation, backwards induction, or dynamic programming. <math>^{1,6}</math><br />
<br />
Value Iteration Algorithm:<br />
<br />
# Initialization: Set <math>V^{*}_0(s) = 0 <br />
</math> for all <math>s \epsilon S</math> , choose <math>\varepsilon >0 <br />
</math>, n=1<br />
# Value Update: For each <math>s \epsilon S</math>, compute: <math>V^{*}_{n+1}(s) = max_a [R(s, a) + \gamma \sum_{s' \epsilon S}P(s' | s, a)V^*_n(s')] <br />
</math><br />
# If <math>| V_{n+1} - V_n | < \varepsilon <br />
</math>, the algorithm has converged and the optimal value function, <math>V^* <br />
</math>, has been determined, otherwise return to step 2 and increment n by 1.<br />
The value function approximation becomes more accurate at each iteration because more future states are considered. The value iteration algorithm can be slow to converge in certain situations, so an alternative algorithm can be used which converges more quickly.<br />
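The value iteration algorithm above can be sketched for a small hypothetical two-state MDP (the states, actions, rewards, and <math>\gamma</math> below are illustrative, not taken from this article's example):

```python
# Value iteration: V_{n+1}(s) = max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V_n(s') ].
# Transitions here are deterministic, so the inner sum reduces to a single term.
gamma, eps = 0.9, 1e-10
actions = {                        # actions[s] -> list of (reward, next_state)
    "s0": [(0, "s0"), (1, "s1")],  # stay for reward 0, or move to s1 for 1
    "s1": [(2, "s1")],             # stay in s1 and collect reward 2 forever
}

V = {s: 0.0 for s in actions}
while True:
    V_new = {s: max(r + gamma * V[s2] for r, s2 in actions[s]) for s in actions}
    if max(abs(V_new[s] - V[s]) for s in actions) < eps:   # convergence check
        break
    V = V_new

print(V_new)  # V(s1) -> 2/(1 - 0.9) = 20, V(s0) -> 1 + 0.9*20 = 19
```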
<br />
Policy Iteration Algorithm:<br />
<br />
# Initialization: Set an arbitrary policy <math>\Pi(s) <br />
</math> and <math>V(s) <br />
</math> for all <math>s \epsilon S</math>, choose <math>\varepsilon >0 <br />
</math>, n=1<br />
# Policy Evaluation: For each <math>s \epsilon S</math>, compute: <math>V^{\Pi}_{n+1}(s) = R(s,\Pi(s)) + \gamma \sum_{s' \epsilon S}P(s' | s,\Pi(s))V^{\Pi}_n(s') <br />
</math><br />
# If <math>| V_{n+1} - V_n | < \varepsilon <br />
</math>, the optimal value function, <math>V^* <br />
</math> has been determined, continue to next step, otherwise return to step 2 and increment n by 1.<br />
# Policy Update: For each <math>s \epsilon S</math>, compute: <math>\Pi_{n+1}(s) = argmax_a [R(s,\Pi_n(s)) + \gamma \sum_{s' \epsilon S}P(s' | s,\Pi_n(s))V^{\Pi}_n(s')] <br />
</math><br />
# If <math>\Pi_{n+1} = \Pi_n <br />
</math> ,the algorithm has converged and the optimal policy, <math>\Pi^* <br />
</math> has been determined, otherwise return to step 2 and increment n by 1.<br />
<br />
With each iteration the optimal policy is improved using the previous policy and value function until the algorithm converges and the optimal policy is found.<br />
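The policy iteration algorithm can be sketched the same way on a small hypothetical MDP (all names and numbers are illustrative); policy evaluation and the greedy policy update alternate until the policy stops changing:

```python
gamma, eps = 0.9, 1e-10
actions = {                                    # actions[s][a] -> (reward, next_state)
    "s0": {"stay": (0, "s0"), "go": (1, "s1")},
    "s1": {"stay": (2, "s1")},
}

def evaluate(policy):
    # Iterative policy evaluation: V(s) = R(s, pi(s)) + gamma * V(s').
    V = {s: 0.0 for s in actions}
    while True:
        V_new = {s: actions[s][policy[s]][0] + gamma * V[actions[s][policy[s]][1]]
                 for s in actions}
        if max(abs(V_new[s] - V[s]) for s in actions) < eps:
            return V_new
        V = V_new

policy = {"s0": "stay", "s1": "stay"}          # arbitrary initial policy
while True:
    V = evaluate(policy)
    # Policy update: act greedily with respect to the evaluated V.
    new_policy = {s: max(actions[s],
                         key=lambda a: actions[s][a][0] + gamma * V[actions[s][a][1]])
                  for s in actions}
    if new_policy == policy:                   # converged: optimal policy found
        break
    policy = new_policy

print(policy)  # {'s0': 'go', 's1': 'stay'}
```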
<br />
= Numerical Example =<br />
[[File:Markov Decision Process Example 2.png|alt=|thumb|499x499px|A Markov Decision Process describing a college student's hypothetical situation.]]<br />
As an example, the MDP can be applied to a college student, depicted to the right. In this case, the agent is the student. The states are the circles and squares in the diagram, and the arrows are the actions. The action between Work and School, for example, is to leave work and go to school. When the student is in the School state, the allowable actions are to go to the bar, enjoy their hobby, or sleep. The probabilities assigned to each transition in this example are all 1. The rewards associated with each state are written in red.<br />
<br />
Assume <math>P(s'|s) = 1.0<br />
<br />
</math> , <math>\gamma <br />
</math> =1.<br />
<br />
First, the optimal value functions must be calculated for each state.<br />
<br />
<math>V^{*}(s) = max_a [R(s, a) + \gamma \sum_{s' \epsilon S}P(s' | s, a)V^*(s')] <br />
</math><br />
<br />
<math>V^{*}(Hobby) = max_a [3 + (1)(1.0*0)] = 3 <br />
</math><br />
<br />
<math>V^{*}(Bar) = max_a [2 + 1(1.0*0)] = 2 <br />
</math> <br />
<br />
<math>V^*(Sleep) = max_a[0 + 1(1.0*0)] = 0 <br />
</math><br />
<br />
<math>V^*(School) = max_a[ -2 + 1(1.0*2) , -2 + 1(1.0*0) , -2 + 1(1.0*3)] = 1 <br />
</math><br />
<br />
<math>V^*(YouTube) = max_a[-1 + 1(1.0*-1) , -1 +1(1.0*1)]= 0 <br />
</math><br />
<br />
<math>V^*(Work) = max_a[1 + 1(1.0*0) , 1 + 1(1.0*1)] = 2 <br />
</math><br />
<br />
Then, the optimal policy at each state will choose the action that generates the highest value function.<br />
<br />
<math>\Pi^*(s) = argmax_a [R(s,a) + \gamma \sum_{s' \epsilon S}P_{ss'}^aV(s')] <br />
</math><br />
<br />
<math>\Pi^*(YouTube) = argmax_a [0,2] \rightarrow a = <br />
</math> Work<br />
<br />
<math>\Pi^*(Work) = argmax_a [0,1] \rightarrow a = <br />
</math> School<br />
<br />
<math>\Pi^*(School) = argmax_a [0,2,3] \rightarrow a = <br />
</math> Hobby<br />
<br />
Therefore, the optimal policy in each state provides a sequence of decisions that generates the optimal path in this decision process. As a result, if the student starts in the state Work, they should choose to go to school, then enjoy their hobby, then go to sleep.<br />
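The backup for the School state can be checked in a few lines, using the successor values computed above (<math>R(School) = -2</math>, <math>\gamma = 1</math>, deterministic transitions):

```python
# One Bellman backup: V*(School) = max_a [ R(School) + gamma * V*(s') ].
gamma = 1.0
R_school = -2
successors = {"Bar": 2, "Sleep": 0, "Hobby": 3}   # V*(s') for each available action

values = {a: R_school + gamma * v for a, v in successors.items()}
best_action = max(values, key=values.get)
print(best_action, values[best_action])  # Hobby 1.0
```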
<br />
= Applications =<br />
[[File:Pong.jpg|thumb|Computer playing Pong arcade game by Atari using reinforcement learning]]<br />
MDPs have been applied in various fields including operations research, electrical engineering, computer science, manufacturing, economics, finance, and telecommunication.<math>^2</math> For example, the sequential decision making process described by MDP can be used to solve routing problems such as the [[Traveling salesman problem]]. In this case, the agent is the salesman, the actions available are the routes available to take from the current state, the rewards in this case are the costs of taking each route, and the goal is to determine the optimal policy that minimizes the cost function over the duration of the trip. Another application example is maintenance and repair problems, in which a dynamic system such as a vehicle will deteriorate over time due to its actions and the environment, and the available decisions at every time epoch are to do nothing, repair, or replace a certain component of the system.<math>^2</math> This problem can be formulated as an MDP to choose the actions that minimize the cost of maintenance over the life of the vehicle. MDPs have also been applied to optimize telecommunication protocols, stock trading, and queue control in manufacturing environments. <math>^2</math> <br />
<br />
Given the significant advancements in artificial intelligence and machine learning over the past decade, MDPs are being applied in fields such as robotics, automated systems, autonomous vehicles, and other complex autonomous systems. MDPs have been used widely within reinforcement learning to teach robots or other computer-based systems how to do something they were previously unable to do. For example, MDPs have been used to teach a computer how to play games like Pong, Pac-Man, and Go.<math>^{7,8}</math> DeepMind Technologies, owned by Google, used the MDP framework in conjunction with neural networks to play Atari games better than human experts. <math>^7</math> In this application, only the raw pixel input of the game screen was used as input, and a neural network was used to estimate the value function for each state and choose the next action.<math>^7</math> MDPs have been used in more advanced applications to teach a simulated human robot how to walk and run and a real legged-robot how to walk.<math>^9</math> <br />
[[File:Google Deepmind.jpg|thumb|Google's DeepMind uses reinforcement learning to teach AI how to walk]]<br />
<br />
= Conclusion =<br />
<br />
A MDP is a stochastic, sequential decision-making method based on the Markov Property. MDPs can be used to make optimal decisions for a dynamic system given information about its current state and its environment. This process is fundamental in reinforcement learning applications and a core method for developing artificially intelligent systems. MDPs have been applied to a wide variety of industries and fields including robotics, operations research, manufacturing, economics, and finance.<br />
<br />
= References =<br />
<br />
<references /><br />
<br />
# Puterman, M. L. (1990). Chapter 8 Markov decision processes. In ''Handbooks in Operations Research and Management Science'' (Vol. 2, pp. 331–434). Elsevier. <nowiki>https://doi.org/10.1016/S0927-0507(05)80172-0</nowiki><br />
# Feinberg, E. A., & Shwartz, A. (2012). ''Handbook of Markov Decision Processes: Methods and Applications''. Springer Science & Business Media.<br />
# Howard, R. A. (1960). ''Dynamic programming and Markov processes.'' John Wiley.<br />
# Ashraf, M. (2018, April 11). ''Reinforcement Learning Demystified: Markov Decision Processes (Part 1)''. Medium. <nowiki>https://towardsdatascience.com/reinforcement-learning-demystified-markov-decision-processes-part-1-bf00dda41690</nowiki><br />
# Bertsekas, D. P. (2011). Dynamic Programming and Optimal Control 3rd Edition, Volume II. ''Massachusetts Institute of Technology'', 233.<br />
# Littman, M. L. (2001). Markov Decision Processes. In N. J. Smelser & P. B. Baltes (Eds.), ''International Encyclopedia of the Social & Behavioral Sciences'' (pp. 9240–9242). Pergamon. <nowiki>https://doi.org/10.1016/B0-08-043076-7/00614-8</nowiki><br />
# Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. (2013). Playing Atari with Deep Reinforcement Learning. ''ArXiv:1312.5602 [Cs]''. <nowiki>http://arxiv.org/abs/1312.5602</nowiki><br />
# Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., Lillicrap, T., Simonyan, K., & Hassabis, D. (2018). A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. ''Science'', ''362''(6419), 1140–1144. <nowiki>https://doi.org/10.1126/science.aar6404</nowiki><br />
# Ha, S., Xu, P., Tan, Z., Levine, S., & Tan, J. (2020). Learning to Walk in the Real World with Minimal Human Effort. ''ArXiv:2002.08550 [Cs]''. <nowiki>http://arxiv.org/abs/2002.08550</nowiki><br />
# Bellman, R. (1966). Dynamic Programming. ''Science'', ''153''(3731), 34–37. <nowiki>https://doi.org/10.1126/science.153.3731.34</nowiki><br />
# Abbeel, P. (2016). ''Markov Decision Processes and Exact Solution Methods:'' 34.<br />
# Silver, D. (2015). Markov Decision Processes. ''Markov Processes'', 57.<br />
</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Network_flow_problem&diff=2727Network flow problem2020-12-21T11:33:32Z<p>Wc593: </p>
<hr />
<div>Author: Aaron Wheeler, Chang Wei, Cagla Deniz Bahadir, Ruobing Shui, Ziqiu Zhang (ChemE 6800 Fall 2020)<br />
<br />
== Introduction ==<br />
Network flow problems arise in several key instances and applications within society and have become fundamental problems within computer science, operations research, applied mathematics, and engineering. Developments in the approach to these problems produced algorithms that became the chief instruments for solving problems related to large-scale systems and industrial logistics. Spurred by early developments in linear programming, the methods for addressing these extensive problems date back several decades, evolving as digital computing became increasingly prevalent in industrial processes. Historically, the first algorithmic development for the network flow problem came in 1956, with the network simplex method formulated by George Dantzig.<sup>[1]</sup> A variation of the simplex algorithm that revolutionized linear programming, this method leveraged the combinatorial structure inherent to these types of problems and proved highly effective in practice.<sup>[2]</sup> This method and its variations would go on to define the embodiment of the algorithms and models for the various and distinct network flow problems discussed here.<br />
<br />
== Theory, Methodology, and Algorithms ==<br />
The network flow problem can be conceptualized as a directed graph which abides by flow capacity and conservation constraints. The vertices in the graph are classified into origins (source <math>X</math>), destinations (sink <math>O</math>), and intermediate points, and are collectively referred to as nodes (<math>N</math>); these nodes are distinct from one another, such that <math>N_i \neq N_j</math> for <math>i \neq j</math>.<sup>[3]</sup> The edges in the directed graph are the directional links between nodes and are referred to as arcs (<math>A</math>). These arcs are defined with a specific direction <math>(i, j)</math> that corresponds to the nodes they are connecting, and each arc carries a flow capacity <math>c(A)>0</math> that cannot be exceeded. The supply and demand of units are formulated with signed flow notation, such that sources take positive values (supply), sinks take negative values (demand), and the totals balance: <math>\Sigma_i u_i=0</math> for <math>i\in N</math>. Intermediate nodes have no net supply or demand. Figure 1 illustrates this general definition of the network.<br />
[[File:Picture1.png|thumb|Figure 1. General Network Flow Problem]]<br />
<br />
Additional constraints of the network flow optimization model place limits on the solution and vary significantly based on the specific type of problem being solved. Historically, the classic network flow problems are considered to be the maximum flow problem, the minimum-cost circulation problem, the assignment problem, the bipartite matching problem, the transportation problem, and the transshipment problem.<sup>[2]</sup> The approach to these problems becomes quite specific based upon the problem’s objective function, but can be generalized by the following iterative approach: 1. determining the initial basic feasible solution; 2. checking the optimality conditions (i.e. whether the problem is infeasible, unbounded over the feasible region, the optimal solution has been found, etc.); and 3. constructing an improved basic feasible solution if the optimal solution has not been determined.<sup>[3]</sup><br />
=== General Applications ===<br />
<br />
==== The Assignment Problem ====<br />
Various real-life instances of assignment problems exist for optimization, such as assigning a group of people to different tasks, events to halls with different capacities, rewards to a team of contributors, and vacation days to workers. At its core, the assignment problem is a bipartite matching problem.<sup>[3]</sup> In the classical setting, two types of objects of equal number are matched bijectively (i.e. in one-to-one correspondence), and this tight constraint ensures a perfect matching. The objective is to minimize the cost or maximize the profit of the matching, since different pairings of the two types have distinct affinities. [[File:Assignment.png|thumb|Figure 2. Classic model of assignment problem|alt=|267x267px]]A classic example is as follows: suppose there are <math> n </math> people (set <math> P </math>) to be assigned to <math> n </math> tasks (set <math> T </math>). Every task has to be completed, each task has to be handled by exactly one person, and <math> c_{ij} </math>, usually given by a table, measures the benefit gained by assigning the person <math> i </math> (in <math> P </math>) to the task <math> j </math> (in <math> T </math>).<sup>[4]</sup> The natural objective here is to maximize the overall benefit by devising the optimal assignment pattern. A graph of the general assignment problem and a table of preference are depicted as Figure 2 and Table 1.<br />
{| class="wikitable sortable"<br />
|+Table 1. Table of preference<br />
!Benefits<br />
!Task 1<br />
! Task 2<br />
!Task 3<br />
!...<br />
!Task n<br />
|-<br />
!Person 1<br />
|0<br />
|3<br />
|5<br />
|...<br />
|2<br />
|-<br />
!Person 2<br />
|2<br />
|1<br />
|3<br />
|...<br />
|6<br />
|-<br />
!Person 3<br />
|1<br />
|4<br />
|0<br />
|...<br />
|3<br />
|-<br />
!...<br />
|...<br />
|...<br />
|...<br />
|...<br />
|...<br />
|-<br />
!Person n<br />
|0<br />
|2<br />
|3<br />
|...<br />
|3<br />
|}<br />
Figure 2 can be viewed as a network. The nodes represent people and tasks, and the edges represent potential assignments between a person and a task. Each task can be completed by any person, but the person actually assigned to a task will be the individual best suited to complete it. In the end, the edges with positive flow values will be the only ones represented in the finalized assignment.<sup>[5]</sup><br />
<br />
To approach this problem, the binary variable <math> x_{ij} </math> is defined as whether the person <math> i </math> is assigned to the task <math> j </math>. If so, <math> x_{ij} </math> = 1, and <math> x_{ij} </math> = 0 otherwise.<br />
<br />
The concise-form formulation of the problem is as follows <sup>[3]</sup>:<br />
<br />
max <math>z=\sum_{i=1}^n\sum_{j=1}^n c_{ij}x_{ij}</math><br />
<br />
Subject to:<br />
<br />
<math>\sum_{j=1}^n x_{ij}=1~~\forall i\in [1,n]<br />
</math><br />
<br />
<math>\sum_{i=1}^n x_{ij}=1~~\forall j\in [1,n]<br />
</math><br />
<br />
<math>x_{ij}=0~or~1~~\forall i,j\in [1,n] </math><br />
<br />
<br />
<br />
The first constraint captures the requirement of assigning each person to a single task. The second constraint indicates that each task must be done by exactly one person. The objective function sums up the overall benefits of all assignments.<br />
<br />
To see the analogy between the assignment problem and the network flow, we can describe each person as supplying a flow of 1 unit and each task as demanding a flow of 1 unit, with the benefits over all “channels” being maximized. <sup>[3]</sup><br />
<br />
A potential issue lies in the branching of the network, specifically an instance where a person splits their one unit of flow into multiple tasks while the objective remains maximized. Such fractional assignments are allowed by the laws that govern the network flow model, but are infeasible in real-life instances. Fortunately, because each basis of the network simplex method corresponds to a spanning tree of the flow graph, and a pivot merely adds one edge to and removes one edge from that tree, if the supplies (the number of people here) and the demands (the number of tasks here) in the constraints are integers, the solved variables will automatically be integers even if this is not explicitly stated in the problem. This is called the integrality of the network problem, and it certainly applies to the assignment problem. <sup>[6]</sup><br />
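The integer optimum can be checked on a small instance by brute force. The sketch below assumes a hypothetical 3×3 benefit matrix (borrowing the upper-left entries of Table 1); since every permutation of tasks is a one-to-one assignment, both equality constraints hold by construction:<br />
<br />
```python
from itertools import permutations

# Hypothetical 3x3 benefit matrix: benefits[i][j] is the benefit c_ij of
# assigning person i to task j (upper-left entries of Table 1).
benefits = [
    [0, 3, 5],
    [2, 1, 3],
    [1, 4, 0],
]
n = len(benefits)

# Enumerate every one-to-one assignment and keep the maximum-benefit one.
best_value, best_assignment = max(
    (sum(benefits[i][perm[i]] for i in range(n)), perm)
    for perm in permutations(range(n))
)
# best_assignment[i] is the task given to person i.
```
<br />
For realistic sizes, the factorial search would be replaced by the Hungarian algorithm or the network simplex method, both of which return integer solutions by the integrality property discussed above.<br />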
<br />
==== The Transportation Problem ====<br />
The transportation problem was first formulated for distributing troops during World War II. <sup>[7]</sup> It has since become a useful model for solving logistics problems, where the objective is usually to minimize the cost of transportation. <br />
<br />
Consider the following scenario:<br />
<br />
There are 2 chemical plants located in 2 different places: <math> M </math> and <math> N </math>. There are 3 raw material suppliers in 3 other locations: <math> F </math>, <math> G </math>, and <math> H </math>. The amount of materials from a supplier can be arbitrarily divided and shipped to the two plants. Suppliers <math> F </math>, <math> G </math>, and <math> H </math> can provide <math> S_1 </math>, <math> S_2 </math>, and <math> S_3 </math> amounts of materials respectively. The chemical plants located at <math> M </math> and <math> N </math> have material demands of <math> D_1 </math> and <math> D_2 </math> respectively. Each transportation route, from suppliers to chemical plants, is attributed with a specific cost. This model raises the question: to keep the chemical plants running, what is the best way to arrange the material from the suppliers so that the transportation cost is minimized? <br />
[[File:Transportation problem example.png|thumb|Figure 3. Transportation problem example]]<br />
Several quantities should be defined to help formulate the frame of the solution:<br />
<br />
<math>S_{i} <br />
</math> = the amount of material provided at the supplier <math>i <br />
</math><br />
<br />
<math>D_{j} <br />
</math> = the amount of material being consumed at the chemical plant <math>j <br />
</math><br />
<br />
<math>x_{ij} <br />
</math> = the amount of material being transferred from supplier <math>i <br />
</math> to chemical plant <math display="inline">j <br />
</math><br />
<br />
<math>C_{ij} <br />
</math> = the cost of transferring 1 unit of material from supplier <math>i <br />
</math> to chemical plant <math>j <br />
</math> <br />
<br />
<math>x_{ij} <br />
</math><math>C_{ij} <br />
</math> = the cost of the material transportation from <math>i <br />
</math> to <math>j <br />
</math><br />
<br />
Here, the amount of material being delivered and being consumed is bound to the supply and demand constraints:<br />
<br />
(1): The amount of material shipping from supplier <math>i <br />
</math> cannot exceed the amount of material available at supplier <math>i <br />
</math>. <br />
<br />
<math>\sum_j^n x_{ij}\ \leq S_{i} \qquad \forall i\in I=[1,m] <br />
</math><br />
<br />
(2): The amount of material arrived at chemical plant <math>j <br />
</math> should at least fulfill the demand at chemical plant <math>j <br />
</math>. <br />
<br />
<math>\sum_i^m x_{ij}\ \geq D_{j} \qquad \forall j\in J=[1,n] <br />
</math><br />
<br />
The objective is to find the minimum cost of transportation, so the cost of each transportation line should be added up, and the total cost should be minimized. <br />
<br />
<math>\sum_i^m \sum_j^n x_{ij}\ C_{ij} <br />
</math><br />
<br />
Using the definitions above, the problem can be formulated as such:<br />
<br />
min<math> \quad z = \sum_i^m \sum_j^n x_{ij}\ C_{ij}<br />
<br />
</math><br />
<br />
<math>s.t. \quad\ \sum_j^n x_{ij}\ \leq S_{i} \qquad \forall i\in I=[1,m] <br />
</math><br />
<br />
<math>\sum_i^m x_{ij}\ \geq D_{j} \qquad \forall j\in J=[1,n] <br />
</math><br />
<br />
However, the problem is not complete at this point because there is no constraint for <math>x_{ij} <br />
</math>, and that means <math>x_{ij} <br />
</math> can be any number, even negative. In order for <math>x_{ij} <br />
</math> to make sense physically, a lower bound of zero is mandatory, which corresponds to the situation where no material was transported from <math>i <br />
</math> to <math>j <br />
</math>. Adding the last constraint will complete this formulation as such:<br />
<br />
min<math> \quad z = \sum_i^m \sum_j^n x_{ij}\ C_{ij}<br />
<br />
</math><br />
<br />
<math>s.t. \quad\ \sum_j^n x_{ij}\ \leq S_{i} \qquad \forall i\in I=[1,m] <br />
</math><br />
<br />
<math>\sum_i^m x_{ij}\ \geq D_{j} \qquad \forall j\in J=[1,n] <br />
</math><br />
<br />
<math>x_{ij}\ \geq 0 <br />
</math><br />
<br />
The problem and the formulation are adapted from Chapter 8 of the book: Applied Mathematical Programming by Bradley, Hax and Magnanti. <sup>[3]</sup><br />
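To make the formulation concrete, the sketch below solves a toy instance with two suppliers and two plants by searching over integer shipment plans, which suffices because the integrality property guarantees an integer optimum. All numbers are hypothetical, and a linear-programming solver would replace the enumeration in practice:<br />
<br />
```python
# Toy transportation instance (hypothetical numbers): two suppliers with
# supplies S, two plants with demands D, and unit shipping costs C[i][j].
S = [20, 30]
D = [25, 25]
C = [[3, 5], [4, 2]]

best_cost, best_plan = None, None
# Integrality of network flow problems lets the search range over integer
# shipment amounts x_ij only.
for x11 in range(S[0] + 1):
    for x12 in range(S[0] - x11 + 1):          # supply limit at supplier 1
        for x21 in range(S[1] + 1):
            for x22 in range(S[1] - x21 + 1):  # supply limit at supplier 2
                if x11 + x21 < D[0] or x12 + x22 < D[1]:
                    continue                   # a demand is not met
                cost = (C[0][0] * x11 + C[0][1] * x12
                        + C[1][0] * x21 + C[1][1] * x22)
                if best_cost is None or cost < best_cost:
                    best_cost, best_plan = cost, (x11, x12, x21, x22)
```
<br />
Here the search yields the intuitive plan: supplier 1 ships its full 20 tons to plant 1 at the cheapest rate, and supplier 2 covers the remaining demand.<br />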
<br />
==== The Shortest-Path Problem ====<br />
The shortest-path problem can be defined as finding the path that yields the shortest total distance between the origin and the destination. Each possible stop is a node, the connections between stops are edges incident to those nodes, and the distance of each connection becomes the weight of the corresponding edge. In addition to being the most common and straightforward application for finding the shortest path, this model is also used in various applications depending on the definition of nodes and edges. <sup>[3]</sup> For example, when each node represents a different object and the edge specifies the cost of replacement, the equipment replacement problem is derived. Moreover, when each node represents a different project and the edge specifies the relative priority, the model becomes a project scheduling problem.<br />
[[File:Shortest-Path.png|thumb|443x443px|Figure 4. General form of shortest-path problem]]<br />
A graph of the general shortest-path problem is depicted as Figure 4:<br />
<br />
In the general form of the shortest-path problem, the variable <math> x_{ij} </math> represents whether the edge <math> (i,j) </math> is active (i.e. with a positive flow), and the parameter <math> c_{ij} </math> (e.g. <math> c_{12} </math> = 6) defines the distance of the edge <math> (i,j) </math>. The general problem is formulated as below:<br />
<br />
min <math>z=\sum_{i=1}^n \sum_{j=1}^n c_{ij}x_{ij}</math><br />
<br />
Subject to:<br />
<br />
<math>\sum_{j=1}^n x_{ij} - \sum_{k=1}^n x_{ki} = \begin{cases} 1 & \text{if }i=s\text{ (source)} \\ -1 & \text{if }i=t \text{ (sink)} \\ 0 & \text{otherwise} \end{cases}</math><br />
<br />
<math>x_{ij}\geq 0~~\forall (i,j)\in E</math><br />
<br />
<br />
The first term of the constraint is the total outflow of node <math>i</math>, and the second term is the total inflow. So, the formulation above can be seen as one unit of flow being supplied by the origin, one unit of flow being demanded by the destination, and no net inflow or outflow at any intermediate node. These constraints mandate a flow of one unit, along the active path, from the origin to the destination. Under these constraints, the objective function minimizes the overall path distance from the origin to the destination.<br />
<br />
Similarly, the integrality of the network problem applies here, precluding fractional flows. With supply and demand both being integers (one unit here), the edges can only carry integer amounts of flow in the solution produced by the simplex method. <sup>[6]</sup><br />
<br />
In addition, the point-to-point model above can be further extended to other problems. A number of real life scenarios require visiting multiple places from a single starting point. This “Tree Problem” can be modeled by making small adjustments to the original model. In this case, the source node should supply more units of flow and there will be multiple sink nodes demanding one unit of flow. Overall, the objective and the constraint formulation are similar. <sup>[4]</sup><br />
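In practice, the point-to-point problem is usually solved combinatorially rather than as a linear program. Below is a minimal sketch of Dijkstra's algorithm on a small hypothetical digraph (valid because all distances are nonnegative):<br />
<br />
```python
import heapq

# Hypothetical weighted digraph: graph[u] is a list of (v, distance) arcs.
graph = {
    's': [('a', 6), ('b', 2)],
    'a': [('t', 1)],
    'b': [('a', 3), ('t', 7)],
    't': [],
}

def shortest_path_length(graph, source, target):
    """Dijkstra's algorithm; valid when all arc distances are nonnegative."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry, a shorter route was already found
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float('inf')  # target unreachable
```
<br />
On this graph the shortest s-t distance is 6, along s→b→a→t; the linear program above would return the same path as its unit of flow.<br />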
<br />
==== Maximal Flow Problem ====<br />
This problem describes a situation where the material from a source node is sent to a sink node. The source and sink node are connected through multiple intermediate nodes, and the common optimization goal is to maximize the material sent from the source node to the sink node. <sup>[3]</sup><br />
<br />
Consider the following scenario:<br />
[[File:Picture2.png|thumb|Figure 5. Maximal flow problem example]]<br />
The given structure is a piping system. The water flows into the system from the source node, passes through the intermediate nodes, and flows out from the sink node. There is no limit on the amount of water that can enter the system at the source node, and likewise the sink node can accept an unlimited amount of water. The arrows denote the valid channels that water can flow through, and each channel has a known flow capacity. What is the maximum flow that the system can take?<br />
<br />
Several quantities should be defined to help formulate the frame of the solution: <br />
[[File:Picture3.png|thumb|Figure 6. For every intermediate node j, there is a group of node i and a group of node k.]]<br />
For any intermediate node <math display="inline">j <br />
</math> in the system, it receives water from adjacent node(s) <math>i <br />
</math>, and sends water to the adjacent node(s) <math display="inline">k<br />
<br />
</math>. The node <math>i <br />
</math> and k are relative to the node <math display="inline">j <br />
</math>. <br />
<br />
<math>i <br />
</math> = the node(s) that gives water to node <math display="inline">j <br />
</math><br />
<br />
<math display="inline">j <br />
</math> = the intermediate node(s) <br />
<br />
<math display="inline">k<br />
<br />
</math> = the node(s) that receives the water coming out of node <math display="inline">j <br />
</math><br />
<br />
<math>x_{ij} <br />
</math> = amount of water leaving node <math>i <br />
</math> and entering node <math display="inline">j <br />
</math> (<math>i <br />
</math> and <math display="inline">j <br />
</math> are adjacent nodes)<br />
<br />
<math>x_{jk} <br />
</math> = amount of water leaving node <math display="inline">j <br />
</math> and entering node <math display="inline">k<br />
<br />
</math> (<math>i <br />
</math> and <math display="inline">k<br />
<br />
</math> are adjacent nodes)<br />
<br />
<br />
For the source and sink node, they have net flow that is non-zero:<br />
<br />
<math display="inline">m<br />
</math> = source node<br />
<br />
<math display="inline">n<br />
</math> = sink node<br />
<br />
<math>x_{in} <br />
</math> = amount of water leaving node <math>i <br />
</math> and entering sink node <math display="inline">n<br />
</math> (<math>i <br />
</math> and <math display="inline">n<br />
</math> are adjacent nodes)<br />
<br />
<math>x_{mk} <br />
</math> = amount of water leaving source node <math display="inline">m<br />
</math> and entering node <math display="inline">k<br />
<br />
</math> (<math display="inline">m<br />
</math> and <math display="inline">k<br />
<br />
</math> are adjacent nodes)<br />
<br />
<br />
Flow capacity definition is applied to all nodes (including intermediate nodes, the sink, and the source):<br />
<br />
<math>C_{ab} <br />
</math> = transport capacity between any two nodes <math display="inline">a<br />
</math> and <math display="inline">b<br />
</math> (<math display="inline">a<br />
</math><math> \neq<br />
</math><math display="inline">b<br />
</math>)<br />
<br />
<br />
The main constraints for this problem are the transport capacity between each node and the material conservation:<br />
<br />
(1): The amount of water flowing from any node <math display="inline">a<br />
</math> to node <math display="inline">b<br />
</math> should not exceed the flow capacity between node <math display="inline">a<br />
</math> to node <math display="inline">b<br />
</math> . <br />
<br />
<math>0\leq x_{ab} \leq C_{ab} <br />
</math><br />
<br />
(2): The intermediate node <math display="inline">j <br />
</math> does not hold any water, so the amount of water that flows into node <math display="inline">j <br />
</math> has to exit the node with the exact same amount it entered with. <br />
<br />
<math>\sum_i^px_{ij}- \sum_k^r x_{jk} =0<br />
\qquad \begin{cases} \forall i\in I=[1,p] \\ \forall j\in J=[1,q]\\ \forall k\in K=[1,r] \end{cases} <br />
</math><br />
<br />
Overall, the net flow out of the source node has to be the same as the net flow into the sink node. This net flow is the amount that should be maximized. <br />
<br />
Using the definitions above:<br />
[[File:Picture4.png|thumb|Figure 7. The imaginary flow connects the sink node to the source node, creating a close loop.]]<br />
max<math> \quad z = \sum_k^r x_{mk}<br />
<br />
</math> (or <math>\sum_i^p x_{in}<br />
<br />
</math>)<br />
<br />
<math>s.t. \quad\ \sum_i^px_{ij}- \sum_k^r x_{jk} =0<br />
\qquad \begin{cases} \forall i\in I=[1,p] \\ \forall j\in J=[1,q]\\ \forall k\in K=[1,r] \end{cases} <br />
</math><br />
<br />
<math>0\leq x_{ab} \leq C_{ab} <br />
</math><br />
<br />
This expression can be further simplified by introducing an imaginary flow from the sink to the source. <br />
<br />
By introducing this imaginary flow, the piping system is now closed. The mass conservation constraint now also holds for the source and sink node, so they can be treated as the intermediate nodes. The problem can be rewritten as the following: <br />
<br />
max<math> \quad z = x_{nm}<br />
<br />
</math><br />
<br />
<math>s.t. \quad\ \sum_i^px_{ij}- \sum_k^r x_{jk} =0<br />
\qquad \begin{cases} \forall i\in I=[1,p] \\ \forall j\in J=[1,q+2]\\ \forall k\in K=[1,r] \end{cases} <br />
</math><br />
<br />
<math>0\leq x_{ab} \leq C_{ab} <br />
</math><br />
<br />
The problem and the formulation are derived from an example in Chapter 8 of the book: Applied Mathematical Programming by Bradley, Hax and Magnanti. <sup>[3]</sup><br />
<br />
=== Algorithms ===<br />
<br />
==== Ford–Fulkerson Algorithm ====<br />
A broad range of network flow problems can be reduced to the max-flow problem. The most common way to approach the max-flow problem is the Ford-Fulkerson Algorithm (FFA). FFA is essentially a greedy algorithm: it iteratively finds an augmenting s-t path to increase the value of the flow, and the pathfinding terminates when no s-t path is present. (With arbitrary path selection the running time is only pseudo-polynomial; choosing shortest augmenting paths, as in the Edmonds-Karp variant, makes it polynomial.) Ultimately, the max-flow pattern in the network graph will be returned. <sup>[8]</sup><br />
<br />
Typically, FFA is applied to flow networks with only one source node s and one sink node t. In addition, the capacity conditions and the conservation conditions, which are the two properties defining a flow, must be satisfied.<sup>[9]</sup> The capacity conditions require that each edge carry a flow no greater than its capacity, or <math>0\leq f(e)\leq c_{e},\forall e\in E</math>, where the function f returns the flow on a certain edge. The conservation conditions require all nodes except the source and the sink to have a net flow of 0, or <math>\sum_{e~into~v}f(e)= \sum_{e~out~of~v}f(e),\forall v\in V-\{s,t\} </math>. <br />
<br />
FFA introduces the concept of the residue graph, based on the original graph <math>G</math>, to allow backtracking, or pushing flow backward on edges that are already carrying flow.<sup>[9]</sup> The residue graph <math>G_{f}</math> is defined as the following:<br />
<br />
1. <math>G_{f}</math>has exactly the same node set as <math>G</math>.<br />
<br />
2. For each edge <math>e = (u,v)</math>with a nonnegative flow <math> f( e)</math> in <math>G</math>, <math>G_{f}</math>has the edge e with the capacity <math>c(e)_{f} = c_{e} - f(e)</math>, and also <math>G_f</math> has the edge <math>e' = (v,u)</math> with the capacity <math>c(e')_{f} = f(e)</math>.<br />
<br />
Note that initially, the <math>G_{f} </math> is identical to <math>G</math> since there is no flow present in <math>G</math>.<br />
<br />
The steps of FFA are as below. <sup>[10]</sup> Essentially, the method repeatedly finds a path with positive residual capacity in the residue graph, and updates the flow graph and residue graph until <math>s</math> and <math>t</math> become disconnected in the residue graph.<br />
<br />
1. Set <math>f(e) = 0, \forall e\in E</math>in <math>G</math>, and create a copy as <math>G_{f}</math>.<br />
<br />
2. While there is still a <math>s, t</math> path <math>p</math> in <math>G_{f}</math>:<br />
<br />
a. Find <math>c_{f}(p) = min(c_{f}(e):e\in p)</math><br />
<br />
b. For each edge <math>e\in p</math>:<br />
<br />
bi. <math>f(e) = f(e) + c_{f}(p)</math> if <math>e\in E</math> in <math>G</math>, <math>f(e) = f(e) - c_{f}(p)</math> if <math>e'\in E</math> in <math>G</math><br />
<br />
bii. <math>c(e)= c(e) - c_{f}(p),c(e')= c(e') + c_{f}(p)</math> in <math> G_{f}</math><br />
<br />
[[File:Phase 1.png|thumb|Figure 8: Flow graph and residue graph at the first phase]]<br />
An example of running the FFA is shown below.<br />
The flow graph <math>G</math> and residue graph<math>G_{f}</math> at the initial phase is depicted in Figure 8, where the number of each edge in the flow graph is the flow units on the edge, whereas it is the updated edge capacity in the residue graph.<br />
<br />
In the residue graph, an <math>s-t</math> path can be found tracing the edges <math>s\rightarrow A\rightarrow B\rightarrow t</math> with a flow of two units. After augmenting the path on both graphs, the flow graph and the residue graph look like Figure 9.<br />
<br />
[[File:Phase 2.png|thumb|Figure 9: Flow graph and residue graph after updating with the first s,t-path]]<br />
<br />
At this stage, there is still an <math>s,t</math>-path in the residue graph, <math>s\rightarrow B\rightarrow A\rightarrow t</math>, with a flow of one unit. After augmenting the path on both graphs, the flow graph and the residue graph look like Figure 10.<br />
<br />
[[File:Phase 3.png|thumb|Figure 10: Flow graph and residue graph after augmenting with the second s,t-path]]<br />
<br />
At this stage, there is no more <math>s,t</math>-path in the residue graph, so FFA terminates and the maximum flow can be read from the flow graph as 3 units.<br />
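The procedure above can be sketched in code. The implementation below follows step 2 literally, with breadth-first search choosing each s,t-path (the Edmonds-Karp variant of FFA); the capacities in the example dictionary are hypothetical, not those of Figure 8:<br />
<br />
```python
from collections import deque

def max_flow(capacity, source, sink):
    """Ford-Fulkerson with BFS path selection (Edmonds-Karp).

    capacity is a dict of dicts: capacity[u][v] is the capacity of arc (u, v).
    """
    # Build the residue graph: forward arcs start at full capacity,
    # reverse arcs (used for backtracking) start at 0.
    residual = {u: dict(adj) for u, adj in capacity.items()}
    for u, adj in capacity.items():
        for v in adj:
            residual.setdefault(v, {}).setdefault(u, 0)

    def find_path():
        # BFS for an s,t-path with positive residual capacity.
        parent = {source: None}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    if v == sink:
                        return parent
                    queue.append(v)
        return None  # s and t are now disconnected in the residue graph

    flow = 0
    while (parent := find_path()) is not None:
        # Recover the path arcs and the bottleneck capacity c_f(p).
        arcs, v = [], sink
        while parent[v] is not None:
            arcs.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in arcs)
        for u, v in arcs:
            residual[u][v] -= bottleneck  # augment the forward arc
            residual[v][u] += bottleneck  # open the backward arc
        flow += bottleneck
    return flow

capacities = {'s': {'a': 3, 'b': 2}, 'a': {'b': 2, 't': 2}, 'b': {'t': 3}}
```
<br />
Calling <code>max_flow(capacities, 's', 't')</code> on this graph returns 5: one BFS phase augments s→a→t, a second augments s→b→t, and a third pushes the last unit along s→a→b→t.<br />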
<br />
== Numerical Example and Solution ==<br />
<br />
A food distributor company farms and collects vegetables from farmers to later distribute to grocery stores. The distributor has specific agreements with different third-party companies to mediate the delivery to the grocery stores. In a particular month, the company has 600 tons of vegetables to deliver to the grocery store. It has agreements with two third-party transport companies, A and B, which have different tariffs for delivering goods between themselves, the distributor, and the grocery store, as well as limits on transport capacity for each path. The delivery points are numbered as shown below, with path 1 being the transport from the food distributor company to transport company A. The limits and tariffs for each path can be found in Table 2 below, and the possible transportation connections between the distributor company, the third-party transporters, and the grocery store are shown in Figure 11. The transport companies cannot hold any amount of food, and any incoming food should be delivered to an end point. The distributor company wants to minimize the overall transport cost of shipping 600 tons of vegetables to the grocery store by choosing the optimal paths provided by the transport companies. How should the distributor company map out its paths and the amount of vegetables carried on each path to minimize the overall cost?<br />
[[File:Wiki example.png|thumb|Figure. 11. Illustration of the network for the food distribution problem.]]<br />
{| class="wikitable"<br />
|+Table 2. Product Limits and Tariffs for each Path<br />
|<br />
|1<br />
|2<br />
|3<br />
|4<br />
|5<br />
|6<br />
|-<br />
|Product limit (ton)<br />
|250<br />
|450<br />
|350<br />
|200<br />
|300<br />
|500<br />
|-<br />
|Tariff ($/ton)<br />
|10<br />
|12.5<br />
|5<br />
|7.5<br />
|10<br />
|20<br />
|}<br />
<br />
<br />
This question is adapted from one of the exercise questions in chapter 8 of the book: Applied Mathematical Programming by Bradley, Hax and Magnanti <sup>[3]</sup>.<br />
<br />
=== Formulation of the Problem ===<br />
The problem can be formulated as below where variables <math>x_1, x_2, x_3,..., x_6</math> denote the tons of vegetables carried in paths 1 to 6. The objective function stated in the first line is to minimize the cost of the operation, which is the summation of the tons of vegetables carried on each path multiplied by the corresponding tariff: <math>\sum_{i=1}^6 x_i t_i</math>. <br />
<br />
<math>\begin{array}{lcl} \min z = 10x_1 + 12.5x_2 + 5x_3 + 7.5x_4 + 10x_5 + 20x_6 \\ s.t. \qquad x_5 = x_1 - x_3 + x_4 \\ \ \ \ \quad \qquad x_6 = x_2 + x_3 - x_4 \\ \ \ \ \quad \qquad x_5 + x_6 = 600 \\ \ \ \ \quad \qquad x_1 + x_2 = 600 \\ \ \ \ \quad \qquad x_1 \leq 250 \\ \ \ \ \quad \qquad x_2 \leq 450 \\ \ \ \ \quad \qquad x_3 \leq 350 \\ \ \ \ \quad \qquad x_4 \leq 200 \\ \ \ \ \quad \qquad x_5 \leq 300 \\ \ \ \ \quad \qquad x_6 \leq 500 \\ \ \ \ \quad \qquad x_1, x_2, x_3, x_4, x_5, x_6 \geq 0\\\end{array}</math> <br />
<br />
The second step is to write down the constraints. The first constraint ensures that the net amount present at Transport Company A, which is the deliveries received via paths 1 and 4 minus the amount sent to Transport Company B via path 3, is delivered to the grocery store via path 5. The second constraint ensures the same for Transport Company B. The third and fourth constraints ensure that the total amount of vegetables shipped from the Food Distributor Company and the total amount of vegetables delivered to the grocery store are both 600 tons. Constraints 5 to 10 impose the upper limits on the amount of vegetables that can be carried on paths 1 to 6. The final constraint requires all variables to be non-negative. <br />
<br />
=== Solution of the Problem ===<br />
This problem can be solved using the Simplex algorithm<sup>[11]</sup> or with the CPLEX linear programming solver on the GAMS optimization platform. The steps of the solution using the GAMS platform are as follows:<br />
<br />
The first step is to list the variables, which are the tons of vegetables that will be transported on routes 1 to 6. The paths can be denoted as <math>x_1, x_2, x_3,..., x_6</math>. The objective function is the overall cost: z.<br />
<br />
'''variables x1,x2,x3,x4,x5,x6,z;'''<br />
<br />
The second step is to list the equations which are the constraints and the objective function. The objective function is a summation of the amount of vegetables carried in path i, multiplied with the tariff of path i for all i: <math>\sum_{i=1}^6 x_i t_i</math>. The GAMS code for the objective function is written below:<br />
<br />
'''obj.. z=e= 10*x1+12.5*x2+5*x3+7.5*x4+10*x5+20*x6;'''<br />
<br />
Overall, there are 10 constraints in this problem. Constraints 1 and 2 are the equations for paths 5 and 6. The amount carried on path 5 can be found by summing the amount of vegetables incoming to Transport Company A from path 1 and path 4, minus the amount of vegetables leaving Transport Company A on path 3. This follows from the restriction that bars the companies from keeping any vegetables and requires them to eventually deliver all the incoming produce. Equation 1 ensures that this constraint holds for path 5 and equation 2 ensures it for path 6. A sample of these constraints is written below for path 5:<br />
<br />
'''c1.. x5 =e= x1-x3+x4;'''<br />
<br />
Constraint 3 ensures that the amounts of food transported on paths 5 and 6 add up to the 600 tons of vegetables that have to be delivered to the grocery store. Likewise, constraint 4 ensures that the amounts of vegetables carried on paths 1 and 2 add up to the total of 600 tons that leave the Food Distributor Company. A sample of these constraints is written below for the total delivery to the grocery store:<br />
<br />
'''c3.. x5+x6=e=600;'''<br />
<br />
Constraints 5 to 10 should ensure that the amount of food transported in each path should not exceed the maximum capacity depicted in the table. A sample of these constraints is written below for the capacity of path 1:<br />
<br />
'''c5.. x1=l=250;'''<br />
<br />
After listing the variables, objective function and the constraints, the final step is to call the CPLEX solver and set the type of the optimization problem as '''lp''' (linear programming). In this case the problem will be solved with a Linear Programming algorithm to minimize the objective (cost) function.<br />
<br />
The GAMS code yields the results below:<br />
<br />
'''x1 = 250, x2 = 350, x3 = 0, x4 = 50, x5 = 300, x6 = 300, z =16250.'''<br />
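These results can also be reproduced outside GAMS. Below is a minimal sketch of the same LP using SciPy's <code>linprog</code> as a stand-in solver (the variable ordering <code>x1..x6</code> and the flow-conservation rows follow the formulation above):

```python
from scipy.optimize import linprog

# Tariffs ($/ton) for paths 1..6
c = [10, 12.5, 5, 7.5, 10, 20]

# Equality constraints:
#   x5 = x1 - x3 + x4   (flow conservation at Transport Company A)
#   x6 = x2 + x3 - x4   (flow conservation at Transport Company B)
#   x5 + x6 = 600       (total delivered to the grocery store)
#   x1 + x2 = 600       (total leaving the distributor)
A_eq = [
    [-1, 0, 1, -1, 1, 0],
    [0, -1, -1, 1, 0, 1],
    [0, 0, 0, 0, 1, 1],
    [1, 1, 0, 0, 0, 0],
]
b_eq = [0, 0, 600, 600]

# Path capacities become variable bounds (lower bound 0)
bounds = [(0, 250), (0, 450), (0, 350), (0, 200), (0, 300), (0, 500)]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.fun)  # 16250.0
print(res.x)    # [250. 350. 0. 50. 300. 300.]
```

The optimal flows and the minimum cost of $16,250 match the GAMS/CPLEX output above.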
<br />
== Real Life Applications ==<br />
Network problems have many applications in all kinds of areas such as transportation, city design, resource management and financial planning.<sup>[6]</sup><br />
<br />
There are several special cases of network problems, such as the shortest path problem, minimum cost flow problem, assignment problem and transportation problem.<sup>[6]</sup> Three application cases will be introduced here.<br />
<br />
=== The Minimum Cost Flow Problem ===<br />
[[File:Pic8.jpg|thumb|Figure. 12. Illustration of the ship subnetwork.<sup>[14]</sup>]]<br />
[[File:Pic9.jpg|thumb|Figure. 13. Illustration of cargo subnetwork.<sup>[14]</sup>]]<br />
Minimum cost flow problems are pervasive in real life, arising in tasks such as deciding how to allocate quay cranes in container terminals over time, and how to make optimal train schedules on the same railroad line.<sup>[12]</sup><br />
<br />
R. Dewil and his group use the minimum cost network flow problem (MCNFP) to assist traffic enforcement.<sup>[13]</sup> Police patrol “hot spots”, which are areas where crashes occur frequently on highways. R. Dewil studies a method intended to estimate the optimal route between hot spots. He describes the time it takes to move the detector to a certain position as the cost, and the number of patrol cars moving from one node to the next as the flow, in order to minimize the total cost.<sup>[13]</sup><br />
<br />
=== The Assignment Problem ===<br />
Dung-Ying Lin studies an assignment problem in which he aims to assign freights to ships and arrange transportation paths along the Northern Sea Route in a manner that yields maximum profit.<sup>[14]</sup> Within this network, composed of a ship subnetwork and a cargo subnetwork (shown in Figures 12 and 13), each node corresponds to a port at a specific time and each arc represents the movement of a ship or a cargo. Other types of assignment problems include faculty scheduling, freight assignment, and so on.<br />
<br />
=== The Shortest Path Problem ===<br />
Shortest path problems are also present in many fields, such as transportation, 5G wireless communication, and implantation of the global dynamic routing scheme.<sup>[15][16][17]</sup><br />
<br />
Qiang Tu and his group study the constrained reliable shortest path (CRSP) problem for electric vehicles in the urban transportation network.<sup>[15]</sup> They take the reliable travel time of a path as the objective, which is made up of the planned travel time of the path and a reliability term. The group studies the Chicago sketch network, consisting of 933 nodes and 2950 links, and the Sioux Falls network, consisting of 24 nodes and 76 links. The results show that travelers’ risk attitudes and the properties of electric vehicles in the transportation network can have a great influence on path choice.<sup>[15]</sup> The study can contribute to the development of city navigation systems.<br />
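Shortest path problems on networks with non-negative arc costs are classically solved with Dijkstra's algorithm. Below is a minimal sketch on a toy graph; the nodes and distances are hypothetical and not taken from the cited studies:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a graph with non-negative
    edge weights, given as {node: [(neighbor, weight), ...]}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Toy network: A is the origin, D the destination
graph = {
    "A": [("B", 4), ("C", 1)],
    "C": [("B", 2), ("D", 5)],
    "B": [("D", 1)],
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 3, 'C': 1, 'D': 4}
```

Here the direct arc A-B (cost 4) is beaten by the path A-C-B (cost 3), and the shortest A-D path is A-C-B-D with total cost 4.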
<br />
== Conclusion ==<br />
Since its inception, the network flow problem has provided humanity with a straightforward and scalable approach to several large-scale challenges and problems. The Simplex algorithm and other computational optimization platforms have made addressing these problems routine, and have greatly expedited efforts for groups concerned with supply-chain and other distribution processes. The formulation of this problem has had several derivations from its original format, but its overall methodology and approach have remained prevalent in several of society’s industrial and commercial processes, even over half a century later. Classical models such as the assignment, transportation, maximal flow, and shortest path problem configurations have found their way into diverse settings, ranging from streamlining oil distribution networks along the gulf coast to arranging optimal scheduling assignments for college students amidst a global pandemic. All in all, the network flow problem and its monumental impact have made it a fundamental tool for any group that deals with combinatorial data sets. And with the surge in adoption of data-driven models and applications within virtually all industries, the use of the network flow problem approach will only continue to drive innovation and meet consumer demands for the foreseeable future.<br />
<br />
== References ==<br />
1. Karp, R. M. (2008). [https://www.sciencedirect.com/science/article/pii/S1572528607000370/ George Dantzig’s impact on the theory of computation]. Discrete Optimization, 5(2), 174-185.<br />
<br />
2. Goldberg, A. V. Tardos, Eva, Tarjan, Robert E. (1989). [http://www.cs.cornell.edu/~eva/Network.Flow.Algorithms.pdf/ Network Flow Algorithms, Algorithms and Combinatorics]. 9. 101-164.<br />
<br />
3. Bradley, S. P. Hax, A. C., & Magnanti, T. L. (1977). Network Models. [http://web.mit.edu/15.053/www/AMP.htm/ Applied mathematical programming] (p. 259). Reading, MA: Addison-Wesley.<br />
<br />
4. Chinneck, J. W. (2006). [https://www.optimization101.org/ Practical optimization: a gentle introduction. Systems and Computer Engineering]. Carleton University, Ottawa. 11.<br />
<br />
5. Roy, B. V. Mason, K.(2005). [https://web.stanford.edu/~ashishg/msande111/notes/chapter5.pdf/ Formulation and Analysis of Linear Programs, Chapter 5 Network Flows].<br />
<br />
6. Vanderbei, R. J. (2020). [https://www.springer.com/gp/book/9781461476306/ Linear programming: foundations and extensions (Vol. 285)]. Springer Nature.<br />
<br />
7. Sobel, J. (2014). [https://econweb.ucsd.edu/~jsobel/172aw02/notes8.pdf/ Linear Programming Notes VIII: The Transportation Problem].<br />
<br />
8. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001). "Section 26.2: The Ford–Fulkerson method". Introduction to Algorithms (Second ed.). MIT Press and McGraw–Hill.<br />
<br />
9. Jon Kleinberg; Éva Tardos (2006). "Chapter 7: Network Flow". Algorithm Design. Pearson Education.<br />
<br />
10. [https://en.wikipedia.org/wiki/Ford%E2%80%93Fulkerson_algorithm/ Ford–Fulkerson algorithm]. Retrieved December 05, 2020.<br />
<br />
11. Hu, G. (2020, November 19). [https://optimization.cbe.cornell.edu/index.php?title=Simplex_algorithm#cite_note-11/ Simplex algorithm]. Retrieved November 22, 2020.<br />
<br />
12. Altınel, İ. K., Aras, N., Şuvak, Z., & Taşkın, Z. C. (2019). [https://www.sciencedirect.com/science/article/pii/S0166218X18304815/ Minimum cost noncrossing flow problem on layered networks]. Discrete Applied Mathematics, 261, 2-21.<br />
<br />
13. Dewil, R., Vansteenwegen, P., Cattrysse, D., & Van Oudheusden, D. (2015). [https://core.ac.uk/download/pdf/34613916.pdf/ A minimum cost network flow model for the maximum covering and patrol routing problem]. European Journal of Operational Research, 247(1), 27-36.<br />
<br />
14. Lin, D. Y., & Chang, Y. T. (2018). [https://www.sciencedirect.com/science/article/pii/S1366554517308037/ Ship routing and freight assignment problem for liner shipping: Application to the Northern Sea Route planning problem]. Transportation Research Part E: Logistics and Transportation Review, 110, 47-70.<br />
<br />
15. Tu, Q., Cheng, L., Yuan, T., Cheng, Y., & Li, M. (2020). [https://www.sciencedirect.com/science/article/pii/S095965262031177X/ The Constrained Reliable Shortest Path Problem for Electric Vehicles in the Urban Transportation Network]. Journal of Cleaner Production, 121130.<br />
<br />
16. Guo, Y., Li, S., Jiang, W., Zhang, B., & Ma, Y. (2017). [https://dl.acm.org/doi/abs/10.1016/j.phycom.2017.06.010/ Learning automata-based algorithms for solving the stochastic shortest path routing problems in 5G wireless communication]. Physical Communication, 25, 376-385.<br />
<br />
17. Haddou, N. B., Ez-Zahraouy, H., & Rachadi, A. (2016). [https://www.infona.pl/resource/bwmeta1.element.elsevier-2eaa73bc-4e22-39aa-89b9-71ef2d7e2d63/ Implantation of the global dynamic routing scheme in scale-free networks under the shortest path strategy]. Physics Letters A, 380(33), 2513-2517.</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Matrix_game_(LP_for_game_theory)&diff=2726Matrix game (LP for game theory)2020-12-21T11:32:34Z<p>Wc593: </p>
<hr />
<div>Author: David Oswalt (SysEn 5800 Fall 2020)<br />
<br />
== Game Theory and Linear Programming ==<br />
[[File:JohnOskar.png|thumb|John von Neumann (1903–1957) and Oskar Morgenstern (1902–1977)]]<br />
Game theory is a formal language for modeling and analyzing the interactive behaviors of intelligent, rational decision-makers (or players). Game theory provides the mathematical methods necessary to analyze the decisions of two or more players based on their preferences to determine a final outcome. The theory was first conceptualized by mathematician Ernst Zermelo in the early 20th century. However, John von Neumann pioneered modern game theory through his book Theory of Games and Economic Behavior, written alongside co-author Oskar Morgenstern. For this reason, John von Neumann is often credited by historians as the Father of Game Theory.<sup>[1][2]</sup> This theory has provided a framework for approaching complex, high-pressure situations and has a broad spectrum of applications. These applications of game theory have helped shape modern economics and social sciences as we know them today and are discussed in the Applications section below. <br />
<br />
Analyzing game theoretic situations is a practical application of linear programming. These situations can get quite complex mathematically, but one of the simplest forms of game is called the Finite Two-Person Zero-Sum Game (or Matrix Game for short). In a Matrix Game, two players are involved in a competitive situation in which one player’s loss is the other’s gain. Some common terms related to the Matrix Game that will be used throughout this chapter have been defined below:<br />
<br />
* '''Game''' – Any social situation involving two or more individuals.<sup>[2]</sup><br />
* '''Players''' – The individuals involved in a game. In the case of two-person zero-sum games, these players are assumed to be rational and intelligent.<sup>[2]</sup><br />
* '''Rationality''' – A decision maker is considered to be rational if he or she makes decisions consistently in pursuit of his or her own objectives. Assuming a player to be rational implies that said player’s objective is to maximize his or her own payoff.<sup>[2]</sup><br />
* '''Utility''' – The scale upon which a decision’s payoff is measured.<sup>[2]</sup><br />
<br />
Analyzing these games uses John von Neumann’s Minimax Theorem, which was originally proven using the Brouwer Fixed-Point Theorem. However, over time it was shown that the Matrix Game could be solved using Linear Programming along with the Duality Theorem.<sup>[3]</sup> This solution to the Matrix Game is derived in the Theory and Algorithmic Discussion section below.<br />
<br />
== Theory and Algorithmic Discussion ==<br />
Consider a simple two-player zero-sum matrix game called Evens and Odds. In this game, two players each wager $1 before simultaneously showing either one or two fingers. If the sum of the fingers showing is even, player 1 wins the pot for that round ($2). If the sum of the fingers showing is odd, player 2 wins the pot for that round. As with all matrix games, the assumption that both players are rational and intelligent decision makers with the goal of maximizing their own total payoff in each round applies. The expected utility for each player can be defined using a payoff matrix, ''P''. In this payoff matrix, the rows and columns represent the decisions of player 1 and player 2 respectively. The below payoff matrix represents the payoff to player 1 in this matrix game.<br />
<br />
<math>P=\begin{bmatrix} 2 & -2 \\ -2 & 2 \end{bmatrix}</math><br />
<br />
The rows of this payoff matrix indicate the decision made by player 1, and the columns indicate the decision made by player 2. If player 1 puts up one finger (first row) and player 2 puts up 1 finger (first column), then player 1 wins $2. In this example, the optimal strategy for each player is to show one or two fingers with equal probability ½, so neither player has a distinct advantage. Consider now a less-trivial game where the payoff matrix is no longer evenly distributed, shown below.<br />
<br />
<math>P=\begin{bmatrix} 1 & -2 \\ -3 & 2 \end{bmatrix}</math><br />
<br />
While it may be intuitive that player 2 has the edge in this new game, making this determination is not as clear for much more complicated games. This is where the mathematics behind game theory comes into play. Consider a more general form of a two-person zero-sum game where two players are allowed to pick from a finite set of actions. Let <math>n </math> represent the finite number of actions that player one (or the “row player”) can choose from and <math>i </math> represent the action selected, or <math>i= 1,2,...,n </math>. Likewise, let <math>m </math> represent the finite number of actions that player two (or the “column player”) can choose from and <math>j </math> represent the action selected, or <math>j= 1,2,...,m </math>. The general form of the payoff matrix for a matrix game is now shown below. Note that, in the general treatment that follows, <math>p_{ij} </math> denotes the payment made by the row player to the column player, so a positive entry is a gain for the column player and a negative entry is a gain for the row player.<br />
<br />
<math>P = [p_{ij}]</math><br />
<br />
Next, we assume that each player is making a random selection in accordance with a fixed probability distribution. This probability distribution is defined by what is called the ''stochastic vector,'' <math>y</math>. Each component of the stochastic vector, <math>y_i </math>, denotes the probability that the row player selects action <math>i </math>. This stochastic vector is made up of nonnegative probabilities that sum up to one per the fundamental law of probability:<br />
<br />
<math>y \geq 0 \text{ and } e^Ty=1, </math><br />
<br />
where e is a vector of all ones. Likewise, the stochastic vector for the column player can be defined as <math>x </math>, with the probabilities that this player selects action <math>j </math> denoted by<math>x_j </math>. To compute the expected payoff to the column player, the payoff from each outcome <math>(i,j) </math> for all <math>i = 1,2,...,n </math> and <math>j= 1,2,...,m </math> times the probability of that outcome are summed. Thus, the column player’s expected payoff is defined as<br />
<br />
<math>\sum_{i,j}y_ip_{ij}x_j = y^T Px</math>.<br />
<br />
Since we have assumed that our column player acts rationally, we can expect them to act in accordance with the stochastic vector x. In other words, the column player has adopted strategy x. The row player’s best option for defending against strategy x is to adopt strategy y*, in which they act to minimize the column player’s payout:<br />
<br />
<math>\begin{align}<br />
\text{min} & ~~ y^TPx\\<br />
\text{s.t} & ~~ e^Ty=1 \\<br />
& ~~ y \geq 0 \\ <br />
\end{align}</math><br />
<br />
By assuming that our column player acts intelligently, this implies that they are aware of the row player’s strategy to minimize their payoff. Hence, the column player can employ strategy x* that maximizes their payoff given the row player’s strategy y* with the following maximum:<br />
<br />
<math>\max_{x} \min_{y} y^T Px</math> <br />
<br />
The above equation can be solved by reformulating it as a linear program. By taking the inner optimization over the deterministic strategies, this equation can be re-written as: <br />
<br />
<math>\begin{align}<br />
\text{max} & ~~ \text{min}_i e_i ^T Px\\<br />
\text{s.t} & ~~ \sum_{j=1}^m x_{j} = 1\\<br />
& ~~ x_j \geq 0 & ~~ j = 1, 2, ..., m \\<br />
\end{align}</math><br />
<br />
In order to put a lower bound on the minimization term, a new variable ''v'' is introduced. This gives us the following linear program:<br />
<br />
<math>\begin{align}<br />
\text{max} & ~~ v\\<br />
\text{s.t} & ~~ v \leq e_i^T Px & ~~ i = 1, 2, ..., n\\<br />
& ~~ \sum_{j=1}^m x_j = 1 \\<br />
& ~~ x_j \geq 0 & ~~ j = 1, 2, ..., m \\<br />
\end{align}</math><br />
<br />
or in vector notation,<br />
<br />
<math>\begin{align}<br />
\text{max} & ~~ v\\<br />
\text{s.t} & ~~ ve-Px \leq 0\\<br />
& ~~ e^T x =1\\<br />
& ~~ x \geq 0\\<br />
\end{align}</math><br />
<br />
The above max-min linear program governs the column player’s strategy x*. We can use this linear program to determine the row player’s strategy y* by taking the dual, which yields a min-max linear program:<br />
<br />
<math>\min_{y} \max_{x} y^T Px</math><br />
<br />
Similarly to the max-min linear program used for the column player’s strategy, the above equation can be reformulated into a linear program by taking the inner optimization over the deterministic strategies and introducing a new variable u:<br />
<br />
<math>\begin{align}<br />
\text{min} & ~~ u\\<br />
\text{s.t} & ~~ ue-P^Ty \geq 0\\<br />
& ~~ e^T y =1\\<br />
& ~~ y \geq 0\\<br />
\end{align}</math><br />
<br />
These linear programs can be solved to find the optimal strategies <math>x^*</math> and <math>y^*</math>. The Minimax Theorem can now be used to verify that both solutions are consistent with one another. The Minimax Theorem states that there exist stochastic vectors <math>x^*</math> and <math>y^*</math> for which<br />
<br />
<math>\max_{x} y^{*T} Px = \min_{y} y^T Px^*</math><br />
<br />
In order to prove the Minimax Theorem, we first consider the fact that<br />
<br />
<math>v^* = \min_{i} e_i ^T Px^* = \min_{y} y^T Px^*,</math><br />
<br />
and<br />
<br />
<math>u^* = \max_{j} e_j ^T P^T y^* = \max_{x} x^T P^T y^* = \max_{x} y^{*T}Px</math><br />
<br />
Since the max-min linear program for x* and the min-max linear program for y* are duals of one another, strong duality guarantees that v* = u*. Therefore,<br />
<br />
<math>\max_{x} y^{*T} Px = \min_{y} y^T Px^*</math><br />
<br />
Solving the above equation for the optimal value v* = u* yields what is called the value of the game. The value of a game shows how much utility each player can expect to gain or lose on average. In the event that v* = u* = 0, the game is considered fair, meaning neither player has a distinct advantage. In order to illustrate the power of the minimax theorem in solving matrix games, a numerical example is provided in the section below.<br />
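As a concrete check of the duality argument, the sketch below solves both players' linear programs for the lopsided Evens and Odds game from the introduction, using SciPy's <code>linprog</code> as a stand-in for any LP solver. One convention choice is made here for the check: following the derivation above, where the column player maximizes <math>y^T Px</math>, the entries of <math>P</math> are taken as payments from the row player (player 1) to the column player (player 2), i.e. the negative of the payoff-to-player-1 matrix given earlier:

```python
from scipy.optimize import linprog

# Uneven Evens and Odds, written as payments from row to column player:
# P = [[-1, 2], [3, -2]]  (negative of the payoff-to-player-1 matrix)

# Column player: max v  s.t.  v*e - P x <= 0,  sum(x) = 1,  x >= 0
# Variables [x1, x2, v]; linprog minimizes, so the objective is -v.
col = linprog([0, 0, -1],
              A_ub=[[1, -2, 1],    # v - (-x1 + 2*x2) <= 0
                    [-3, 2, 1]],   # v - (3*x1 - 2*x2) <= 0
              b_ub=[0, 0],
              A_eq=[[1, 1, 0]], b_eq=[1],
              bounds=[(0, 1), (0, 1), (None, None)])

# Row player (the dual): min u  s.t.  u*e - P^T y >= 0,  sum(y) = 1,  y >= 0
row = linprog([0, 0, 1],
              A_ub=[[-1, 3, -1],   # (-y1 + 3*y2) - u <= 0
                    [2, -2, -1]],  # (2*y1 - 2*y2) - u <= 0
              b_ub=[0, 0],
              A_eq=[[1, 1, 0]], b_eq=[1],
              bounds=[(0, 1), (0, 1), (None, None)])

v_star, u_star = -col.fun, row.fun
print(v_star, u_star)  # 0.5 0.5
```

Both programs return the same value, $0.50 per round in favor of player 2 (with optimal mixed strategies x* = (1/2, 1/2) and y* = (5/8, 3/8)), consistent with the earlier observation that player 2 has the edge in this game.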
<br />
== Numerical Example ==<br />
Many decisions made in sports can be modeled as finite two-person zero-sum games. Take, for example, a common dilemma seen in American football. The offense has driven down the field and is within a few short yards of scoring. The team has four plays, or ''downs'', to score. On the third down, the team gets stopped by the defense and is unable to score, leaving only one more play to make it happen. There are two options for scoring. The first is a field goal, in which the team kicks the ball through the uprights for 3 points. The second option is to run a passing or running play for a touchdown, worth 7 points. This is often referred to as a “Fourth and Goal” situation and is a dilemma that play-callers face in most football games. While the option of scoring a touchdown yields a higher payoff, it is a much riskier option, as running and passing plays are easier to defend against than a field goal. For this reason, football coaches often settle on kicking a field goal on 4<sup>th</sup> down instead of going for it. This anticlimactic end to a long and exciting drive often leaves fans with an unsatisfying feeling, knowing that their team was only a few yards from scoring a touchdown. While kicking the field goal nearly guarantees 3 points, is it smarter to employ a more aggressive strategy and go for the touchdown? Game theory can help determine the strategy that will yield the highest number of points on average over time.<br />
<br />
There are a few assumptions to be made in order to model this Fourth and Goal Dilemma. The first is that both football teams are ideal. What this means is that if the offense chooses a run play and the defense chooses to defend a run play, then the run will be stopped with zero yards gained. It also means that if the offense chooses a run play and the defense incorrectly chooses to defend a passing play, then the play will be successful with a touchdown scored. We are also assuming that if the offense chooses to kick a field goal, then it is guaranteed to be successful. This is assumed due to the fact that field goals from just a few yards out are very rarely missed. The final assumption is that all other factors contributing to play calling are neglected. This could include situations such as the offense being down 2 points with only a few seconds on the clock, when a field goal for 3 points would be the obvious best strategy. With these assumptions in mind, the payoff to the offense can be outlined as follows:<br />
{| class="wikitable"<br />
|+4th and Goal Dilemma Payoff<br />
!<br />
!<br />
! colspan="3" |Defense<br />
|-<br />
!<br />
!<br />
!Run <br />
!Pass <br />
!FG <br />
|-<br />
| rowspan="3" |'''Offense'''<br />
|'''Run'''<br />
| 0<br />
|7<br />
|7<br />
|-<br />
|'''Pass'''<br />
|7<br />
| 0<br />
|7<br />
|-<br />
|'''FG'''<br />
|3<br />
|3<br />
|3<br />
|}<br />
The above payoff table can also be depicted by the following payoff matrix, <math>P</math>, where the columns represent the defensive team's actions and the rows represent the offensive team's actions.<br />
<br />
<math>P = \begin{bmatrix} 0 & 7 & 7 \\ 7 & 0 & 7 \\ 3 & 3 & 3 \end{bmatrix}</math><br />
<br />
In order to determine their optimal strategy, the offense must solve the below linear program:<br />
<br />
<math>\begin{align}<br />
\text{max} & ~~ w \\<br />
\text{s.t.} & ~~ \begin{bmatrix} 0 & 7 & 3 & -1 \\ 7 & 0 & 3 & -1\\ 7 & 7 & 3 & -1\\ 1 & 1 & 1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ w \end{bmatrix} \begin{matrix} \geq \\ \geq \\ \geq \\ = \end{matrix} \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}, & ~~ x_1, x_2, x_3 \geq 0 \\<br />
\end{align}</math><br />
<br />
The above linear program has been solved using a concrete model with the GLPK solver package in ''Pyomo'', a Python-based computational optimization modeling language. The solution shows that the offense should adopt the following strategy to maximize the number of points scored on average over time:<br />
<br />
<math>x^* = \begin{bmatrix} 0.50 \\ 0.50 \\ 0 \end{bmatrix}</math><br />
<br />
Using the stochastic vector <math>x^*</math> defined above, the value of the game can be computed:<br />
<br />
<math>w^* = 3.5</math><br />
<br />
This means that if the offense runs a pass play 50% of the time, runs a running play 50% of the time and never chooses to kick the field goal, they can expect a payout of at least 3.5 points on average over time. This scenario, while vastly oversimplified, demonstrates the power of applying linear programming to determine optimal strategies in finite two-person zero-sum games. It also demonstrates that it pays dividends to make aggressive play-calling decisions in sports such as football.<br />
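The same solution can be reproduced with other LP tooling; below is a minimal sketch using SciPy's <code>linprog</code> in place of the Pyomo/GLPK model. The variable order is [x1, x2, x3, w], and the three inequality rows correspond to the defense's three possible responses:

```python
from scipy.optimize import linprog

# Payoff matrix P: rows = offense (Run, Pass, FG), columns = defense.
# Maximize w  <=>  minimize -w.
c = [0, 0, 0, -1]

# w <= (P^T x)_j for each defense column j, rewritten as w - (P^T x)_j <= 0
A_ub = [
    [0, -7, -3, 1],   # defense defends Run
    [-7, 0, -3, 1],   # defense defends Pass
    [-7, -7, -3, 1],  # defense defends FG
]
b_ub = [0, 0, 0]

A_eq = [[1, 1, 1, 0]]  # offense probabilities sum to one
b_eq = [1]
bounds = [(0, 1), (0, 1), (0, 1), (None, None)]  # w is free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)  # [0.5 0.5 0.  3.5]
```

The solver recovers the 50/50 run-pass mix with value w* = 3.5, matching the Pyomo result above.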
<br />
== Other Applications of the Matrix Game ==<br />
The rise of game theory spanned the time frame in which both World War I and World War II occurred, so naturally one of the earliest applications was in developing winning military strategies. Game theory was used to make high-pressure decisions on attack and defense strategies that optimized their impact within a set of constraints. The Battle of the Bismarck Sea between Japanese and American forces in 1943 is one of the most historic examples of game theory in this context. In this battle, the US Air Force analyzed an attack situation using a two-person zero-sum game to maximize the amount of time they had to bomb a Japanese naval fleet, given the limited information they had about the convoy’s route. This demonstrates the fact that the word “game” in “game theory” can be misleading. Not all applications of game theory are fun games and many applications can have serious consequences. <br />
<br />
One of the other earlier applications of game theory was in economics. This ended up growing into one of the more significant applications of game theory and has formed modern economics as we know it today. The theory played a major role in the development of many sub-disciplines of economics, such as industrial organization, international trade, labor economics, and macroeconomics.<sup>[1]</sup> As game theory matured, its applications expanded into various fields of social science, including political science, international relations, philosophy, sociology and anthropology. It is also used in biology and computer science. To this day, economics remains the most prominent application of game theory.<br />
<br />
== Conclusion ==<br />
Situations modeled as finite two-person zero-sum games, or ''Matrix Games,'' tend to be oversimplified and not have much practical use. However, solving matrix games using linear programming is merely an introduction into the power of analyzing stochastic decision making using computational optimization methods. Game theory has revolutionized the world's approach to disciplines such as economics, war, intelligence, biology, computer science, political science and many more. The methods used to solve game theoretic models continue to evolve and will subsequently continue to change the way decision makers approach the world around us. <br />
<br />
== References ==<br />
[1] Bonanno, Giacomo. ''Game Theory''. 2nd ed., CreateSpace Independent Publishing Platform, 2015.<br />
<br />
[2] Myerson, Roger B. ''Game Theory Analysis of Conflict''. Harvard University Press, 2013.<br />
<br />
[3] Vanderbei, Robert J. ''Linear Programming: Foundations and Extensions''. 2nd ed., Kluwer, 2004.<br />
<br />
[4] “Blog: Five Early AI Geniuses: John Von Neumann.” ''Tim McCloud'', 19 June 2019, timmccloud.net/blog-5%E2%80%8A-%E2%80%8Aearly-ai-geniuses-john-von-neumann-and-chess/.</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Optimization_with_absolute_values&diff=2725Optimization with absolute values2020-12-21T11:31:56Z<p>Wc593: </p>
<hr />
<div>Authors: Matthew Chan (mdc297), Yilian Yin (yy896), Brian Amado (ba392), Peter Williams (pmw99), Dewei Xiao (dx58) (SysEn 5800 Fall 2020)<br />
<br />
== Introduction ==<br />
Absolute values can make it relatively difficult to determine the optimal solution when handled without first converting the problem to standard form. Converting the objective function is a good first step in solving optimization problems with absolute values; after the conversion, the problem can be solved using linear programming techniques. Because an absolute-value term makes the problem nonlinear, a new variable (e.g., <math>\textstyle X^a </math>) is introduced in the objective function, and additional constraints must be added to find the optimal solution.<br />
<br />
== Method ==<br />
<br />
=== Defining Absolute Values ===<br />
An absolute value of a real number can be described as its distance away from zero, or the non-negative magnitude of the number. <ref> Mendelson, Elliott, Schaum's Outline of Beginning Calculus, McGraw-Hill Professional, 2008. https://books.google.com/books?id=A8hAm38zsCMC&pg=PA2#v=onepage&q&f=false </ref> Thus,<br />
<br />
<math>\displaystyle |x|={\begin{cases}-x,&{\text{if }}x<0\\x,&{\text{if }}x\geq 0\end{cases}}</math><br />
<br />
Absolute values can exist in linear optimization problems in two primary instances: in constraints and in the objective function. <ref> "Absolute Values." ''lp_solve'', http://lpsolve.sourceforge.net/. Accessed 20 Nov. 2020. </ref><br />
<br />
=== Absolute Values in Constraints ===<br />
Within constraints, absolute value relations can be transformed into one of the following forms:<br />
<br />
<math> \begin{align}<br />
|X| &= 0 \\<br />
|X| &\le C \\<br />
|X| &\ge C<br />
\end{align} </math><br />
<br />
Where <math>\textstyle X</math> is a linear combination (<math>\textstyle ax_1 + bx_2 + ...</math> where <math>\textstyle a, b</math> are constants) and <math>\textstyle C</math> is a constant <math>\textstyle > 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| = 0</math> ====<br />
In this form, the only possible solution is if <math>\displaystyle X = 0</math> simplifying the constraint. Note that this solution also occurs if the constraint is in the form <math>\displaystyle |X| \le 0</math> due to the same conclusion that the only possible solution is <math>\textstyle X = 0</math>.<br />
<br />
==== Form when <math>\displaystyle |X| \le C</math> ====<br />
The second form a linear constraint can exist in is <math>\displaystyle |X|\leq C</math>. In this case, an equivalent feasible solution can be described by splitting the constraint into two:<br />
<br />
<math> \begin{align}<br />
X &\leq C \\<br />
-X &\leq C<br />
\end{align} </math><br />
<br />
The solution can be understood visually since <math>\textstyle X</math> must lie between <math>\textstyle -C</math> and <math>\textstyle C</math>, as shown below:<br />
<br />
[[File:Number Line X Less Than C.png|none|thumb]]<br />
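The equivalence of the split pair with the original absolute-value constraint can be spot-checked numerically; a minimal sketch (the value C = 3 and the sample grid are arbitrary illustrative choices):

```python
# Check that the split pair {X <= C, -X <= C} is equivalent to |X| <= C.
# C = 3 and the sample grid are arbitrary choices for illustration.
C = 3.0

def satisfies_original(x):
    return abs(x) <= C

def satisfies_split(x):
    return x <= C and -x <= C

for x in [v / 10.0 for v in range(-60, 61)]:   # samples in [-6, 6]
    assert satisfies_original(x) == satisfies_split(x)
print("split constraints match |X| <= C on all samples")
```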
<br />
==== Form when <math>\displaystyle |X| \ge C</math> ====<br />
Visually, the solution space for the last form is the complement of the second solution above, resulting in the following representation:[[File:Number Line for X Greater Than C.png|none|thumb]]In expression form, the solutions can be written as:<br />
<br />
<math> \begin{align}<br />
X &\geq C \\<br />
-X &\geq C<br />
\end{align} </math><br />
<br />
As seen visually, the feasible region has a gap and is thus non-convex. The two expressions also cannot hold true simultaneously. This means that it is not possible to transform constraints in this form into a set of linear constraints. <ref> ''Optimization Methods in Management Science / Operations Research.'' Massachusetts Institute of Technology, Spring 2013, https://ocw.mit.edu/courses/sloan-school-of-management/15-053-optimization-methods-in-management-science-spring-2013/tutorials/MIT15_053S13_tut04.pdf. Accessed 20 Nov. 2020. </ref> <br />
<br />
An approach to reach a solution for this particular case exists in the form of Mixed-Integer Linear Programming, where only one of the equations above is “active”.<br />
<br />
The inequality can be reformulated into the following:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge C \\<br />
-&X + N*(1-Y) \ge C \\<br />
&Y = 0, 1 <br />
\end{align} </math><br />
<br />
With this new set of constraints, a large constant <math>\textstyle N</math> is introduced, along with a binary variable <math>\textstyle Y</math>. So long as <math>\textstyle N</math> is sufficiently large (larger than <math>\textstyle C</math> plus the largest magnitude <math>\textstyle X</math> can attain), the large constant multiplied by the binary variable ensures that one of the constraints must be satisfied. For instance, if <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge C \\<br />
-&X + N \ge C<br />
\end{align} </math><br />
<br />
Since <math>\textstyle N</math> is sufficiently large, the latter constraint will always be satisfied, leaving only one relation active: <math>\textstyle X \ge C</math>. Functionally, this allows for the XOR logical operation of <math>\textstyle X \geq C</math> and <math>\textstyle -X \geq C</math>.<br />
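The disjunctive behavior of this big-M construction can be verified by enumerating both values of the binary variable; a small sketch (C = 2 and N = 100 are arbitrary illustrative choices):

```python
# Feasibility of the big-M pair for a given X and binary Y:
#   X + N*Y >= C   and   -X + N*(1 - Y) >= C
# C = 2 and N = 100 are arbitrary illustrative choices.
C, N = 2.0, 100.0

def feasible(x, y):
    return x + N * y >= C and -x + N * (1 - y) >= C

for x in [v / 10.0 for v in range(-50, 51)]:   # samples in [-5, 5]
    milp_feasible = feasible(x, 0) or feasible(x, 1)
    assert milp_feasible == (abs(x) >= C)
print("big-M disjunction matches |X| >= C on all samples")
```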
<br />
=== Absolute Values in Objective Functions ===<br />
In objective functions, all constraints must be linear in order to leverage these absolute-value transformations.<br />
<br />
Similar to the case of absolute values in constraints, there are different approaches to reformulating the objective function, depending on whether the sign constraints are satisfied. The sign constraints are satisfied when the coefficients of the absolute-value terms are all either:<br />
<br />
* Positive for a minimization problem<br />
* Negative for a maximization problem<br />
<br />
==== Sign Constraints are Satisfied ====<br />
At a high level, the transformation works similarly to the second case of absolute value in constraints – aiming to bound the solution space for the absolute value term with a new variable, <math>\textstyle Z</math>.<br />
<br />
If <math>\textstyle |X|</math> is the absolute value term in our objective function, two additional constraints are added to the linear program:<br />
<br />
<math> \begin{align}<br />
&X\leq Z \\<br />
-&X\leq Z <br />
\end{align} </math><br />
<br />
The <math>\textstyle |X|</math> term in the objective function is then replaced by <math>\textstyle Z</math>, relaxing the original function into a collection of linear constraints.<br />
<br />
==== Sign Constraints are Not Satisfied ====<br />
To transform problems where the coefficient signs of the absolute-value terms do not satisfy the conditions above, a conclusion similar to that of the last case for absolute values in constraints applies: integer variables are needed to reach a linear format. <br />
<br />
The following constraints need to be added to the problem:<br />
<br />
<math> \begin{align}<br />
&X + N*Y \ge Z \\<br />
-&X + N*(1-Y) \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z \\<br />
&Y = 0, 1<br />
\end{align} </math><br />
<br />
Again, <math>\textstyle N</math> is a large constant, <math>\textstyle Z</math> is a replacement variable for <math>\textstyle |X|</math> in the objective function, and <math>\textstyle Y</math> is a binary variable. The first two constraints ensure that exactly one of them is active while the other is automatically satisfied, following the same logic as above. The third and fourth constraints ensure that <math>\textstyle Z</math> is at least <math>\textstyle |X|</math>, so that together the constraints force <math>\textstyle Z</math> to equal <math>\textstyle |X|</math>. For instance, for the case of <math>\textstyle Y = 0</math>, the new constraints will resolve to:<br />
<br />
<math> \begin{align}<br />
&X \ge Z \\<br />
-&X + N \ge Z \\<br />
&X \le Z \\<br />
-&X \le Z<br />
\end{align} </math><br />
<br />
As <math>\textstyle N</math> is sufficiently large (<math>\textstyle N</math> must be at least <math>\textstyle 2|X|</math> for this approach), the second constraint is automatically satisfied. The remaining first and third constraints, <math>\textstyle X \ge Z</math> and <math>\textstyle X \le Z</math>, can only hold together when <math>\textstyle Z = X</math>, and the fourth constraint, <math>\textstyle -X \le Z</math>, then requires <math>\textstyle X</math> to be non-negative. Together, these constraints allow for the selection of the largest <math>\textstyle |X|</math> for maximization problems (or the smallest for minimization problems).<br />
<br />
=== Absolute Values in Nonlinear Optimization Problems ===<br />
Introducing a new variable <math> (X_a) </math> into an objective function that contains absolute-value quantities does not, by itself, remove the nonlinearity. The absolute-value quantities require that the problem be reformulated before proceeding, and additional constraints must be added to account for the added variable.<br />
<br />
==Numerical Example==<br />
'''Example when All Sign Constraints are Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| + |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> -U_1 \le x_1 \le U_1 </math><br />
<br />
<math> -U_2 \le x_2 \le U_2 </math><br />
<br />
<math> -U_3 \le x_3 \le U_3 </math><br />
<br />
The problem has now been reformulated as a linear programming problem that can be solved normally:<br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 + U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
-&U_3 \le x_3 \le U_3 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
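The reformulated LP above can be reproduced with an off-the-shelf solver; a sketch using SciPy's linprog (the variable ordering is an arbitrary choice):

```python
import numpy as np
from scipy.optimize import linprog

# Variables ordered (x1, x2, x3, U1, U2, U3); this ordering is an
# arbitrary choice.  Minimize 2*U1 + 3*U2 + U3.
c = np.array([0, 0, 0, 2, 3, 1], dtype=float)
A_ub = np.array([
    [ 1,  2, -3,  0,  0,  0],   # x1 + 2x2 - 3x3 <= 8
    [ 1,  0,  0, -1,  0,  0],   # x1 <= U1
    [-1,  0,  0, -1,  0,  0],   # -x1 <= U1
    [ 0,  1,  0,  0, -1,  0],   # x2 <= U2
    [ 0, -1,  0,  0, -1,  0],   # -x2 <= U2
    [ 0,  0,  1,  0,  0, -1],   # x3 <= U3
    [ 0,  0, -1,  0,  0, -1],   # -x3 <= U3
], dtype=float)
b_ub = np.array([8, 0, 0, 0, 0, 0, 0], dtype=float)
A_eq = np.array([[2, -1, 4, 0, 0, 0]], dtype=float)
b_eq = np.array([14.0])
bounds = [(None, None)] * 3 + [(0, None)] * 3   # x free, U >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.fun, res.x[:3])   # expected: 3.5 at (x1, x2, x3) = (0, 0, 3.5)
```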
<br />
'''Example when Sign Constraints are not Satisfied'''<br />
<br />
<math> \begin{align}<br />
\min \quad &{2|x_1| + 3|x_2| - |x_3|} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14<br />
\end{align}</math><br />
<br />
The absolute value quantities will be replaced with single variables:<br />
<br />
<math>|x_1| = U_1 </math><br />
<br />
<math>|x_2| = U_2</math><br />
<br />
<math>|x_3| = U_3</math><br />
<br />
We must introduce additional constraints to ensure we do not lose any information by doing this substitution:<br />
<br />
<math> \begin{align}<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1<br />
\end{align}</math><br />
<br />
The problem has now been reformulated as a mixed-integer linear program that can be solved normally:<br />
<ref> Shanno, David F., and Roman L. Weil. “'Linear' Programming with Absolute-Value Functionals.” Operations Research, vol. 19, no. 1, 1971, pp. 120–124. Accessed 13 Dec. 2020. JSTOR, www.jstor.org/stable/168871. </ref><br />
<br />
<math> \begin{align}<br />
\min \quad &{ 2U_1 + 3U_2 - U_3} \\<br />
s.t. \quad &x_1 + 2x_2 - 3x_3 \le 8 \\<br />
&2x_1 - x_2 + 4x_3= 14 \\<br />
-&U_1 \le x_1 \le U_1 \\<br />
-&U_2 \le x_2 \le U_2 \\<br />
&x_3 + M*Y \ge U_3 \\<br />
-&x_3 + M*(1-Y) \ge U_3 \\<br />
&x_3 \le U_3 \\<br />
-&x_3 \le U_3 \\<br />
&Y = 0,1 <br />
\end{align}</math><br />
<br />
The optimum value for the objective function is <math>-3.5</math>, which occurs when <math>x_1 = 0 </math> and <math>x_2 = 0 </math> and <math>x_3 = 3.5 </math>.<br />
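The resulting MILP can likewise be checked with a solver; a sketch using SciPy's milp (requires SciPy ≥ 1.9; the variable ordering and the big-M value M = 100 are arbitrary choices):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Minimize 2*U1 + 3*U2 - U3 over v = (x1, x2, x3, U1, U2, U3, Y);
# the variable ordering and M = 100 are arbitrary choices.
M = 100.0
c = np.array([0, 0, 0, 2, 3, -1, 0], dtype=float)
A = np.array([
    [ 1,  2, -3,  0,  0,  0,  0],   # x1 + 2x2 - 3x3 <= 8
    [ 2, -1,  4,  0,  0,  0,  0],   # 2x1 - x2 + 4x3 = 14
    [ 1,  0,  0, -1,  0,  0,  0],   # x1 <= U1
    [-1,  0,  0, -1,  0,  0,  0],   # -x1 <= U1
    [ 0,  1,  0,  0, -1,  0,  0],   # x2 <= U2
    [ 0, -1,  0,  0, -1,  0,  0],   # -x2 <= U2
    [ 0,  0,  1,  0,  0, -1,  M],   # x3 + M*Y >= U3
    [ 0,  0, -1,  0,  0, -1, -M],   # -x3 + M*(1-Y) >= U3
    [ 0,  0,  1,  0,  0, -1,  0],   # x3 <= U3
    [ 0,  0, -1,  0,  0, -1,  0],   # -x3 <= U3
], dtype=float)
lb = [-np.inf, 14, -np.inf, -np.inf, -np.inf, -np.inf, 0, -M, -np.inf, -np.inf]
ub = [8, 14, 0, 0, 0, 0, np.inf, np.inf, 0, 0]

res = milp(
    c=c,
    constraints=LinearConstraint(A, lb, ub),
    integrality=[0, 0, 0, 0, 0, 0, 1],              # only Y is integer
    bounds=Bounds([-np.inf] * 3 + [0] * 4, [np.inf] * 6 + [1]),
)
print(res.fun, res.x[:3])   # expected: -3.5 at (x1, x2, x3) = (0, 0, 3.5)
```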
<br />
== Applications ==<br />
<br />
<br />
Consider the problem <math>Ax=b; \quad \max \quad z= \textstyle \sum_{j} c_j |x_j|</math>. This problem cannot, in general, be solved with the simplex method. The problem has a simplex method solution (with unrestricted basis entry) only if the <math>c_j</math> are nonpositive (non-negative for minimizing problems).<br />
<br />
The primary application of absolute-value functionals in linear programming has been for absolute-value or <math>L_1</math>-metric regression analysis. Such an application is always a minimization problem with all <math>c_j</math> equal to 1, so the required conditions for valid use of the simplex method are met. <br />
<br />
By reformulating the original problem into a Mixed-Integer Linear Program (MILP), we can utilize known programs to solve for the optimal solution(s). <br />
<br />
=== Application in Finance: Portfolio Selection===<br />
Under this topic, the same technique used in the Numerical Example section to perform a '''Reduction to a Linear Programming Problem''' is applied again, reformulating the problem into an LP in order to solve it. An example is given below. <br />
<br />
<br />
<br />
A portfolio is determined by what fraction of one's assets to put into each investment. <ref> Vanderbei R.J. (2008) Financial Applications. In: Linear Programming. International Series in Operations Research & Management Science, vol 114. Springer, Boston, MA. <nowiki>https://doi.org/10.1007/978-0-387-74388-2_13</nowiki> https://link.springer.com/chapter/10.1007/978-0-387-74388-2_13 </ref> It can be denoted as a collection of nonnegative numbers <math>\textstyle x_j</math>, where <math> j = 1, 2,...,n </math>. Because each <math> \textstyle x_j </math> stands for a portion of the assets, the <math> \textstyle x_j </math> sum to one. To find the mix of assets with the highest reward, let the positive parameter <math>\mu</math> denote the importance of expected return relative to risk, and let <math>\textstyle R_j</math> denote the return in the next time period on investment <math>j, j = 1, 2,..., n</math>. The total return one would obtain from the investment is <math>R = \sum_{j}\!x_j\!R_j </math>. The expected return is <math>\mathbb{E}\!R = \sum_{j}\!x_j\mathbb{E}\!R_j </math>, and the Mean Absolute Deviation from the Mean (MAD) is <math>\mathbb{E}\left\vert \!R - \mathbb{E}\!R \right\vert = \mathbb{E}\left\vert \sum_{j}\!x_j\tilde{R}_j \right\vert </math>. <br />
<br />
<br />
maximize <math display="inline">\mu\sum_j\!x_j\mathbb{E}\!R_j - \mathbb{E}\left\vert \sum_j \!x_j\tilde{R}_j \right\vert </math> <br />
<br />
subject to <math>\sum_j\!x_j = 1</math> <br />
<br />
<math>x_j \geq 0</math> , <math> j = 1,2,..n.</math> <br />
<br />
where <math>\tilde{R}_j = \!R_j - \mathbb{E}\!R_j </math><br />
<br />
<br />
<br />
This problem is clearly not yet a linear programming problem. Similar to the numerical example shown above, the approach is to replace each absolute value with a new variable and impose inequality constraints that ensure the new variable equals the appropriate absolute value once an optimal value is obtained. To simplify the program, an average of the historical returns can be taken in order to estimate the mean expected return: <math>r_j = \mathbb{E}\!R_j = \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!R_j(t)</math>. The objective function then becomes: <math>\mu\sum_{j}\!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert </math><br />
<br />
Now, replacing <math>\left\vert \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \right\vert </math> with a new variable <math>y_t</math>, the problem can be rewritten as:<br />
<br />
<br />
maximize <math>\mu \sum_j \!x_j\!r_j - \left ( \frac{1}{T} \right ) \sum_{t=1}^T \!y_t</math> <br />
<br />
subject to <math>-\!y_t \leq \sum_{j} \!x_j \bigl(R_j (t) - \!r_j\bigr) \leq y_t ,</math> <math>t = 1, 2,...,T</math> <br />
<br />
<math>\sum_j \!x_j = 1</math> <br />
<br />
<math>x_j\geq 0 ,</math> <math>j = 1, 2,...,n</math> <br />
<br />
<math>y_t \geq 0 ,</math> <math>t = 1, 2,...,T</math> <br />
<br />
<br />
So finally, after these simplifications, the original problem is converted into a linear program, which is easier to solve.<br />
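The reduction above can be illustrated end-to-end with a small solver sketch; the return history (3 assets over 4 periods) and the weight mu = 1 are made-up illustrative values, not data from the reference:

```python
import numpy as np
from scipy.optimize import linprog

# MAD portfolio model solved as an LP.  The return history R (n = 3 assets
# over T = 4 periods) and mu = 1 are made-up illustrative values.
R = np.array([
    [1.10, 0.95, 1.05, 1.02],
    [1.02, 1.01, 1.00, 1.03],
    [1.00, 1.30, 0.80, 1.10],
])
n, T = R.shape
mu = 1.0
r = R.mean(axis=1)                  # mean historical return per asset
D = R - r[:, None]                  # deviations R_j(t) - r_j

# Variables: (x_1..x_n, y_1..y_T); linprog minimizes, so negate the reward.
c = np.concatenate([-mu * r, np.full(T, 1.0 / T)])
A_ub = np.vstack([
    np.hstack([D.T, -np.eye(T)]),   #  sum_j x_j d_tj <= y_t
    np.hstack([-D.T, -np.eye(T)]),  # -sum_j x_j d_tj <= y_t
])
b_ub = np.zeros(2 * T)
A_eq = np.concatenate([np.ones(n), np.zeros(T)])[None, :]   # sum_j x_j = 1

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])
weights = res.x[:n]                 # optimal portfolio fractions
print(weights, -res.fun)
```

At the optimum each y_t equals the absolute deviation it bounds, because y_t carries a positive cost and is only constrained from below.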
<br />
<br />
===Data Transfer Rate===<br />
Another application of optimization with absolute values is improving data transfer rates. Faster-than-Nyquist signaling (FTNS) is a framework to transmit signals beyond the Nyquist rate. The reference for this section proposes a 24.7% faster symbol rate obtained by utilizing Sum-of-Absolute-Values optimization. <ref>Sasahara, Hampei & Hayashi, Kazunori & Nagahara, Masaaki. (2016). Symbol Detection for Faster-Than-Nyquist Signaling by Sum-of-Absolute-Values Optimization. IEEE Signal Processing Letters. PP. 1-1. 10.1109/LSP.2016.2625839. https://www.researchgate.net/publication/309745511_Symbol_Detection_for_Faster-Than-Nyquist_Signaling_by_Sum-of-Absolute-Values_Optimization </ref><br />
<br />
The initial model is defined as follows:<br />
<math>\displaystyle x_0 (t) = \sum^N_{n=1} x_{n,0} h_n (t), t \in [0,T] </math><br />
<br />
where <math>t \in \mathbb{R}</math> denotes the continuous time index, <math>N \in \mathbb{N}</math> is the number of transmitted symbols in each transmission period, <math>T > 0</math> is the interval of one period, <math>x_{n,0} \in \{+1, -1\}</math> are independent and identically distributed (i.i.d.) binary symbols [i.e., binary phase-shift keying (BPSK)], and <math>h_n (t)\ (n = 1,...,N) </math> are the modulation pulses.<br />
<br />
Reformulated as the convex optimization problem below and solved by repeated iterations of Newton's method on the absolute-value terms, approximate solutions can be obtained:<br />
<math>\displaystyle \min_{z \in R^N} (\lambda \Vert y - Hz \Vert^2_2 + \frac{1}{2} \Vert z - 1_N \Vert_1 + \frac{1}{2} \Vert z + 1_N \Vert_1 ) </math><br />
<br />
== Conclusion ==<br />
The presence of an absolute value within the objective function prevents the use of certain optimization methods. Solving these problems requires that the function be manipulated in order to continue with linear programming techniques like the simplex method. The applications of optimization with absolute values range from the financial sector, where portfolio returns can be improved, to the digital world, where data transfer rates can be increased. The way these problems are formulated must take absolute values into account in order to model the problem correctly. The absolute values inherently make these problems nonlinear, so determining the optimal solutions is only achievable after reformulating them into linear programs.<br />
<br />
== References ==<br />
<references /></div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Interior-point_method_for_LP&diff=2724Interior-point method for LP2020-12-21T11:31:15Z<p>Wc593: </p>
<hr />
<div>Authors: Tomas Lopez Lauterio, Rohit Thakur and Sunil Shenoy (SysEn 5800 Fall 2020) <br><br />
<br />
== Introduction ==<br />
Linear programming problems seek to optimize linear functions given linear constraints. There are several applications of linear programming, including inventory control, production scheduling, transportation optimization and efficient manufacturing processes. The simplex method has been a very popular method for solving these linear programming problems and has served these industries well for a long time. But over the past 40 years, there have been a significant number of advances in algorithms that can solve these types of problems in more efficient ways, especially where the problems become very large scale in terms of variables and constraints.<ref> "Practical Optimization - Algorithms and Engineering Applications" by Andreas Antoniou and Wu-Sheng Lu, ISBN-10: 0-387-71106-6 </ref> <ref> "Linear Programming - Foundations and Extensions - 3<sup>rd</sup> edition" by Robert J Vanderbei, ISBN-13: 978-0-387-74387-5. </ref> In the early 1980s, Karmarkar (1984) <ref> N Karmarkar, "A new Polynomial - Time algorithm for linear programming", Combinatorica, Vol. 4, No. 8, 1984, pp. 373-395.</ref> published a paper introducing interior point methods to solve linear programming problems. A simple way to look at the differences between the simplex method and interior point methods is that a simplex method moves along the edges of a polytope towards a vertex having a lower value of the cost function, whereas an interior point method begins its iterations inside the polytope and moves towards the lowest-cost vertex without regard for edges. This approach reduces the number of iterations needed to reach that vertex, thereby reducing the computational time needed to solve the problem.<br><br><br />
<br />
=== Lagrange Function ===<br />
Before getting too deep into the description of the interior point method, there are a few concepts that are helpful to understand. The first key concept relates to the Lagrange function. The Lagrange function incorporates the constraints into a modified objective function in such a way that a constrained minimizer <math> (x^{*}) </math> is connected to an unconstrained minimizer <math> \left \{x^{*},\lambda ^{*} \right \} </math> for the augmented objective function <math> L\left ( x , \lambda \right ) </math>, where the augmentation is achieved with <math> p </math> Lagrange multipliers. <ref> "Computational Experience with Primal-Dual Interior Point Methods for Linear Programming" by Irvin Lustig, Roy Marsten, David Shanno </ref><ref> "Practical Optimization - Algorithms and Engineering Applications" by Andreas Antoniou and Wu-Sheng Lu, ISBN-10: 0-387-71106-6 </ref> <br><br />
To illustrate this point, consider a simple optimization problem:<br><br />
minimize <math> f\left ( x \right ) </math><br><br />
subject to: <math> A \cdot x = b </math><br><br />
where <math> A \, \in \, R^{p\, \times \, n} </math> is assumed to have full row rank. The Lagrange function can be laid out as:<br><br />
<math>L(x, \lambda ) = f(x) + \sum_{i=1}^{p}\lambda _{i}\cdot a_{i}(x)</math> <br><br />
where the <math> \lambda_i </math> are the Lagrange multipliers and <math> a_{i}(x) </math> denotes the <math> i </math>-th equality constraint residual <math> (Ax-b)_{i} </math>. <br><br><br />
=== Newton's Method ===<br />
Another key concept to understand is the solution of linear and nonlinear equations using Newton's method. <br />
Assume an unconstrained minimization problem of the form: <br><br />
minimize <math> g\left ( x \right ) </math> , where <math> g\left ( x \right ) </math> is a real-valued function of <math> n </math> variables. <br><br />
A local minimum for this problem will satisfy the following system of equations:<br><br />
<math>\left [ \frac{\partial g(x)}{\partial x_{1}} ..... \frac{\partial g(x)}{\partial x_{n}}\right ]^{T} = \left [ 0 ... 0 \right ]</math> <br><br />
<br />
The Newton's iteration looks like: <br><br />
<math>x^{k+1} = x^{k} - \left [ \nabla ^{2} g(x^{k}) \right ]^{-1}\cdot \nabla g(x^{k})</math> <br><br />
<br><br />
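The iteration above can be sketched in a few lines of NumPy; the test function g is an arbitrary smooth convex example, not one from the text:

```python
import numpy as np

# Newton's method for unconstrained minimization:
#   x_{k+1} = x_k - [grad^2 g(x_k)]^{-1} grad g(x_k)
# Test function g(x) = exp(x0) - x0 + x1**2 (minimizer at the origin)
# is an arbitrary smooth convex example.

def grad(x):
    return np.array([np.exp(x[0]) - 1.0, 2.0 * x[1]])

def hess(x):
    return np.diag([np.exp(x[0]), 2.0])

x = np.array([1.0, 3.0])            # arbitrary starting point
for _ in range(20):
    step = np.linalg.solve(hess(x), grad(x))
    x = x - step
    if np.linalg.norm(step) < 1e-12:
        break
print(x)   # close to (0, 0)
```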
<br />
== Theory and algorithm ==<br />
[[File:Visualization.png|685x685px|Visualization of Central Path method in Interior point|thumb]]<br />
<br />
Given a linear programming problem with constraint equations that have inequality terms, the inequality term is typically replaced with an equality term using slack variables. The new reformulation can be discontinuous in nature and to replace the discontinuous function with a smoother function, a logarithmic form of this reformulation is utilized. This nonlinear objective function is called "''Logarithmic Barrier Function''"<br />
The process involves starting with formation of a primal-dual pair of linear programs and then using "''Lagrangian function''" form on the "''Barrier function''" to convert the constrained problems into unconstrained problems. These unconstrained problems are then solved using Newton's method as shown above.<br><br />
<br />
=== Problem Formulation ===<br />
<br />
<br />
Consider a combination of primal-dual problem below:<br><br />
('''Primal Problem formulation''') <br><br />
→ minimize <math> c^{T}x </math> <br><br />
Subject to: <math> Ax = b </math> and <math> x \geq 0 </math> <br><br />
('''Dual Problem formulation''') <br><br />
→ maximize <math> b^{T}y </math> <br><br />
Subject to: <math> A^{T}y + \lambda = c </math> and <math> \lambda \geq 0 </math> <br><br />
<math> '\lambda ' </math> vector introduced represents the slack variables.<br><br />
<br />
The Lagrangian functional form is used to configure two equations using "''Logarithmic Barrier Function''" for both primal and dual forms mentioned above:<br><br />
Lagrangian equation for Primal using Logarithm Barrier Function : <math> L_{p}(x,y) = c^{T}\cdot x - \mu \cdot \sum_{j=1}^{n}log(x_{j}) - y^{T}\cdot (Ax - b) </math> <br><br />
Lagrangian equation for Dual using Logarithm Barrier Function : <math> L_{d}(x,y,\lambda ) = b^{T}\cdot y + \mu \cdot \sum_{j=1}^{n}log(\lambda _{j}) - x^{T}\cdot (A^{T}y +\lambda - c) </math> <br><br />
<br />
Taking the partial derivatives of L<sub>p</sub> and L<sub>d</sub> with respect to variables <math> 'x'\; '\lambda'\; 'y' </math>, and forcing these terms to zero, we get the following equations: <br><br />
<math> Ax = b </math> and <math> x \geq 0 </math> <br><br />
<math> A^{T}y + \lambda = c </math> and <math> \lambda \geq 0 </math> <br><br />
<math> x_{j}\cdot \lambda _{j} = \mu </math> for ''j''= 1,2,.......''n'' <br><br />
<br />
where, <math> '\mu ' </math> is strictly positive scaler parameter. For each <math> \mu > 0 </math> , the vectors in the set <math> \left \{ x\left ( \mu \right ), y\left ( \mu \right ) , \lambda \left ( \mu \right )\right \} </math> satisfying above equations, can we viewed as set of points in <math> R^{n} </math> , <math> R^{p} </math>, <math> R^{n} </math> respectively, such that when <math> '\mu ' </math> varies, the corresponding points form a set of trajectories called ''"Central Path"''. The central path lies in the ''"Interior"'' of the feasible regions. There is a sample illustration of ''"Central Path"'' method in figure to right. Starting with a positive value of <math> '\mu ' </math> and as <math> '\mu ' </math> approaches 0, the optimal point is reached. <br><br />
<br />
Let Diagonal[...] denote a diagonal matrix with the listed elements on its diagonal.<br />
Define the following:<br><br />
'''X''' = Diagonal [<math> x_{1}^{0}, .... x_{n}^{0} </math>]<br><br />
<math> \lambda </math> = Diagonal (<math> \lambda _{1}^{0}, .... \lambda _{n}^{0} </math> )<br><br />
'''e<sup>T</sup>''' = (1 .....1) as vector of all 1's.<br><br />
Using these newly defined terms, the equation above can be written as: <br><br />
<math> X\cdot \lambda \cdot e = \mu \cdot e </math> <br><br />
<br />
=== Iterations using Newton's Method ===<br />
Employing the Newton's iterative method to solve the following equations: <br><br />
<math> Ax - b = 0 </math> <br><br />
<math> A^{T}y + \lambda = c </math> <br><br />
<math> X\cdot \lambda \cdot e - \mu \cdot e = 0</math> <br><br />
Define a starting point that lies within the feasible region as <math> \left ( x^{0},y^{0},\lambda ^{0} \right ) </math> such that <math> x^{0}> 0 </math> and <math> \lambda ^{0}> 0 </math>.<br />
Also define two residual vectors for the primal and dual equations: <br><br />
<math> \delta _{p} = b - A\cdot x^{0} </math> <br><br />
<math> \delta _{d} = c - A^{T}\cdot y^{0} - \lambda ^{0} </math> <br><br />
<br />
Applying Newton's Method to solve above equations: <br><br />
<math> \begin{bmatrix}<br />
A & 0 & 0\\ <br />
0 & A^{T} & 1\\ <br />
\lambda & 0 & X<br />
\end{bmatrix} \cdot \begin{bmatrix}<br />
\delta _{x}\\ <br />
\delta _{y}\\ <br />
\delta _{\lambda }<br />
\end{bmatrix} = \begin{bmatrix}<br />
\delta _{p}\\ <br />
\delta _{d}\\ <br />
\mu \cdot e - X\cdot \lambda \cdot e<br />
\end{bmatrix}<br />
</math><br><br />
So a single iteration of Newton's method involves the following equations. For each iteration, we solve for the next value of <math> x^{k+1},y^{k+1},\lambda ^{k+1} </math>: <br><br />
<math> (A\lambda ^{-1}XA^{T})\delta _{y} = b- \mu A\lambda^{-1}e + A\lambda ^{-1}X\delta _{d} </math> <br><br />
<math> \delta _{\lambda} = \delta _{d} - A^{T}\delta _{y} </math> <br><br />
<math> \delta _{x} = \lambda ^{-1}\left [ \mu \cdot e - X\lambda e - X\delta _{\lambda}\right ] </math> <br><br />
<math> \alpha _{p} = min\left \{ \frac{-x_{j}}{\delta _{x_{j}}} \right \} </math> for <math> \delta x_{j} < 0 </math> <br><br />
<math> \alpha _{d} = min\left \{ \frac{-\lambda_{j}}{\delta _{\lambda_{j}}} \right \} </math> for <math> \delta \lambda_{j} < 0 </math> <br><br><br />
<br />
The values of the variables for the next iteration <math>(k+1)</math> are determined by: <br><br />
<math> x^{k+1} = x^{k} + \alpha _{p}\cdot \delta _{x} </math> <br><br />
<math> y^{k+1} = y^{k} + \alpha _{d}\cdot \delta _{y} </math> <br><br />
<math> \lambda^{k+1} = \lambda^{k} + \alpha _{d}\cdot \delta _{\lambda} </math> <br><br />
<br />
The quantities <math> \alpha _{p} </math> and <math> \alpha _{d} </math> are positive with <math> 0\leq \alpha _{p},\alpha _{d}\leq 1 </math>. <br><br />
After each iteration of Newton's method, we assess the duality gap that is given by the expression below and compare it against a small value <big>ε</big> <br><br />
<math> \frac{c^{T}x^{k}-b^{T}y^{k}}{1+\left | b^{T}y^{k} \right |} \leq \varepsilon </math> <br><br />
The value of <big>ε</big> can be chosen to be something small, e.g., 10<sup>-6</sup>, which essentially is the permissible duality gap for the problem. <br><br />
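Putting the pieces together, the full primal-dual iteration can be sketched for the small problem solved in the next section (maximize 3X1 + 3X2 subject to X1 + X2 ≤ 4), rewritten in standard form with a slack variable. The starting point, the rule for shrinking the barrier parameter, and the 0.9 step damping are arbitrary implementation choices:

```python
import numpy as np

# Standard form: min c^T x  s.t.  A x = b, x >= 0.  This encodes the example
# of the next section (max 3*X1 + 3*X2 s.t. X1 + X2 <= 4) via a negated
# objective and a slack variable x3 = 4 - X1 - X2.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([4.0])
c = np.array([-3.0, -3.0, 0.0])

# Strictly interior start (x > 0, lambda > 0); y = -4 is an arbitrary choice.
x = np.array([1.0, 1.0, 2.0])
y = np.array([-4.0])
lam = c - A.T @ y                        # = (1, 1, 4)

def max_step(v, dv):
    """Largest step in (0, 1] keeping v + a*dv strictly positive."""
    neg = dv < 0
    return 1.0 if not neg.any() else min(1.0, 0.9 * np.min(-v[neg] / dv[neg]))

for _ in range(60):
    mu = 0.1 * (x @ lam) / x.size        # shrink the barrier parameter
    dd = c - A.T @ y - lam               # dual residual
    w = x / lam                          # diagonal of lambda^{-1} X
    M = A @ (w[:, None] * A.T)           # A lambda^{-1} X A^T
    # normal equations (the primal residual cancels into b here)
    rhs = b - mu * (A @ (1.0 / lam)) + A @ (w * dd)
    dy = np.linalg.solve(M, rhs)
    dlam = dd - A.T @ dy
    dx = (mu - x * lam - x * dlam) / lam
    x = x + max_step(x, dx) * dx
    ad = max_step(lam, dlam)
    y = y + ad * dy
    lam = lam + ad * dlam
    gap = (c @ x - b @ y) / (1.0 + abs(b @ y))
    if gap < 1e-9:
        break
print(x[:2], c @ x)   # x1 = x2 near 2 (analytic center of the optimal face)
```

The iterates approach X1 = X2 = 2 with objective value 12 in the original maximization, matching the numerical example below.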
<br />
== Numerical Example ==<br />
<br />
Maximize<br><br />
<math> 3X_{1} + 3X_{2} </math><br><br />
<br />
such that<br><br />
<math> X_{1} + X_{2} \leq 4, </math><br> <br />
<math> X_{1} \geq 0, </math><br><br />
<math> X_{2} \geq 0, </math><br><br />
<br />
Barrier form of the above primal problem is as written below:<br />
<br />
<br />
<math> P(X,\mu) = 3X_{1} + 3X_{2} + \mu.log(4-X_{1} - X_{2}) + \mu.log(X_{1}) + \mu.log(X_{2})</math><br> <br />
<br />
<br />
The barrier function is concave and, since the problem is a maximization problem, there will be one and only one solution. In order to find the maximum point on the concave function, we take a derivative and set it to zero. <br />
<br />
Taking partial derivative and setting to zero, we get the below equations<br />
<br />
<br />
<math> \frac{\partial P(X,\mu)}{\partial X_{1}} = 3 - \frac{\mu}{(4-X_{1}-X_{2})} + \frac{\mu}{X_{1}} = 0</math> <br><br />
<br />
<math> \frac{\partial P(X,\mu)}{\partial X_{2}} = 3 - \frac{\mu}{(4-X_{1}-X_{2})} + \frac{\mu}{X_{2}} = 0</math> <br><br />
<br />
Using the above equations, the following can be derived: <math> X_{1} = X_{2}</math> <br><br />
<br />
Hence the following can be concluded<br />
<br />
<math> 3 - \frac{\mu}{(4-2X_{1})} + \frac{\mu}{X_{1}} = 0 </math><br><br />
<br />
<br />
The above equation can be converted into a quadratic equation as below:<br />
<br />
<math> 6X_{1}^2 - 3X_{1}(4-\mu)-4\mu = 0</math><br><br />
<br />
<br />
The solution to the above quadratic equation can be written as below:<br />
<br />
<math> X_{1} = \frac{3(4-\mu)\pm\sqrt{9(4-\mu)^2 + 96\mu} }{12} = X_{2}</math><br><br />
<br />
<br />
Taking only the positive value of <math> X_{1} </math> and <math> X_{2} </math> from the above equation, since <math> X_{1} \geq 0 </math> and <math> X_{2} \geq 0</math>, we can solve <math>X_{1}</math> and <math>X_{2}</math> for different values of <math>\mu</math>. The outcome of such iterations is listed in the table below. <br />
<br />
{| class="wikitable"<br />
|+ Objective & Barrier Function w.r.t <math>X_{1}</math>, <math>X_{2}</math> and <math>\mu</math><br />
|-<br />
! <math>\mu</math> !! <math>X_{1}</math> !! <math>X_{2}</math> !! <math>P(X, \mu)</math> !! <math>f(x)</math><br />
|-<br />
| 0 || 2 || 2 || 12 || 12<br />
|-<br />
| 0.01 || 1.998 || 1.998 || 11.947 || 11.990 <br />
|-<br />
| 0.1 || 1.984 || 1.984 || 11.697 || 11.902 <br />
|-<br />
| 1 || 1.859 || 1.859 || 11.128 || 11.152 <br />
|-<br />
| 10 || 1.486 || 1.486 || 17.114 || 8.916 <br />
|-<br />
| 100 || 1.351 || 1.351 || 94.357 || 8.105 <br />
|-<br />
| 1000 || 1.335 || 1.335 || 871.052 || 8.011 <br />
|}<br />
<br />
From the above table it can be seen that: <br />
# as <math>\mu</math> gets close to zero, the Barrier Function becomes tight and close to the original function. <br />
# at <math>\mu=0</math> the optimal solution is achieved.<br />
<br />
<br />
Summary:<br />
Maximum Value of Objective function <math>=12</math> <br><br />
Optimal points <math>X_{1} = 2 </math> and <math>X_{2} = 2</math><br />
<br />
Newton's method can also be applied to solve linear programming problems, as indicated in the "Theory and Algorithm" section above. For linear programming problems like the one in this "Numerical Example" section, the equations to be solved are similar to the quadratic equation obtained above, and the method converges in very few iterations.<br />
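The table above can be reproduced directly from the closed-form positive root of the quadratic; a short sketch:

```python
import math

# Positive root of 6*X1^2 - 3*(4 - mu)*X1 - 4*mu = 0, with X2 = X1,
# reproducing the table above.
def x1_of(mu):
    return (3 * (4 - mu) + math.sqrt(9 * (4 - mu) ** 2 + 96 * mu)) / 12

def barrier(x1, mu):
    # P(X, mu) evaluated on the symmetric line X1 = X2 = x1
    return 6 * x1 + mu * (math.log(4 - 2 * x1) + 2 * math.log(x1))

for mu in [0.01, 0.1, 1]:
    x1 = x1_of(mu)
    print(mu, round(x1, 3), round(barrier(x1, mu), 3), round(6 * x1, 3))
print(x1_of(0))   # mu -> 0 recovers the optimum X1 = X2 = 2, f = 12
```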
<br />
== Applications ==<br />
Primal-dual interior-point (PDIP) methods are commonly used in optimal power flow (OPF); in this case the goal is to maximize user utility and minimize operational cost while satisfying operational and physical constraints. The solution to the OPF needs to be available to grid operators in a few minutes or seconds due to changes and fluctuations in loads during power generation. Newton-based primal-dual interior point methods can achieve fast convergence in this OPF optimization problem. <ref> A. Minot, Y. M. Lu and N. Li, "A parallel primal-dual interior-point method for DC optimal power flow," 2016 Power Systems Computation Conference (PSCC), Genoa, 2016, pp. 1-7, doi: 10.1109/PSCC.2016.7540826. </ref><br />
<br />
Another application of the PDIP is for the minimization of losses and cost in the generation and transmission in hydroelectric power systems. <ref> L. M. Ramos Carvalho and A. R. Leite Oliveira, "Primal-Dual Interior Point Method Applied to the Short Term Hydroelectric Scheduling Including a Perturbing Parameter," in IEEE Latin America Transactions, vol. 7, no. 5, pp. 533-538, Sept. 2009, doi: 10.1109/TLA.2009.5361190. </ref> <br />
<br />
PDIP methods are commonly used in image processing. One of these applications is image deblurring; in this case the constrained deblurring problem is formulated as a primal-dual problem. The constrained primal-dual problem is solved using a semi-smooth Newton’s method. <ref> D. Krishnan, P. Lin and A. M. Yip, "A Primal-Dual Active-Set Method for Non-Negativity Constrained Total Variation Deblurring Problems," in IEEE Transactions on Image Processing, vol. 16, no. 11, pp. 2766-2777, Nov. 2007, doi: 10.1109/TIP.2007.908079. </ref><br />
<br />
PDIP methods can be utilized to obtain a general formula for the shape derivative of the potential energy describing the energy release rate for curvilinear cracks. Problems on cracks and their evolution have important applications in engineering and the mechanical sciences. <ref> V. A. Kovtunenko, Primal–dual methods of shape sensitivity analysis for curvilinear cracks with nonpenetration, IMA Journal of Applied Mathematics, Volume 71, Issue 5, October 2006, Pages 635–657 </ref><br />
<br />
== Conclusion ==<br />
<br />
The primal-dual interior point method is a good alternative to simplex methods for solving linear programming problems. The primal-dual method shows superior performance and convergence on many large, complex problems. Simplex codes are faster on small to medium problems, while primal-dual interior-point methods are much faster on large problems.<br />
<br />
<br />
<br />
== References ==<br />
<references /></div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Computational_complexity&diff=2723Computational complexity2020-12-21T11:30:25Z<p>Wc593: </p>
<hr />
<div>Authors: Steve Bentioulis, Augie Bravo, Will Murphy (SysEn 6800 Fall 2020)<br />
<br />
== Introduction ==<br />
<blockquote>''“The subject of my talk is perhaps most directly indicated by simply asking two questions: first, is it harder to multiply than to add? and second, why?...there is no algorithm for multiplication computationally as simple as that for addition, and this proves something of a stumbling block” - Alan Cobham, 1965'' <ref>[https://www.cs.toronto.edu/~sacook/homepage/cobham_intrinsic.pdf A. Cobham, The intrinsic computational difficulty of functions], in Y. Bar-Hillel, ed., Logic, Methodology and Philosophy of Science: Proceedings of the 1964 International Congress, North-Holland Publishing Company, Amsterdam, 1965, p. 24-30 </ref></blockquote><br />
Computational complexity refers to the amount of resources needed to solve a problem. Complexity increases as the amount of resources required increases. While this notion may seem straightforward enough, computational complexity has profound impacts. <br />
<br />
The quote above from Alan Cobham is some of the earliest thinking on defining computational complexity and set the stage for defining problems based on complexity classes to indicate the feasibility of computational problems.<br />
<br />
Additionally, the theory of computational complexity is in its infancy and has only been studied in earnest starting in the 20<sup>th</sup> century. <br />
<br />
== Theory, Methodology ==<br />
The concept of computation has evolved since the advent of the standard universal electronic computer and the associated widespread societal adoption. And while the electronic computer is often synonymous with computation, it is important to remember that computation is a major scientific concept irrespective of whether it is conducted by machine, man, or otherwise.<br />
<br />
When studying computation, a key area of interest is understanding what problems are, in fact, computable. Researchers have shown that some tasks are inherently incomputable in that no computer can solve them without going into infinite loops on certain inputs. This phenomenon raises the question of how to determine whether a problem is computable and, for those problems that are computable, how to calculate the complexity of computing the answer.<br />
<br />
The focus of computational complexity is the measure of computational efficiency quantifying the amount of computational resources required to solve a problem.<ref>Arora, S., & Barak, B. (2009). Computational complexity: a modern approach. Cambridge: Cambridge University Press. Retrieved from https://cornell-library.skillport.com/skillportfe/main.action?path=summary/BOOKS/31235</ref><br />
<br />
Within the study of computational complexity there exists the notion of a complexity class defined as a set of functions that can be computed within given resource bounds.<ref>Du, D., & Ko, K.-I. (2014). Theory of computational complexity. (Second edition.). Hoboken, New Jersey: Wiley. Retrieved from http://onlinelibrary.wiley.com/book/10.1002/9781118595091</ref><br />
<br />
=== Class P ===<br />
Class P computational complexity problems refer to those that can be solved in polynomial running time, where “P” stands for polynomial and running time is a function of the number of bits in the input.<ref>Arora, S., & Barak, B. (2009). Computational complexity: a modern approach. Cambridge: Cambridge University Press. Retrieved from https://cornell-library.skillport.com/skillportfe/main.action?path=summary/BOOKS/31235</ref> <br />
<br />
A complexity class refers to a specific decision problem rather than a generic type of problem. For example, it is not acceptable to state that integer multiplication is in class P. Rather, one must state the specific decision problem, e.g. computing the product of 3 and 5 is a class P problem.<br />
<br />
Furthermore, the running time is not measured in minutes or nanoseconds, but by the number of operations performed to resolve or verify an answer to a problem. Running time is a function of the number of bits being input into the decision problem. This allows us to ignore the efficiency of the machine running the computation and judge the complexity of the decision problem solely on the merits of the problem itself.<br />
<br />
=== Class NP ===<br />
NP stands for nondeterministic polynomial time, originally referring to nondeterministic Turing machines (NDTM) in which the Turing machine has two transition functions and the computer arbitrarily determines which transition function to apply.<br />
<br />
Complexity class NP consists of problems that can be efficiently verified within a running time upper bounded by polynomial function. Verifiability is the concept that if given a potential solution to the problem it can be confirmed or rejected.<br />
<br />
==== Class NP-hard and NP-complete ====<br />
The NP-hard complexity class describes those problems that are at least as hard as every problem in NP; an NP-hard problem need not itself belong to NP. If P ≠ NP, then NP-hard problems cannot be decided in polynomial time. See P vs NP on this page. <br />
<br />
NP-complete refers to those problems within the NP complexity-class that are the hardest problems to solve within the NP class. Examples of NP-complete problems include Independent Set, [https://optimization.cbe.cornell.edu/index.php?title=Traveling_salesman_problem Traveling Salesperson], Subset Sum, and Integer Programming problems. The implication of these problems is that they are not in P unless P = NP.<br />
<br />
=== P vs NP ===<br />
The difference between class P and class NP computational complexity is illustrated simply by considering a Sudoku puzzle. Ask yourself, is it easier to solve a Sudoku puzzle or verify whether an answer to a Sudoku puzzle is correct? Class P refers to computational complexity problems that can be efficiently solved. Class NP refers to those problems which have answers that can be efficiently verified. The answer to a Sudoku problem can be efficiently verified and for that reason is considered a class NP complexity problem.<br />
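The solve/verify asymmetry can be made concrete in code. The sketch below is an illustrative example, not taken from the source (the 4×4 grid and the function name are assumptions): verifying a completed grid takes a number of operations polynomial in the grid size, whereas no polynomial-time algorithm is known for solving general n×n Sudoku.<br />

```python
def is_valid_sudoku(grid):
    """Verify a completed n x n Sudoku grid (n a perfect square) in O(n^2) steps."""
    n = len(grid)
    box = int(n ** 0.5)
    target = set(range(1, n + 1))
    rows_ok = all(set(row) == target for row in grid)
    cols_ok = all({grid[r][c] for r in range(n)} == target for c in range(n))
    boxes_ok = all(
        {grid[r + dr][c + dc] for dr in range(box) for dc in range(box)} == target
        for r in range(0, n, box)
        for c in range(0, n, box)
    )
    return rows_ok and cols_ok and boxes_ok


# A completed 4 x 4 puzzle: every row, column, and 2 x 2 box contains 1..4.
solved = [
    [1, 2, 3, 4],
    [3, 4, 1, 2],
    [2, 1, 4, 3],
    [4, 3, 2, 1],
]
print(is_valid_sudoku(solved))  # True
```

Each of the row, column, and box checks touches every cell once, so the verifier runs in time polynomial in the input size; the same cannot be said of any known solver for the general puzzle.<br />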
<br />
This then raises the question: for every class NP problem, i.e. those that can be efficiently verified, does that mean they can also be efficiently solved? If so, then P = NP. However, we have not yet been able to prove that P = NP, and thus the implications of P ≠ NP must also be considered.<br />
<br />
The importance of understanding P vs NP is the subject of much discussion and has even sparked competition in the scientific community. The problem of P vs NP was selected by the Clay Mathematics Institute (CMI) of Cambridge, Massachusetts as one of the seven most difficult and important problems to be solved at the dawn of the 21<sup>st</sup> century. A prize of $1 million has been allocated for anyone that can bring forward a solution.<ref>Clay Mathematics Institute, The Millennium Prize Problems. Retrieved from https://www.claymath.org/millennium-problems/millennium-prize-problems</ref><br />
<br />
=== Methodology ===<br />
The methodology for determining computational complexity is built upon the notion of a Turing machine and quantifying the number of computational operations the machine performs to resolve or verify a problem. A straightforward approach is to quantify the number of operations required considering every possible input to the Turing machine’s algorithm. This approach is referred to as worst-case complexity, as it counts the largest possible number of operations that may be performed in order to solve the problem.<br />
<br />
However, critics of worst-case complexity highlight that in practice the worst-case behaviors of algorithms may never actually be encountered, and the worst-case approach can be unnecessarily cumbersome. As an alternative, average-case analysis seeks to design efficient algorithms that apply to most real-life instances. An important component of average-case analysis is the concept of a distribution over the inputs. There are several approaches to determining this distribution, including randomization. An average-case problem consists of both a decision problem and a distribution over inputs, implying that the complexity of a decision problem can vary with the inputs.<ref>Arora, S., & Barak, B. (2009). Computational complexity: a modern approach. Cambridge: Cambridge University Press. Retrieved from https://cornell-library.skillport.com/skillportfe/main.action?path=summary/BOOKS/31235</ref> <br />
<br />
== Numerical Example ==<br />
The efficiency of a computation problem is measured by the operations executed to solve, not the seconds (or years) required to solve it. The number of operations executed is a function of input size and arrangement. The big O notation is used to determine an algorithm’s complexity class according to the number of operations it performs as a function of input.<ref>Mohr, A. Quantum Computing in Complexity Theory and Theory of Computation (PDF). Retrieved from http://www.austinmohr.com/Work_files/complexity.pdf</ref><br />
<br />
The notation O(n) is used where ‘O’ refers to the order of a function and ‘n’ represents the size of the input.<ref>A. Mejia, How you can change the world by learning Data Structures and Algorithms. Retrieved from: https://towardsdatascience.com/how-you-can-change-the-world-by-learning-data-structures-and-algorithms-84566c1829e3</ref><br />
<br />
An example of an O(1) problem is determining whether a number is odd or even. The algorithm reads one bit of input and performs one operation to determine whether the number is odd or even. No matter how large or small the input, the number of operations holds constant at 1; for that reason, this is an O(1) problem.<br />
<br />
An example of an O(n) problem is identifying the minimum element of an unsorted array. To compute this the computer must read every element of the input and determine whether it is less than the current minimum. For this reason, the number of operations is linearly correlated with the quantity of inputs. For example, the decision problem of finding the minimum of {5,9,3,2,7,1,4} requires the computer to check every element in the array. This array has n=7 inputs, so it requires 7 operations to read each element and compare it with the current minimum. This scales linearly as the size of the input increases. <br />
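The two examples above can be sketched directly in code (the function names are illustrative assumptions, not from the source):<br />

```python
def is_even(x):
    """O(1): a single parity test, independent of the magnitude of x."""
    return x % 2 == 0


def find_minimum(arr):
    """O(n): one comparison per element of the unsorted array."""
    minimum = arr[0]
    for value in arr[1:]:
        if value < minimum:  # compare each element with the current minimum
            minimum = value
    return minimum


print(is_even(10))                           # True, one operation
print(find_minimum([5, 9, 3, 2, 7, 1, 4]))   # 1, after scanning all n = 7 elements
```

Doubling the length of the array doubles the number of comparisons in `find_minimum`, while `is_even` always performs a single operation.<br />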
== Applications ==<br />
Computational Complexity is influential to numerous scientific fields including [https://optimization.cbe.cornell.edu/index.php?title=Quantum_computing_for_optimization quantum computing], game theory, data mining, and cellular automata.<ref>Robert A. Meyers. (2012). Computational Complexity: Theory, Techniques, and Applications. Springer: Springer. Retrieved from https://search.ebscohost.com/login.aspx?custid=s9001366&groupid=main&profid=pfi&authtype=ip,guest&direct=true&db=edspub&AN=edp1880523&site=eds-live&scope=site</ref> Focusing in on quantum computing, there are important applications to the study of computational complexity as the theory of complexity is largely based upon the Turing machine and the Church-Turing thesis that any physically realizable computation device can be simulated by a Turing machine. If quantum computers are to be physically realizable they could alter our understanding of how complex a decision problem may be by providing enhanced methods in which algorithms may be computed and potentially lowering the number of operations to be performed.<ref> Arora, S., & Barak, B. (2009). Computational complexity: a modern approach. Cambridge: Cambridge University Press. Retrieved from https://cornell-library.skillport.com/skillportfe/main.action?path=summary/BOOKS/31235 </ref><br />
<br />
== Conclusion ==<br />
Computational complexity has important implications in the field of computer science and far reaching applications that span numerous fields and industries. As computable problems become more complex the ability to increase the efficiency in which they are solved becomes more important. Advancements toward solving P vs NP will have far reaching impacts on how we approach the computability of problems and the ability to efficiently allocate resources.<br />
<br />
== Sources ==<br />
<references /></div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Simplex_algorithm&diff=2722Simplex algorithm2020-12-21T11:28:03Z<p>Wc593: </p>
<hr />
<div>Author: Guoqing Hu (SysEn 6800 Fall 2020)<br />
<br />
== Introduction ==<br />
The simplex algorithm (or simplex method) is a widely used algorithm for solving Linear Programming (LP) optimization problems. The simplex algorithm can be thought of as one of the elementary steps for solving the inequality problem, since many such problems can be converted to LP and solved via the simplex algorithm.<ref name=":0">[http://www-personal.umich.edu/~murty/books/linear_complementarity_webbook/lcp-complete.pdf Linear complementarity, linear and nonlinear programming Internet Edition].</ref> The simplex algorithm was proposed by [[Wikipedia: George_Dantzig|George Dantzig]], based on the idea of stepping from vertex to vertex on the convex polyhedron.<ref>Dantzig, G. B. (1987, May). [https://apps.dtic.mil/dtic/tr/fulltext/u2/a182708.pdf Origins of the simplex method].</ref> "Simplex" possibly refers to the top vertex of the simplicial cone, which is the geometric illustration of the constraints within LP problems.<ref>Strang, G. (1987). Karmarkar’s algorithm and its place in applied mathematics. ''The Mathematical Intelligencer,'' ''9''(2), 4-10. doi:10.1007/bf03025891.</ref><br />
<br />
== Algorithmic Discussion ==<br />
There are two theorems in LP:<br />
<br />
# The feasible region for an LP problem is a convex set (every linear equation's second derivative is 0, implying the monotonicity of the trend). Therefore, if an LP has an optimal solution, there must be an extreme point of the feasible region that is optimal.<br />
# For an LP optimization problem, there is exactly one extreme point of the LP's feasible region corresponding to every basic feasible solution. Moreover, there is at least one basic feasible solution corresponding to every extreme point in the feasible region.<ref name=":1">Vanderbei, R. J. (2000). ''Linear programming: Foundations and extensions''. Boston: Kluwer.</ref><br />
[[File:Geometric Illustration of LP problem.png|thumb|Geometric Illustration of LP problem]]<br />
Based on the two theorems above, the geometric illustration of the LP problem can be depicted. Each boundary line of this polyhedron corresponds to an LP constraint, and every vertex is an extreme point according to the theorems. The simplex method adjusts the nonbasic variables to travel from vertex to vertex until the optimal solution is found.<ref>Sakarovitch M. (1983) Geometric Interpretation of the Simplex Method. In: Thomas J.B. (eds) Linear Programming. Springer Texts in Electrical Engineering. Springer, New York, NY. <nowiki>https://doi.org/10.1007/978-1-4757-4106-3_8</nowiki></ref><br />
<br />
Consider the following expression as the general linear programming problem standard form:<br />
<br />
<math>\max \sum_{i=1}^n c_ix_i</math><br />
<br />
With the following constraints:<br />
<br />
<math> \begin{align} s.t. \quad \sum_{j=1}^n a_{ij}x_j &\leq b_i \quad i = 1,2,...,m \\<br />
<br />
x_j &\geq 0 \quad j = 1,2,...,n \end{align} </math><br />
<br />
The first step of the simplex method is to add slack variables and symbols which represent the objective functions:<br />
<br />
<math> \begin{align} \phi &= \sum_{i=1}^n c_ix_i\\<br />
z_i &= b_i - \sum_{j=1}^n a_{ij}x_j \quad i = 1,2,...,m \end{align} </math><br />
<br />
The newly introduced slack variables may be confused with the original variables. Therefore, it is convenient to add those slack variables <math> z_i </math> to the end of the list of ''x''-variables with the following expression:<br />
<br />
<math> \begin{align} \phi &= \sum_{i=1}^n c_ix_i\\<br />
x_{n+i} &= b_i - \sum_{j=1}^n a_{ij}x_j \quad i=1,2,...,m \end{align} </math><br />
<br />
With the progression of simplex method, the starting dictionary (which is the equations above) switches between the dictionaries in seeking for optimal values. Every dictionary will have ''m'' basic variables which form the feasible area, as well as ''n'' non-basic variables which compose the objective function. Afterward, the dictionary function will be written in the form of:<br />
<br />
<math> \begin{align} <br />
\phi &= \bar{\phi} + \sum_{j=1}^n \bar{c_j}x_j\\<br />
x_{i} &= \bar{b_i} - \sum_{j=1}^n \bar{a_{ij}}x_j \quad i=1,2,...,n+m <br />
\end{align} </math><br />
<br />
Here the barred quantities indicate values that change as the simplex method progresses. At each step, exactly one variable goes from non-basic to basic, and another acts oppositely. The variable that becomes basic is referred to as the ''entering variable''. Under the goal of increasing <math>\phi</math>, the entering variables are selected from the set {1,2,...,n}. Once no entering variable can be selected that improves the objective, the optimal value has been found. The decision of which entering variable to select should be made considering that there usually are multiple candidates (n>1). For the simplex algorithm, the coefficient with the least value is preferred since the major objective is maximization. <br />
<br />
The ''leaving variables'' are defined as which go from basic to non-basic. The reason of their existence is to ensure the non-negativity of those basic variables. Once the entering variables are determined, the corresponding leaving variables will change accordingly from the equation below:<br />
<br />
<math> x_i = \bar{b_i} - \bar{a_{ik}}x_k \quad i \, \epsilon \, \{ 1,2,...,n+m \}</math><br />
<br />
Since the non-negativity of entering variables should be ensured, the following inequality can be derived:<br />
<br />
<math> \bar{b_i} - \bar{a_{ik}}x_k \geq 0 \quad i\,\epsilon\, \{1,2,...,n+m \}</math><br />
<br />
Here <math>x_k</math> is the entering variable. Since each basic variable must remain non-negative, the largest allowable value of <math>x_k</math> for row <math>i</math> occurs when <math>x_i</math> drops to zero. Therefore, the following equation can be derived:<br />
<br />
<math> x_k = \frac {\bar{b_i}}{\bar{a_{ik}}} </math><br />
<br />
To preserve the non-negativity of all variables, the value of <math>x_k</math> can only be increased to the smallest of all the values calculated from the above equation. Hence, the following equation can be derived:<br />
<br />
<math> x_k = \min_{\bar{a_{ik}}>0} \, \frac{\bar{b_i}}{\bar{a_{ik}}} \quad i=1,2,...,n+m</math><br />
<br />
Once the leaving-basic and entering-nonbasic variables are chosen, row operations are conducted to switch from the current dictionary to the new dictionary; this step is called a ''pivot.''<ref name=":1" /><br />
<br />
In the pivot process, the coefficient of the selected pivot element should be one, so every element in its row is multiplied by the reciprocal of that coefficient. Afterward, this row is multiplied by appropriate coefficients and added to the other rows so that all other entries in the pivot element's column become 0.<br />
<br />
If any negative coefficients remain in the objective row after the pivot process, one should continue finding the next pivot element by repeating the process above. Once there are no more negative values in the objective row, the optimal solution has been found.<ref>Evar D. Nering and Albert W. Tucker, 1993, ''Linear Programs and Related Problems'', Academic Press. (elementary)</ref><ref>Robert J. Vanderbei, ''Linear Programming: Foundations and Extensions'', 3rd ed., International Series in Operations Research & Management Science, Vol. 114, Springer Verlag, 2008. <nowiki>ISBN 978-0-387-74387-5</nowiki>.</ref><br />
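The procedure described above can be sketched as a small dense-tableau routine. This is a hedged illustration, not the article's own code: it assumes the standard form <math>\max c^Tx</math>, <math>Ax \leq b</math>, <math>x \geq 0</math> with <math>b \geq 0</math>, uses the function and variable names invented here, and omits anti-cycling safeguards.<br />

```python
def simplex_max(c, A, b):
    """Maximize c^T x subject to A x <= b, x >= 0 (all b_i >= 0).

    Dense-tableau simplex: the objective row stores -c, so a negative
    entry marks a column along which the objective can still increase.
    Returns (optimal value, x).
    """
    m, n = len(A), len(c)
    # Tableau rows: [A | I | b]; last row: [-c | 0 | 0].
    tableau = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
               for i in range(m)]
    tableau.append([-cj for cj in c] + [0.0] * m + [0.0])
    basis = list(range(n, n + m))  # the slack variables start in the basis

    while True:
        obj = tableau[-1]
        # Entering variable: most negative objective-row coefficient.
        col = min(range(n + m), key=lambda j: obj[j])
        if obj[col] >= -1e-9:      # no negative entry left -> optimal
            break
        # Leaving variable: minimum ratio b_i / a_ik over rows with a_ik > 0.
        ratios = [(tableau[i][-1] / tableau[i][col], i)
                  for i in range(m) if tableau[i][col] > 1e-9]
        if not ratios:
            raise ValueError("LP is unbounded")
        _, row = min(ratios)
        # Pivot: normalize the pivot row, then zero out the rest of the column.
        pivot = tableau[row][col]
        tableau[row] = [v / pivot for v in tableau[row]]
        for i in range(m + 1):
            if i != row:
                factor = tableau[i][col]
                tableau[i] = [v - factor * p
                              for v, p in zip(tableau[i], tableau[row])]
        basis[row] = col

    x = [0.0] * n
    for i, var in enumerate(basis):
        if var < n:
            x[var] = tableau[i][-1]
    return tableau[-1][-1], x


# The numerical example from this article:
z, x = simplex_max([4.0, 1.0, 4.0],
                   [[2.0, 1.0, 1.0], [1.0, 2.0, 3.0], [2.0, 2.0, 1.0]],
                   [2.0, 4.0, 8.0])
print(round(z, 4), [round(v, 4) for v in x])  # 6.4 [0.4, 0.0, 1.2]
```

Run on the example of the next section, the sketch performs the same two pivots as the hand calculation (entering column 1 with ratio 1, then entering column 3 with ratio 1.2).<br />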
<br />
== Numerical Example ==<br />
Consider the following numerical example to gain a better understanding:<br />
<br />
<math> \max{4x_1+x_2+4x_3}</math><br />
<br />
with the following constraints:<br />
<br />
<math> \begin{align} <br />
2x_1 + x_2 + x_3 &\leq 2 \\<br />
x_1 + 2x_2 +3x_3 &\leq 4\\<br />
2x_1 + 2x_2 + x_3 &\leq 8 \\<br />
x_1,x_2,x_3 &\geq 0<br />
\end{align}</math><br />
<br />
Adding slack variables gives the following equations:<br />
<br />
<math> \begin{align}<br />
z - 4x_1 - x_2 -4x_3 &= 0 \\<br />
2x_1 + x_2 + x_3 + s_1 &= 2 \\<br />
x_1 + 2x_2 + 3x_3 + s_2 &= 4\\<br />
2x_1 + 2x_2 + x_3 + s_3 &= 8 \\<br />
x_1,x_2,x_3,s_1,s_2,s_3 &\geq 0 \end{align} </math><br />
<br />
<br />
The simplex tableau can be derived as following:<br />
<br />
<math><br />
\begin{array}{c c c c c c c | r} <br />
x_1 & x_2 & x_3 & s_1 & s_2 & s_3 & z & b \\<br />
\hline<br />
2 & 1 & 1 & 1 & 0 & 0 & 0 & 2 \\<br />
1 & 2 & 3 & 0 & 1 & 0 & 0 & 4 \\<br />
2 & 2 & 1 & 0 & 0 & 1 & 0 & 8 \\<br />
\hline<br />
-4 & -1 & -4 & 0 & 0 & 0 & 1 & 0<br />
\end{array} </math><br />
<br />
In the last row, the column with the smallest value should be selected. Although there are two equally small values, the result will be the same no matter which one is selected first. For this solution, the first column is selected. After the least coefficient is found, the pivot process is conducted by computing the ratios <math> \frac{b_i}{x_1} </math>. Since the ratio is 1 for the first row and 4 for the second and third rows, the first row should be pivoted. The following tableau can be created:<br />
<br />
<math><br />
\begin{array}{c c c c c c c | r} <br />
x_1 & x_2 & x_3 & s_1 & s_2 & s_3 & z & b \\<br />
\hline<br />
1 & 0.5 & 0.5 & 0.5 & 0 & 0 & 0 & 1 \\<br />
1 & 2 & 3 & 0 & 1 & 0 & 0 & 4 \\<br />
2 & 2 & 1 & 0 & 0 & 1 & 0 & 8 \\<br />
\hline<br />
-4 & -1 & -4 & 0 & 0 & 0 & 1 & 0<br />
\end{array} </math><br />
<br />
By performing row operations so that every other row (other than the first row) has a zero in column 1:<br />
<br />
<math><br />
\begin{array}{c c c c c c c | r} <br />
x_1 & x_2 & x_3 & s_1 & s_2 & s_3 & z & b \\<br />
\hline<br />
1 & 0.5 & 0.5 & 0.5 & 0 & 0 & 0 & 1 \\<br />
0 & 1.5 & 2.5 & -0.5 & 1 & 0 & 0 & 3 \\<br />
0 & 1 & 0 & -1 & 0 & 1 & 0 & 6 \\<br />
\hline<br />
0 & 1 & -2 & 2 & 0 & 0 & 1 & 4<br />
\end{array} </math><br />
<br />
Because there is one negative value in the last row, the same process should be performed again. The smallest value in the last row is in the third column. In the third column, the second row has the smallest ratio <math> \frac{b_i}{x_3}</math>, which is 1.2. Thus, the second row will be selected for pivoting. The simplex tableau is the following:<br />
<br />
<math><br />
\begin{array}{c c c c c c c | r} <br />
x_1 & x_2 & x_3 & s_1 & s_2 & s_3 & z & b \\<br />
\hline<br />
1 & 0.5 & 0.5 & 0.5 & 0 & 0 & 0 & 1 \\<br />
0 & 0.6 & 1 & -0.2 & 0.4 & 0 & 0 & 1.2 \\<br />
0 & 1 & 0 & -1 & 0 & 1 & 0 & 6 \\<br />
\hline<br />
0 & 1 & -2 & 2 & 0 & 0 & 1 & 4<br />
\end{array} </math><br />
<br />
By performing row operations to make the other entries in this column 0's, the following can be derived:<br />
<br />
<math><br />
\begin{array}{c c c c c c c | r} <br />
x_1 & x_2 & x_3 & s_1 & s_2 & s_3 & z & b \\<br />
\hline<br />
1 & 0.2 & 0 & 0.6 & -0.2 & 0 & 0 & 0.4 \\<br />
0 & 0.6 & 1 & -0.2 & 0.4 & 0 & 0 & 1.2 \\<br />
0 & 1 & 0 & -1 & 0 & 1 & 0 & 6 \\<br />
\hline<br />
0 & 2.2 & 0 & 1.6 & 0.8 & 0 & 1 & 6.4<br />
\end{array} </math><br />
<br />
There is no need to conduct further calculation since all values in the last row are non-negative. From the tableau above, <math>x_1</math>, <math> x_3</math> and <math>z</math> are basic variables, since each of their columns contains a single 1 with all other entries 0. Therefore, the optimal solution is <math>x_1 = 0.4</math>, <math>x_2 = 0</math>, <math>x_3 = 1.2</math>, achieving the maximum value: <math>z = 6.4</math><br />
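As a quick sanity check (an added illustration, not part of the original solution), the optimal point can be substituted back into the objective and the constraints. Exact rational arithmetic is used to avoid floating-point round-off:<br />

```python
from fractions import Fraction

# Optimal point read off the final tableau: x1 = 0.4, x2 = 0, x3 = 1.2.
x1, x2, x3 = Fraction(2, 5), Fraction(0), Fraction(6, 5)

z = 4 * x1 + x2 + 4 * x3
print(z == Fraction(32, 5))          # True: the objective value is 6.4

# The first two constraints are binding; the third has slack.
print(2 * x1 + x2 + x3 == 2)         # True
print(x1 + 2 * x2 + 3 * x3 == 4)     # True
print(2 * x1 + 2 * x2 + x3 <= 8)     # True (left-hand side equals 2)
```

That the first two constraints hold with equality reflects the fact that the optimum sits at the vertex where those two constraint boundaries intersect.<br />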
<br />
== Application ==<br />
The simplex method can be used in many programming problems, since those will be converted to LP (Linear Programming) and solved via the simplex method. Beyond purely mathematical applications, much industrial planning uses this method to maximize profits or minimize the resources needed. <br />
<br />
=== Mathematical Problem ===<br />
The simplex method is commonly used in many programming problems. Due to the heavy computational load of non-linear problems, many non-linear programming (NLP) problems cannot be solved effectively. Consequently, many NLP solvers will rely on the LP solver, namely the simplex method, to do some of the work in finding the solution (for instance, the upper or lower bound of the feasible solution), or in many cases the NLP will be wholly linearized to LP and solved via the simplex method.<ref name=":0" /> Other than solving problems, the simplex method can also be used to reliably support the LP's solution from other theorems, for instance the [[wikipedia:Farkas'_lemma#:~:text=Farkas'%20lemma%20is%20a%20solvability,the%20Hungarian%20mathematician%20Gyula%20Farkas.&text=Farkas'%20lemma%20belongs%20to%20a,two%20systems%20has%20a%20solution.|Farkas' theorem]], where the simplex method proves the suggested feasible solutions.<sup>[1]</sup> Besides solving problems, the simplex method can also enlighten scholars on ways of solving other problems, for instance Quadratic Programming (QP).<ref>Wolfe, P. (1959). The simplex method for quadratic programming. ''Econometrica,'' ''27''(3), 382. doi:10.2307/1909468</ref> Some QP problems have linear constraints on the variables, which can be solved analogously to the idea of the simplex method. <br />
=== Industrial Application ===<br />
Industries from different fields use the simplex method to plan under constraints. Since it is usually the case that the constraints or tradeoffs and desired outcomes are linearly related to the controllable variables, many people develop models to solve the LP problem via the simplex method, for instance for agricultural and economic problems. <br />
<br />
Farmers usually need to rationally allocate existing resources to obtain maximum profits. The potential constraints arise from multiple perspectives including policy restrictions, budget concerns, and farmland area. Farmers may be inclined to use a simplex-method-based model to plan better, as those constraints may be constant in many scenarios and the profits are usually linearly related to farm production, thereby forming an LP problem. Currently, there is an existing plant model that can accept inputs such as price and farm production, and return the optimal plan to maximize profits with the given information.<ref>Hua, W. (1998). [https://shareok.org/bitstream/handle/11244/12005/Thesis-1998-H8735a.pdf?sequence=1 Application of the revised simplex method to the farm planning model].</ref> <br />
<br />
Besides agricultural purposes, the simplex method can also be used by enterprises to make profits. A rational sales strategy is indispensable to successful marketing. Since there are so many enterprises worldwide, the marketing strategy of an enamelware enterprise is selected for illustration. After widely collecting data on the quality of the varied products manufactured, the cost of each, and their popularity among customers, the company may need to determine which kinds of products are well worth the investment and will continue making profits, and which will not. Considering that the cost and profit factors are linearly dependent on production, economists will suggest an LP model that can be solved via the simplex method.<ref>Nikitenko, A. V. (1996). Economic analysis of the potential use of a simplex method in designing the sales strategy of an enamelware enterprise. ''Glass and Ceramics,'' ''53''(12), 367-369. doi:10.1007/bf01129674.</ref><br />
<br />
The professional fields above are only the tip of the iceberg of simplex method applications. Many other fields use this method, since the LP problem is gaining popularity these days and the simplex method plays a crucial role in solving such problems.<br />
<br />
== Conclusion ==<br />
The influence of the simplex method on mathematical programming is indisputable, as this method earned its inventor, George Dantzig, the National Medal of Science.<ref>Cottle, R., Johnson, E. and Wets, R. (2007). George B. Dantzig (1914–2005). ''Notices Amer. Math. Soc.'' 54, 344–362.</ref> Beyond its wide usage in mathematical models and industrial manufacturing, the simplex method also provides a new perspective on solving inequality problems, and its contribution to programming has substantially boosted the advancement of technology and the economy by enabling optimal planning under constraints. Nowadays, with the development of technology and economics, the simplex method has been supplemented by more advanced solvers which can solve problems with faster speed and handle a larger number of constraints and variables, but this innovative method marked the creativity of its age and continues to offer inspiration for upcoming challenges. <br />
<br />
== References ==<br />
<references /></div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Duality&diff=2721Duality2020-12-21T11:27:37Z<p>Wc593: </p>
<hr />
<div>Author: Claire Gauthier, Trent Melsheimer, Alexa Piper, Nicholas Chung, Michael Kulbacki (SysEn 6800 Fall 2020)<br />
<br />
== Introduction ==<br />
Every optimization problem may be viewed either from the primal or the dual; this is the principle of '''duality'''. Duality develops the relationships between one optimization problem and another related optimization problem. If the primal optimization problem is a maximization problem, the dual can be used to find upper bounds on its optimal value. If the primal problem is a minimization problem, the dual can be used to find lower bounds. <br />
<br />
According to the American mathematical scientist George Dantzig, duality for linear optimization was conjectured by John von Neumann after Dantzig presented a problem in linear optimization. Von Neumann determined that the two-person zero-sum matrix game (from game theory) was equivalent to linear programming. Proofs of the duality theory were published by the Canadian mathematician Albert W. Tucker and his group in 1948. <ref name=":0"> Duality (Optimization). (2020, July 12). ''In Wikipedia. ''https://en.wikipedia.org/wiki/Duality_(optimization)#:~:text=In%20mathematical%20optimization%20theory%2C%20duality,the%20primal%20(minimization)%20problem.</ref><br />
<br />
== Theory, methodology, and/or algorithmic discussions ==<br />
<br />
=== Definition ===<br />
<br />
'''Primal'''<blockquote>Maximize <math>z=\textstyle \sum_{j=1}^n \displaystyle c_j x_j</math> </blockquote><blockquote>subject to:<br />
<br />
</blockquote><blockquote><blockquote><math>\textstyle \sum_{j=1}^n \displaystyle a_{i,j} x_j\leq b_i \qquad (i=1, 2, ... ,m) </math></blockquote></blockquote><blockquote><blockquote><math>x_j\geq 0 \qquad (j=1, 2, ... ,n) </math></blockquote></blockquote><blockquote><br />
<br />
<br />
</blockquote>'''Dual'''<blockquote><br />
Minimize <math>v=\textstyle \sum_{i=1}^m \displaystyle b_i y_i</math><br />
<br />
subject to:<blockquote><math>\textstyle \sum_{i=1}^m \displaystyle y_ia_{i,j}\geq c_j \qquad (j=1, 2, ... , n) </math></blockquote><blockquote><math>y_i\geq 0 \qquad (i=1, 2, ... , m)</math></blockquote></blockquote>Between the primal and the dual, the roles of <math>c</math> and <math>b</math> are interchanged. The objective coefficients (<math>c_j</math>) of the primal become the right-hand side (RHS) of the dual, and the RHS of the primal (<math>b_i</math>) becomes the objective coefficients of the dual. The less-than-or-equal-to constraints in the primal become greater-than-or-equal-to constraints in the dual problem. <ref name=":1"> Ferguson, Thomas S. ''A Concise Introduction to Linear Programming.'' University of California Los Angeles. https://www.math.ucla.edu/~tom/LP.pdf </ref><br />
<br />
=== Constructing a Dual ===<br />
<math>\begin{matrix} \max(c^Tx) \\ \ s.t. Ax\leq b \\ x \geq 0 \end{matrix}</math> <math> \quad \longrightarrow \quad</math><math>\begin{matrix} \min(b^Ty) \\ \ s.t. A^Ty\geq c \\ y \geq 0 \end{matrix}</math><br />
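This correspondence is mechanical enough to express in code. The following sketch (our own illustration; the function name is arbitrary) builds the dual data from the primal data and checks that applying the transformation twice recovers the original data, reflecting the fact that the dual of the dual is the primal: <br />

```python
import numpy as np

def dual_of(c, A, b):
    """Return the data (b, A^T, c) of the dual of: max c^T x s.t. Ax <= b, x >= 0.

    The dual is: min b^T y s.t. A^T y >= c, y >= 0. This is purely a data
    transformation -- no solving is done here.
    """
    c, A, b = np.asarray(c), np.asarray(A), np.asarray(b)
    return b, A.T, c

# Primal data: max 3x1 + 2x2  s.t.  x1 + x2 <= 4,  x1 <= 2,  x >= 0
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0], [1.0, 0.0]])
b = np.array([4.0, 2.0])

c_d, A_d, b_d = dual_of(c, A, b)      # dual data: min 4y1 + 2y2 s.t. ...
cc, AA, bb = dual_of(c_d, A_d, b_d)   # transforming again recovers the primal
assert np.array_equal(AA, A) and np.array_equal(cc, c) and np.array_equal(bb, b)
```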
<br />
=== Duality Properties ===<br />
The following duality properties hold when the primal problem is a maximization problem, as considered above; the direction of the weak duality inequality, in particular, depends on this orientation.<br />
<br />
==== Weak Duality ====<br />
<br />
* Let <math>x=[x_1, ... , x_n] </math> be any feasible solution to the primal<br />
* Let <math>y = [y_1, ... , y_m] </math>be any feasible solution to the dual<br />
* <math>\therefore </math>(z value for x) <math>\leq </math>(v value for y)<br />
<br />
The weak duality theorem says that the z value for x in the primal is always less than or equal to the v value of y in the dual. <br />
<br />
The difference between the v value for y and the z value for x is called the duality gap; the gap between the optimal primal and dual objective values is the optimal duality gap, which is always nonnegative. <ref> Bradley, Hax, and Magnanti. (1977). ''Applied Mathematical Programming.'' Addison-Wesley. http://web.mit.edu/15.053/www/AMP-Chapter-04.pdf </ref><br />
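Weak duality is easy to verify numerically for a specific instance. The following sketch (using a small LP of our own construction, not one from this page) checks that a feasible primal point never beats a feasible dual point: <br />

```python
import numpy as np

# Toy primal: max 3x1 + 2x2  s.t.  x1 + x2 <= 4,  x1 <= 2,  x >= 0
# Its dual:   min 4y1 + 2y2  s.t.  y1 + y2 >= 3,  y1 >= 2,  y >= 0
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0], [1.0, 0.0]])
b = np.array([4.0, 2.0])

x = np.array([1.0, 1.0])   # primal-feasible point: A @ x = [2, 1] <= b
y = np.array([3.0, 0.0])   # dual-feasible point:  A.T @ y = [3, 3] >= c

assert np.all(A @ x <= b) and np.all(x >= 0)      # primal feasibility
assert np.all(A.T @ y >= c) and np.all(y >= 0)    # dual feasibility
assert c @ x <= b @ y      # weak duality: z = 5.0 <= v = 12.0
```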
<br />
==== Strong Duality Lemma ====<br />
<br />
* Let <math>x=[x_1, ... , x_n] </math> be any feasible solution to the primal<br />
* Let <math>y = [y_1, ... , y_m] </math>be any feasible solution to the dual<br />
* If (z value for x) <math>= </math> (v value for y), then '''x''' is optimal for the primal and '''y''' is optimal for the dual<br />
<br />
'''Graphical Explanation'''<br />
<br />
Essentially, as values of x or y are chosen closer to the optimal solution, the value of z for the primal and the value of v for the dual converge toward the optimal value. On a number line, the value of z, which is being maximized, approaches the optimum from the left, while the value of v, which is being minimized, approaches it from the right. <br />
[[File:Duality numberline .png|thumb| '''Figure 1: Graphical Representation of Duality''']]<br />
<br />
* If the primal is unbounded, then the dual is infeasible<br />
* If the dual is unbounded, then the primal is infeasible<br />
<br />
==== Strong Duality Theorem ====<br />
If the primal problem has an optimal solution <math>x^*</math>, then the dual problem has an optimal solution <math>y^*</math> such that<br />
<br />
<math>\textstyle \sum_{j=1}^n \displaystyle c_j x_j^* = \textstyle \sum_{i=1}^m \displaystyle b_i y_i^*</math><br />
<br />
Dual problems and their solutions are used in connection with the following optimization topics:<br />
<br />
'''Karush-Kuhn-Tucker (KKT) Variables'''<br />
<br />
* The optimal solution to the dual problem is a vector of the KKT multipliers. Suppose we have a convex optimization problem in which <math>f(x), g_1(x),...,g_m(x) </math> are convex differentiable functions. Suppose the pair <math>(\bar{x},\bar{u}) </math> is a saddle point of the Lagrangian and that <math>\bar{x} </math> together with <math>\bar{u} </math> satisfy the KKT conditions. The optimal solutions of this optimization problem are then <math>\bar{x} </math> and <math>\bar{u} </math>, with no duality gap. <ref> ''KKT Conditions and Duality.'' (2018, February 18). Dartmouth College. https://math.dartmouth.edu/~m126w18/pdf/part4.pdf </ref><br />
* To have strong duality as described above, you must meet the KKT conditions. <br />
<br />
'''Dual Simplex Method''' <br />
<br />
* Solving a linear programming problem by the simplex method gives a solution of its dual as a by-product. The dual simplex algorithm works to reduce the infeasibility of the dual problem and can be thought of as a disguised simplex method working on the dual. In the dual simplex method, dual feasibility is maintained by requiring that every variable in the objective function has a nonpositive coefficient, and the algorithm terminates when the primal feasibility conditions are satisfied. <ref name=":3"> Chvatal, Vasek. (1977). ''The Dual Simplex Method.'' W.H. Freeman and Co. http://cgm.cs.mcgill.ca/~avis/courses/567/notes/ch10.pdf </ref><br />
<br />
=== Duality Interpretation ===<br />
<br />
* Duality lends itself to a number of interpretations. The following example gives an economic interpretation of an optimization problem that leverages duality: <br />
<br />
'''Economic Interpretation Example''' <br />
<br />
* A rancher is preparing for a flea market sale in which he intends to sell three types of clothing, all made from the wool of his sheep: peacoats, hats, and scarves. With locals spreading word of the high quality of his clothing, the rancher expects to sell out of all of his products each time he opens a store at the flea market. The following table shows the rancher's materials, time, and profit received for his peacoats, hats, and scarves, respectively.<br />
{| class="wikitable"<br />
|+<br />
!Clothing<br />
!Wool (ft^2)<br />
!Sewing Material (in)<br />
!Production Time (hrs)<br />
!Profit ($)<br />
|-<br />
|Peacoat<br />
|12<br />
|80<br />
|7<br />
|175<br />
|-<br />
|Hat<br />
|2<br />
|40<br />
|3<br />
|25<br />
|-<br />
|Scarf<br />
|3<br />
|20<br />
|1<br />
|21<br />
|}<br />
* With limited materials and time before the upcoming flea market event at which the rancher will once again sell his products, the rancher needs to determine how to make the best use of his time and materials to maximize his profits. The rancher is running lower than usual on wool; he has only 50 square feet of wool sheeting for this week's clothing. Furthermore, he has only 460 inches of sewing material left. Lastly, he has a limited time of 25 hours to produce his clothing line.<br />
* With the above information, the rancher creates the following linear program:<br />
<br />
<br />
'''maximize''' <math>z=175x_1+25x_2+21x_3</math><br />
<br />
'''subject to:'''<br />
<br />
<math>12x_1+2x_2+3x_3\leq 50</math><br />
<br />
<math>80x_1+40x_2+20x_3\leq 460</math><br />
<br />
<math>7x_1+3x_2+1x_3\leq 25</math><br />
<br />
<math>x_1,x_2,x_3\geq 0</math><br />
<br />
* Before the rancher finds the optimal number of peacoats, hats, and scarves to produce, a local clothing store owner approaches him and asks if she can purchase his labor and materials for her store. Unsure of what a fair purchase price for these services would be, the clothing store owner creates the dual of the original primal:<br />
<br />
<br />
'''minimize''' <math>v=50y_1+460y_2+25y_3</math><br />
<br />
'''subject to:'''<br />
<br />
<math>12y_1+80y_2+7y_3\geq 175</math><br />
<br />
<math>2y_1+40y_2+3y_3\geq 25</math><br />
<br />
<math>3y_1+20y_2+1y_3\geq 21</math><br />
<br />
<math>y_1,y_2,y_3\geq 0</math><br />
<br />
* By leveraging the above dual, the clothing store owner is able to determine the asking price for the rancher's materials and labor. In the dual, the clothing store owner's objective is now to minimize the asking price, where <math>y_1</math> represents the price per unit of wool, <math>y_2</math> represents the price per unit of sewing material, and <math>y_3</math> represents the price of the rancher's labor.<br />
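As a check on this economic story, both the rancher's primal and the store owner's dual can be solved numerically; by strong duality, their optimal values must coincide. The sketch below (our own illustration, not a practical solver) finds each optimum by brute-force enumeration of the vertices of the feasible region: <br />

```python
import itertools
import numpy as np

def lp_vertex_opt(obj, G, h, maximize):
    """Brute-force LP solver: enumerate vertices of {x : Gx <= h} and return
    the best value of obj @ x. Only sensible for tiny problems."""
    n = G.shape[1]
    best = None
    for rows in itertools.combinations(range(G.shape[0]), n):
        try:
            x = np.linalg.solve(G[list(rows)], h[list(rows)])
        except np.linalg.LinAlgError:
            continue                    # chosen constraints are not independent
        if np.all(G @ x <= h + 1e-9):   # candidate vertex is feasible
            val = obj @ x
            if best is None or (val > best if maximize else val < best):
                best = val
    return best

A = np.array([[12.0, 2.0, 3.0], [80.0, 40.0, 20.0], [7.0, 3.0, 1.0]])
b = np.array([50.0, 460.0, 25.0])
c = np.array([175.0, 25.0, 21.0])
I = np.eye(3)

# Primal: max c@x s.t. Ax <= b, x >= 0   (x >= 0 written as -Ix <= 0)
z_star = lp_vertex_opt(c, np.vstack([A, -I]), np.concatenate([b, np.zeros(3)]), True)
# Dual: min b@y s.t. A.T y >= c, y >= 0  (written as -A.T y <= -c)
v_star = lp_vertex_opt(b, np.vstack([-A.T, -I]), np.concatenate([-c, np.zeros(3)]), False)

assert abs(z_star - v_star) < 1e-6   # strong duality: the two optima coincide
```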
<br />
== Numerical Example ==<br />
<br />
=== Construct the Dual for the following maximization problem: ===<br />
'''maximize''' <math>z=6x_1+14x_2+13x_3</math><br />
<br />
'''subject to:'''<br />
<br />
<math>\tfrac{1}{2}x_1+2x_2+x_3\leq 24</math><br />
<br />
<math>x_1+2x_2+4x_3\leq 60</math><br />
<br />
<math>3x_1+5x_3\leq 12</math><br />
<br />
For the problem above, form the matrix A of constraint coefficients, in which each row represents one of the three constraints. <br />
<br />
<math>A =\begin{bmatrix} \tfrac{1}{2} & 2 & 1\\ 1 & 2 & 4 \\ 3 & 0 & 5 \end{bmatrix}</math><br />
<br />
Find the transpose of matrix A<br />
<br />
<math>A^T=\begin{bmatrix} \tfrac{1}{2} & 1 & 3 \\ 2 & 2 & 0 \\ 1 & 4 & 5 \end{bmatrix}</math><br />
<br />
Each row of the transpose of matrix A gives the coefficients of one constraint of the dual, while the objective function of the dual is formed from the right-hand sides of the primal constraints. Note that the roles of rows and columns are interchanged: the coefficients multiplying <math>x_1</math> across the three primal constraints now form the first dual constraint, and so on.<br />
<br />
'''minimize''' <math>v=24y_1+60y_2+12y_3<br />
</math><br />
<br />
'''subject to:'''<br />
<br />
<math>\tfrac{1}{2}y_1+y_2+3y_3 \geq 6</math><br />
<br />
<math>2y_1+2y_2\geq 14</math><br />
<br />
<math>y_1+4y_2+5y_3\geq 13</math><br />
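The transpose bookkeeping is easy to get wrong by hand, so it can be worth checking mechanically. The short sketch below (illustrative only) rebuilds the dual data from the primal data and compares it against the dual derived above: <br />

```python
import numpy as np

# Primal: max 6x1 + 14x2 + 13x3 subject to the three constraints above
A = np.array([[0.5, 2.0, 1.0], [1.0, 2.0, 4.0], [3.0, 0.0, 5.0]])
b = np.array([24.0, 60.0, 12.0])
c = np.array([6.0, 14.0, 13.0])

# Dual objective coefficients come from the primal right-hand sides, the
# dual constraint rows are the rows of A^T, and the dual right-hand sides
# come from the primal objective coefficients.
dual_obj = b
dual_rows = A.T
dual_rhs = c

assert np.array_equal(dual_obj, [24.0, 60.0, 12.0])        # v = 24y1 + 60y2 + 12y3
assert np.array_equal(dual_rows[0], [0.5, 1.0, 3.0])       # 0.5y1 + y2 + 3y3 >= 6
assert np.array_equal(dual_rows[1], [2.0, 2.0, 0.0])       # 2y1 + 2y2 >= 14
assert np.array_equal(dual_rows[2], [1.0, 4.0, 5.0])       # y1 + 4y2 + 5y3 >= 13
assert np.array_equal(dual_rhs, [6.0, 14.0, 13.0])
```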
<br />
== Applications ==<br />
Duality appears in many linear and nonlinear optimization models. In many of these applications we can solve the dual in cases when solving the primal is more difficult. If, for example, there are more constraints than variables ''(m >> n)'', it may be easier to solve the dual. A few of these applications are presented and described in more detail below. <ref name=":2"> R.J. Vanderbei. (2008). ''Linear Programming: Foundations and Extensions.'' Springer. http://link.springer.com/book/10.1007/978-0-387-74388-2. </ref><br />
<br />
'''Economics'''<br />
<br />
* Duality can be used when calculating the optimal product mix that yields the highest profit. For instance, the primal may maximize profit, while the dual reframes the problem as minimizing cost. By recasting the problem in terms of raw material prices, one can determine the price the owner is willing to accept for the raw material. These dual variables are related to the values of the resources available and are often referred to as resource shadow prices. <ref> Alaouze, C.M. (1996). ''Shadow Prices in Linear Programming Problems.'' New South Wales - School of Economics. https://ideas.repec.org/p/fth/nesowa/96-18.html#:~:text=In%20linear%20programming%20problems%20the,is%20increased%20by%20one%20unit. </ref><br />
<br />
'''Structural Design'''<br />
<br />
* An example of this is in a structural design model, the tension on the beams are the primal variables, and the displacements on the nodes are the dual variables. <ref> Freund, Robert M. (2004, February 10). ''Applied Language Duality for Constrained Optimization.'' Massachusetts Institute of Technology. https://ocw.mit.edu/courses/sloan-school-of-management/15-094j-systems-optimization-models-and-computation-sma-5223-spring-2004/lecture-notes/duality_article.pdf </ref><br />
<br />
'''Electrical Networks'''<br />
<br />
*When modeling electrical networks the current flows can be modeled as the primal variables, and the voltage differences are the dual variables. <ref> Freund, Robert M. (2004, March). ''Duality Theory of Constrained Optimization.'' Massachusetts Institute of Technology. https://ocw.mit.edu/courses/sloan-school-of-management/15-084j-nonlinear-programming-spring-2004/lecture-notes/lec18_duality_thy.pdf </ref><br />
<br />
'''Game Theory'''<br />
<br />
* Duality theory is closely related to game theory. Game theory is an approach used to deal with multi-person decision problems: the game is the decision-making problem, and the players are the decision-makers. Each player chooses a strategy or an action to be taken, and each player then receives a payoff once every player has selected a strategy. The zero-sum game that von Neumann conjectured to be equivalent to linear programming is one in which the gain of one player results in the loss of another; this situation has characteristics closely mirroring duality. <ref> Stolee, Derrick. (2013). ''Game Theory and Duality.'' University of Illinois at Urbana-Champaign. https://faculty.math.illinois.edu/~stolee/Teaching/13-482/gametheory.pdf </ref><br />
<br />
'''Support Vector Machines''' <br />
<br />
* Support Vector Machines (SVM) is a popular machine learning algorithm for classification. The concept of SVM can be broken down into three parts, the first two being Linear SVM and the last being Non-Linear SVM. There are many other concepts to SVM including hyperplanes, functional and geometric margins, and quadratic programming <ref> Jana, Abhisek. (2020, April). ''Support Vector Machines for Beginners - Linear SVM.'' http://www.adeveloperdiary.com/data-science/machine-learning/support-vector-machines-for-beginners-linear-svm/ </ref>. In relation to Duality, the primal problem is helpful in solving Linear SVM, but in order to get to the goal of solving Non-Linear SVM, the primal problem is not useful. This is where we need Duality to look at the dual problem to solve the Non-Linear SVM <ref> Jana, Abhisek. (2020, April 5). ''Support Vector Machines for Beginners - Duality Problem.'' https://www.adeveloperdiary.com/data-science/machine-learning/support-vector-machines-for-beginners-duality-problem/. </ref>.<br />
<br />
== Conclusion ==<br />
Since proofs of duality theory were published in 1948,<ref name=":0" /> duality has been an important technique in solving linear and nonlinear optimization problems. This theory provides the idea that the dual of a standard maximum problem is defined to be the standard minimum problem <ref name=":1" />. Duality allows every feasible solution for one side of the optimization problem to give a bound on the optimal objective function value for the other <ref name=":2" />. This technique can be applied to situations such as economic modeling, resource allocation, game theory, and bounding optimization problems. By developing an understanding of the dual of a linear program, one can gain many important insights into nearly any algorithm or optimization problem.<br />
<br />
== References ==</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=2020_Cornell_Optimization_Open_Textbook_Feedback&diff=27202020 Cornell Optimization Open Textbook Feedback2020-12-21T11:12:01Z<p>Wc593: /* Network flow problem */</p>
<hr />
<div>==[[Computational complexity]]==<br />
<br />
* Numerical Example<br />
*# Finding subsets of a set is NOT O(2<sup>n</sup>).<br />
* Application<br />
*# The applications mentioned need to be discussed further.<br />
<br />
==[[Network flow problem]]==<br />
<br />
* Numerical Example and Solution<br />
*# There is NO need to include code. Simply mention how the problem was coded along with details on the LP solver used.<br />
<br />
==[[Interior-point method for LP]]==<br />
<br />
* Introduction<br />
*# Please type “minimize” and “subject to” in formal optimization problem form throughout the whole page.<br />
* A section to discuss and/or illustrate the applications<br />
*# Please type optimization problem in the formal form.<br />
<br />
==[[Optimization with absolute values]]==<br />
<br />
* An introduction of the topic<br />
*# Add a few sentences on how absolute values convert an optimization problem into a nonlinear optimization problem<br />
* Applications<br />
*# Inline equations at the beginning of this section are not formatted properly. Please fix the notation for expected return throughout the section.<br />
<br />
==[[Matrix game (LP for game theory)]]==<br />
<br />
* Theory and Algorithmic Discussion<br />
*# aij are not defined in this section.<br />
<br />
==[[Quasi-Newton methods]]==<br />
<br />
* Theory and Algorithm<br />
*# Please ensure that few spaces are kept between the equations and equation numbers.<br />
<br />
==[[Eight step procedures]]==<br />
<br />
* Numerical Example<br />
*# Data for the example Knapsack problem (b,w) are missing.<br />
*# How to arrive at optimal solutions is missing.<br />
<br />
==[[Set covering problem]]==<br />
<br />
* Numerical Example<br />
*# Please leave some space between equation and equation number.<br />
<br />
==[[Quadratic assignment problem]]==<br />
<br />
* Theory, methodology, and/or algorithmic discussions<br />
*# Discuss dynamic programming and cutting plane solution techniques briefly.<br />
<br />
==[[Newsvendor problem]]==<br />
<br />
* Formulation<br />
*# A math programming formulation of the optimization problem with objective function and constraints is expected for the formulation. Please add any variant of the newsvendor problem along with some operational constraints.<br />
*# A mathematical presentation of the solution technique is expected. Please consider any distribution for R and present a solution technique for that specific problem. <br />
<br />
==[[Mixed-integer cuts]]==<br />
<br />
* Applications<br />
*# MILP and their solution techniques involving cuts are extremely versatile. Yet, only two sentences are added to describe their applications. Please discuss their applications, preferably real-world applications, in brief. Example Wikis provided on the website could be used as a reference to do so.<br />
<br />
==[[Heuristic algorithms]]==<br />
<br />
* Methodology<br />
*# Greedy method to solve minimum spanning tree seems to be missing.<br />
<br />
==[[Branch and cut]]==<br />
<br />
* Methodology & Algorithm<br />
*# Equation in most infeasible branching section is not properly formatted.<br />
*# Step 2 appears abruptly in the algorithm and does not explain much. Please add more information regarding the same.<br />
*# Step 5 contains latex code terms that are not properly formatted.<br />
<br />
== [[Mixed-integer linear fractional programming (MILFP)]] ==<br />
<br />
* Application and Modeling for Numerical Examples<br />
*# Please check the index notation in Mass Balance Constraint<br />
<br />
==[[Fuzzy programming]]==<br />
<br />
* Applications<br />
*# Applications of fuzzy programming are quite versatile. Please discuss a few of the mentioned applications briefly. The provided example Wikis can be used as a reference to write this section.<br />
<br />
== [[Stochastic gradient descent]] ==<br />
* Numerical Example<br />
*# Amount of whitespace can be reduced by changing orientation of example dataset by converting it into a table containing 3 rows and 6 columns.<br />
<br />
==[[RMSProp]]==<br />
<br />
* Theory and Methodology<br />
*# Please check grammar in this section.<br />
* Applications and Discussion<br />
*# The applications section does not contain any discussion on applications. Please mention a few applications of the widely used RMSprop and discuss them briefly.<br />
<br />
* Reference<br />
*# Many references listed here are not used in any of the text in the Wiki. Please link them appropriately.</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Adam&diff=2718Adam2020-12-21T11:08:31Z<p>Wc593: </p>
<hr />
<div>Author: Nicholas Kincaid (ChemE 6800 Fall 2020)<br><br />
Steward: Fengqi You<br />
<br />
== Introduction ==<br />
Adam <ref name="adam"> Kingma, Diederik P., and Jimmy Lei Ba. Adam: A Method for Stochastic Optimization. 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings, 2015, pp. 1–15.</ref> is a variant of gradient descent that has become widely popular in the machine learning community. Presented in 2015, the Adam algorithm is often recommended as the default algorithm for training neural networks as it has shown improved performance over other variants of gradient descent algorithms for a wide range of problems. Adam's name is derived from adaptive moment estimation because it uses estimates of the first and second moments of the gradient to perform updates, which can be seen as incorporating gradient descent with momentum (the first-order moment) and the [https://optimization.cbe.cornell.edu/index.php?title=RMSProp RMSProp] algorithm<ref>Tieleman, Tijmen, and Hinton, Geoffrey. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude, COURSERA: Neural Networks for Machine Learning, 2012.</ref> (the second-order moment).<br />
<br />
== Background ==<br />
=== Batch Gradient Descent ===<br />
In standard batch gradient descent, the parameters, <math>\theta</math>, of the objective function <math>f(\theta)</math>, are updated based on the gradient of <math>f</math> with respect to <br />
<math>\theta</math> for the entire training dataset, as<br />
<br />
<math> g_t =\nabla_{\theta_{t-1}} f \big(\theta_{t-1} \big) </math> <br/><br />
<math> \theta_t = \theta_{t-1} - \alpha g_t , </math> <br/><br />
<br />
where <math>\alpha</math> is defined as the learning rate and is a hyper-parameter of the optimization algorithm, and <math>t</math> is the iteration number. Key challenges of the standard gradient descent method are the tendency to get stuck in local minima and/or saddle points of the objective function, as well as choosing a proper learning rate, <math>\alpha</math>, which can lead to poor convergence.<ref>Ruder, Sebastian. An Overview of Gradient Descent Optimization Algorithms, 2016, pp. 1–14, http://arxiv.org/abs/1609.04747.</ref><br />
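The update rule above takes only a few lines of code. The following sketch (our own toy example, minimizing <math>f(\theta) = (\theta - 3)^2</math>) also illustrates the learning-rate sensitivity just mentioned: a suitable <math>\alpha</math> converges, while an overly large one diverges: <br />

```python
def batch_gd(grad, theta0, alpha, steps):
    """Plain batch gradient descent: theta <- theta - alpha * grad(theta)."""
    theta = float(theta0)
    for _ in range(steps):
        theta = theta - alpha * grad(theta)
    return theta

# Gradient of f(theta) = (theta - 3)^2 is 2*(theta - 3); the minimum is at 3.
grad = lambda th: 2.0 * (th - 3.0)

good = batch_gd(grad, 0.0, alpha=0.1, steps=100)
assert abs(good - 3.0) < 1e-6    # converges: error shrinks by a factor 0.8 per step

bad = batch_gd(grad, 0.0, alpha=1.1, steps=100)
assert abs(bad - 3.0) > 1e6      # diverges: error grows by a factor 1.2 per step
```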
<br />
=== Stochastic Gradient Descent ===<br />
Another variant of gradient descent is [https://optimization.cbe.cornell.edu/index.php?title=Stochastic_gradient_descent stochastic gradient descent (SGD)], in which the gradient is computed and the parameters are updated as above, but for each individual training sample in the training set. <br />
=== Mini-Batch Gradient Descent ===<br />
In between batch gradient descent and stochastic gradient descent, mini-batch gradient descent computes parameter updates from the gradient of a subset of the training set, where the size of the subset is often referred to as the batch size.<br />
<br />
== Adam Algorithm ==<br />
The Adam algorithm first computes the gradient, <math>g_t</math>, of the objective function with respect to the parameters <math>\theta</math>, then computes and stores first and second moments of the gradient, <math>m_t</math> and <math>v_t</math><br />
respectively, as<br />
<br />
<math> m_t = \beta_1 \cdot m_{t-1} + (1-\beta_1) \cdot g_t </math> <br/><br />
<math> v_t = \beta_2 \cdot v_{t-1} + (1-\beta_2) \cdot g_t^2, </math> <br/><br />
<br />
where <math>\beta_1</math> and <math>\beta_2</math> are hyper-parameters that are <math>\in [0,1]</math>. These parameters can be seen as exponential decay rates of the estimated moments, since the previous value is successively multiplied by a value less than 1 at each iteration. The authors of the original paper suggest values <math>\beta_1 = 0.9</math> and <math>\beta_2 = 0.999</math>. In the current notation, the first iteration of the algorithm is at <math>t=1</math>, and both <math>m_0</math> and <math>v_0</math> are initialized to zero. Since both moments are initialized to zero, at early time steps these values are biased towards zero. To counter this, the authors proposed corrected updates to <math>m_t</math> and <math>v_t</math> as<br />
<br />
<math> \hat{m}_t = m_t / (1-\beta_1 ^t) </math> <br/><br />
<math> \hat{v}_t = v_t / (1-\beta_2 ^t). </math> <br/><br />
Finally, the parameter update is computed as<br />
<br />
<math> \theta_t = \theta_{t-1} - \alpha \cdot \hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon), </math> <br/><br />
<br />
where <math>\epsilon</math> is a small constant for stability. The authors recommend a value of <math>\epsilon=10^{-8}</math>. <br />
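The complete set of update rules can be collected into a short routine. The sketch below is a minimal NumPy implementation (not the authors' reference code), applied to a simple one-dimensional quadratic whose gradient is known in closed form: <br />

```python
import numpy as np

def adam(grad, theta0, alpha=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=2000):
    """Minimal Adam optimizer: returns theta after `steps` updates."""
    theta = np.asarray(theta0, dtype=float)
    m = np.zeros_like(theta)   # first-moment estimate
    v = np.zeros_like(theta)   # second-moment estimate
    for t in range(1, steps + 1):
        g = grad(theta)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)    # bias correction
        v_hat = v / (1 - beta2**t)
        theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta

# Minimize f(theta) = (theta - 3)^2, whose gradient is 2*(theta - 3).
theta = adam(lambda th: 2.0 * (th - 3.0), np.array([0.0]))
assert abs(theta[0] - 3.0) < 0.5   # ends up near the minimizer, theta = 3
```

With a fixed learning rate, Adam settles into small oscillations of size on the order of <math>\alpha</math> around the minimizer, which is why the check above uses a loose tolerance. <br />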
<br />
== Numerical Example ==<br />
<br />
[[File:Contour.png|thumb|Contour plot of the loss function showing the trajectory of Adam algorithm from the initial point]]<br />
<br />
[[File:Model fit .png|thumb|Plot showing original data points and resulting model fit from the Adam algorithm]]<br />
<br />
<br />
To illustrate how updates occur in the Adam algorithm, consider a linear, least-squares regression problem formulation. The table below shows a sample data-set of student exam grades and the number of hours spent studying for the exam. The goal of this example will be to generate a linear model to predict exam grades as a function of time spent studying.<br />
<br />
{| class="wikitable"<br />
|-<br />
| Hours Studying || 9.0 || 4.9 || 1.6 || 1.9 || 7.9 || 2.0 || 11.5 || 3.9 || 1.1 || 1.6 || 5.1 || 8.2 || 7.3 || 10.4 || 11.2<br />
|-<br />
| Exam Grade || 88.0 || 72.3 || 66.5 || 65.1 || 79.5 || 60.8 || 94.3 || 66.7 || 65.4 || 63.8 || 68.4 || 82.5 || 75.9 || 87.8 || 85.2<br />
|}<br />
<br />
The hypothesized model function will be<br />
<br />
<math>f_\theta(x) = \theta_0 + \theta_1 x.</math><br />
<br />
The cost function is defined as<br />
<br />
<math> J({\theta}) = \frac{1}{2}\sum_i^n \big(f_\theta(x_i) - y_i \big)^2, </math><br />
<br />
where the <math>1/2</math> coefficient is used only to make the derivatives cleaner. The optimization problem can then be formulated as finding the values of <math>\theta</math> that minimize the squared residuals of <math>f_\theta(x)</math> and <math>y</math>: <br />
<br />
<math> \mathrm{argmin}_{\theta} \quad \frac{1}{n}\sum_{i}^n \big(f_\theta(x_i) - y_i \big) ^2 </math><br />
<br />
For simplicity, parameters will be updated after every data point, i.e., a batch size of 1. For a single data point, the derivatives of the cost function with respect to <math>\theta_0</math> and <math>\theta_1</math> are<br />
<br />
<math> \frac{\partial J(\theta)}{\partial \theta_0} = \big(f_\theta(x) - y \big) </math><br/><br />
<math> \frac{\partial J(\theta)}{\partial \theta_1} = \big(f_\theta(x) - y \big) x </math><br />
<br />
The initial values of <math>{\theta}</math> are set to [50, 1], the learning rate, <math>\alpha</math>, is set to 0.1, and the suggested values for <math>\beta_1</math>, <math>\beta_2</math>, and <math>\epsilon</math> are used. With the first data sample of <math> (x,y)=(8.98, 88.01)</math>, the computed gradients (shown with rounded values) are<br />
<br />
<math> \frac{\partial J(\theta)}{\partial \theta_0} = \big(50 + 1\cdot 9 - 88.01 \big) = -29.0 </math><br/><br />
<math> \frac{\partial J(\theta)}{\partial \theta_1} = \big(50 + 1\cdot 9 - 88.01 \big)\cdot 9.0 = -261 </math><br/><br />
<br />
With <math>m_0</math> and <math>v_0</math> being initialized to zero, the calculations of <math>m_1</math> and <math>v_1</math> are<br />
<br />
<math> m_1 = 0.9 \cdot 0 + (1-0.9) \cdot \begin{bmatrix} -29\\ -261 \end{bmatrix} = \begin{bmatrix} -2.9\\ -26.1\end{bmatrix} </math> <br/><br />
<math> v_1 = 0.999\cdot 0 + (1-0.999) \cdot \begin{bmatrix} (-29)^2\\(-261)^2 \end{bmatrix} = \begin{bmatrix} 0.84\\ 68.1\end{bmatrix} , </math> <br/><br />
<br />
The bias-corrected terms are computed as<br />
<br />
<math> \hat{m}_1 = \begin{bmatrix} -2.9\\ -26.1\end{bmatrix} \frac{1}{ (1-0.9^1)} = \begin{bmatrix} -29.0\\-261.0\end{bmatrix}</math> <br/><br />
<math> \hat{v}_1 = \begin{bmatrix} 0.84\\ 68.1\end{bmatrix} \frac{1} {(1-0.999^1)} = \begin{bmatrix} 841\\68121\end{bmatrix}. </math> <br/><br />
<br />
Finally, the parameter update is<br />
<br />
<math> \theta_0 = 50 - 0.1 \cdot -29 / (\sqrt{841} + 10^{-8}) = 50.1 </math> <br/><br />
<math> \theta_1 = 1 - 0.1 \cdot -261 / (\sqrt{68121} + 10^{-8}) = 1.1 </math> <br/><br />
<br />
This procedure is repeated until the parameters have converged, giving final <math>\theta</math> values of <math>[58.98, 2.72]</math>. The figures to the right show the trajectory of the Adam algorithm over a contour plot of the objective function and the resulting model fit. It should be noted that the stochastic gradient descent algorithm diverges with a learning rate of 0.1, while with a rate of 0.01 it oscillates around the global minimum due to the large magnitude of the gradient in the <math>\theta_1</math> direction.<br />
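The per-sample Adam update described above can be sketched in code as follows. This is a minimal illustration of the procedure, not the authors' reference implementation; the function name, data layout, and run length of 200 epochs are our own choices.<br />

```python
import math

# Example data (hours studying, exam grade) from the table above
xs = [9.0, 4.9, 1.6, 1.9, 7.9, 2.0, 11.5, 3.9, 1.1, 1.6, 5.1, 8.2, 7.3, 10.4, 11.2]
ys = [88.0, 72.3, 66.5, 65.1, 79.5, 60.8, 94.3, 66.7, 65.4, 63.8, 68.4, 82.5, 75.9, 87.8, 85.2]

def adam_linear_regression(xs, ys, theta0=50.0, theta1=1.0, alpha=0.1,
                           beta1=0.9, beta2=0.999, eps=1e-8, epochs=200):
    m = [0.0, 0.0]  # first-moment (mean) estimates
    v = [0.0, 0.0]  # second-moment (uncentered variance) estimates
    t = 0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            t += 1
            err = theta0 + theta1 * x - y        # f_theta(x) - y
            g = [err, err * x]                   # gradient for a batch size of 1
            for i in range(2):
                m[i] = beta1 * m[i] + (1 - beta1) * g[i]
                v[i] = beta2 * v[i] + (1 - beta2) * g[i] ** 2
            m_hat = [mi / (1 - beta1 ** t) for mi in m]   # bias correction
            v_hat = [vi / (1 - beta2 ** t) for vi in v]
            theta0 -= alpha * m_hat[0] / (math.sqrt(v_hat[0]) + eps)
            theta1 -= alpha * m_hat[1] / (math.sqrt(v_hat[1]) + eps)
    return theta0, theta1

theta0, theta1 = adam_linear_regression(xs, ys)  # approaches roughly (59, 2.7)
```

Because the bias-corrected ratio <math>\hat{m}/\sqrt{\hat{v}}</math> is close to ±1 early on, each parameter moves by roughly <math>\alpha</math> per step regardless of gradient magnitude, which is why Adam handles the poorly scaled <math>\theta_1</math> direction gracefully here.<br />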
<br />
<br />
== Applications ==<br />
[[File:Adam training.png|thumb|Comparison of training a multilayer neural network on MNIST images for different gradient descent algorithms published in the original Adam paper (Kingma, 2015)<ref name="adam" />.]]<br />
<br />
The Adam optimization algorithm has been widely used in machine learning applications to train model parameters. When used with backpropagation, the Adam algorithm has been shown to be a very robust and efficient method for training artificial neural networks and is capable of working well with a variety of structures and applications. In their original paper, the authors present three different training examples: logistic regression, multi-layer neural networks for classification of MNIST images, and a convolutional neural network (CNN). The training results from the original Adam paper, showing the objective function cost versus iterations over the entire data set for the multi-layer neural network, are shown to the right.<br />
<br />
== Variants of Adam ==<br />
=== AdaMax ===<br />
AdaMax<ref name="adam" /> is a variant of the Adam algorithm proposed in the original Adam paper that uses an exponentially weighted infinity norm instead of the second-order moment estimate. The weighted infinity norm update, <math>u_t</math>, is computed as<br />
<br />
<math> u_t = \max(\beta_2 \cdot u_{t-1}, |g_t|). </math><br />
<br />
The parameter update then becomes<br />
<br />
<math> \theta_t = \theta_{t-1} - (\alpha / (1-\beta_1^t)) \cdot m_t / u_t. </math><br />
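A single AdaMax parameter update can be sketched as below. This is an illustrative sketch under the notation above; the function name and list-based layout are ours, and the default step size <math>\alpha = 0.002</math> follows the suggestion in the original paper. Note that no <math>\epsilon</math> term is needed, since <math>u_t \geq |g_t|</math> once a nonzero gradient is seen.<br />

```python
def adamax_step(theta, g, m, u, t, alpha=0.002, beta1=0.9, beta2=0.999):
    """One AdaMax update over a list of parameters (illustrative sketch)."""
    new_theta, new_m, new_u = [], [], []
    for th, gi, mi, ui in zip(theta, g, m, u):
        mi = beta1 * mi + (1 - beta1) * gi      # first-moment estimate, as in Adam
        ui = max(beta2 * ui, abs(gi))           # exponentially weighted infinity norm
        th = th - (alpha / (1 - beta1 ** t)) * mi / ui
        new_theta.append(th)
        new_m.append(mi)
        new_u.append(ui)
    return new_theta, new_m, new_u
```

For example, starting from <math>\theta = 1</math> with gradient 0.5 at <math>t = 1</math>, the update is <math>1 - (0.002/0.1)\cdot 0.05/0.5 = 0.998</math>.<br />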
<br />
=== Nadam ===<br />
The Nadam algorithm<ref>Dozat, Timothy. Incorporating Nesterov Momentum into Adam. ICLR Workshop, no. 1, 2016, pp. 2013–16. </ref> was proposed in 2016 and incorporates the Nesterov Accelerated Gradient (NAG)<ref>Nesterov, Yuri. A method of solving a convex programming problem with convergence rate O(1/k^2). In Soviet Mathematics Doklady, 1983, pp. 372-376.</ref>, a popular momentum-like SGD variant, into the first-order moment term. <br />
<br />
== Conclusion ==<br />
Adam is a variant of the gradient descent algorithm that has been widely adopted in the machine learning community. Adam can be seen as the combination of two other variants of gradient descent, SGD with momentum and RMSProp. Adam uses estimates of the first- and second-order moments of the gradient to adapt the parameter update. These moment estimates are computed via moving averages, <math>m_t</math> and <math>v_t</math>, of the gradient and the squared gradient, respectively. In a variety of neural network training applications, Adam has shown faster convergence and greater robustness than other gradient descent algorithms and is often recommended as the default optimizer for training.<ref> "Neural Networks Part 3: Learning and Evaluation," CS231n: Convolutional Neural Networks for Visual Recognition, Stanford University, 2020</ref><br />
<br />
== References ==<br />
<references/></div>Wc593https://optimization.cbe.cornell.edu/index.php?title=2020_Cornell_Optimization_Open_Textbook_Feedback&diff=27172020 Cornell Optimization Open Textbook Feedback2020-12-21T11:06:21Z<p>Wc593: /* RMSProp */</p>
<hr />
<div>==[[Computational complexity]]==<br />
<br />
* Numerical Example<br />
*# Finding subsets of a set is NOT O(2<sup>n</sup>).<br />
* Application<br />
*# The applications mentioned need to be discussed further.<br />
<br />
==[[Network flow problem]]==<br />
<br />
* Real Life Applications<br />
*# There is NO need to include code. Simply mention how the problem was coded along with details on the LP solver used.<br />
<br />
==[[Interior-point method for LP]]==<br />
<br />
* Introduction<br />
*# Please type “minimize” and “subject to” in formal optimization problem form throughout the whole page.<br />
* A section to discuss and/or illustrate the applications<br />
*# Please type optimization problem in the formal form.<br />
<br />
==[[Optimization with absolute values]]==<br />
<br />
* An introduction of the topic<br />
*# Add a few sentences on how absolute values convert an optimization problem into a nonlinear optimization problem<br />
* Applications<br />
*# Inline equations at the beginning of this section are not formatted properly. Please fix the notation for expected return throughout the section.<br />
<br />
==[[Matrix game (LP for game theory)]]==<br />
<br />
* Theory and Algorithmic Discussion<br />
*# aij are not defined in this section.<br />
<br />
==[[Quasi-Newton methods]]==<br />
<br />
* Theory and Algorithm<br />
*# Please ensure that a few spaces are kept between the equations and equation numbers.<br />
<br />
==[[Eight step procedures]]==<br />
<br />
* Numerical Example<br />
*# Data for the example Knapsack problem (b,w) are missing.<br />
*# How to arrive at optimal solutions is missing.<br />
<br />
==[[Set covering problem]]==<br />
<br />
* Numerical Example<br />
*# Please leave some space between equation and equation number.<br />
<br />
==[[Quadratic assignment problem]]==<br />
<br />
* Theory, methodology, and/or algorithmic discussions<br />
*# Discuss dynamic programming and cutting plane solution techniques briefly.<br />
<br />
==[[Newsvendor problem]]==<br />
<br />
* Formulation<br />
*# A math programming formulation of the optimization problem with objective function and constraints is expected for the formulation. Please add any variant of the newsvendor problem along with some operational constraints.<br />
*# A mathematical presentation of the solution technique is expected. Please consider any distribution for R and present a solution technique for that specific problem. <br />
<br />
==[[Mixed-integer cuts]]==<br />
<br />
* Applications<br />
*# MILP and their solution techniques involving cuts are extremely versatile. Yet, only two sentences are added to describe their applications. Please discuss their applications, preferably real-world applications, in brief. Example Wikis provided on the website could be used as a reference to do so.<br />
<br />
==[[Heuristic algorithms]]==<br />
<br />
* Methodology<br />
*# Greedy method to solve minimum spanning tree seems to be missing.<br />
<br />
==[[Branch and cut]]==<br />
<br />
* Methodology & Algorithm<br />
*# Equation in most infeasible branching section is not properly formatted.<br />
*# Step 2 appears abruptly in the algorithm and does not explain much. Please add more information regarding the same.<br />
*# Step 5 contains latex code terms that are not properly formatted.<br />
<br />
== [[Mixed-integer linear fractional programming (MILFP)]] ==<br />
<br />
* Application and Modeling for Numerical Examples<br />
*# Please check the index notation in Mass Balance Constraint<br />
<br />
==[[Fuzzy programming]]==<br />
<br />
* Applications<br />
*# Applications of fuzzy programming are quite versatile. Please discuss a few of the mentioned applications briefly. The provided example Wikis can be used as a reference to write this section.<br />
<br />
== [[Stochastic gradient descent]] ==<br />
* Numerical Example<br />
*# Amount of whitespace can be reduced by changing orientation of example dataset by converting it into a table containing 3 rows and 6 columns.<br />
<br />
==[[RMSProp]]==<br />
<br />
* Theory and Methodology<br />
*# Please check grammar in this section.<br />
* Applications and Discussion<br />
*# The applications section does not contain any discussion on applications. Please mention a few applications of the widely used RMSprop and discuss them briefly.<br />
<br />
* Reference<br />
*# Many references listed here are not used in any of the text in the Wiki. Please link them appropriately.<br />
<br />
==[[Adam]]==<br />
<br />
* Background<br />
*# References at the end of the sentence should be placed after the period.</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=RMSProp&diff=2716RMSProp2020-12-21T11:05:24Z<p>Wc593: </p>
<hr />
<div>Author: Jason Huang (SysEn 6800 Fall 2020)<br />
<br />
== Introduction ==<br />
RMSProp, root mean square propagation, is an optimization algorithm/method designed for Artificial Neural Network (ANN) training. It is an unpublished algorithm, first proposed in lecture six, [https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf “Neural Network for Machine Learning”], of Geoff Hinton's Coursera course.<sup>[9]</sup> RMSProp lies in the realm of adaptive learning rate methods, which have been growing in popularity in recent years: it extends the Stochastic Gradient Descent (SGD) algorithm and the momentum method, and forms a foundation of the Adam algorithm. One of the applications of RMSProp is stochastic mini-batch gradient descent. <br />
<br />
==Theory and Methodology==<br />
<br />
=== Perceptron and Neural Networks ===<br />
The perceptron is an algorithm for supervised learning of binary classifiers, and can also be regarded as a simplified version, or single layer, of an Artificial Neural Network (ANN). Studying it helps in understanding neural networks, whose purpose in Artificial Intelligence (AI) is to imitate the function of the human brain and model the behavior of small units of the nervous system involved in human thinking. The basic form of the perceptron consists of inputs, weights, a bias, a net sum, and an activation function. <br />
[[File:Screen Shot 2020-12-14 at 01.09.28.png|thumb|Basis form of perceptron ]]<br />
<br />
<br />
The perceptron starts with input values <math>x_{1},x_{2} </math> and multiplies them by their weights <math>w_{1}, w_{2} </math>. The weighted inputs are added together to create the weighted sum <math> \sum_i w_{i} x_{i} </math>, which is then applied to the activation function <math>f </math> to produce the perceptron's output. <br />
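As a small illustration of this forward pass, a perceptron with a step activation can be written as follows. This is a sketch; the weights and bias used in the usage line are our own choice, picked so the perceptron acts as a logical AND gate, and are not from the text.<br />

```python
def perceptron(inputs, weights, bias=0.0):
    """Forward pass of a single perceptron: weighted sum plus bias, then a step activation."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if weighted_sum >= 0 else 0

# With these weights and bias, the perceptron implements a logical AND gate
outputs = [perceptron([a, b], [0.5, 0.5], bias=-0.7)
           for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```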
<br />
<br />
A neural network works similarly to the human brain’s neural network. A “neuron” in a neural network is a mathematical function that collects and classifies information according to a specific architecture. A neural network contains layers of interconnected nodes, each of which can be regarded as a perceptron and is similar to a multiple linear regression. The perceptron passes the signal produced by a multiple linear regression into an activation function, which may be nonlinear.<br />
<br />
=== '''RProp''' ===<br />
RProp, or resilient backpropagation, is a widely used algorithm for supervised learning with multi-layered feed-forward networks. The basic concept of the backpropagation learning algorithm is the repeated application of the chain rule to compute the influence of each weight in the network with respect to an arbitrary error function. The derivative of the error function can be represented as:<br />
<br />
<math> \frac{\partial E}{\partial w_{ij}} = \frac{\partial E}{\partial s_{i}} \frac{\partial s_{i}}{\partial net_{i}} \frac{\partial net_{i}}{\partial w_{ij}}</math><br />
<br />
where <math>w_{ij}</math> is the weight from neuron <math>j</math> to neuron <math>i</math>, <math>s_{i}</math> is the output, and <math>net_{i}</math> is the weighted sum of the inputs of neuron <math>i</math>. Once the partial derivative with respect to each weight is known, the error function can be minimized by performing simple gradient descent:<br />
<br />
<math>w_{ij}(t+1) = w_{ij}(t) - \epsilon \frac{\partial E}{\partial w_{ij}}(t)</math><br />
<br />
The choice of the learning rate <math>\epsilon</math>, which scales the derivative, has an important effect on the time needed until convergence is reached. If it is set too small, too many steps are needed to reach an acceptable solution; on the contrary, a large learning rate can lead to oscillation, preventing the error from falling below a certain value<sup>[7]</sup>.<br />
<br />
In addition, the method can be combined with the momentum method to mitigate the above problem and accelerate the convergence rate; the update can be rewritten as:<br />
<br />
<math> \Delta w_{ij}(t) = \epsilon \frac{\partial E}{\partial w_{ij}}(t) + \mu \Delta w_{ij}(t-1) </math><br />
<br />
However, it turns out that the optimal value of the momentum parameter <math>\mu</math> in the above equation is just as problem-dependent as the learning rate <math>\epsilon</math>, and that no general improvement can be accomplished. Besides, the RProp algorithm does not function well when we have very large datasets and need to perform mini-batch weight updates. Therefore, a novel algorithm, RMSProp, was proposed, which covers more scenarios than RProp.<br />
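The momentum update above can be sketched as a single step function. This is an illustrative sketch; the function and variable names are ours.<br />

```python
def momentum_step(w, grad, delta_prev, eps=0.01, mu=0.9):
    """One gradient-descent-with-momentum update, following the rule above."""
    delta = eps * grad + mu * delta_prev   # blend current gradient with the previous step
    return w - delta, delta
```

Calling it repeatedly with a persistent gradient shows the effect: each step carries over a fraction <math>\mu</math> of the previous step, so steps grow along directions where the gradient keeps its sign.<br />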
<br />
=== '''RMSProp''' ===<br />
The RProp algorithm does not work for mini-batches because it violates the central idea behind stochastic gradient descent: when the learning rate is small enough, SGD averages the gradients over successive mini-batches. Consider a weight that gets a gradient of 0.1 on nine mini-batches and a gradient of -0.9 on the tenth mini-batch; we want those gradients to roughly cancel each other out, so that the weight stays approximately the same.<br />
<br />
RMSProp combines the robustness of using the sign of the gradient, from the RProp algorithm, with the efficiency of mini-batches, while averaging over mini-batches so that gradients are combined in the right way. RMSProp keeps a moving average of the squared gradient for each weight, and then divides the gradient by the square root of this mean square.<br />
<br />
The updated equation can be performed as:<br />
<br />
<math>E[g^2](t) = \beta E[g^2](t-1) + (1- \beta) (\frac{\partial c}{\partial w})^2</math><br />
<br />
<math>w_{ij}(t) = w_{ij}(t-1) - \frac{ \eta }{ \sqrt{E[g^2]}} \frac{\partial c}{\partial w_{ij}} </math><br />
<br />
where <math>E[g^2] </math> is the moving average of squared gradients, <math> \partial c / \partial w </math> is the gradient of the cost function with respect to the weight, <math>\eta </math> is the learning rate, and <math>\beta</math> is the moving-average parameter. The default value of <math>\beta</math> is 0.9 (so that, as in the example above, gradients of 0.1 on nine mini-batches and -0.9 on the tenth approximately cancel), and the default value of <math>\eta </math> is 0.001, as found by experience.<br />
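The update rule can be sketched for a single weight as follows. This is a minimal sketch; the small constant <code>eps</code> is a common practical addition for numerical stability and is not part of the formula above.<br />

```python
import math

def rmsprop_step(w, grad, avg_sq, eta=0.001, beta=0.9, eps=1e-8):
    """One RMSProp update for a single weight.

    avg_sq is the moving average E[g^2] of squared gradients for this weight;
    eps guards against division by zero (a practical addition to the formula).
    """
    avg_sq = beta * avg_sq + (1 - beta) * grad ** 2
    w = w - eta * grad / (math.sqrt(avg_sq) + eps)
    return w, avg_sq
```

Because the step divides by the root-mean-square of recent gradients, weights with persistently large gradients take proportionally smaller steps, which damps oscillations.<br />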
<br />
==Numerical Example==<br />
For the simple unconstrained optimization problem <math>\min f(x) = 0.1x_{1}^2 +2x_{2}^2 </math>:<br />
<br />
set <math>\beta</math> = 0.9 and <math>\eta </math> = 0.4, and transform the optimization problem into the standard RMSProp form; the equations are presented below: <br />
[[File:Trajectory.png|alt=the visualization of the trajectory of RMSProp algorithm|thumb|The visualization of the trajectory of RMSProp algorithm]]<br />
<math>\frac{\partial c_{1}}{\partial w_{1}} = 0.2x_{1}, \qquad \frac{\partial c_{2}}{\partial w_{2}} = 4x_{2}</math> <br />
<br />
<math>E_{1}(t) = 0.9 E_{1}(t-1) + (1 - 0.9)(\frac{\partial c_{1}}{\partial w_{1}})^2</math> <br />
<br />
<math>E_{2}(t) = 0.9 E_{2}(t-1) + (1 - 0.9)(\frac{\partial c_{2}}{\partial w_{2}})^2</math> <br />
<br />
<math>w_{1}(t) = w_{1}(t-1) - \frac{0.4}{ \sqrt{E_{1}}} \frac{\partial c_{1}}{\partial w_{1}}</math> <br />
<br />
<math>w_{2}(t) = w_{2}(t-1) - \frac{0.4}{ \sqrt{E_{2}}} \frac{\partial c_{2}}{\partial w_{2}}</math> <br />
<br />
Using a programming language to solve the optimization problem and visualize the trajectory of the RMSProp algorithm, we can observe that the iterates converge to a certain point. For this particular problem, the minimum value of <math>0 </math> is obtained at <math>(x_{1}, x_{2}) = (0, 0) </math>. [[File:1 - 2dKCQHh - Long Valley.gif|thumb|Visualizing Optimization algorithm comparing convergence with similar algorithm<sup>[1]</sup>]] <br />
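The iteration above can be reproduced with a short script. This is a sketch under the stated settings (<math>\beta = 0.9</math>, <math>\eta = 0.4</math>); the starting point (5, -2) and the 100-step run length are arbitrary choices for illustration.<br />

```python
import math

def rmsprop_2d(x1, x2, eta=0.4, beta=0.9, eps=1e-8, steps=100):
    """RMSProp on f(x) = 0.1*x1**2 + 2*x2**2, as in the example above."""
    e1 = e2 = 0.0                       # moving averages of squared gradients
    for _ in range(steps):
        g1, g2 = 0.2 * x1, 4.0 * x2     # partial derivatives of f
        e1 = beta * e1 + (1 - beta) * g1 ** 2
        e2 = beta * e2 + (1 - beta) * g2 ** 2
        x1 -= eta * g1 / (math.sqrt(e1) + eps)
        x2 -= eta * g2 / (math.sqrt(e2) + eps)
    return x1, x2

x1, x2 = rmsprop_2d(5.0, -2.0)   # the iterates approach the minimizer (0, 0)
```

Note how the per-coordinate scaling lets the flat <math>x_{1}</math> direction and the steep <math>x_{2}</math> direction take comparably sized steps despite their very different gradient magnitudes.<br />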
<br />
== Applications and Discussion ==<br />
[[File:2 - pD0hWu5 - Beale's function.gif|thumb|Visualizing Optimization algorithm comparing convergence with similar algorithm<sup>[1]</sup>]]<br />
The applications of RMSProp concentrate on optimizing complex functions, such as neural networks, and non-convex optimization problems with an adaptive learning rate; it is widely used in stochastic problems. The RMSProp optimizer restricts oscillations in the vertical direction. Therefore, we can increase the learning rate, and the algorithm can take larger steps in the horizontal direction, converging faster, similar to the gradient descent algorithm combined with the momentum method.<br />
<br />
In the first visualization scheme, the gradient-based optimization algorithms show different convergence rates. As the visualizations show, algorithms without scaling based on gradient information have a hard time breaking the symmetry and converging rapidly. RMSProp has a relatively higher convergence rate than SGD, Momentum, and NAG, beginning its descent faster, but it is slower than Ada-grad and Ada-delta, which are related adaptive algorithms. In conclusion, when handling problems with large-scale gradients, algorithms that scale gradients/step sizes, like Ada-delta, Ada-grad, and RMSProp, perform better and with high stability.<br />
<br />
Ada-grad is an adaptive learning rate algorithm that looks a lot like RMSProp. Ada-grad adds element-wise scaling of the gradient based on the historical sum of squares in each dimension. This means that we keep a running sum of squared gradients, and then we adapt the learning rate by dividing it by that sum. Considering that the concepts in RMSProp are widely used in other machine learning algorithms, it has high potential to be coupled with other methods, such as momentum.<br />
<br />
== Conclusion==<br />
RMSProp, root mean square propagation, is an optimization machine learning algorithm for training Artificial Neural Networks (ANNs) with an adaptive learning rate, derived from the concepts of gradient descent and RProp. By combining the efficiency of mini-batches with averaging of gradients over successive mini-batches, RMSProp can reach a faster convergence rate than the original optimizers, though a slower one than more advanced optimizers such as Adam. Given the high performance of RMSProp and the possibility of combining it with other algorithms, harder problems may be better described and solved with it in the future.<br />
<br />
==Reference==<br />
<br />
1. A. Radford, "[https://imgur.com/a/Hqolp#NKsFHJb Visualizing Optimization Algos]" (open source). <br />
<br />
2. R. Yamashita, M Nishio and R KGian, "Convolutional neural networks: an overview and application in radiology", pp. 9:611–629, 2018.[[File:3 - NKsFHJb - Saddle Point.gif|thumb|Visualizing Optimization algorithm comparing convergence with similar algorithm<sup>[1]</sup>]]3. V. Bushave, "Understanding RMSprop — faster neural network learning", 2018.<br />
<br />
4. V. Bushave, "How do we ‘train’ neural networks ?", 2017.<br />
<br />
5. S. Ruder, "An overview of gradient descent optimization algorithms" ,2016.<br />
<br />
6. R. Maksutov, "Deep study of a not very deep neural network. Part 3a: Optimizers overview", 2018.<br />
<br />
7. M. Riedmiller, H Braun, "A Direct Adaptive Method for Faster Back-propagation Learning: The RPROP Algorithm", pp.586-591, 1993.<br />
<br />
8. D. Garcia-Gasulla, "An Out-of-the-box Full-network Embedding for Convolutional Neural Networks" pp.168-175, 2018.<br />
<br />
9. [https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf Geoffrey Hinton, "Coursera Neural Networks for Machine Learning lecture 6", 2018.] <br />
<br />
10. [https://www.programcreek.com/python/example/104283/keras.optimizers.RMSprop Python keras.optimizers.RMSprop() Examples.]<br />
<br />
11. [https://d2l.ai/chapter_optimization/rmsprop.html RMSProp Algorithm Implementation Example.]<br />
<br />
12. S.De, A. Mukherjee, and E. Ullah, "Convergence guarantees for RMSProp and Adam in non-convex optimization and and empirical comparison to Nesterov acceleration", conference paper at ICLR, 2019.</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=2020_Cornell_Optimization_Open_Textbook_Feedback&diff=27152020 Cornell Optimization Open Textbook Feedback2020-12-21T10:55:55Z<p>Wc593: /* Adaptive robust optimization */</p>
<hr />
<div>==[[Computational complexity]]==<br />
<br />
* Numerical Example<br />
*# Finding subsets of a set is NOT O(2<sup>n</sup>).<br />
* Application<br />
*# The applications mentioned need to be discussed further.<br />
<br />
==[[Network flow problem]]==<br />
<br />
* Real Life Applications<br />
*# There is NO need to include code. Simply mention how the problem was coded along with details on the LP solver used.<br />
<br />
==[[Interior-point method for LP]]==<br />
<br />
* Introduction<br />
*# Please type “minimize” and “subject to” in formal optimization problem form throughout the whole page.<br />
* A section to discuss and/or illustrate the applications<br />
*# Please type optimization problem in the formal form.<br />
<br />
==[[Optimization with absolute values]]==<br />
<br />
* An introduction of the topic<br />
*# Add few sentences on how absolute values convert optimization problem into a nonlinear optimization problem<br />
* Applications<br />
*# Inline equations at the beginning of this section are not formatted properly. Please fix the notation for expected return throughout the section.<br />
<br />
==[[Matrix game (LP for game theory)]]==<br />
<br />
* Theory and Algorithmic Discussion<br />
*# aij are not defined in this section.<br />
<br />
==[[Quasi-Newton methods]]==<br />
<br />
* Theory and Algorithm<br />
*# Please ensure that few spaces are kept between the equations and equation numbers.<br />
<br />
==[[Eight step procedures]]==<br />
<br />
* Numerical Example<br />
*# Data for the example Knapsack problem (b,w) are missing.<br />
*# How to arrive at optimal solutions is missing.<br />
<br />
==[[Set covering problem]]==<br />
<br />
* Numerical Example<br />
*# Please leave some space between equation and equation number.<br />
<br />
==[[Quadratic assignment problem]]==<br />
<br />
* Theory, methodology, and/or algorithmic discussions<br />
*# Discuss dynamic programming and cutting plane solution techniques briefly.<br />
<br />
==[[Newsvendor problem]]==<br />
<br />
* Formulation<br />
*# A math programming formulation of the optimization problem with objective function and constraints is expected for the formulation. Please add any variant of the newsvendor problem along with some operational constraints.<br />
*# A mathematical presentation of the solution technique is expected. Please consider any distribution for R and present a solution technique for that specific problem. <br />
<br />
==[[Mixed-integer cuts]]==<br />
<br />
* Applications<br />
*# MILP and their solution techniques involving cuts are extremely versatile. Yet, only two sentences are added to describe their applications. Please discuss their applications, preferably real-world applications, in brief. Example Wikis provided on the website could be used as a reference to do so.<br />
<br />
==[[Heuristic algorithms]]==<br />
<br />
* Methodology<br />
*# Greedy method to solve minimum spanning tree seems to be missing.<br />
<br />
==[[Branch and cut]]==<br />
<br />
* Methodology & Algorithm<br />
*# Equation in most infeasible branching section is not properly formatted.<br />
*# Step 2 appears abruptly in the algorithm and does not explain much. Please add more information regarding the same.<br />
*# Step 5 contains latex code terms that are not properly formatted.<br />
<br />
== [[Mixed-integer linear fractional programming (MILFP)]] ==<br />
<br />
* Application and Modeling for Numerical Examples<br />
*# Please check the index notation in Mass Balance Constraint<br />
<br />
==[[Fuzzy programming]]==<br />
<br />
* Applications<br />
*# Applications of fuzzy programming are quite versatile. Please discuss few of the mentioned applications briefly. The provided example Wikis can be used as a reference to write this section.<br />
<br />
== [[Stochastic gradient descent]] ==<br />
* Numerical Example<br />
*# Amount of whitespace can be reduced by changing orientation of example dataset by converting it into a table containing 3 rows and 6 columns.<br />
<br />
==[[RMSProp]]==<br />
<br />
* Introduction<br />
*# References at the end of the sentence should be placed after the period.<br />
* Theory and Methodology<br />
*# Please check grammar in this section.<br />
* Applications and Discussion<br />
*# The applications section does not contain any discussion on applications. Please mention a few applications of the widely used RMSprop and discuss them briefly.<br />
<br />
==[[Adam]]==<br />
<br />
* Background<br />
*# References at the end of the sentence should be placed after the period.</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=2020_Cornell_Optimization_Open_Textbook_Feedback&diff=27142020 Cornell Optimization Open Textbook Feedback2020-12-21T10:54:23Z<p>Wc593: /* Branch and cut */</p>
<hr />
<div>==[[Computational complexity]]==<br />
<br />
* Numerical Example<br />
*# Finding subsets of a set is NOT O(2<sup>n</sup>).<br />
* Application<br />
*# The applications mentioned need to be discussed further.<br />
<br />
==[[Network flow problem]]==<br />
<br />
* Real Life Applications<br />
*# There is NO need to include code. Simply mention how the problem was coded along with details on the LP solver used.<br />
<br />
==[[Interior-point method for LP]]==<br />
<br />
* Introduction<br />
*# Please type “minimize” and “subject to” in formal optimization problem form throughout the whole page.<br />
* A section to discuss and/or illustrate the applications<br />
*# Please type optimization problem in the formal form.<br />
<br />
==[[Optimization with absolute values]]==<br />
<br />
* An introduction of the topic<br />
*# Add few sentences on how absolute values convert optimization problem into a nonlinear optimization problem<br />
* Applications<br />
*# Inline equations at the beginning of this section are not formatted properly. Please fix the notation for expected return throughout the section.<br />
<br />
==[[Matrix game (LP for game theory)]]==<br />
<br />
* Theory and Algorithmic Discussion<br />
*# aij are not defined in this section.<br />
<br />
==[[Quasi-Newton methods]]==<br />
<br />
* Theory and Algorithm<br />
*# Please ensure that few spaces are kept between the equations and equation numbers.<br />
<br />
==[[Eight step procedures]]==<br />
<br />
* Numerical Example<br />
*# Data for the example Knapsack problem (b,w) are missing.<br />
*# How to arrive at optimal solutions is missing.<br />
<br />
==[[Set covering problem]]==<br />
<br />
* Numerical Example<br />
*# Please leave some space between equation and equation number.<br />
<br />
==[[Quadratic assignment problem]]==<br />
<br />
* Theory, methodology, and/or algorithmic discussions<br />
*# Discuss dynamic programming and cutting plane solution techniques briefly.<br />
<br />
==[[Newsvendor problem]]==<br />
<br />
* Formulation<br />
*# A math programming formulation of the optimization problem with objective function and constraints is expected for the formulation. Please add any variant of the newsvendor problem along with some operational constraints.<br />
*# A mathematical presentation of the solution technique is expected. Please consider any distribution for R and present a solution technique for that specific problem. <br />
<br />
==[[Mixed-integer cuts]]==<br />
<br />
* Applications<br />
*# MILP and their solution techniques involving cuts are extremely versatile. Yet, only two sentences are added to describe their applications. Please discuss their applications, preferably real-world applications, in brief. Example Wikis provided on the website could be used as a reference to do so.<br />
<br />
==[[Heuristic algorithms]]==<br />
<br />
* Methodology<br />
*# Greedy method to solve minimum spanning tree seems to be missing.<br />
<br />
==[[Branch and cut]]==<br />
<br />
* Methodology & Algorithm<br />
*# Equation in most infeasible branching section is not properly formatted.<br />
*# Step 2 appears abruptly in the algorithm and does not explain much. Please add more information regarding the same.<br />
*# Step 5 contains latex code terms that are not properly formatted.<br />
<br />
== [[Mixed-integer linear fractional programming (MILFP)]] ==<br />
<br />
* Application and Modeling for Numerical Examples<br />
*# Please check the index notation in Mass Balance Constraint<br />
<br />
==[[Fuzzy programming]]==<br />
<br />
* Applications<br />
*# Applications of fuzzy programming are quite versatile. Please discuss few of the mentioned applications briefly. The provided example Wikis can be used as a reference to write this section.<br />
<br />
==[[Adaptive robust optimization]]==<br />
<br />
* Problem Formulation<br />
*# Please check typos such as "Let ''u'' bee a vector".<br />
*# The abbreviation KKT is not previously defined.<br />
<br />
== [[Stochastic gradient descent]] ==<br />
* Numerical Example<br />
*# Amount of whitespace can be reduced by changing orientation of example dataset by converting it into a table containing 3 rows and 6 columns.<br />
<br />
==[[RMSProp]]==<br />
<br />
* Introduction<br />
*# References at the end of the sentence should be placed after the period.<br />
* Theory and Methodology<br />
*# Please check grammar in this section.<br />
* Applications and Discussion<br />
*# The applications section does not contain any discussion on applications. Please mention a few applications of the widely used RMSprop and discuss them briefly.<br />
<br />
==[[Adam]]==<br />
<br />
* Background<br />
*# References at the end of the sentence should be placed after the period.</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=2020_Cornell_Optimization_Open_Textbook_Feedback&diff=27132020 Cornell Optimization Open Textbook Feedback2020-12-21T10:53:27Z<p>Wc593: /* Branch and cut */</p>
<hr />
<div>==[[Computational complexity]]==<br />
<br />
* Numerical Example<br />
*# Finding subsets of a set is NOT O(2<sup>n</sup>).<br />
* Application<br />
*# The applications mentioned need to be discussed further.<br />
<br />
==[[Network flow problem]]==<br />
<br />
* Real Life Applications<br />
*# There is NO need to include code. Simply mention how the problem was coded along with details on the LP solver used.<br />
<br />
==[[Interior-point method for LP]]==<br />
<br />
* Introduction<br />
*# Please type “minimize” and “subject to” in formal optimization problem form throughout the whole page.<br />
* A section to discuss and/or illustrate the applications<br />
*# Please type optimization problem in the formal form.<br />
<br />
==[[Optimization with absolute values]]==<br />
<br />
* An introduction of the topic<br />
*# Add a few sentences on how absolute values convert an optimization problem into a nonlinear optimization problem<br />
* Applications<br />
*# Inline equations at the beginning of this section are not formatted properly. Please fix the notation for expected return throughout the section.<br />
<br />
==[[Matrix game (LP for game theory)]]==<br />
<br />
* Theory and Algorithmic Discussion<br />
*# aij are not defined in this section.<br />
<br />
==[[Quasi-Newton methods]]==<br />
<br />
* Theory and Algorithm<br />
*# Please ensure that few spaces are kept between the equations and equation numbers.<br />
<br />
==[[Eight step procedures]]==<br />
<br />
* Numerical Example<br />
*# Data for the example Knapsack problem (b,w) are missing.<br />
*# How to arrive at optimal solutions is missing.<br />
<br />
==[[Set covering problem]]==<br />
<br />
* Numerical Example<br />
*# Please leave some space between equation and equation number.<br />
<br />
==[[Quadratic assignment problem]]==<br />
<br />
* Theory, methodology, and/or algorithmic discussions<br />
*# Discuss dynamic programming and cutting plane solution techniques briefly.<br />
<br />
==[[Newsvendor problem]]==<br />
<br />
* Formulation<br />
*# A math programming formulation of the optimization problem with objective function and constraints is expected for the formulation. Please add any variant of the newsvendor problem along with some operational constraints.<br />
*# A mathematical presentation of the solution technique is expected. Please consider any distribution for R and present a solution technique for that specific problem. <br />
<br />
==[[Mixed-integer cuts]]==<br />
<br />
* Applications<br />
*# MILP and their solution techniques involving cuts are extremely versatile. Yet, only two sentences are added to describe their applications. Please discuss their applications, preferably real-world applications, in brief. Example Wikis provided on the website could be used as a reference to do so.<br />
<br />
==[[Heuristic algorithms]]==<br />
<br />
* Methodology<br />
*# Greedy method to solve minimum spanning tree seems to be missing.<br />
<br />
==[[Branch and cut]]==<br />
<br />
* Methodology & Algorithm<br />
*# Equation in most infeasible branching section is not properly formatted.<br />
*# Step 2 appears abruptly in the algorithm and does not explain much. Please add more information regarding the same.<br />
*# Step 5 contains latex code terms that are not properly formatted. Please fix the same.<br />
<br />
== [[Mixed-integer linear fractional programming (MILFP)]] ==<br />
<br />
* Application and Modeling for Numerical Examples<br />
*# Please check the index notation in Mass Balance Constraint<br />
<br />
==[[Fuzzy programming]]==<br />
<br />
* Applications<br />
*# Applications of fuzzy programming are quite versatile. Please discuss a few of the mentioned applications briefly. The provided example Wikis can be used as a reference to write this section.<br />
<br />
==[[Adaptive robust optimization]]==<br />
<br />
* Problem Formulation<br />
*# Please check typos such as "Let ''u'' bee a vector".<br />
*# The abbreviation KKT is not previously defined.<br />
<br />
== [[Stochastic gradient descent]] ==<br />
* Numerical Example<br />
*# Amount of whitespace can be reduced by changing orientation of example dataset by converting it into a table containing 3 rows and 6 columns.<br />
<br />
==[[RMSProp]]==<br />
<br />
* Introduction<br />
*# References at the end of the sentence should be placed after the period.<br />
* Theory and Methodology<br />
*# Please check grammar in this section.<br />
* Applications and Discussion<br />
*# The applications section does not contain any discussion on applications. Please mention a few applications of the widely used RMSprop and discuss them briefly.<br />
<br />
==[[Adam]]==<br />
<br />
* Background<br />
*# References at the end of the sentence should be placed after the period.</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Branch_and_cut&diff=2712Branch and cut2020-12-21T10:53:15Z<p>Wc593: /* Strong Branching: */</p>
<hr />
<div>Author: Lindsay Siegmundt, Peter Haddad, Chris Babbington, Jon Boisvert, Haris Shaikh (SysEn 6800 Fall 2020)<br />
<br />
Steward: Wei-Han Chen, Fengqi You<br />
<br />
== Introduction ==<br />
The Branch and Cut methodology was introduced in the 1990s as a way to solve mixed-integer linear programs (Karamanov, Miroslav)<ref>Karamanov, Miroslav. “Branch and Cut: An Empirical Study.” ''Carnegie Mellon University'' , Sept. 2006, https://www.cmu.edu/tepper/programs/phd/program/assets/dissertations/2006-operations-research-karamanov-dissertation.pdf.</ref>. The approach combines two well-known optimization methodologies, Branch and Bound and Cutting Planes. Using these two tools together allows Branch and Cut to find an optimal solution by relaxing the problem to produce an upper bound. Relaxing the problem simplifies it so that it can be solved more easily. The upper bound represents the highest value the objective can take while remaining feasible, and the optimal solution is found when the objective equals the upper bound (Luedtke, Jim)<ref>Luedtke, Jim. “The Branch-and-Cut Algorithm for Solving Mixed-Integer Optimization Problems.” ''Institute for Mathematicians and Its Applications'', 10 Aug. 2016, https://www.ima.umn.edu/materials/2015-2016/ND8.1-12.16/25397/Luedtke-mip-bnc-forms.pdf.</ref>. This methodology is significant because it combines two common tools so that each contributes to finding the optimal solution, and the same idea of combining the critical components of different methodologies could be used to reach optimality in a simpler and more direct manner. <br />
<br />
== Methodology & Algorithm ==<br />
<br />
=== Methodology ===<br />
{| class="wikitable"<br />
|+Abbreviation Details<br />
!Acronym<br />
!Expansion<br />
|-<br />
|LP<br />
|Linear Programming<br />
|-<br />
|B&B<br />
|Branch and Bound<br />
|}<br />
<br />
==== Most Infeasible Branching: ====<br />
Most infeasible branching is a very popular method that picks the variable whose fractional part is closest to <math>0.5</math>, i.e., the variable maximizing the score <math>s_i = 0.5 - \left| \hat{x}_i - \lfloor \hat{x}_i \rfloor - 0.5 \right|</math>, where <math>\hat{x}_i</math> is the value of variable <math>i</math> in the LP relaxation<ref>Branching rules revisited. Tobias Achterberg, Thorsten Koch, Alexander Martin. https://www-m9.ma.tum.de/downloads/felix-klein/20B/AchterbergKochMartin-BranchingRulesRevisited.pdf</ref>. In other words, most infeasible branching picks a variable for which there is the least indication of the side to which the variable should be rounded. However, the performance of this method is not any better than the rule of selecting a branching variable at random.<br />
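For illustration, the most-infeasible-branching score can be computed directly from a fractional LP solution. The following is a minimal sketch; the function name and example values are our own, not from the cited paper:<br />

```python
import math

def most_infeasible_index(x):
    """Return the index of the variable whose fractional part is
    closest to 0.5, using the score s_i = 0.5 - |frac(x_i) - 0.5|."""
    scores = [0.5 - abs((xi - math.floor(xi)) - 0.5) for xi in x]
    return max(range(len(x)), key=scores.__getitem__)
```

For the fractional point (1.88, 1.72), the scores are 0.12 and 0.28, so the second variable would be chosen; integer-valued components always score 0 and are never selected.<br />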
<br />
==== '''Strong Branching:''' ====<br />
For each fractional variable, strong branching tests the dual-bound increase by computing the LP relaxations that result from branching on that variable. The variable that leads to the largest increase is selected as the branching variable for the current node. Despite its obvious simplicity, strong branching is so far the most powerful branching technique in terms of the number of nodes needed in the B&B tree; this effectiveness, however, can be achieved only at a high computational cost.<ref>A Branch-and-Cut Algorithm for Mixed Integer Bilevel Linear Optimization Problems and Its Implementation<nowiki/>https://coral.ise.lehigh.edu/~ted/files/papers/MIBLP16.pdf</ref><br />
<br />
==== '''Pseudo Cost:''' ====<br />
[[File:Image.png|thumb|Pure pseudo-cost branching]]<br />
<br />
Another way to approximate the effect of branching is the pseudo-cost method. The pseudo-cost of a variable is an estimate of the per-unit change in the objective function from rounding the value of that variable up or down. For each fractional variable, the one with the largest estimated LP objective gain is chosen<ref>Advances in Mixed Integer Programming http://scip.zib.de/download/slides/SCIP-branching.ppt</ref>. <br />
==='''Algorithm'''===<br />
Branch and Cut is a variation of the Branch and Bound algorithm. Branch and Cut incorporates Gomory cuts, which tighten the search space of the given problem. The standard Simplex Algorithm is used to solve the LP relaxation of each subproblem.<br />
<br />
<br />
<math>\min\ c^T x
</math><br />
<br />
<math>\text{s.t.}\ Ax \leq b
</math><br />
<br />
<math>x \geq 0
</math><br />
<br />
<math>x_i \in \mathbb{Z},\ i = 1,2,3,\ldots,n
</math><br />
<br />
Above is a mixed-integer linear programming problem, where <math>x</math> and <math>c</math> are <math>n</math>-vectors. The integer variables can also be restricted to 0 or 1, which allows binary variables. The above problem can be denoted as <math>LP_n </math><br />
<br />
Below is an Algorithm to utilize the Branch and Cut algorithm with Gomery cuts and Partitioning:<br />
<br />
'''Step 0:'''<br />
Upper Bound = ∞<br />
Lower Bound = -∞<br />
'''Step 1. Initialize:'''<br />
<br />
Set the first node as <math>LP_0</math> while setting the active nodes set as <math>L</math>. The set can be accessed via <math>LP_n </math><br />
<br />
'''Step 2. Terminate:'''<br />
<br />
If <math>L</math> is empty, stop: if an incumbent solution <math>Z</math> exists, it is optimal; otherwise the problem is infeasible.<br />
<br />
'''Step 3. Iterate through list L:'''<br />
<br />
While <math>L</math> is not empty (<math>i</math> is the index of the current problem in the list <math>L</math>), repeat the following:<br />
<br />
'''Step 3.1. Convert to a Relaxation:''' relax the integrality constraints of problem <math>LP_i</math>.<br />
<br />
'''Step 3.2. Solve:'''<br />
<br />
Solve the relaxed problem to obtain the solution <math>x^i</math> with objective value <math>Z^i</math>.<br />
<br />
'''Step 3.3.'''<br />
 If the relaxation is infeasible:<br />
  Return to step 3.<br />
 else:<br />
  Continue with solution <math>Z^i</math>.<br />
'''Step 4. Cutting Planes:'''<br />
If a cutting plane is found:<br />
then add to the Linear Relaxation problem (as a constraint) and return to step 3.2<br />
Else:<br />
Continue.<br />
'''Step 5. Pruning and Fathoming:'''<br />
<br />
(a) If <math>Z^i \geq Z</math>, then go to step 3.<br />
<br />
(b) If <math>Z^i < Z</math> and <math>x^i</math> is integral feasible, update the incumbent <math>Z = Z^i</math>, remove from <math>L</math> every problem <math>l</math> with <math>Z^l \geq Z</math>, and go to step 3.<br />
'''Step 6. Partition'''<br />
<br />
Let <math>\{D^l_j\}_{j=1}^{k}</math> be a partition of the constraint set <math>D</math> of problem <math>LP_l</math>. Add the problems <math>\{LP^l_j\}_{j=1}^{k}</math> to <math>L</math>, where <math>LP^l_j</math> is <math>LP_l</math> with its feasible region restricted to <math>D^l_j</math>, and where <math>Z_{lj}</math> for <math>j=1,\ldots,k</math> is set to the value of <math>Z^l</math> for the parent problem <math>l</math>. Go to step 3.<ref name=":0">Benders, J. F. (Sept. 1962), "Partitioning procedures for solving mixed-variables programming problems", Numerische Mathematik 4(3): 238–252.</ref><br />
<br />
==Numerical Example==<br />
First, list out the MILP:<br />
<br />
<math>min \ z=-4x_1-7x_2</math><br />
<br />
<math>6x_1 + x_2 \leq13</math><br />
<br />
<math>-x_1+4x_2\leq5</math><br />
<br />
<math>x_1,x_2\geq0</math><br />
<br />
Solution to original LP<br />
<br />
<math>z =-19.56, x_1=1.88, x_2=1.72 </math><br />
<br />
<br />
Branch on x<sub>1</sub> to generate sub-problems<br />
<br />
<math>min \ z=-4x_1-7x_2</math><br />
<br />
<math>6x_1 + x_2 \leq13</math><br />
<br />
<math>-x_1+4x_2\leq5</math><br />
<br />
<math>x_1\geq2</math><br />
<br />
<math>x_1,x_2\geq0</math><br />
<br />
Solution to first branch sub-problem<br />
<br />
<math>z =-15, x_1=2, x_2=1</math><br />
<br />
<math>min \ z=-4x_1-7x_2</math><br />
<br />
<math>6x_1 + x_2 \leq13</math><br />
<br />
<math>-x_1+4x_2\leq5</math><br />
<br />
<math>x_1\leq1</math><br />
<br />
<math>x_1,x_2\geq0</math><br />
<br />
Solution to second branch sub-problem<br />
<br />
<math>z =-14.5, x_1=1, x_2=1.5</math><br />
<br />
Adding a cut<br />
<br />
<math>min \ z=-4x_1-7x_2</math><br />
<br />
<math>6x_1 + x_2 \leq13</math><br />
<br />
<math>-x_1+4x_2\leq5</math><br />
<br />
<math>2x_1+x_2\leq 3</math><br />
<br />
<math>x_1\leq1</math><br />
<br />
<math>x_1,x_2\geq0</math><br />
<br />
Solution to cut LP<br />
<br />
<math>z=-13.222,\ x_1=0.778,\ x_2=1.444</math><br />
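The LP relaxations in this example have only two variables, so they can be checked by brute force: every optimal basic solution lies at a vertex where two constraint boundaries meet. The sketch below is our own illustration of that check, not part of the Branch and Cut algorithm itself:<br />

```python
from itertools import combinations

def solve_lp_2d(c, constraints):
    """Minimize c[0]*x1 + c[1]*x2 subject to a1*x1 + a2*x2 <= b for each
    ((a1, a2), b) in constraints, and x1, x2 >= 0, by enumerating the
    vertices where two constraint boundaries intersect."""
    # treat the non-negativity bounds x1 = 0 and x2 = 0 as boundary lines too
    lines = list(constraints) + [((1.0, 0.0), 0.0), ((0.0, 1.0), 0.0)]
    best = None
    for ((a11, a12), b1), ((a21, a22), b2) in combinations(lines, 2):
        det = a11 * a22 - a12 * a21
        if abs(det) < 1e-12:
            continue  # parallel boundary lines never intersect
        # Cramer's rule for the 2x2 system of boundary equations
        x1 = (b1 * a22 - b2 * a12) / det
        x2 = (a11 * b2 - a21 * b1) / det
        if x1 < -1e-9 or x2 < -1e-9:
            continue  # violates non-negativity
        if any(a1 * x1 + a2 * x2 > b + 1e-9 for (a1, a2), b in constraints):
            continue  # violates some inequality constraint
        z = c[0] * x1 + c[1] * x2
        if best is None or z < best[0]:
            best = (z, x1, x2)
    return best  # (objective value, x1, x2)
```

For the original relaxation, `solve_lp_2d((-4, -7), [((6, 1), 13), ((-1, 4), 5)])` returns z = -19.56 at (1.88, 1.72), matching the solution above; adding `((-1, 0), -2)` to encode the branch x1 &ge; 2 reproduces z = -15 at (2, 1).<br />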
<br />
==Application==<br />
Several of the Branch and Cut applications are described below in more detail and how they can be used. These applications serve as methods in which Branch and Cut can be used to optimize various problems efficiently.<br />
<br />
=== '''Combinatorial Optimization''' ===<br />
Combinatorial Optimization is a natural application for Branch and Cut. This style of optimization utilizes finite known sets, and information about those sets, to optimize the solution. Its original applications were maximizing flow and problems in the transportation industry (Maltby and Ross). Combinatorial optimization has since taken on new areas where it is used often: it is now an imperative component in studying artificial intelligence and machine learning algorithms that optimize solutions. The finite structures that combinatorial optimization tends to utilize include graphs, partially ordered sets, and structures that define linear independence called matroids.<ref>[https://brilliant.org/wiki/combinatorial-optimization/ Maltby, Henry, and Eli Ross. “Combinatorial Optimization.” ''Brilliant Math & Science Wiki'', https://brilliant.org/wiki/combinatorial-optimization/.]</ref><br />
<br />
=== '''Bender’s Decomposition''' ===<br />
Bender’s Decomposition is another Branch and Cut application, widely utilized in stochastic programming. In Bender’s Decomposition, the initial problem is divided into two distinct subsets so that each can be solved more easily than the original instance (Benders). The first subproblem is solved for the first set of variables, and the second subproblem is then solved for the remaining variables given that first solution. Solving the second subproblem determines whether the first solution is infeasible (Benders). Bender’s cuts can be added to constrain the problem until a feasible solution is found.<ref name=":0" /><br />
<br />
=== '''Large-Scale Symmetric Traveling Salesman Problem''' ===<br />
The Large-Scale Symmetric Traveling Salesman Problem asks for the shortest route that visits each city exactly once and returns to the original city at the end. On a large scale this style of problem must be broken down into subsets or nodes (SIAM). By constraining the problem using the methods of combinatorial optimization, the Traveling Salesman Problem can be viewed in terms of partially ordered sets. Doing this on a large scale with finitely many cities makes it possible to optimize the shortest path taken while ensuring each city is visited only once.<ref>Society for Industrial and Applied Mathematics. “SIAM Rev.” ''SIAM Review'', 18 July 2006, https://epubs.siam.org/doi/10.1137/1033004</ref><br />
<br />
=== '''Submodular Function''' ===<br />
A submodular function is another type of function used throughout artificial intelligence as well as machine learning. Its key property is diminishing returns: as more inputs are added, the marginal value gained from each additional input decreases. This is a great feature for optimization in the cases stated above, because the inputs are continually growing, which allows machine learning and artificial intelligence to continue to improve based on these algorithms (Tschiatschek, Iyer, and Bilmes)<ref>S. Tschiatschek, R. Iyer, H. Wei and J. Bilmes, Learning Mixtures of Submodular Functions for Image Collection Summarization, NIPS-2014.</ref>. As new inputs are fed to the system, the system will learn more and more, ensuring that the solution it produces is optimized.<ref>A. Krause and C. Guestrin, Beyond Convexity: Submodularity in Machine Learning, Tutorial at ICML-2008</ref><br />
<br />
==Conclusion==<br />
Branch and Cut is an optimization algorithm used to solve integer linear programs. It combines two other optimization algorithms, Branch and Bound and Cutting Planes, using the results of each method to arrive at the optimal solution. Three different branching strategies are used within the method: most infeasible branching, strong branching, and pseudo-cost branching. Furthermore, Branch and Cut can be utilized in multiple scenarios, such as submodular functions, the large-scale symmetric traveling salesman problem, Bender's decomposition, and combinatorial optimization, which increases the impact of the methodology. <br />
<br />
==Reference==<br />
<references /></div>Wc593https://optimization.cbe.cornell.edu/index.php?title=2020_Cornell_Optimization_Open_Textbook_Feedback&diff=27112020 Cornell Optimization Open Textbook Feedback2020-12-21T10:51:38Z<p>Wc593: /* Heuristic algorithms */</p>
<hr />
<div>==[[Computational complexity]]==<br />
<br />
* Numerical Example<br />
*# Finding subsets of a set is NOT O(2<sup>n</sup>).<br />
* Application<br />
*# The applications mentioned need to be discussed further.<br />
<br />
==[[Network flow problem]]==<br />
<br />
* Real Life Applications<br />
*# There is NO need to include code. Simply mention how the problem was coded along with details on the LP solver used.<br />
<br />
==[[Interior-point method for LP]]==<br />
<br />
* Introduction<br />
*# Please type “minimize” and “subject to” in formal optimization problem form throughout the whole page.<br />
* A section to discuss and/or illustrate the applications<br />
*# Please type optimization problem in the formal form.<br />
<br />
==[[Optimization with absolute values]]==<br />
<br />
* An introduction of the topic<br />
*# Add a few sentences on how absolute values convert an optimization problem into a nonlinear optimization problem<br />
* Applications<br />
*# Inline equations at the beginning of this section are not formatted properly. Please fix the notation for expected return throughout the section.<br />
<br />
==[[Matrix game (LP for game theory)]]==<br />
<br />
* Theory and Algorithmic Discussion<br />
*# aij are not defined in this section.<br />
<br />
==[[Quasi-Newton methods]]==<br />
<br />
* Theory and Algorithm<br />
*# Please ensure that few spaces are kept between the equations and equation numbers.<br />
<br />
==[[Eight step procedures]]==<br />
<br />
* Numerical Example<br />
*# Data for the example Knapsack problem (b,w) are missing.<br />
*# How to arrive at optimal solutions is missing.<br />
<br />
==[[Set covering problem]]==<br />
<br />
* Numerical Example<br />
*# Please leave some space between equation and equation number.<br />
<br />
==[[Quadratic assignment problem]]==<br />
<br />
* Theory, methodology, and/or algorithmic discussions<br />
*# Discuss dynamic programming and cutting plane solution techniques briefly.<br />
<br />
==[[Newsvendor problem]]==<br />
<br />
* Formulation<br />
*# A math programming formulation of the optimization problem with objective function and constraints is expected for the formulation. Please add any variant of the newsvendor problem along with some operational constraints.<br />
*# A mathematical presentation of the solution technique is expected. Please consider any distribution for R and present a solution technique for that specific problem. <br />
<br />
==[[Mixed-integer cuts]]==<br />
<br />
* Applications<br />
*# MILP and their solution techniques involving cuts are extremely versatile. Yet, only two sentences are added to describe their applications. Please discuss their applications, preferably real-world applications, in brief. Example Wikis provided on the website could be used as a reference to do so.<br />
<br />
==[[Heuristic algorithms]]==<br />
<br />
* Methodology<br />
*# Greedy method to solve minimum spanning tree seems to be missing.<br />
<br />
==[[Branch and cut]]==<br />
<br />
* Methodology & Algorithm<br />
*# Equation in most infeasible branching section is not properly formatted.<br />
*# Step 2 appears abruptly in the algorithm and does not explain much. Please add more information regarding the same.<br />
*# Step 5 contains latex code terms that are not properly formatted. Please fix the same.<br />
*# Fix typos: e.g., repeated “for the current”.<br />
<br />
== [[Mixed-integer linear fractional programming (MILFP)]] ==<br />
<br />
* Application and Modeling for Numerical Examples<br />
*# Please check the index notation in Mass Balance Constraint<br />
<br />
==[[Fuzzy programming]]==<br />
<br />
* Applications<br />
*# Applications of fuzzy programming are quite versatile. Please discuss a few of the mentioned applications briefly. The provided example Wikis can be used as a reference to write this section.<br />
<br />
==[[Adaptive robust optimization]]==<br />
<br />
* Problem Formulation<br />
*# Please check typos such as "Let ''u'' bee a vector".<br />
*# The abbreviation KKT is not previously defined.<br />
<br />
== [[Stochastic gradient descent]] ==<br />
* Numerical Example<br />
*# Amount of whitespace can be reduced by changing orientation of example dataset by converting it into a table containing 3 rows and 6 columns.<br />
<br />
==[[RMSProp]]==<br />
<br />
* Introduction<br />
*# References at the end of the sentence should be placed after the period.<br />
* Theory and Methodology<br />
*# Please check grammar in this section.<br />
* Applications and Discussion<br />
*# The applications section does not contain any discussion on applications. Please mention a few applications of the widely used RMSprop and discuss them briefly.<br />
<br />
==[[Adam]]==<br />
<br />
* Background<br />
*# References at the end of the sentence should be placed after the period.</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Heuristic_algorithms&diff=2710Heuristic algorithms2020-12-21T10:51:16Z<p>Wc593: </p>
<hr />
<div>Author: Anmol Singh (as2753)<br />
<br />
Steward: Fengqi You, Allen Yang<br />
<br />
== Introduction ==<br />
In mathematical programming, a heuristic algorithm is a procedure that determines near-optimal solutions to an optimization problem. However, this is achieved by trading optimality, completeness, accuracy, or precision for speed.<ref> Eiselt, Horst A et al. Integer Programming and Network Models. Springer, 2011.</ref> Nevertheless, heuristics are widely used for a variety of reasons:<br />
<br />
*Problems that do not have an exact solution or for which the formulation is unknown<br />
*The exact solution of a problem is computationally intensive<br />
*Calculation of bounds on the optimal solution in branch and bound solution processes<br />
==Methodology==<br />
Optimization heuristics can be categorized into two broad classes depending on the way the solution domain is organized:<br />
<br />
===Construction methods (Greedy algorithms)===<br />
The greedy algorithm works in phases, where the algorithm makes the optimal choice at each step as it attempts to find the overall optimal way to solve the entire problem.<ref><br />
''Introduction to Algorithms'' (Cormen, Leiserson, Rivest, and Stein) 2001, Chapter 16 "Greedy Algorithms".</ref> It is a technique used to solve the famous “traveling salesman problem” where the heuristic followed is: "At each step of the journey, visit the nearest unvisited city." <br />
<br />
====Example: Scheduling Problem====<br />
You are given a set of N lecture schedules for a single day at a university. The schedule for a specific lecture is of the form (s<sub>i</sub>, f<sub>i</sub>), where s<sub>i</sub> represents the start time of that lecture and f<sub>i</sub> its finishing time. Given the list of N lecture schedules, we need to select a maximum set of lectures to be held during the day such that none of the lectures overlap, i.e. if lectures L<sub>i</sub> and L<sub>j</sub> are both included in our selection, then the start time of j ≥ finish time of i or vice versa. The optimal greedy strategy is to consider the earliest finishing time first: sort the intervals in increasing order of their finishing times and then select compatible intervals from the very beginning. <br />
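The earliest-finishing-time strategy described above can be sketched in a few lines (illustrative code, with lectures represented as (start, finish) pairs):<br />

```python
def max_lectures(schedules):
    """Greedy interval scheduling: sort lectures by finishing time and
    keep each one that starts no earlier than the last selected finish."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(schedules, key=lambda s: s[1]):
        if start >= last_finish:
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```

For example, `max_lectures([(1, 3), (2, 4), (3, 5), (0, 7)])` selects the non-overlapping lectures (1, 3) and (3, 5).<br />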
<br />
===Local Search methods===<br />
The Local Search method follows an iterative approach where we start with some initial solution, explore the neighborhood of the current solution, and then replace the current solution with a better solution.<ref> Eiselt, Horst A et al. Integer Programming and Network Models. Springer, 2011.</ref> For this method, the “traveling salesman problem” would follow the heuristic in which a solution is a cycle containing all nodes of the graph and the target is to minimize the total length of the cycle.<br />
<br />
==== Example Problem ====<br />
Suppose that the problem P is to find an optimal ordering of N jobs in a manufacturing system. A solution to this problem can be described as an N-vector of job numbers, in which the position of each job in the vector defines the order in which the job will be processed. For example, [3, 4, 1, 6, 5, 2] is a possible ordering of 6 jobs, where job 3 is processed first, followed by job 4, then job 1, and so on, finishing with job 2. Define now M as the set of moves that produce new orderings by the swapping of any two jobs. For example, [3, 1, 4, 6, 5, 2] is obtained by swapping the positions of jobs 4 and 1.<br />
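A minimal local-search sketch for such a job-ordering problem, using the swap moves M described above, follows. The position-dependent cost table is a hypothetical stand-in for the manufacturing objective, which the text leaves unspecified:<br />

```python
import itertools

def total_cost(order, cost):
    # cost[j][p]: cost of processing job j in position p (hypothetical data)
    return sum(cost[j][p] for p, j in enumerate(order))

def local_search(order, cost):
    """Repeatedly swap two jobs whenever the swap lowers the total cost,
    until no improving swap exists (a local optimum)."""
    order = list(order)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(order)), 2):
            neighbor = order[:]
            neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
            if total_cost(neighbor, cost) < total_cost(order, cost):
                order, improved = neighbor, True
    return order
```

With a cost table that penalizes placing job j away from position j, the search converges to the identity ordering from any start, since swapping any out-of-order pair strictly lowers the cost.<br />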
==Popular Heuristic Algorithms==<br />
<br />
===Genetic Algorithm===<br />
The term Genetic Algorithm was first used by John Holland.<ref>J.H. Holland (1975) ''Adaptation in Natural and Artificial Systems,'' University of Michigan Press, Ann Arbor, Michigan; re-issued by MIT Press (1992).</ref> They are designed to mimic the Darwinian theory of evolution, which states that populations of species evolve to produce organisms that are more complex and fitter for survival on Earth. Genetic algorithms operate on string structures, like biological structures, which evolve in time according to the rule of survival of the fittest by using a randomized yet structured information exchange. Thus, in every generation, a new set of strings is created, using parts of the fittest members of the old set.<ref>Optimal design of heat exchanger networks, Editor(s): Wilfried Roetzel, Xing Luo, Dezhen Chen, Design and Operation of Heat Exchangers and their Networks, Academic Press, 2020, Pages 231-317, <nowiki>ISBN 9780128178942</nowiki>, https://doi.org/10.1016/B978-0-12-817894-2.00006-6.</ref> The algorithm terminates when a satisfactory fitness level has been reached for the population or the maximum number of generations has been reached. The typical steps are<ref>Wang FS., Chen LH. (2013) Genetic Algorithms. In: Dubitzky W., Wolkenhauer O., Cho KH., Yokota H. (eds) Encyclopedia of Systems Biology. Springer, New York, NY. https://doi.org/10.1007/978-1-4419-9863-7_412 </ref>:<br />
<br />
1. Choose an initial population of candidate solutions<br />
<br />
2. Calculate the fitness, how well the solution is, of each individual<br />
<br />
3. Perform crossover on the population. The operation is to randomly choose pairs of individuals as parents and exchange some parts of the parents to generate new individuals<br />
<br />
4. Mutation is to randomly change some individuals to create other new individuals<br />
<br />
5. Evaluate the fitness of the offspring<br />
<br />
6. Select the surviving individuals<br />
<br />
7. Proceed from 3 if the termination criteria have not been reached<br />
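The steps above can be sketched on a toy problem: maximizing the number of ones in a bit string. The population size, selection scheme, and mutation rate below are our illustrative choices, not prescribed by the references:<br />

```python
import random

def genetic_algorithm(n_bits=8, pop_size=30, generations=60, seed=0):
    rng = random.Random(seed)
    fitness = sum  # fitness of a bit string = number of ones
    # step 1: initial population of random bit strings
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)       # step 2: evaluate fitness
        survivors = pop[: pop_size // 2]          # step 6: selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)     # step 3: one-point crossover
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]
            i = rng.randrange(n_bits)             # step 4: mutate one bit
            child[i] ^= 1
            children.append(child)
        pop = survivors + children                # step 7: next generation
    return max(pop, key=fitness)
```

Because the best individuals always survive to the next generation, the best fitness never decreases across generations.<br />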
<br />
===Tabu Search Algorithm===<br />
Tabu search (TS) is a heuristic algorithm created by Fred Glover<ref>Fred Glover (1986). "Future Paths for Integer Programming and Links to Artificial Intelligence". Computers and Operations Research. '''13''' (5): 533–549, https://doi.org/10.1016/0305-0548(86)90048-1</ref> that uses a gradient-descent search with memory techniques to avoid cycling while determining an optimal solution. It does so by forbidding or penalizing moves that take the solution, in the next iteration, to points in the solution space previously visited. The algorithm spends some memory to keep a tabu list of forbidden moves, which are the moves of the previous iterations or moves that might be considered unwanted. A general algorithm is as follows<ref>Optimization of Preventive Maintenance Program for Imaging Equipment in Hospitals, Editor(s): Zdravko Kravanja, Miloš Bogataj, Computer-Aided Chemical Engineering, Elsevier, Volume 38, 2016, Pages 1833-1838, ISSN 1570-7946, <nowiki>ISBN 9780444634283</nowiki>, https://doi.org/10.1016/B978-0-444-63428-3.50310-6.</ref>: <br />
<br />
1. Select an initial solution ''s''<sub>0</sub> ∈ ''S''. Initialize the tabu list ''L''<sub>0</sub> = ∅ and select a tabu list size. Set ''k'' = 0.<br />
<br />
2. Determine the neighborhood ''N''(''s<sub>k</sub>''), excluding inferior members of the tabu list ''L<sub>k</sub>''.<br />
<br />
3. Select the next move ''s<sub>k</sub>'' <sub>+ 1</sub> from ''N''(''s<sub>k</sub>'') (or from ''L<sub>k</sub>'' if it yields a better solution) and update ''L<sub>k</sub>'' <sub>+ 1</sub>.<br />
<br />
4. Stop if a termination condition is reached; otherwise, set ''k'' = ''k'' + 1 and return to 2.<br />
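These steps can be illustrated on a toy problem, minimizing f(x) = x² over the integers with a ±1 neighborhood. The objective and tabu-list length here are our own choices for the sketch:<br />

```python
from collections import deque

def tabu_search(f, start, iterations=20, tabu_size=5):
    current, best = start, start
    tabu = deque([start], maxlen=tabu_size)  # step 1: initialize tabu list
    for _ in range(iterations):
        # step 2: neighborhood, excluding tabu members
        neighbors = [n for n in (current - 1, current + 1) if n not in tabu]
        if not neighbors:
            break
        # step 3: best admissible move (may be worse than the current point)
        current = min(neighbors, key=f)
        tabu.append(current)
        if f(current) < f(best):
            best = current
    return best
```

Starting from x = 6, the search walks down to the minimizer x = 0; the tabu list then forces it to move away, but the best value found is retained.<br />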
<br />
==== Example: The Classical Vehicle Routing Problem ====<br />
''Vehicle Routing Problems'' have very important applications in distribution management and have become some of the most studied problems in the combinatorial optimization literature. These include several Tabu Search implementations that currently rank among the most effective. The ''Classical Vehicle Routing Problem'' (CVRP) is the basic variant in that class of problems. It can formally be defined as follows. Let ''G'' = (''V, A'') be a graph where ''V'' is the vertex set and ''A'' is the arc set. One of the vertices represents the ''depot'' at which a fleet of identical vehicles of capacity ''Q'' is based, and the other vertices customers that need to be serviced. With each customer vertex v<sub>i</sub> are associated a demand q<sub>i</sub> and a service time t<sub>i</sub>. With each arc (v<sub>i</sub>, v<sub>j</sub>) of ''A'' are associated a cost c<sub>ij</sub> and a travel time t<sub>ij</sub>.<ref>Glover, Fred, and Gary A Kochenberger. Handbook Of Metaheuristics. Kluwer Academic Publishers, 2003.</ref> The CVRP consists of finding a set of routes such that:<br />
<br />
1. Each route begins and ends at the depot<br />
<br />
2. Each customer is visited exactly once by exactly one route<br />
<br />
3. The total demand of the customers assigned to each route does not exceed ''Q''<br />
<br />
4. The total duration of each route (including travel and service times) does not exceed a specified value ''L''<br />
<br />
5. The total cost of the routes is minimized<br />
<br />
A feasible solution for the problem thus consists of a partition of the customers into m groups, each of total demand no larger than ''Q'', that are sequenced to yield routes (starting and ending at the depot) of duration no larger than ''L''.<br />
<br />
===Simulated Annealing Algorithm===<br />
The Simulated Annealing Algorithm was developed by Kirkpatrick et al. in 1983<ref>Kirkpatrick, S., Gelatt, C., & Vecchi, M. (1983). Optimization by Simulated Annealing. ''Science,'' ''220''(4598), 671-680. Retrieved November 25, 2020, from http://www.jstor.org/stable/1690046</ref> and is based on the analogy of ideal crystals in thermodynamics. The annealing process in metallurgy can make particles arrange themselves in the position of minimal potential energy as the temperature is slowly decreased. The Simulated Annealing algorithm mimics this mechanism, using the objective function of an optimization problem in place of the energy of a material to arrive at a solution. A general algorithm is as follows<ref>Brief review of static optimization methods, Editor(s): Stanisław Sieniutycz, Jacek Jeżowski, Energy Optimization in Process Systems and Fuel Cells (Third Edition), Elsevier, 2018, Pages 1-41, <nowiki>ISBN 9780081025574</nowiki>, https://doi.org/10.1016/B978-0-08-102557-4.00001-3.</ref>:<br />
<br />
1. Fix initial temperature (''T''<sup>0</sup>)<br />
<br />
2. Generate starting point '''x'''<sup>0</sup> (this is the best point '''''X'''''<sup>*</sup> at present)<br />
<br />
3. Generate randomly point '''''X<sup>S</sup>''''' (neighboring point)<br />
<br />
4. Accept '''''X<sup>S</sup>''''' as '''''X'''''<sup>*</sup> (currently best solution) if an acceptance criterion is met. This must be such a condition that the probability of accepting a worse point is greater than zero, particularly at higher temperatures<br />
<br />
5. If an equilibrium condition is satisfied, go to (6), otherwise jump back to (3).<br />
<br />
6. If termination conditions are not met, decrease the temperature according to a certain cooling scheme and jump back to (3). If the termination conditions are satisfied, stop the calculations, accepting the current best value '''''X'''''<sup>*</sup> as the final (‘optimal’) solution. <br />
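The steps above can be sketched in a few lines of code. The fragment below is a minimal, illustrative sketch (not taken from the cited reference): it minimizes a one-dimensional function with Gaussian neighbour moves, and the temperature schedule, move counts, and seed are arbitrary illustrative choices.

```python
import math, random

# Minimal simulated annealing sketch of steps (1)-(6); all constants are
# illustrative, and f is assumed to be a 1-D objective to be minimized.
def anneal(f, x0, T0=10.0, cooling=0.9, n_temps=60, moves_per_temp=50):
    x_best, x = x0, x0
    T = T0                                    # step (1): initial temperature
    for _ in range(n_temps):
        for _ in range(moves_per_temp):       # step (5): equilibrium loop
            x_new = x + random.gauss(0, 1)    # step (3): neighbouring point
            delta = f(x_new) - f(x)
            # step (4): Metropolis criterion -- a worse point is accepted
            # with probability exp(-delta/T), which shrinks as T decreases
            if delta < 0 or random.random() < math.exp(-delta / T):
                x = x_new
            if f(x) < f(x_best):              # track the best point X*
                x_best = x
        T *= cooling                          # step (6): cooling scheme
    return x_best

random.seed(0)
print(round(anneal(lambda x: (x - 3) ** 2, x0=0.0), 2))  # prints a value near 3.0
```

The acceptance rule in step (4) is what distinguishes annealing from plain local search: at high temperature almost every move is accepted, while at low temperature the search becomes nearly greedy.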
<br />
== Numerical Example: Knapsack Problem ==<br />
One of the most common applications of heuristic algorithms is the Knapsack Problem, in which a given set of items (each with a mass and a value) must be selected to maximize total value while keeping the total mass under a given limit. The Greedy Approximation Algorithm sorts the items by their value per unit mass and then includes the items with the highest value per unit mass for as long as space remains.<br />
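A minimal sketch of this greedy rule, for the unbounded variant used in the example below (unlimited units per item); the function and variable names are our own, not part of the original text:

```python
# Illustrative greedy sketch: sort by value per unit mass, then take as many
# units of each item as still fit in the remaining capacity.
def greedy_knapsack(items, capacity):
    """items: list of (mass, value) per unit; returns (total_value, units)."""
    order = sorted(range(len(items)),
                   key=lambda i: items[i][1] / items[i][0], reverse=True)
    total, units = 0.0, {}
    for i in order:
        mass, value = items[i]
        n = int(capacity // mass)      # how many units of item i still fit
        if n > 0:
            units[i] = n
            capacity -= n * mass
            total += n * value
    return total, units

# Products from the example table below: (weight, value) per unit, capacity 13.
print(greedy_knapsack([(7, 9), (5, 4), (4, 3), (3, 2), (1, 0.5)], 13))
# -> (13.5, {0: 1, 1: 1, 4: 1})
```

Greedy is only an approximation in general; on this particular instance it happens to match the dynamic-programming optimum of 13.5 found below.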
<br />
'''<big>Example</big>'''<br />
<br />
The following table specifies the weights and values per unit of five different products held in storage. The quantity of each product is unlimited. A plane with a weight capacity of 13 is to be used, for one trip only, to transport the products. We would like to know how many units of each product should be loaded onto the plane to maximize the value of goods shipped. <br />
{| class="wikitable"<br />
|+<br />
!<br />
Product (i) <br />
!Weight per unit (w<sub>i</sub>)<br />
!Value per unit (v<sub>i</sub>)<br />
|-<br />
|1<br />
|7<br />
|9<br />
|-<br />
|2<br />
|5<br />
|4<br />
|-<br />
|3<br />
|4<br />
|3<br />
|-<br />
|4<br />
|3<br />
|2<br />
|-<br />
|5<br />
|1<br />
|0.5<br />
|}<br />
'''<big>Solution:</big>'''<br />
<br />
'''(a) Stages:'''<br />
<br />
We view each type of product as a stage, so there are 5 stages. We can also add a sixth stage representing the endpoint after all packing decisions have been made.<br />
<br />
'''(b) States:'''<br />
<br />
We can view the remaining capacity as the state, so there are 14 states in each stage: 0, 1, 2, 3, …, 13<br />
<br />
'''(c) Possible decisions at each stage:'''<br />
<br />
Suppose we are in state s in stage n (n < 6), hence s units of capacity remain. Then the possible number of units of product n we can pack is:<br />
<br />
j = 0, 1, …[s/w<sub>n</sub>]<br />
<br />
For each such action j, we can have an arc going from state s in stage n to state s – j*w<sub>n</sub> in stage n + 1. For each arc in the graph, there is a corresponding benefit j*v<sub>n</sub>. We are trying to find a maximum-benefit path from state 13 in stage 1, to stage 6.<br />
<br />
'''(d) Optimization function:'''<br />
<br />
Let f<sub>n</sub>(s) be the value of the maximum benefit possible with items of type n or greater using total capacity at most s<br />
<br />
'''(e) Boundary conditions:'''<br />
<br />
The sixth stage should have all zeros, that is, f<sub>6</sub>(s) = 0 for each s = 0,1, … 13<br />
<br />
'''(f) Recurrence relation:'''<br />
<br />
f<sub>n</sub>(s) = max {j*v<sub>n</sub> + f<sub>n+1</sub>(s – j*w<sub>n</sub>)}, j = 0, 1, …, [s/w<sub>n</sub>]<br />
<br />
'''(g) Compute:'''<br />
<br />
The solution will not show all the computations steps. Instead, only a few cases are given below to illustrate the idea.<br />
<br />
* For stage 5, f<sub>5</sub>(s) = max<sub>j=0, 1, …[s/1]</sub> {j*0.5 + 0} = 0.5s because, given the all-zero values in stage 6, the best we can do is fill all of the remaining capacity s with units of type 5.<br />
* For stage 4, state 7,<br />
<br />
f<sub>4</sub>(7) = max<sub>j=0,1, …, [7/w<sub>4</sub>]</sub> {j*v<sub>4</sub> + f<sub>5</sub>(7 – j*w<sub>4</sub>)}<br />
<br />
= max {0 + 3.5; 2 + 2; 4 + 0.5}<br />
<br />
= 4.5<br />
<br />
Using the recurrence relation above, we get the following table:<br />
{| class="wikitable"<br />
|+<br />
!Unused Capacity<br />
s<br />
!f<sub>1</sub>(s)<br />
!Type 1 <br />
opt<br />
!f<sub>2</sub>(s)<br />
!Type 2 <br />
opt<br />
!f<sub>3</sub>(s)<br />
!Type 3 <br />
opt<br />
!f<sub>4</sub>(s)<br />
!Type 4 <br />
opt<br />
!f<sub>5</sub>(s)<br />
!Type 5 <br />
opt<br />
!f<sub>6</sub>(s)<br />
|-<br />
|13<br />
|13.5<br />
|1<br />
|10<br />
|2<br />
|9.5<br />
|3<br />
|8.5<br />
|4<br />
|6.5<br />
|13<br />
|0<br />
|-<br />
|12<br />
|13<br />
|1<br />
|9<br />
|2<br />
|9<br />
|3<br />
|8<br />
|4<br />
|6<br />
|12<br />
|0<br />
|-<br />
|11<br />
|12<br />
|1<br />
|8.5<br />
|2<br />
|8<br />
|2<br />
|7<br />
|3<br />
|5.5<br />
|11<br />
|0<br />
|-<br />
|10<br />
|11<br />
|1<br />
|8<br />
|2<br />
|7<br />
|2<br />
|6.5<br />
|3<br />
|5<br />
|10<br />
|0<br />
|-<br />
|9<br />
|10<br />
|1<br />
|7<br />
|1<br />
|6.5<br />
|2<br />
|6<br />
|3<br />
|4.5<br />
|9<br />
|0<br />
|-<br />
|8<br />
|9.5<br />
|1<br />
|6<br />
|1<br />
|6<br />
|2<br />
|5<br />
|2<br />
|4<br />
|8<br />
|0<br />
|-<br />
|7<br />
|9<br />
|1<br />
|5<br />
|1<br />
|5<br />
|1<br />
|4.5<br />
|2<br />
|3.5<br />
|7<br />
|0<br />
|-<br />
|6<br />
|4.5<br />
|0<br />
|4.5<br />
|1<br />
|4<br />
|1<br />
|4<br />
|2<br />
|3<br />
|6<br />
|0<br />
|-<br />
|5<br />
|4<br />
|0<br />
|4<br />
|1<br />
|3.5<br />
|1<br />
|3<br />
|1<br />
|2.5<br />
|5<br />
|0<br />
|-<br />
|4<br />
|3<br />
|0<br />
|3<br />
|0<br />
|3<br />
|1<br />
|2.5<br />
|1<br />
|2<br />
|4<br />
|0<br />
|-<br />
|3<br />
|2<br />
|0<br />
|2<br />
|0<br />
|2<br />
|0<br />
|2<br />
|1<br />
|1.5<br />
|3<br />
|0<br />
|-<br />
|2<br />
|1<br />
|0<br />
|1<br />
|0<br />
|1<br />
|0<br />
|1<br />
|0<br />
|1<br />
|2<br />
|0<br />
|-<br />
|1<br />
|0.5<br />
|0<br />
|0.5<br />
|0<br />
|0.5<br />
|0<br />
|0.5<br />
|0<br />
|0.5<br />
|1<br />
|0<br />
|-<br />
|0<br />
|0<br />
|0<br />
|0<br />
|0<br />
|0<br />
|0<br />
|0<br />
|0<br />
|0<br />
|0<br />
|0<br />
|}<br />
'''Optimal solution:''' The maximum benefit possible is 13.5. Tracing forward to get the optimal solution: the optimal decision corresponding to the entry 13.5 for f<sub>1</sub>(13) is 1, therefore we should pack 1 unit of type 1. After that we have 6 capacity remaining, so look at f<sub>2</sub>(6), which is 4.5, corresponding to the optimal decision of packing 1 unit of type 2. After this, we have 6 - 5 = 1 capacity remaining, and the optimal decisions at f<sub>3</sub>(1) and f<sub>4</sub>(1) are both 0, which means we should not pack any units of type 3 or type 4. Hence we go to stage 5 and find that the optimal decision at f<sub>5</sub>(1) is 1, so we should pack 1 unit of type 5. This gives the entire optimal solution, as can be seen in the table below:<br />
{| class="wikitable"<br />
|+<br />
! colspan="2" |Optimal solution<br />
|-<br />
!Product (i)<br />
!Number of units<br />
|-<br />
|1<br />
|1<br />
|-<br />
|2<br />
|1<br />
|-<br />
|5<br />
|1<br />
|}<br />
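The stage/state recursion above can be checked with a short program. The sketch below (variable names are our own) computes f<sub>n</sub>(s) backwards from stage 6 and then traces the optimal decisions forward, reproducing the value 13.5 and the packing in the table above.

```python
# Backward dynamic program for the example: weight w_i and value v_i per
# unit of each product, capacity 13, unlimited units of each type.
weights = {1: 7, 2: 5, 3: 4, 4: 3, 5: 1}
values  = {1: 9, 2: 4, 3: 3, 4: 2, 5: 0.5}
CAP = 13

# f[n][s] = max value using item types n..5 with capacity s; f[6][s] = 0.
f = {6: {s: 0.0 for s in range(CAP + 1)}}
best_j = {}
for n in range(5, 0, -1):
    f[n], best_j[n] = {}, {}
    for s in range(CAP + 1):
        cands = [(j * values[n] + f[n + 1][s - j * weights[n]], j)
                 for j in range(s // weights[n] + 1)]
        f[n][s], best_j[n][s] = max(cands)
print(f[1][CAP])   # 13.5

# Trace forward to recover the optimal packing.
s, plan = CAP, {}
for n in range(1, 6):
    j = best_j[n][s]
    if j:
        plan[n] = j
    s -= j * weights[n]
print(plan)        # {1: 1, 2: 1, 5: 1}
```

On ties the `max` over `(value, j)` pairs picks the larger unit count, which matches the decisions shown in the table.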
<br />
==Applications==<br />
Heuristic algorithms have become an important technique in solving current real-world problems. Their applications can range from optimizing the power flow in modern power systems<ref> NIU, M., WAN, C. & Xu, Z. A review on applications of heuristic optimization algorithms for optimal power flow in modern power systems. J. Mod. Power Syst. Clean Energy 2, 289–297 (2014), https://doi.org/10.1007/s40565-014-0089-4</ref> to groundwater pumping simulation models<ref> J. L. Wang, Y. H. Lin and M. D. Lin, "Application of heuristic algorithms on groundwater pumping source identification problems," 2015 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), Singapore, 2015, pp. 858-862, https://doi.org/10.1109/IEEM.2015.7385770.</ref>. Heuristic optimization techniques are increasingly applied in environmental engineering as well, such as the design of a multilayer sorptive barrier system for landfill liners.<ref>Matott, L. Shawn, et al. “Application of Heuristic Optimization Techniques and Algorithm Tuning to Multilayered Sorptive Barrier Design.” Environmental Science &amp; Technology, vol. 40, no. 20, 2006, pp. 6354–6360., https://doi.org/10.1021/es052560+.</ref> Heuristic algorithms have also been applied in the fields of bioinformatics, computational biology, and systems biology.<ref>Larranaga P, Calvo B, Santana R, Bielza C, Galdiano J, Inza I, Lozano JA, Armananzas R, Santafe G, Perez A, Robles V (2006) Machine learning in bioinformatics. Brief Bioinform 7(1):86–112 </ref><br />
<br />
==Conclusion==<br />
Heuristic algorithms are not a panacea, but they are handy tools to be used when the use of exact methods cannot be implemented. Heuristics can provide flexible techniques to solve hard problems with the advantage of simple implementation and low computational cost. Over the years, we have seen a progression in heuristics with the development of hybrid systems that combine selected features from various types of heuristic algorithms such as tabu search, simulated annealing, and genetic or evolutionary computing. Future research will continue to expand the capabilities of existing heuristics to solve complex real-world problems.<br />
<br />
==References==<br />
<references /></div>Wc593https://optimization.cbe.cornell.edu/index.php?title=2020_Cornell_Optimization_Open_Textbook_Feedback&diff=27092020 Cornell Optimization Open Textbook Feedback2020-12-21T10:48:42Z<p>Wc593: /* Column generation algorithms */</p>
<hr />
<div>==[[Computational complexity]]==<br />
<br />
* Numerical Example<br />
*# Finding subsets of a set is NOT O(2<sup>n</sup>).<br />
* Application<br />
*# The applications mentioned need to be discussed further.<br />
<br />
==[[Network flow problem]]==<br />
<br />
* Real Life Applications<br />
*# There is NO need to include code. Simply mention how the problem was coded along with details on the LP solver used.<br />
<br />
==[[Interior-point method for LP]]==<br />
<br />
* Introduction<br />
*# Please type “minimize” and “subject to” in formal optimization problem form throughout the whole page.<br />
* A section to discuss and/or illustrate the applications<br />
*# Please type optimization problem in the formal form.<br />
<br />
==[[Optimization with absolute values]]==<br />
<br />
* An introduction of the topic<br />
*# Add few sentences on how absolute values convert optimization problem into a nonlinear optimization problem<br />
* Applications<br />
*# Inline equations at the beginning of this section are not formatted properly. Please fix the notation for expected return throughout the section.<br />
<br />
==[[Matrix game (LP for game theory)]]==<br />
<br />
* Theory and Algorithmic Discussion<br />
*# aij are not defined in this section.<br />
<br />
==[[Quasi-Newton methods]]==<br />
<br />
* Theory and Algorithm<br />
*# Please ensure that few spaces are kept between the equations and equation numbers.<br />
<br />
==[[Eight step procedures]]==<br />
<br />
* Numerical Example<br />
*# Data for the example Knapsack problem (b,w) are missing.<br />
*# How to arrive at optimal solutions is missing.<br />
<br />
==[[Set covering problem]]==<br />
<br />
* Numerical Example<br />
*# Please leave some space between equation and equation number.<br />
<br />
==[[Quadratic assignment problem]]==<br />
<br />
* Theory, methodology, and/or algorithmic discussions<br />
*# Discuss dynamic programming and cutting plane solution techniques briefly.<br />
<br />
==[[Newsvendor problem]]==<br />
<br />
* Formulation<br />
*# A math programming formulation of the optimization problem with objective function and constraints is expected for the formulation. Please add any variant of the newsvendor problem along with some operational constraints.<br />
*# A mathematical presentation of the solution technique is expected. Please consider any distribution for R and present a solution technique for that specific problem. <br />
<br />
==[[Mixed-integer cuts]]==<br />
<br />
* Applications<br />
*# MILP and their solution techniques involving cuts are extremely versatile. Yet, only two sentences are added to describe their applications. Please discuss their applications, preferably real-world applications, in brief. Example Wikis provided on the website could be used as a reference to do so.<br />
<br />
==[[Heuristic algorithms]]==<br />
<br />
* Methodology<br />
*# Please use proper symbol for "greater than or equal to".<br />
*# Greedy method to solve minimum spanning tree seems to be missing.<br />
<br />
==[[Branch and cut]]==<br />
<br />
* Methodology & Algorithm<br />
*# Equation in most infeasible branching section is not properly formatted.<br />
*# Step 2 appears abruptly in the algorithm and does not explain much. Please add more information regarding the same.<br />
*# Step 5 contains latex code terms that are not properly formatted. Please fix the same.<br />
*# Fix typos: e.g., repeated “for the current”.<br />
<br />
== [[Mixed-integer linear fractional programming (MILFP)]] ==<br />
<br />
* Application and Modeling for Numerical Examples<br />
*# Please check the index notation in Mass Balance Constraint<br />
<br />
==[[Fuzzy programming]]==<br />
<br />
* Applications<br />
*# Applications of fuzzy programming are quite versatile. Please discuss few of the mentioned applications briefly. The provided example Wikis can be used as a reference to write this section.<br />
<br />
==[[Adaptive robust optimization]]==<br />
<br />
* Problem Formulation<br />
*# Please check typos such as "Let ''u'' bee a vector".<br />
*# The abbreviation KKT is not previously defined.<br />
<br />
== [[Stochastic gradient descent]] ==<br />
* Numerical Example<br />
*# Amount of whitespace can be reduced by changing orientation of example dataset by converting it into a table containing 3 rows and 6 columns.<br />
<br />
==[[RMSProp]]==<br />
<br />
* Introduction<br />
*# References at the end of the sentence should be placed after the period.<br />
* Theory and Methodology<br />
*# Please check grammar in this section.<br />
* Applications and Discussion<br />
*# The applications section does not contain any discussion on applications. Please mention a few applications of the widely used RMSprop and discuss them briefly.<br />
<br />
==[[Adam]]==<br />
<br />
* Background<br />
*# References at the end of the sentence should be placed after the period.</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Column_generation_algorithms&diff=2708Column generation algorithms2020-12-21T10:46:30Z<p>Wc593: </p>
<hr />
<div>Author: Lorena Garcia Fernandez (lgf572)<br />
<br />
== Introduction ==<br />
Column generation techniques solve large linear optimization problems by generating only the variables (columns) that can improve the objective function. This is important for problems with very many variables, since such a formulation avoids listing every possibility explicitly.<ref>Desrosiers, Jacques & Lübbecke, Marco. (2006). A Primer in Column Generation. p7-p14. 10.1007/0-387-25486-2_1.</ref><br />
<br />
== Theory, methodology and algorithmic discussions ==<br />
'''''Theory'''''<br />
<br />
The method works as follows: first, the original problem is split into two problems: the master problem and the sub-problem.<br />
<br />
* The master problem is the original column-wise (i.e., one column at a time) formulation of the problem with only a subset of variables being considered.<ref><br />
AlainChabrier, Column Generation techniques, 2019 URL: https://medium.com/@AlainChabrier/column-generation-techniques-6a414d723a64<br />
</ref><br />
<br />
* The sub-problem is a new problem created to identify a new promising variable. The objective function of the sub-problem is the reduced cost of the new variable with respect to the current dual variables, and the constraints require that the variable obeys the naturally occurring constraints. The sub-problem is also referred to as the pricing problem, while the master problem restricted to the columns generated so far is called the RMP or “restricted master problem”. From this we can infer that this method will be a good fit for problems whose constraint set admits a natural breakdown (i.e., decomposition) into sub-systems representing a well-understood combinatorial structure.<ref><br />
AlainChabrier, Column Generation techniques, 2019 URL: https://medium.com/@AlainChabrier/column-generation-techniques-6a414d723a64<br />
</ref><br />
<br />
Different techniques exist to carry out this decomposition of the original problem into master and subproblems. The theory behind this method relies on the Dantzig-Wolfe decomposition.<ref>Dantzig-Wolfe decomposition. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Dantzig-Wolfe_decomposition&oldid=50750</ref><br />
<br />
In summary, when the master problem is solved, we obtain dual prices for each of the constraints in the master problem. This information is then utilized in the objective function of the subproblem, and the subproblem is solved. If the objective value of the subproblem is negative, a variable with negative reduced cost has been identified. This variable is then added to the master problem, and the master problem is re-solved. Re-solving the master problem generates a new set of dual values, and the process is repeated until no negative reduced cost variables are identified. When the subproblem returns a solution with non-negative reduced cost, we can conclude that the solution to the master problem is optimal.<ref>Wikipedia, the free encyclopedia. Column Generation. URL: https://en.wikipedia.org/wiki/Column_generation</ref><br />
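The loop just described can be sketched in code. The fragment below is an illustrative column-generation loop for a cutting-stock-style master problem (minimize the number of bars used): the restricted master's dual prices are recovered by solving the explicit dual LP with SciPy's `linprog`, and the pricing subproblem is an integer knapsack solved by dynamic programming. The data, tolerances, and helper names are assumptions for illustration, not part of the wiki text.

```python
import numpy as np
from scipy.optimize import linprog

M = 66                                    # bar capacity (illustrative integers)
lengths = np.array([12, 27, 30, 33, 45])  # piece lengths
demand = np.array([144, 105, 72, 30, 24]) # pieces required of each type

def master_duals(A):
    # Duals of the master  min 1'y s.t. A y >= demand, y >= 0, obtained by
    # solving its explicit dual:  max demand'u s.t. A'u <= 1, u >= 0.
    res = linprog(c=-demand, A_ub=A.T, b_ub=np.ones(A.shape[1]), method="highs")
    return res.x

def price(u):
    # Pricing subproblem: max u'z s.t. lengths'z <= M, z integer >= 0
    # (unbounded knapsack by dynamic programming over capacities 0..M).
    best, pick = [0.0] * (M + 1), [-1] * (M + 1)
    for c in range(1, M + 1):
        best[c], pick[c] = best[c - 1], -1
        for k, L in enumerate(lengths):
            if L <= c and best[c - L] + u[k] > best[c] + 1e-12:
                best[c], pick[c] = best[c - L] + u[k], k
    z, c = np.zeros(len(lengths), dtype=int), M
    while c > 0:
        if pick[c] == -1:
            c -= 1
        else:
            z[pick[c]] += 1
            c -= lengths[pick[c]]
    return z, best[M]

A = np.diag(M // lengths)                 # start from single-item patterns
while True:
    u = master_duals(A)
    z, value = price(u)
    if value <= 1 + 1e-9:                 # no column with negative reduced cost
        break
    A = np.hstack([A, z.reshape(-1, 1)])  # add the new pattern and re-solve

res = linprog(c=np.ones(A.shape[1]), A_ub=-A, b_ub=-demand, method="highs")
print(round(res.fun, 2))                  # LP bound on the number of bars
```

Termination is guaranteed here because any priced column with `u'z > 1` must be new (the current columns all satisfy `A'u <= 1`), and there are finitely many feasible patterns.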
<br />
'''''Methodology'''''<ref>L.A. Wolsey, Integer programming. Wiley,Column Generation Algorithms p185-p189,1998</ref><br />
[[File:Column Generation.png|thumb|468x468px|Column generation schematics<ref name=":4">GERARD. (2005). Personnel and Vehicle scheduling, Column Generation, slide 12. URL: https://slideplayer.com/slide/6574/</ref>]]<br />
Consider the problem in the form:<br />
<br />
(IP) <br />
<math>z=max\left \{\sum_{k=1}^{K}c^{k}x^{k}:\sum_{k=1}^{K}A^{k}x^{k}=b,x^{k}\epsilon X^{k}\; \; \; for\; \; \; k=1,...,K \right \}</math><br />
<br />
<br />
Where <math>X^{k}=\left \{x^{k}\epsilon Z_{+}^{n_{k}}: D^{k}x^{k}\leq d^{k} \right \}</math> for <math>k=1,...,K</math>. Assuming that each set <math>X^{k}</math> contains a large but finite set of points <math>\left \{ x^{k,t} \right \}_{t=1}^{T_{k}}</math>, we can write <math>X^{k}</math> as:<br />
<br />
<math>\left \{ x^{k}\epsilon R^{n_{k}}:x^{k}=\sum_{t=1}^{T_{k}}\lambda _{k,t}x^{k,t},\sum_{t=1}^{T_{k}}\lambda _{k,t}=1,\lambda _{k,t}\epsilon \left \{ 0,1 \right \}for \; \; k=1,...,K \right \}</math><br />
<br />
Note that, on the assumption that each of the sets <math>X^{k}</math> is bounded for <math>k=1,...,K</math>, the approach will involve solving an equivalent problem of the form below:<br />
<br />
<math>max\left \{ \sum_{k=1}^{K}\gamma ^{k}\lambda ^{k}: \sum_{k=1}^{K}B^{k}\lambda ^{k}=\beta ,\lambda ^{k}\geq 0\; \; integer\; \; for\; \; k=1,...,K \right \}</math><br />
<br />
where each matrix <math>B^{k}</math> has a very large number of columns, one for each of the feasible points in <math>X^{k}</math>, and each vector <math>\lambda ^{k}</math> contains the corresponding variables.<br />
<br />
<br />
Now, substituting for <math>x^{k}</math> leads to an equivalent ''IP Master Problem (IPM)'':<br />
<br />
(IPM)<br />
<math>\begin{matrix}<br />
z=max\sum_{k=1}^{K}\sum_{t=1}^{T_{k}}\left(c^{k}x^{k,t}\right )\lambda _{k,t} \\ \sum_{k=1}^{K}\sum_{t=1}^{T_{k}}\left ( A^{k}x^{k,t} \right )\lambda _{k,t}=b\\<br />
\sum_{t=1}^{T_{k}}\lambda _{k,t}=1\; \; for\; \; k=1,...,K \\<br />
\lambda _{k,t}\epsilon \left \{ 0,1 \right \}\; \; for\; \; t=1,...,T_{k}\; \; and\; \; k=1,...,K.<br />
\end{matrix}</math><br />
<br />
To solve the Master Linear Program, we use a column generation algorithm. This is in order to solve the linear programming relaxation of the Integer Programming Master Problem, called the ''Linear Programming Master Problem (LPM)'':<br />
<br />
(LPM)<br />
<math>\begin{matrix}<br />
z^{LPM}=max\sum_{k=1}^{K}\sum_{t=1}^{T_{k}}\left ( c^{k}x^{k,t} \right )\lambda _{k,t}\\<br />
\sum_{k=1}^{K}\sum_{t=1}^{T_{k}}\left ( A^{k}x^{k,t} \right )\lambda _{k,t}=b \\<br />
\sum_{t=1}^{T_{k}}\lambda _{k,t}=1\; \;for\; \; k=1,...,K \\<br />
\lambda _{k,t} \geq 0\; \; for\; \; t=1,...,T_{k},\; k=1,...,K<br />
\end{matrix}</math><br />
<br />
Where there is a column <math>\begin{pmatrix}<br />
c^{k}x\\ <br />
A^{k}x\\ <br />
e_{k}<br />
\end{pmatrix}</math> for each ''<math>x</math>'' ''<math display="inline">\in</math> <math display="inline">X^{k}</math>''. On the next steps of this method, we will use <math>\left \{ \pi _{i} \right \}_{i=1}^{m}</math> as the dual variables associated with the joint constraints, and <math>\left \{ \mu_{k} \right \}_{k=1}^{K}</math> as dual variables for the second set of constraints.The latter are also known as convexity constraints.<br />
The idea is to solve the linear program by the primal simplex algorithm. However, the pricing step of choosing a column to enter the basis must be modified because of the very big number of columns in play. Instead of pricing the columns one at a time, the question of finding a column with the biggest reduced price is itself a set of <math>K</math> optimization problems.<br />
<br />
<br />
''Initialization:'' we suppose that a subset of columns (at least one for each <math>k</math>) is available, providing a feasible ''Restricted Linear Programming Master Problem'':<br />
<br />
(RLPM)<br />
<math>\begin{matrix}<br />
z^{LPM}=max\tilde{c}\tilde{\lambda} \\<br />
\tilde{A}\tilde{\lambda }=b \\<br />
\tilde{\lambda }\geq 0 <br />
\end{matrix}</math><br />
<br />
<br />
where <math>\tilde{b}=\begin{pmatrix}<br />
b\\ <br />
1\\ <br />
\end{pmatrix}</math>, <math>\tilde{A}</math> is generated by the available set of columns and <math>\tilde{c}\tilde{\lambda }</math> are the corresponding costs and variables. Solving the RLPM gives an optimal primal solution <math>\tilde{\lambda ^{*}}</math> and an optimal dual solution <math>\left ( \pi ,\mu \right )\epsilon\; R^{m}\times R^{k}</math><br />
<br />
<br />
''Primal feasibility:'' Any feasible solution of ''RLPM'' is feasible for ''LPM''. More precisely, <math>\tilde{\lambda^{*} }</math> is a feasible solution of ''LPM'', and hence <math>\tilde{z}^{LPM}=\tilde{c}\tilde{\lambda ^{*}}=\sum_{i=1}^{m}\pi _{i}b_{i}+\sum_{k=1}^{K}\mu _{k}\leq z^{LPM}</math> <br />
<br />
''Optimality check for LPM:'' It is required to check whether <math>\left ( \pi ,\mu \right )</math> is dual feasible for ''LPM''. This means checking for each column, that is for each <math>k</math>, and for each <math>x\; \epsilon \; X^{k}</math> if the reduced price <math>c^{k}x-\pi A^{k}x-\mu _{k}\leq 0</math>. Rather than examining each point separately, we treat all points in <math>X^{k}</math> implicitly, by solving an optimization subproblem:<br />
<br />
<math>\zeta _{k}=max\left \{ \left (c^{k}-\pi A^{k} \right )x-\mu _{k}\; :\; x\; \epsilon \; X^{k}\right \}.</math> <br />
<br />
<br />
''Stopping criteria:'' If <math>\zeta _{k}\leq 0</math> for <math>k=1,...,K</math>, the solution <math>\left ( \pi ,\mu \right )</math> is dual feasible for ''LPM'', and hence <math>z^{LPM}\leq \sum_{i=1}^{m}\pi _{i}b_{i}+\sum_{k=1}^{K}\mu _{k}</math>. As the value of the primal feasible solution <math>\tilde{\lambda }</math> equals that of this upper bound, <math>\tilde{\lambda }</math> is optimal for ''LPM''. <br />
<br />
<br />
''Generating a new column:'' If <math>\zeta _{k}> 0</math> for some <math>k</math>, the column corresponding to the optimal solution <math>\tilde{x}^{k}</math> of the subproblem has a positive reduced price. Introducing the column <math>\begin{pmatrix}<br />
c^{k}x\\ <br />
A^{k}x\\ <br />
e_{k}<br />
\end{pmatrix}</math> leads then to a Restricted Linear Programming Master Problem that can be easily reoptimized (e.g., by the primal simplex algorithm)<br />
<br />
== Numerical example: The Cutting Stock problem<ref>L.A. Wolsey, Integer Programming. Wiley, Column Generation Algorithms, pp. 185-189, 1998.</ref> ==<br />
<br />
Suppose we want to solve a numerical example of the cutting stock problem, specifically a one-dimensional cutting stock problem. <br />
<br />
''<u>Problem Overview</u>''<br />
<br />
A company produces steel bars with diameter <math>45</math> millimeters and length <math>33</math> meters. The company also takes care of cutting the bars for their different customers, who each require different lengths. At the moment, the following demand forecast is expected and must be satisfied: <br />
{| class="wikitable"<br />
|+<br />
|Pieces needed<br />
|Piece length(m)<br />
|Type of item<br />
|-<br />
|144<br />
|6<br />
|1<br />
|-<br />
|105<br />
|13.5<br />
|2<br />
|-<br />
|72<br />
|15<br />
|3<br />
|-<br />
|30<br />
|16.5<br />
|4<br />
|-<br />
|24<br />
|22.5<br />
|5<br />
|}<br />
The objective is to establish the minimum number of steel bars that should be used to satisfy the total demand.<br />
<br />
A possible model for the problem, proposed by Gilmore and Gomory in the 1960s, is the one below:<br />
<br />
'''Sets'''<br />
<br />
<math>K=\left \{ 1,2,3,4,5 \right \}</math>: set of item types;<br />
<br />
''<math display="inline">S</math>:'' set of patterns (i.e., possible ways) that can be adopted to cut a given bar into portions of the needed lengths.<br />
<br />
'''Parameters'''<br />
<br />
<math display="inline">M</math>: bar length (before the cutting process);<br />
<br />
<math display="inline">L_k</math>'':'' length of item ''<math display="inline">k</math>'' ''<math display="inline">\in</math> <math display="inline">K</math>'';<br />
<br />
<math display="inline">R_k</math> : number of pieces of type ''<math display="inline">k</math>'' ''<math display="inline">\in</math> <math display="inline">K</math>'' required;<br />
<br />
<math display="inline">N_{k,s}</math> : number of pieces of type ''<math display="inline">k</math>'' ''<math display="inline">\in</math> <math display="inline">K</math>'' in pattern ''<math display="inline">s</math>'' ''<math display="inline">\in</math> <math display="inline">S</math>''.<br />
<br />
'''Decision variables'''<br />
<br />
<math display="inline">Y_s</math> : number of bars that should be portioned using pattern ''<math display="inline">s</math>'' ''<math display="inline">\in</math> <math display="inline">S</math>''. <br />
<br />
'''Model''' <br />
<br />
<math>\begin{matrix}\min \sum_{s\in S}y_s \\ \ s.t. \sum_{s\in S}N_{k,s}y_s\geq R_k \;\;\forall k\in K \\ y_s\in \mathbb{Z}_+\;\;\forall s\in S \end{matrix}</math><br />
<br />
''<u>Solving the problem</u>''<br />
<br />
The model assumes the availability of the set ''<math display="inline">S</math>'' and the parameters <math display="inline">N_{k,s}</math>. To generate this data, one would have to list all possible cutting patterns. However, the number of possible cutting patterns is enormous, which is why a direct implementation of the model above is not practical for real-world problems. This is the case in which it makes sense to solve the continuous relaxation of the model: in practice, the demand figures are so high that the number of bars to cut is also a large number, and therefore a good solution can be determined by rounding each variable <math>y_s</math> found by solving the continuous relaxation up to the next integer. In addition, the solution of the relaxed problem becomes the starting point for the application of an exact solution method (for instance, Branch-and-Bound).<blockquote><u>''Key take-away: In the next steps of this example we will analyze how to solve the continuous relaxation of the model.''</u></blockquote>As a starting point, we need any feasible solution. Such a solution can be constructed as follows:<br />
<br />
# We consider the <math>\|K\|</math> single-item cutting patterns, where pattern <math>k</math> contains <math display="inline">N_{k,k} = \left\lfloor M/L_k \right\rfloor</math> pieces of type <math>k</math>;<br />
# Set <math display="inline">y_{k} = R_k/N_{k,k}</math> for each pattern <math>k</math> (where pattern <math>k</math> is the pattern containing only pieces of type <math>k</math>).<br />
<br />
This solution could also be arrived at by applying the simplex method to the model (without integrality constraints), considering only the decision variables that correspond to the above single-item patterns (here <math display="inline">N_{k,k}=\left\lfloor 33/L_k \right\rfloor</math> gives 5, 2, 2, 2 and 1 pieces per bar, respectively): <br />
<br />
<math>\begin{align}
\text{min} & ~~ y_{1}+y_{2}+y_{3}+y_{4}+y_{5}\\
\text{s.t} & ~~ 5y_{1} \ge 144\\
\ & ~~ 2y_{2} \ge 105\\
\ & ~~ 2y_{3} \ge 72\\
\ & ~~ 2y_{4} \ge 30\\
\ & ~~ y_{5} \ge 24\\
\ & ~~ y_{1},y_{2},y_{3},y_{4},y_{5} \ge 0\\
\end{align}</math><br />
<br />
In fact, if we solve this problem (for example, use CPLEX solver in GAMS) the solution is as below: <br />
{| class="wikitable"<br />
|Y1<br />
|28.8<br />
|-<br />
|Y2<br />
|52.5<br />
|-<br />
|Y3<br />
|36<br />
|-<br />
|Y4<br />
|15<br />
|-<br />
|Y5<br />
|24<br />
|}<br />
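These figures can be reproduced without a solver: since the initial restricted master uses only single-item patterns, the LP separates by item type. Pattern k holds N<sub>k</sub> = ⌊33/L<sub>k</sub>⌋ pieces, the primal solution is y<sub>k</sub> = R<sub>k</sub>/N<sub>k</sub>, and the dual price of constraint k is 1/N<sub>k</sub>. A quick pure-Python check (variable names are our own):

```python
import math

# Single-item patterns for the cutting-stock instance above (bar length 33).
# Lengths and demands come from the example tables; names are illustrative.
M = 33
lengths = {1: 6, 2: 13.5, 3: 15, 4: 16.5, 5: 22.5}
demand  = {1: 144, 2: 105, 3: 72, 4: 30, 5: 24}

N = {k: math.floor(M / L) for k, L in lengths.items()}  # pieces per bar
y = {k: demand[k] / N[k] for k in lengths}              # primal LP solution
u = {k: 1 / N[k] for k in lengths}                      # dual prices

print(N)  # {1: 5, 2: 2, 3: 2, 4: 2, 5: 1}
print(y)  # {1: 28.8, 2: 52.5, 3: 36.0, 4: 15.0, 5: 24.0}

# Pricing check for a candidate pattern with one piece of type 1 and one of
# type 5 (feasible, since 6 + 22.5 <= 33): reduced cost = 1 - u'z.
z = {1: 1, 5: 1}
rc = 1 - sum(u[k] * z.get(k, 0) for k in u)
print(round(rc, 2))  # -0.2
```

A negative reduced cost for the candidate pattern indicates that adding it as a new column can improve the master solution, which is exactly the situation explored next.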
Next, a new possible pattern (number <math>6</math>) will be considered. This pattern contains one piece of item type <math>1</math> and one piece of item type <math>5</math> (feasible, since <math>6 + 22.5 \leq 33</math>). So the question is whether the current solution would remain optimal if this new pattern were allowed. Duality helps answer this question. At every iteration of the simplex method, the outcome is a feasible basic solution (corresponding to some basis <math>B</math>) for the primal problem and a dual solution (the multipliers <math>u^{T}=c_{B}^{T}B^{-1}</math>) that satisfy the complementary slackness conditions. (Note: the dual solution will be feasible only when the last iteration is reached.) <br />
<br />
The inclusion of new pattern <math>6</math> corresponds to including a new variable in the primal problem, with objective cost <math>1</math> (as each time pattern <math>6</math> is chosen, one bar is cut) and corresponding to the following column in the constraint matrix: <br />
<br />
<math>D_6= \begin{bmatrix}<br />
\ 1 \\ <br />
\ 0 \\ <br />
\ 0 \\ <br />
\ 0 \\ <br />
\ 1 \\ <br />
\end{bmatrix}</math><br />
<br />
<br />
These variables create a new dual constraint. We then have to check if this new constraint is violated by the current dual solution (or in other words, ''if the reduced cost of the new variable with respect to basis <math>B</math> is negative)''<br />
<br />
The new dual constraint is:<math>1\times u_{1}+0\times u_{2}+0\times u_{3}+0\times u_{4}+1\times u_{5}\leq 1</math><br />
<br />
The solution for the dual problem can be computed in different software packages, or by hand. The example below shows the solution obtained with GAMS for this example:<br />
<br />
(Note the solution for the dual problem would be: <math>u=c_{T}^{B}B^{-1}</math>)<br />
<br />
<br />
{| class="wikitable"<br />
|Dual variable<br />
|Variable value<br />
|-<br />
|D1<br />
|0.2<br />
|-<br />
|D2<br />
|0.5<br />
|-<br />
|D3<br />
|0.5<br />
|-<br />
|D4<br />
|0.5<br />
|-<br />
|D5<br />
|1<br />
Since <math>0.067+0.333=0.4\leq 1</math>, the new dual constraint is actually satisfied by the current dual solution.<br />
<br />
This means that the current primal solution (in which the new variable is <math>y_{6}=0</math>) remains optimal even when the new pattern is allowed: the associated primal variable has nonnegative reduced cost, <br />
<br />
<math>\bar{c}_6 = c_6-u^TD_6=1-0.4=0.6 \geq 0</math> <br />
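The pricing arithmetic can be checked numerically; a minimal sketch, using the (rounded) dual values from the table above:<br />

```python
# Reduced cost of the candidate column D6 = (1, 0, 0, 0, 1):
# c_6 - u^T D6, with the dual values from the table above (rounded).
u = [0.067, 0.167, 0.167, 0.167, 0.333]
D6 = [1, 0, 0, 0, 1]

reduced_cost = 1 - sum(ui * di for ui, di in zip(u, D6))
print(round(reduced_cost, 3))   # 0.6
```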
<br />
The next step is to include <math>y_{6}</math> in the formulation. To do so, we modify the problem by inserting the new variable as below:<br />
<br />
<math>\begin{align}<br />
\text{min} & ~~ y_{1}+y_{2}+y_{3}+y_{4}+y_{5}+y_{6}\\<br />
\text{s.t} & ~~ 15y_{1} +y_{6}\ge 144\\<br />
\ & ~~ 6y_{2} \ge 105\\<br />
\ & ~~ 6y_{3} \ge 72\\<br />
\ & ~~ 6y_{4} \ge 30\\<br />
\ & ~~ 3y_{5}+y_{6} \ge 24\\<br />
\ & ~~ y_{1},y_{2},y_{3},y_{4},y_{5},y_{6} \ge 0\\<br />
\end{align}</math><br />
<br />
<br />
If this problem is solved with the simplex method, the optimal solution is found, but restricted only to patterns <math>1</math> to <math>6</math>. If a new pattern is available, a decision should be made whether this new pattern should be used or not by proceeding as above. However, the problem is how to find a pattern (i.e., a variable; i.e., a column of the matrix) whose reduced cost is negative (which would mean it is convenient to include it in the formulation). At this point one can notice that the number of possible patterns is exponentially large, and that the patterns are not even known explicitly. The question then is:<br />
<br />
''Given a basic optimal solution for the problem in which only some variables are included, how can we find (if any exists) a variable with negative reduced cost (i.e., a constraint violated by the current dual solution)?'' <br />
<br />
This question can be transformed into an optimization problem: in order to see whether a variable with negative reduced cost exists, we can look for the minimum of the reduced costs of all possible variables and check whether this minimum is negative:<br />
<br />
<math>\bar{c}=1-u^Tz</math><br />
<br />
Every column of the constraint matrix corresponds to a cutting pattern, and every entry of the column says how many pieces of a certain type are in that pattern. Therefore, in order for <math>z</math> to be a possible column of the constraint matrix, the following condition must be satisfied:<br />
<br />
<math display="inline">\begin{matrix}z_k\in \mathbb{Z}_+\ \forall k\in K \\ \ \sum_kL_kz_k \leq M \end{matrix}</math><br />
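As a sketch, this condition can be checked for any candidate pattern (piece lengths taken from the demand table; the helper name is illustrative):<br />

```python
# A vector z is a valid cutting pattern iff its pieces fit on one bar:
# sum_k L_k * z_k <= M, with the z_k nonnegative integers.
L = [6, 13.5, 15, 16.5, 22.5]   # piece lengths for item types 1..5
M = 33                           # bar length

def fits(z):
    return sum(lk * zk for lk, zk in zip(L, z)) <= M

print(fits([1, 0, 0, 0, 1]))   # True  (6 + 22.5 = 28.5 <= 33)
print(fits([0, 0, 0, 0, 2]))   # False (2 * 22.5 = 45 > 33)
```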
<br />
This enables the conversion of the problem of finding a variable with negative reduced cost into the integer linear programming problem below:<br />
<br />
<math>\begin{matrix}\min\ \bar{c} = 1 - \sum_{k=1}^K u_k \times z_k \\ \ s.t. \sum_kL_kz_k \leq M \\ z_k\in \mathbb{Z}_+\ \forall k\in K \end{matrix}</math><br />
<br />
which, in turn, would be equivalent to the below formulation (we just write the objective in maximization form and ignore the additive constant <math>1</math>):<br />
<br />
<math>\begin{matrix} \max\sum_{k=1}^K u_k \times z_k \\ \ s.t. \sum_kL_kz_k \leq M \\ z_k\in \mathbb{Z}_+\ \forall k\in K \end{matrix}</math><br />
<br />
<br />
<br />
The coefficients <math>z_k</math> of a column with negative reduced cost can be found by solving the above integer [[wikipedia:Knapsack_problem|"knapsack"]] problem (a classical problem in integer programming).<br />
<br />
In our example, if we start from the problem restricted to the five single-item patterns, the above problem reads as:<br />
<br />
<math>\begin{align}<br />
\text{max} & ~~ 0.067z_{1}+0.167z_{2}+0.167z_{3}+0.167z_{4}+0.333z_{5}\\<br />
\text{s.t} & ~~ 6z_{1} +13.5z_{2}+15z_{3}+16.5z_{4}+22.5z_{5}\le 33\\<br />
\ & ~~ z_{1},z_{2},z_{3},z_{4},z_{5}\in \mathbb{Z}_+\\<br />
\end{align}</math><br />
<br />
<br />
which has the following optimal solution: <math>z^T= [1 \quad 0\quad 0\quad 0\quad 1]</math><br />
<br />
This matches the pattern we called <math>D_6</math> earlier on this page.<br />
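For an instance this small, the knapsack subproblem can be solved by brute-force enumeration. The sketch below uses the exact dual values <math>(1/15, 1/6, 1/6, 1/6, 1/3)</math>, of which the table above shows rounded forms; note that the optimum is not unique (for example, <math>z=(1,2,0,0,0)</math> attains the same value <math>0.4</math>):<br />

```python
from itertools import product

# Brute-force solution of the pricing knapsack:
#   max sum_k u_k z_k   s.t.  sum_k L_k z_k <= M,  z_k nonnegative integers.
u = [1/15, 1/6, 1/6, 1/6, 1/3]   # exact duals (the table shows rounded ones)
L = [6, 13.5, 15, 16.5, 22.5]
M = 33

# Upper bounds: z_k <= floor(M / L_k); +1 because range() is exclusive.
bounds = [int(M // lk) + 1 for lk in L]
best_val, best_z = max(
    (sum(uk * zk for uk, zk in zip(u, z)), z)
    for z in product(*(range(b) for b in bounds))
    if sum(lk * zk for lk, zk in zip(L, z)) <= M
)

print(round(best_val, 3))   # 0.4
```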
<br />
<br />
<u>Optimality test</u><br />
<br />
If : <math display="inline">\sum_{k=1}^{K}z_{k}^{*}u_{k}^{*}\leq 1</math><br />
<br />
then <math>y^*</math> is an optimal solution of the full continuous relaxed problem (that is, including all patterns in ''<math display="inline">S</math>'')<br />
<br />
If this condition is not true, we update the master problem by including in ''<math display="inline">S'</math>'' the pattern <math>\lambda</math> defined by <math>N_{k,\lambda}=z_k^*</math> (in practical terms, this means that the column <math>z^*</math> needs to be included in the constraint matrix).<br />
<br />
For this example, the optimality test is met, as <math>\sum_{k=1}^{K}z_{k}^{*}u_{k}^{*}=0.4 \leq 1</math>, so we have found an optimal solution of the relaxed continuous problem. (If this were not the case, we would have had to go back to reformulating and solving the master problem, as discussed in the methodology section of this page.) <br />
<br />
<br />
<br />
<br />
'''''Algorithm discussion'''''<br />
<br />
The critical part of the method is generating the new columns. It is not reasonable to compute the reduced costs of all variables <math>y_s</math> for <math>s\in S</math>, otherwise the procedure would reduce to the simplex method. In fact, the number of columns can be very large (as in the cutting-stock problem) or, for some reason, it might not be possible or convenient to enumerate all decision variables. It is therefore necessary to design a specific column generation algorithm for each problem; the method can be fully applied ''only if such an algorithm exists (and is practical)''. In the one-dimensional cutting stock problem, we transformed the column generation subproblem into an easily solvable integer linear programming problem. In other cases, the computational effort required to solve the subproblem is too high, and applying the full procedure becomes inefficient.<br />
<br />
== Applications ==<br />
As previously mentioned, column generation techniques are most relevant when the problem that we are trying to solve has a high ratio of number of variables to number of constraints. As such, some common applications are:<br />
<br />
* Bandwidth packing<br />
* Bus driver scheduling<br />
* Generally, column generation algorithms are used for large delivery networks, often in combination with other methods, helping to implement real-time solutions for on-demand logistics. We discuss a supply chain scheduling application below. <br />
<br />
'''''Bandwidth packing''''' <br />
<br />
The objective of this problem is to allocate bandwidth in a telecommunications network to maximize total revenue. The routing of a set of traffic demands between different users is to be decided, taking into account the capacity of the network arcs and the fact that the traffic between each pair of users cannot be split. The problem can be formulated as an integer programming problem, and the linear programming relaxation can be solved using column generation and the simplex algorithm. One study of bandwidth routing<ref name=":3">Parker, Mark & Ryan, Jennifer. (1993). A column generation algorithm for bandwidth packing. Telecommunication Systems. 2. 185-195. 10.1007/BF02109857. </ref> uses a branch and bound procedure that branches upon a particular path to solve the IP. The column generation algorithm greatly reduces the complexity of this problem. <br />
<br />
'''''Bus driver scheduling'''''<br />
<br />
Bus driver scheduling aims to find the minimum number of bus drivers to cover a published timetable of a bus company. When scheduling bus drivers, contractual working rules must be enforced, thus complicating the problem. A column generation algorithm can decompose this complicated problem into a master problem and a series of pricing subproblems. The master problem would select optimal duties from a set of known feasible duties, and the pricing subproblem would augment the feasible duty set to improve the solution obtained in the master problem.<ref name=":2">Dung‐Ying Lin, Ching‐Lan Hsu. Journal of Advanced Transportation. Volume50, Issue8, December 2016, Pages 1598-1615. URL: https://onlinelibrary.wiley.com/doi/abs/10.1002/atr.1417</ref><br />
<br />
'''''Supply Chain scheduling problem'''''<br />
<br />
A typical application is scheduling a set of shipments between different nodes of a supply chain network. Each shipment has a fixed departure time, as well as an origin and a destination node, which, combined, determine the duration of the associated trip. The aim is to schedule as many shipments as possible, while also minimizing the number of vehicles utilized for this purpose. This problem can be formulated as an integer programming model with an associated branch-and-price solution algorithm. The optimal solution to the LP relaxation of the problem can be obtained through column generation, solving a linear program with a huge number of variables without explicitly considering all of them. In this application, the master problem schedules the maximum possible number of shipments using only a small set of vehicle-routes, and a column generation sub-problem generates cost-effective vehicle-routes to be fed into the master problem. After finding the optimal solution to the LP relaxation of the problem, the algorithm branches on the fractional decision variables (vehicle-routes) in order to reach the optimal integer solution.<ref name=":1">Kozanidis, George. (2014). Column generation for scheduling shipments within a supply chain network with the minimum number of vehicles. OPT-i 2014 - 1st International Conference on Engineering and Applied Sciences Optimization, Proceedings. 888-898</ref><br />
<br />
== Conclusions ==<br />
Column generation is a way of starting with a small, manageable part of a problem (specifically, with some of the variables), solving that part, analyzing that interim solution to find the next part of the problem (specifically, one or more variables) to add to the model, and then solving the full or extended model. In the column generation method, the algorithm steps are repeated until an optimal solution to the entire problem is achieved.<ref> ILOG CPLEX 11.0 User's Manual > Discrete Optimization > Using Column Generation: a Cutting Stock Example > What Is Column Generation? 1997-2007. URL:http://www-eio.upc.es/lceio/manuals/cplex-11/html/usrcplex/usingColumnGen2.html#:~:text=In%20formal%20terms%2C%20column%20generation,method%20of%20solving%20the%20problem.&text=By%201960%2C%20Dantzig%20and%20Wolfe,problems%20with%20a%20decomposable%20structure</ref><br />
<br />
This algorithm provides a way of solving a linear programming problem by adding columns (corresponding to constrained variables) during the pricing phase of the solution process, columns that would otherwise be very tedious to formulate and compute. Generating a column in the primal formulation of a linear programming problem corresponds to adding a constraint in its dual formulation.<br />
<br />
== References ==</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Column_generation_algorithms&diff=2707Column generation algorithms2020-12-21T10:45:10Z<p>Wc593: </p>
<hr />
<div>Author: Lorena Garcia Fernandez (lgf572)<br />
<br />
== Introduction ==<br />
Column generation techniques are used to solve large linear optimization problems by generating only the variables that have an influence on the objective function. This is important for big problems with many variables, where these techniques simplify the formulation, since not all the possibilities need to be explicitly listed.<ref>Desrosiers, Jacques & Lübbecke, Marco. (2006). A Primer in Column Generation.p7-p14 10.1007/0-387-25486-2_1. </ref><br />
<br />
== Theory, methodology and algorithmic discussions ==<br />
'''''Theory'''''<br />
<br />
The way this method works is as follows: first, the original problem that is being solved needs to be split into two problems: the master problem and the sub-problem.<br />
<br />
* The master problem is the original column-wise (i.e: one column at a time) formulation of the problem with only a subset of variables being considered.<ref><br />
AlainChabrier, Column Generation techniques, 2019 URL: https://medium.com/@AlainChabrier/column-generation-techniques-6a414d723a64<br />
</ref><br />
<br />
* The sub-problem is a new problem created to identify a new promising variable. The objective function of the sub-problem is the reduced cost of the new variable with respect to the current dual variables, and the constraints require that the variable obeys the naturally occurring constraints. The sub-problem is often called the pricing problem (the master problem restricted to a subset of variables, by contrast, is known as the RMP or “restricted master problem”). From this we can infer that this method will be a good fit for problems whose constraint set admits a natural breakdown (i.e., decomposition) into sub-systems representing a well understood combinatorial structure.<ref><br />
AlainChabrier, Column Generation techniques, 2019 URL: https://medium.com/@AlainChabrier/column-generation-techniques-6a414d723a64<br />
</ref><br />
<br />
There are different techniques for executing that decomposition of the original problem into master and sub-problems. The theory behind this method relies on the Dantzig-Wolfe decomposition.<ref>Dantzig-Wolfe decomposition. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Dantzig-Wolfe_decomposition&oldid=50750</ref><br />
<br />
In summary, when the master problem is solved, we obtain dual prices for each of the constraints in the master problem. This information is then utilized in the objective function of the subproblem, and the subproblem is solved. If the objective value of the subproblem is negative, a variable with negative reduced cost has been identified. This variable is then added to the master problem, and the master problem is re-solved. Re-solving the master problem generates a new set of dual values, and the process is repeated until no negative reduced cost variables are identified. When the subproblem returns a solution with non-negative reduced cost, we can conclude that the solution to the master problem is optimal.<ref>Wikipedia, the free encyclopedia. Column Generation. URL: https://en.wikipedia.org/wiki/Column_generation</ref><br />
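The loop just described can be sketched on a deliberately tiny, hypothetical one-item cutting-stock instance, where both the restricted master and the pricing subproblem have closed-form solutions (all numbers and names below are illustrative):<br />

```python
# Minimal column generation loop on a hypothetical one-item instance:
# bars of length M, pieces of length L, demand b. A "column" a is a
# pattern cutting a pieces from one bar (cost: 1 bar).
M, L, b = 33.0, 6.0, 144

cols = [1]   # start the restricted master with a trivial pattern

while True:
    # Restricted master: min sum(y) s.t. sum_a a*y_a >= b, y >= 0.
    # With one constraint, the optimum uses the best available pattern,
    # and the dual price of the demand constraint is u = 1/a_best.
    u = 1.0 / max(cols)
    # Pricing subproblem: maximise u*a over patterns that fit one bar
    # (L*a <= M); since u > 0, this is simply the largest feasible a.
    a_new = int(M // L)
    reduced_cost = 1.0 - u * a_new
    if reduced_cost >= 0:
        break                # no column prices out: master is optimal
    cols.append(a_new)       # add the improving column and re-solve

print(max(cols), b / max(cols))   # 5 28.8
```

The loop terminates after one added column here; real instances repeat the master/pricing exchange many times.<br />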
<br />
'''''Methodology'''''<ref>L.A. Wolsey, Integer Programming. Wiley, Column Generation Algorithms, pp. 185–189, 1998.</ref><br />
[[File:Column Generation.png|thumb|468x468px|Column generation schematics<ref name=":4">GERARD. (2005). Personnel and Vehicle scheduling, Column Generation, slide 12. URL: https://slideplayer.com/slide/6574/</ref>]]<br />
Consider the problem in the form:<br />
<br />
(IP) <br />
<math>z=max\left \{\sum_{k=1}^{K}c^{k}x^{k}:\sum_{k=1}^{K}A^{k}x^{k}=b,x^{k}\epsilon X^{k}\; \; \; for\; \; \; k=1,...,K \right \}</math><br />
<br />
<br />
Where <math>X^{k}=\left \{x^{k}\epsilon Z_{+}^{n_{k}}: D^{k}x^{k}\leq d^{k} \right \}</math> for <math>k=1,...,K</math>. Assuming that each set <math>X^{k}</math> contains a large but finite set of points <math>\left \{ x^{k,t} \right \}_{t=1}^{T_{k}}</math>, we have that <math>X^{k}=</math>:<br />
<br />
<math>\left \{ x^{k}\epsilon R^{n_{k}}:x^{k}=\sum_{t=1}^{T_{k}}\lambda _{k,t}x^{k,t},\sum_{t=1}^{T_{k}}\lambda _{k,t}=1,\lambda _{k,t}\epsilon \left \{ 0,1 \right \}for \; \; k=1,...,K \right \}</math><br />
<br />
Note that, on the assumption that each of the sets <math>X^{k}</math> is bounded for <math>k=1,...,K</math>, the approach will involve solving an equivalent problem of the form below:<br />
<br />
<math>max\left \{ \sum_{k=1}^{K}\gamma ^{k}\lambda ^{k}: \sum_{k=1}^{K}B^{k}\lambda ^{k}=\beta ,\lambda ^{k}\geq 0\; \; integer\; \; for\; \; k=1,...,K \right \}</math><br />
<br />
where each matrix <math>B^{k}</math> has a very large number of columns, one for each of the feasible points in <math>X^{k}</math>, and each vector <math>\lambda ^{k}</math> contains the corresponding variables.<br />
<br />
<br />
Now, substituting for <math>x^{k}</math> leads to an equivalent ''IP Master Problem (IPM)'':<br />
<br />
(IPM)<br />
<math>\begin{matrix}<br />
z=max\sum_{k=1}^{K}\sum_{t=1}^{T_{k}}\left(c^{k}x^{k,t}\right )\lambda _{k,t} \\ \sum_{k=1}^{K}\sum_{t=1}^{T_{k}}\left ( A^{k}x^{k,t} \right )\lambda _{k,t}=b\\<br />
\sum_{t=1}^{T_{k}}\lambda _{k,t}=1\; \; for\; \; k=1,...,K \\<br />
\lambda _{k,t}\epsilon \left \{ 0,1 \right \}\; \; for\; \; t=1,...,T_{k}\; \; and\; \; k=1,...,K.<br />
\end{matrix}</math><br />
<br />
To solve the Master Linear Program, we use a column generation algorithm. This is in order to solve the linear programming relaxation of the Integer Programming Master Problem, called the ''Linear Programming Master Problem (LPM)'':<br />
<br />
(LPM)<br />
<math>\begin{matrix}<br />
z^{LPM}=max\sum_{k=1}^{K}\sum_{t=1}^{T_{k}}\left ( c^{k}x^{k,t} \right )\lambda _{k,t}\\<br />
\sum_{k=1}^{K}\sum_{t=1}^{T_{k}}\left ( A^{k}x^{k,t} \right )\lambda _{k,t}=b \\<br />
\sum_{t=1}^{T_{k}}\lambda _{k,t}=1\; \;for\; \; k=1,...,K \\<br />
\lambda _{k,t} \geq 0\; \; for\; \; t=1,...,T_{k},\; k=1,...,K<br />
\end{matrix}</math><br />
<br />
Where there is a column <math>\begin{pmatrix}<br />
c^{k}x\\ <br />
A^{k}x\\ <br />
e_{k}<br />
\end{pmatrix}</math> for each ''<math>x</math>'' ''<math display="inline">\in</math> <math display="inline">X^{k}</math>''. On the next steps of this method, we will use <math>\left \{ \pi _{i} \right \}_{i=1}^{m}</math> as the dual variables associated with the joint constraints, and <math>\left \{ \mu_{k} \right \}_{k=1}^{K}</math> as dual variables for the second set of constraints.The latter are also known as convexity constraints.<br />
The idea is to solve the linear program by the primal simplex algorithm. However, the pricing step of choosing a column to enter the basis must be modified because of the very large number of columns in play. Instead of pricing the columns one at a time, the question of finding a column with the largest reduced cost is itself a set of <math>K</math> optimization problems.<br />
<br />
<br />
''Initialization:'' we suppose that a subset of columns (at least one for each <math>k</math>) is available, providing a feasible ''Restricted Linear Programming Master Problem'':<br />
<br />
(RLPM)<br />
<math>\begin{matrix}<br />
\tilde{z}^{LPM}=max\tilde{c}\tilde{\lambda} \\<br />
\tilde{A}\tilde{\lambda }=\tilde{b} \\<br />
\tilde{\lambda }\geq 0 <br />
\end{matrix}</math><br />
<br />
<br />
where <math>\tilde{b}=\begin{pmatrix}<br />
b\\ <br />
1\\ <br />
\end{pmatrix}</math>, <math>\tilde{A}</math> is generated by the available set of columns and <math>\tilde{c}\tilde{\lambda }</math> are the corresponding costs and variables. Solving the RLPM gives an optimal primal solution <math>\tilde{\lambda ^{*}}</math> and an optimal dual solution <math>\left ( \pi ,\mu \right )\epsilon\; R^{m}\times R^{K}</math><br />
<br />
<br />
''Primal feasibility:'' Any feasible solution of ''RLPM'' is feasible for ''LPM''. More precisely, <math>\tilde{\lambda^{*} }</math> is a feasible solution of ''LPM'', and hence <math>\tilde{z}^{LPM}=\tilde{c}\tilde{\lambda ^{*}}=\sum_{i=1}^{m}\pi _{i}b_{i}+\sum_{k=1}^{K}\mu _{k}\leq z^{LPM}</math> <br />
<br />
''Optimality check for LPM:'' It is required to check whether <math>\left ( \pi ,\mu \right )</math> is dual feasible for ''LPM''. This means checking for each column, that is for each <math>k</math>, and for each <math>x\; \epsilon \; X^{k}</math>, if the reduced cost <math>c^{k}x-\pi A^{k}x-\mu _{k}\leq 0</math>. Rather than examining each point separately, we treat all points in <math>X^{k}</math> implicitly, by solving an optimization subproblem:<br />
<br />
<math>\zeta _{k}=max\left \{ \left (c^{k}-\pi A^{k} \right )x-\mu _{k}\; :\; x\; \epsilon \; X^{k}\right \}.</math> <br />
<br />
<br />
''Stopping criteria:'' If <math>\zeta _{k}\leq 0</math> for <math>k=1,...,K</math>, the solution <math>\left ( \pi ,\mu \right )</math> is dual feasible for ''LPM'', and hence <math>z^{LPM}\leq \sum_{i=1}^{m}\pi _{i}b_{i}+\sum_{k=1}^{K}\mu _{k}</math>. As the value of the primal feasible solution <math>\tilde{\lambda }</math> equals that of this upper bound, <math>\tilde{\lambda }</math> is optimal for ''LPM''. <br />
<br />
<br />
''Generating a new column:'' If <math>\zeta _{k}> 0</math> for some <math>k</math>, the column corresponding to the optimal solution <math>\tilde{x}^{k}</math> of the subproblem has a positive reduced price. Introducing the column <math>\begin{pmatrix}<br />
c^{k}x\\ <br />
A^{k}x\\ <br />
e_{k}<br />
\end{pmatrix}</math> leads then to a Restricted Linear Programming Master Problem that can be easily reoptimized (e.g., by the primal simplex algorithm)<br />
<br />
== Numerical example: The Cutting Stock problem<ref>L.A. Wolsey, Integer Programming. Wiley, Column Generation Algorithms, pp. 185–189, 1998.</ref> ==<br />
<br />
Suppose we want to solve a numerical example of the cutting stock problem, specifically a one-dimensional cutting stock problem. <br />
<br />
''<u>Problem Overview</u>''<br />
<br />
A company produces steel bars with diameter <math>45</math> millimeters and length <math>33</math> meters. The company also takes care of cutting the bars for their different customers, who each require different lengths. At the moment, the following demand forecast is expected and must be satisfied: <br />
{| class="wikitable"<br />
|+<br />
|Pieces needed<br />
|Piece length(m)<br />
|Type of item<br />
|-<br />
|144<br />
|6<br />
|1<br />
|-<br />
|105<br />
|13.5<br />
|2<br />
|-<br />
|72<br />
|15<br />
|3<br />
|-<br />
|30<br />
|16.5<br />
|4<br />
|-<br />
|24<br />
|22.5<br />
|5<br />
|}<br />
The objective is to establish what is the minimum number of steel bars that should be used to satisfy the total demand.<br />
<br />
A possible model for the problem, proposed by Gilmore and Gomory in the 1960s, is the one below:<br />
<br />
'''Sets'''<br />
<br />
<math>K=\left \{ 1,2,3,4,5 \right \}</math>: set of item types;<br />
<br />
''<math display="inline">S</math>:'' set of patterns (i.e., possible ways) that can be adopted to cut a given bar into portions of the needed lengths.<br />
<br />
'''Parameters'''<br />
<br />
<math display="inline">M</math>: bar length (before the cutting process);<br />
<br />
<math display="inline">L_k</math>'':'' length of item ''<math display="inline">k</math>'' ''<math display="inline">\in</math> <math display="inline">K</math>'';<br />
<br />
<math display="inline">R_k</math> : number of pieces of type ''<math display="inline">k</math>'' ''<math display="inline">\in</math> <math display="inline">K</math>'' required;<br />
<br />
<math display="inline">N_{k,s}</math> : number of pieces of type ''<math display="inline">k</math>'' ''<math display="inline">\in</math> <math display="inline">K</math>'' in pattern ''<math display="inline">s</math>'' ''<math display="inline">\in</math> <math display="inline">S</math>''.<br />
<br />
'''Decision variables'''<br />
<br />
<math display="inline">Y_s</math> : number of bars that should be portioned using pattern ''<math display="inline">s</math>'' ''<math display="inline">\in</math> <math display="inline">S</math>''. <br />
<br />
'''Model''' <br />
<br />
<math>\begin{matrix}\min \sum_{s\in S}y_s \\ \ s.t. \sum_{s\in S}N_{k,s}y_s\geq R_k\ \forall k\in K \\ y_s\in \mathbb{Z}_+\ \forall s\in S \end{matrix}</math><br />
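To make the model concrete, the sketch below checks a candidate cutting plan against the two families of conditions the formulation encodes (every pattern must fit on one bar, and demand must be covered). The tiny instance is hypothetical, not the one solved below:<br />

```python
# Check a cutting plan against the model's constraints.
M = 10                      # bar length (hypothetical instance)
L = [3, 4]                  # piece lengths for the two item types
R = [5, 2]                  # demand per item type
patterns = [[3, 0],         # patterns[s][k]: pieces of type k in pattern s
            [0, 2],
            [1, 1]]
y = [1, 1, 2]               # bars cut with each pattern

# Each pattern must fit on a single bar: sum_k L_k * N_{k,s} <= M.
patterns_fit = all(sum(l * n for l, n in zip(L, p)) <= M for p in patterns)
# Demand must be covered: sum_s N_{k,s} * y_s >= R_k for every k.
demand_met = all(
    sum(p[k] * ys for p, ys in zip(patterns, y)) >= R[k] for k in range(len(R))
)
print(patterns_fit, demand_met)   # True True
```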
<br />
''<u>Solving the problem</u>''<br />
<br />
The model assumes the availability of the set ''<math display="inline">S</math>'' and the parameters <math display="inline">N_{k,s}</math>. To generate this data, one would have to list all possible cutting patterns. However, the number of possible cutting patterns is huge, which is why a direct implementation of the model above is not practical in real-world problems. In this case it makes sense to solve the continuous relaxation of the above model. This is because, in reality, the demand figures are so high that the number of bars to cut is also a large number, and therefore a good solution can be determined by rounding up to the next integer each variable <math>y_s</math> found by solving the continuous relaxation. In addition, the solution of the relaxed problem becomes the starting point for the application of an exact solution method (for instance, Branch-and-Bound).<blockquote><u>''Key take-away: In the next steps of this example we will analyze how to solve the continuous relaxation of the model.''</u></blockquote>As a starting point, we need any feasible solution. Such a solution can be constructed as follows:<br />
<br />
# We consider all single-item cutting patterns, i.e., <math>|K|</math> configurations, each containing <math display="inline">N_{k,k} = \lfloor M/L_k \rfloor</math> pieces of type <math>k</math>;<br />
# Set <math display="inline">y_{k} = \lceil R_k/N_{k,k} \rceil</math> for pattern <math>k</math> (where pattern <math>k</math> is the pattern containing only pieces of type <math>k</math>).<br />
<br />
This solution could also be arrived at by applying the simplex method to the model (without integrality constraints), considering only the decision variables that correspond to the above single-item patterns: <br />
<br />
<math>\begin{align}<br />
\text{min} & ~~ y_{1}+y_{2}+y_{3}+y_{4}+y_{5}\\<br />
\text{s.t} & ~~ 15y_{1} \ge 144\\<br />
\ & ~~ 6y_{2} \ge 105\\<br />
\ & ~~ 6y_{3} \ge 72\\<br />
\ & ~~ 6y_{4} \ge 30\\<br />
\ & ~~ 3y_{5} \ge 24\\<br />
\ & ~~ y_{1},y_{2},y_{3},y_{4},y_{5} \ge 0\\<br />
\end{align}</math><br />
<br />
In fact, if we solve this problem (for example, using the CPLEX solver in GAMS), the solution is as below: <br />
{| class="wikitable"<br />
|Y1<br />
|9.6<br />
|-<br />
|Y2<br />
|17.5<br />
|-<br />
|Y3<br />
|12<br />
|-<br />
|Y4<br />
|5<br />
|-<br />
|Y5<br />
|8<br />
|}<br />
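Since each constraint of this restricted problem involves a single variable, its continuous optimum can be checked in closed form, <math>y_k = R_k / a_k</math>, with <math>a_k</math> the coefficient of <math>y_k</math> in its constraint. A quick sketch:<br />

```python
# Each constraint couples one variable: a_k * y_k >= R_k, so the LP optimum
# of the restricted problem is simply y_k = R_k / a_k for every pattern k.
a = [15, 6, 6, 6, 3]             # constraint coefficients from the LP above
R = [144, 105, 72, 30, 24]       # demand per item type

y = [rk / ak for rk, ak in zip(R, a)]
print(y)   # [9.6, 17.5, 12.0, 5.0, 8.0]
```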
Next, a new possible pattern (number <math>6</math>) will be considered. This pattern contains only one piece of item type number <math>5</math>. So the question is if the new solution would remain optimal if this new pattern was allowed. Duality helps answer ths question. At every iteration of the simplex method, the outcome is a feasible basic solution (corresponding to some basis <math>B</math>) for the primal problem and a dual solution (the multipliers <math>u^{t}=c^{t}BB^{-1}</math>) that satisfy the complementary slackness conditions. (Note: the dual solution will be feasible only when the last iteration is reached) <br />
<br />
The inclusion of new pattern <math>6</math> corresponds to including a new variable in the primal problem, with objective cost <math>1</math> (as each time pattern <math>6</math> is chosen, one bar is cut) and corresponding to the following column in the constraint matrix: <br />
<br />
<math>D_6= \begin{bmatrix}<br />
\ 1 \\ <br />
\ 0 \\ <br />
\ 0 \\ <br />
\ 0 \\ <br />
\ 1 \\ <br />
\end{bmatrix}</math><br />
<br />
<br />
These variables create a new dual constraint. We then have to check if this new constraint is violated by the current dual solution (or in other words, ''if the reduced cost of the new variable with respect to basis <math>B</math> is negative)''<br />
<br />
The new dual constraint is:<math>1\times u_{1}+0\times u_{2}+0\times u_{3}+0\times u_{4}+1\times u_{5}\leq 1</math><br />
<br />
The solution for the dual problem can be computed in different software packages, or by hand. The example below shows the solution obtained with GAMS for this example:<br />
<br />
(Note the solution for the dual problem would be: <math>u=c_{T}^{B}B^{-1}</math>)<br />
<br />
<br />
{| class="wikitable"<br />
|Dual variable<br />
|Variable value<br />
|-<br />
|D1<br />
|0.067<br />
|-<br />
|D2<br />
|0.167<br />
|-<br />
|D3<br />
|0.167<br />
|-<br />
|D4<br />
|0.167<br />
|-<br />
|D5<br />
|0.333<br />
|}<br />
Since <math>0.2+1=1.2> 1</math>, the new constraint is violated.<br />
<br />
This means that the current primal solution (in which the new variable is <math>y_{6}=0</math>) may not be optimal anymore (although it is still feasible). The fact that the dual constraint is violated means the associated primal variable has negative reduced cost: <br />
<br />
the norm of <math>c_6 = c_6-u^TD_6=1-0.4=0.6</math> <br />
<br />
To help us solve the problem, the next step is to let <math>y_{6}</math> enter the basis. To do so, we modify the problem by inserting the new variable as below:<br />
<br />
<math>\begin{align}<br />
\text{min} & ~~ y_{1}+y_{2}+y_{3}+y_{4}+y_{5}+y_{6}\\<br />
\text{s.t} & ~~ 15y_{1} +y_{6}\ge 144\\<br />
\ & ~~ 6y_{2} \ge 105\\<br />
\ & ~~ 6y_{3} \ge 72\\<br />
\ & ~~ 6y_{4} \ge 30\\<br />
\ & ~~ 3y_{5}+y_{6} \ge 24\\<br />
\ & ~~ y_{1},y_{2},y_{3},y_{4},y_{5},y_{6} \ge 0\\<br />
\end{align}</math><br />
<br />
<br />
If this problem is solved with the simplex method, the optimal solution is found, but restricted only to patterns <math>1</math> to <math>6</math>. If a new pattern is available, a decision should be made whether this new pattern should be used or not by proceeding as above. However, the problem is how to find a pattern (i.e., a variable; i.e, a column of the matrix) whose reduced cost is negative (i.e., which will mean it is convenient to include it in the formulation). At this point one can notice that number of possible patterns exponentially large,and all the patterns are not even known explicitly. The question then is:<br />
<br />
''Given a basic optimal solution for the problem in which only some variables are included, how can we find (if any exists) a variable with negative reduced cost (i.e., a constraint violated by the current dual solution)?'' <br />
<br />
This question can be transformed into an optimization problem: in order to see whether a variable with negative reduced cost exists, we can look for the minimum of the reduced costs of all possible variables and check whether this minimum is negative:<br />
<br />
<math>\bar{c}=1-u^Tz</math><br />
<br />
Because every column of the constraint matrix corresponds to a cutting pattern, and every entry of the column says how many pieces of a certain type are in that pattern. In order for <math>z<br />
<br />
</math> to be a possible column of the constraint matrix, the following condition must be satisfied:<br />
<br />
<math display="inline">\begin{matrix}z_k\in \Zeta_+\forall k\in K \\ \ \sum_kL_kz_k \leq M \end{matrix}<br />
<br />
</math><br />
<br />
And by so doing, it enables the conversion of the problem of finding a variable with negative reduced cost into the integer linear programming problem below:<br />
<br />
<math>\begin{matrix}\min\ \bar{c} = 1 - sum_{k=1}^K u_k \times z_k \\ \ s.t. \sum_kL_kz_k \leq M \\ z_k\in \Zeta_+\forall k\in K \end{matrix}<br />
<br />
</math><br />
<br />
which, in turn, would be equivalent to the below formulation (we just write the objective in maximization form and ignore the additive constant <math>1</math>):<br />
<br />
<math>\begin{matrix} \max\sum_{k=1}^K u_k \times z_k \\ \ s.t. \sum_kL_kz_k \leq M \\ z_k\in \Zeta_+\forall k\in K \end{matrix}</math><br />
<br />
<br />
<br />
The coefficients <math>z_k<br />
<br />
</math> of a column with negative reduced cost can be found by solving the above integer [[wikipedia:Knapsack_problem|"knapsack"]] problem (which is a traditional type of problem that we find in integer programming).<br />
<br />
In our example, if we start from the problem restricted to the five single-item patterns, the above problem reads as:<br />
<br />
<math>\begin{align}<br />
\text{min} & ~~ 0.067z_{1}+0.167z_{2}+0.167z_{3}+0.167z_{4}+z_{5}\\<br />
\text{s.t} & ~~ 6z_{1} +13.5z_{2}+15z_{3}+16.5z_{4}+22.5z_{5}\le 33\\<br />
\ & ~~ z_{1},z_{2},z_{3},z_{4},z_{5}\ge 0\\<br />
\end{align}</math><br />
<br />
<br />
which has the following optimal solution: <math>z^T= [1 \quad 0\quad 0\quad 0\quad 1]</math><br />
<br />
This matches the pattern we called <math>D6</math>, earlier on in this page.<br />
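As a sanity check, this pricing subproblem can be solved with a small dynamic program for the unbounded integer knapsack. The sketch below uses the dual values and piece lengths from the example; the only assumption beyond the text is the scaling of lengths and capacity by 2 into integer half-units, so the DP table can be indexed directly.<br />

```python
# Unbounded integer knapsack for the cutting-stock pricing subproblem.
# Lengths 6, 13.5, 15, 16.5, 22.5 and capacity 33 are scaled by 2 so the
# dynamic-programming table can be indexed by integer capacities.

def solve_pricing(values, lengths, capacity):
    """Maximize sum(values[k]*z[k]) s.t. sum(lengths[k]*z[k]) <= capacity,
    with each z[k] a non-negative integer. Returns (best value, pattern z)."""
    best = [0.0] * (capacity + 1)     # best[c]: max value within capacity c
    choice = [-1] * (capacity + 1)    # item picked at capacity c (-1: none)
    for c in range(1, capacity + 1):
        best[c], choice[c] = best[c - 1], -1
        for k, (v, w) in enumerate(zip(values, lengths)):
            if w <= c and best[c - w] + v > best[c]:
                best[c], choice[c] = best[c - w] + v, k
    z, c = [0] * len(values), capacity        # walk back to recover z
    while c > 0:
        k = choice[c]
        if k == -1:
            c -= 1
        else:
            z[k] += 1
            c -= lengths[k]
    return best[capacity], z

u = [0.067, 0.167, 0.167, 0.167, 1.0]   # dual values u_k from the example
L = [12, 27, 30, 33, 45]                # piece lengths in half-units
value, z = solve_pricing(u, L, 66)      # capacity 33 -> 66 half-units
print(z, round(value, 3))               # [1, 0, 0, 0, 1] 1.067
```

Running it recovers the pattern <math>D6</math> (one piece of type 1 and one of type 5), with objective value <math>\approx 1.067</math>.<br />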
<br />
<br />
<u>Optimality test</u><br />
<br />
If : <math display="inline">\sum_{k=1}^{K}z_{k}^{*}u_{k}^{*}\leq 1</math><br />
<br />
then <math>y^*</math> is an optimal solution of the full continuous relaxed problem (that is, including all patterns in ''<math display="inline">S</math>'')<br />
<br />
If this condition is not true, we go ahead and update the master problem by including in ''<math display="inline">S'</math>'' the pattern <math>\lambda</math> defined by <math>N_{s,\lambda}</math> (in practical terms, this means that the column '''<math>z^*</math>''' needs to be included in the constraint matrix)<br />
<br />
For this example we find that the optimality test is met as <math>\sum_{k=1}^{K}z_{k}^{*}u_{k}^{*}=0.4 \leq 1</math> so we have found an optimal solution of the relaxed continuous problem (if this was not the case we would have had to go back to reformulating and solving the master problem, as discussed in the methodology section of this page) <br />
<br />
<br />
<br />
<br />
'''''Algorithm discussion'''''<br />
<br />
The critical part of the method is the column generation subproblem, i.e., generating the new columns. It is not reasonable to compute the reduced costs of all variables <math>y_s</math> for <math>s=1,...,S</math>; otherwise, this procedure would reduce to the simplex method. In fact, <math>S</math> can be very large (as in the cutting-stock problem) or, for some reason, it might not be possible or convenient to enumerate all decision variables. This is why a specific column generation algorithm must be studied for each problem; ''only if such an algorithm exists (and is practical)'' can the method be fully applied. In the one-dimensional cutting stock problem, we transformed the column generation subproblem into an easily solvable integer linear programming problem. In other cases, the computational effort required to solve the subproblem is too high, making the full procedure inefficient.<br />
<br />
== Applications ==<br />
As previously mentioned, column generation techniques are most relevant when the problem we are trying to solve has a much larger number of variables than constraints. As such, some common applications are:<br />
<br />
* Bandwidth packing<br />
* Bus driver scheduling<br />
* Generally, column generation algorithms are used for large delivery networks, often in combination with other methods, helping to implement real-time solutions for on-demand logistics. We discuss a supply chain scheduling application below. <br />
<br />
'''''Bandwidth packing''''' <br />
<br />
The objective of this problem is to allocate bandwidth in a telecommunications network to maximize total revenue. The routing of a set of traffic demands between different users is to be decided, taking into account the capacity of the network arcs and the fact that the traffic between each pair of users cannot be split. The problem can be formulated as an integer programming problem, and its linear programming relaxation can be solved using column generation and the simplex algorithm. One study of bandwidth routing<ref name=":3">Parker, Mark & Ryan, Jennifer. (1993). A column generation algorithm for bandwidth packing. Telecommunication Systems. 2. 185-195. 10.1007/BF02109857. </ref> uses a branch and bound procedure that branches upon a particular path to solve the IP. The column generation algorithm greatly reduces the complexity of this problem. <br />
<br />
'''''Bus driver scheduling'''''<br />
<br />
Bus driver scheduling aims to find the minimum number of bus drivers to cover a published timetable of a bus company. When scheduling bus drivers, contractual working rules must be enforced, thus complicating the problem. A column generation algorithm can decompose this complicated problem into a master problem and a series of pricing subproblems. The master problem would select optimal duties from a set of known feasible duties, and the pricing subproblem would augment the feasible duty set to improve the solution obtained in the master problem.<ref name=":2">Dung‐Ying Lin, Ching‐Lan Hsu. Journal of Advanced Transportation. Volume50, Issue8, December 2016, Pages 1598-1615. URL: https://onlinelibrary.wiley.com/doi/abs/10.1002/atr.1417</ref><br />
<br />
'''''Supply Chain scheduling problem'''''<br />
<br />
A typical application is where we consider the problem of scheduling a set of shipments between different nodes of a supply chain network. Each shipment has a fixed departure time, as well as an origin and a destination node, which, combined, determine the duration of the associated trip. The aim is to schedule as many shipments as possible, while also minimizing the number of vehicles utilized for this purpose. This problem can be formulated by an integer programming model and an associated branch and price solution algorithm. The optimal solution to the LP relaxation of the problem can be obtained through column generation, solving the linear program with a huge number of variables without explicitly considering all of them. In the context of this application, the master problem schedules the maximum possible number of shipments using only a small set of vehicle-routes, and a column generation (colgen) sub-problem would generate cost-effective vehicle-routes to be fed into the master problem. After finding the optimal solution to the LP relaxation of the problem, the algorithm would branch on the fractional decision variables (vehicle-routes), in order to reach the optimal integer solution.<ref name=":1">Kozanidis, George. (2014). Column generation for scheduling shipments within a supply chain network with the minimum number of vehicles. OPT-i 2014 - 1st International Conference on Engineering and Applied Sciences Optimization, Proceedings. 888-898</ref><br />
<br />
== Conclusions ==<br />
Column generation is a way of starting with a small, manageable part of a problem (specifically, with some of the variables), solving that part, analyzing that interim solution to find the next part of the problem (specifically, one or more variables) to add to the model, and then solving the full or extended model. In the column generation method, the algorithm steps are repeated until an optimal solution to the entire problem is achieved.<ref> ILOG CPLEX 11.0 User's Manual > Discrete Optimization > Using Column Generation: a Cutting Stock Example > What Is Column Generation? 1997-2007. URL:http://www-eio.upc.es/lceio/manuals/cplex-11/html/usrcplex/usingColumnGen2.html#:~:text=In%20formal%20terms%2C%20column%20generation,method%20of%20solving%20the%20problem.&text=By%201960%2C%20Dantzig%20and%20Wolfe,problems%20with%20a%20decomposable%20structure</ref><br />
<br />
This algorithm provides a way of solving a linear programming problem by adding columns (corresponding to constrained variables) during the pricing phase of the solution process, a problem that would otherwise be very tedious to formulate and compute. Generating a column in the primal formulation of a linear programming problem corresponds to adding a constraint in its dual formulation.<br />
<br />
== References ==</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=2020_Cornell_Optimization_Open_Textbook_Feedback&diff=27062020 Cornell Optimization Open Textbook Feedback2020-12-21T10:39:35Z<p>Wc593: /* Set covering problem */</p>
<hr />
<div>==[[Computational complexity]]==<br />
<br />
* Numerical Example<br />
*# Finding subsets of a set is NOT O(2<sup>n</sup>).<br />
* Application<br />
*# The applications mentioned need to be discussed further.<br />
<br />
==[[Network flow problem]]==<br />
<br />
* Real Life Applications<br />
*# There is NO need to include code. Simply mention how the problem was coded along with details on the LP solver used.<br />
<br />
==[[Interior-point method for LP]]==<br />
<br />
* Introduction<br />
*# Please type “minimize” and “subject to” in formal optimization problem form throughout the whole page.<br />
* A section to discuss and/or illustrate the applications<br />
*# Please type optimization problem in the formal form.<br />
<br />
==[[Optimization with absolute values]]==<br />
<br />
* An introduction of the topic<br />
*# Add few sentences on how absolute values convert optimization problem into a nonlinear optimization problem<br />
* Applications<br />
*# Inline equations at the beginning of this section are not formatted properly. Please fix the notation for expected return throughout the section.<br />
<br />
==[[Matrix game (LP for game theory)]]==<br />
<br />
* Theory and Algorithmic Discussion<br />
*# aij are not defined in this section.<br />
<br />
==[[Quasi-Newton methods]]==<br />
<br />
* Theory and Algorithm<br />
*# Please ensure that few spaces are kept between the equations and equation numbers.<br />
<br />
==[[Eight step procedures]]==<br />
<br />
* Numerical Example<br />
*# Data for the example Knapsack problem (b,w) are missing.<br />
*# How to arrive at optimal solutions is missing.<br />
<br />
==[[Set covering problem]]==<br />
<br />
* Numerical Example<br />
*# Please leave some space between equation and equation number.<br />
<br />
==[[Quadratic assignment problem]]==<br />
<br />
* Theory, methodology, and/or algorithmic discussions<br />
*# Discuss dynamic programming and cutting plane solution techniques briefly.<br />
<br />
==[[Newsvendor problem]]==<br />
<br />
* Formulation<br />
*# A math programming formulation of the optimization problem with objective function and constraints is expected for the formulation. Please add any variant of the newsvendor problem along with some operational constraints.<br />
*# A mathematical presentation of the solution technique is expected. Please consider any distribution for R and present a solution technique for that specific problem. <br />
<br />
==[[Mixed-integer cuts]]==<br />
<br />
* Applications<br />
*# MILP and their solution techniques involving cuts are extremely versatile. Yet, only two sentences are added to describe their applications. Please discuss their applications, preferably real-world applications, in brief. Example Wikis provided on the website could be used as a reference to do so.<br />
<br />
==[[Column generation algorithms]]==<br />
<br />
* Introduction<br />
*# References at the end of the sentence should be placed after the period.<br />
* Theory, methodology and algorithmic discussions<br />
*# Some minor typos/article agreement issues exist “is not partical in real-world”.<br />
<br />
==[[Heuristic algorithms]]==<br />
<br />
* Methodology<br />
*# Please use proper symbol for "greater than or equal to".<br />
*# Greedy method to solve minimum spanning tree seems to be missing.<br />
<br />
==[[Branch and cut]]==<br />
<br />
* Methodology & Algorithm<br />
*# Equation in most infeasible branching section is not properly formatted.<br />
*# Step 2 appears abruptly in the algorithm and does not explain much. Please add more information regarding the same.<br />
*# Step 5 contains latex code terms that are not properly formatted. Please fix the same.<br />
*# Fix typos: e.g., repeated “for the current”.<br />
<br />
== [[Mixed-integer linear fractional programming (MILFP)]] ==<br />
<br />
* Application and Modeling for Numerical Examples<br />
*# Please check the index notation in Mass Balance Constraint<br />
<br />
==[[Fuzzy programming]]==<br />
<br />
* Applications<br />
*# Applications of fuzzy programming are quite versatile. Please discuss few of the mentioned applications briefly. The provided example Wikis can be used as a reference to write this section.<br />
<br />
==[[Adaptive robust optimization]]==<br />
<br />
* Problem Formulation<br />
*# Please check typos such as "Let ''u'' bee a vector".<br />
*# The abbreviation KKT is not previously defined.<br />
<br />
== [[Stochastic gradient descent]] ==<br />
* Numerical Example<br />
*# Amount of whitespace can be reduced by changing orientation of example dataset by converting it into a table containing 3 rows and 6 columns.<br />
<br />
==[[RMSProp]]==<br />
<br />
* Introduction<br />
*# References at the end of the sentence should be placed after the period.<br />
* Theory and Methodology<br />
*# Please check grammar in this section.<br />
* Applications and Discussion<br />
*# The applications section does not contain any discussion on applications. Please mention a few applications of the widely used RMSprop and discuss them briefly.<br />
<br />
==[[Adam]]==<br />
<br />
* Background<br />
*# References at the end of the sentence should be placed after the period.</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Set_covering_problem&diff=2705Set covering problem2020-12-21T10:38:51Z<p>Wc593: /* Approximation via LP relaxation and rounding */</p>
<hr />
<div>Authors: Sherry Liang, Khalid Alanazi, Kumail Al Hamoud<br />
<br><br />
Steward: Allen Yang, Fengqi You<br />
<br />
== Introduction ==<br />
<br />
The set covering problem is a significant NP-hard problem in combinatorial optimization. Given a collection of elements, the set covering problem aims to find the minimum number of sets that incorporate (cover) all of these elements. <ref name="one"> T. Grossman and A. Wool, [https://www.sciencedirect.com/science/article/abs/pii/S0377221796001610 "Computational experience with approximation algorithms for the set covering problem]," ''European Journal of Operational Research'', vol. 101, pp. 81-92, 1997. </ref><br />
<br />
The importance of the set covering problem has two main aspects: one pedagogical and one practical. <br />
<br />
First, because many greedy approximation methods have been proposed for this combinatorial problem, studying it gives insight into the use of approximation algorithms in solving NP-hard problems. Thus, it is a prime example in teaching computational algorithms. We present a preview of these methods in a later section, and we refer the interested reader to these references for a deeper discussion. <ref name="one" /> <ref name="seven"> P. Slavı́k, [https://www.sciencedirect.com/science/article/abs/pii/S0196677497908877 "A Tight Analysis of the Greedy Algorithm for Set Cover]," ''Journal of Algorithms'', vol. 25, pp. 237-245, 1997. </ref> <ref name="nine"> T. Grossman and A. Wool, [https://www.sciencedirect.com/science/article/abs/pii/S0377221796001610 "What Is the Best Greedy-like Heuristic for the Weighted Set Covering Problem?]," ''Operations Research Letters'', vol. 44, pp. 366-369, 2016. </ref><br />
<br />
Second, many problems in different industries can be formulated as set covering problems. For example, scheduling machines to perform certain jobs can be thought of as covering the jobs. Picking the optimal location for a cell tower so that it covers the maximum number of customers is another set covering application. Moreover, this problem has many applications in the airline industry, and it was explored on an industrial scale as early as the 1970s. <ref name="two"> J. Rubin, [https://www.jstor.org/stable/25767684?seq=1 "A Technique for the Solution of Massive Set Covering Problems, with Application to Airline Crew Scheduling]," ''Transportation Science'', vol. 7, pp. 34-48, 1973. </ref><br />
<br />
== Problem formulation ==<br />
In the set covering problem, two sets are given: a set <math> U </math> of elements and a set <math> S </math> of subsets of the set <math> U </math>. Each subset in <math> S </math> is associated with a predetermined cost, and the union of all the subsets covers the set <math> U </math>. This combinatorial problem then concerns finding a collection of subsets whose union covers the universal set while minimizing the total cost.<ref name="one"> T. Grossman and A. Wool, [https://www.sciencedirect.com/science/article/abs/pii/S0377221796001610 "Computational experience with approximation algorithms for the set covering problem]," ''European Journal of Operational Research'', vol. 101, pp. 81-92, 1997. </ref> <ref name="twelve"> Williamson, David P., and David B. Shmoys. “The Design of Approximation Algorithms” [https://www.designofapproxalgs.com/book.pdf]. “Cambridge University Press”, 2011. </ref><br />
<br />
The mathematical formulation of the set covering problem is defined as follows. We define <math> U </math> = { <math> u_1,..., u_m </math>} as the universe of elements and <math> S </math> = { <math> s_1,..., s_n </math>} as a collection of subsets such that <math> s_i \subset U </math> and the union of <math> s_i</math> covers all elements in <math> U </math> (i.e. <math>\cup_i s_i</math> = <math> U </math> ). Additionally, each set <math> s_i</math> must cover at least one element of <math> U </math> and has an associated cost <math> c_i</math> such that <math> c_i > 0</math>. The objective is to find the minimum-cost sub-collection of sets <math> X </math> <math>\subset</math> <math> S </math> that covers all the elements in the universe <math> U </math>.<br />
<br />
== Integer linear program formulation ==<br />
An integer linear program (ILP) model can be formulated for the minimum set covering problem as follows:<ref name="one"> T. Grossman and A. Wool, [https://www.sciencedirect.com/science/article/abs/pii/S0377221796001610 "Computational experience with approximation algorithms for the set covering problem]," ''European Journal of Operational Research'', vol. 101, pp. 81-92, 1997. </ref><br />
<br />
'''Decision variables'''<br />
<br />
<math> y_i = \begin{cases} 1, & \text{if subset }i\text{ is selected} \\ 0, & \text{otherwise } \end{cases}</math><br />
<br />
'''Objective function'''<br />
<br />
minimize <math>\sum_{i=1}^n c_i y_i</math> <br />
<br />
'''Constraints '''<br />
<br />
<math> \sum_{i:\, u_j \in s_i} y_i \geq 1, \quad \forall j= 1,....,m</math> <br />
<br />
<math> y_i \in \{0, 1\}, \forall i = 1,....,n</math> <br />
<br />
The objective function <math>\sum_{i=1}^n c_i y_i</math> minimizes the total cost of the subsets <math> s_i</math> selected to cover all elements in the universe. The first constraint implies that every element in the universe <math> U </math> must be covered, and the second constraint <math> y_i \in \{0, 1\} </math> indicates that the decision variables are binary, which means that every set is either in the set cover or not.<br />
<br />
Set covering is a significant NP-hard optimization problem, which means that no polynomial-time algorithm is known that solves every instance to optimality. Therefore, approximation algorithms exist that can solve large-scale problems in polynomial time with optimal or near-optimal solutions. In subsequent sections, we will cover two of the most widely used approximation methods for solving the set cover problem in polynomial time: linear programming relaxation methods and the classical greedy algorithm. <ref name="seven" /><br />
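As a small illustration of the ILP model above, the sketch below enumerates every binary vector <math>y</math> on a tiny weighted instance. The data are invented for illustration; realistic instances require an ILP solver or the approximation methods discussed next.<br />

```python
# Brute-force evaluation of the set-cover ILP on a tiny weighted instance.
from itertools import product

U = {1, 2, 3, 4}                         # universe of elements
S = [{1, 2}, {2, 3}, {3, 4}, {1, 4}]     # subsets s_i
c = [2.0, 1.0, 2.0, 1.0]                 # costs c_i

best_cost, best_y = float("inf"), None
for y in product([0, 1], repeat=len(S)):          # all binary vectors y
    covered = set().union(*[S[i] for i in range(len(S)) if y[i]])
    if covered == U:                              # coverage constraints hold
        cost = sum(c[i] * y[i] for i in range(len(S)))
        if cost < best_cost:
            best_cost, best_y = cost, y

print(best_y, best_cost)                 # (0, 1, 0, 1) 2.0
```

Here the two cheap sets <math>s_2</math> and <math>s_4</math> together cover the universe at total cost 2, and the enumeration confirms no cheaper cover exists.<br />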
<br />
== Approximation via LP relaxation and rounding ==<br />
Set covering is a classical integer programming problem, and solving integer programs in general is NP-hard. Therefore, one approach to achieve an <math> O</math>(log<math>n</math>) approximation to the set covering problem in polynomial time is solving via linear programming (LP) relaxation algorithms.<ref name="one"> T. Grossman and A. Wool, [https://www.sciencedirect.com/science/article/abs/pii/S0377221796001610 "Computational experience with approximation algorithms for the set covering problem]," ''European Journal of Operational Research'', vol. 101, pp. 81-92, 1997. </ref> <ref name="twelve"> Williamson, David P., and David B. Shmoys. “The Design of Approximation Algorithms” [https://www.designofapproxalgs.com/book.pdf]. “Cambridge University Press”, 2011. </ref> In LP relaxation, we relax the integrality requirement into linear constraints. For instance, if we replace the constraints <math> y_i \in \{0, 1\}</math> with the constraints <math> 0 \leq y_i \leq 1 </math>, we obtain the following LP problem that can be solved in polynomial time:<br />
<br />
minimize <math>\sum_{i=1}^n c_i y_i</math> <br />
<br />
subject to <math> \sum_{i:\, u_j \in s_i} y_i \geq 1, \quad \forall j= 1,....,m</math> <br />
<br />
<math> 0 \leq y_i\leq 1, \forall i = 1,....,n</math><br />
<br />
The above LP formulation is a relaxation of the original ILP set cover problem. This means that every feasible solution of the integer program is also feasible for this LP program. Additionally, the value of any feasible solution for the integer program is the same in the LP, since the objective functions of both the integer and linear programs are the same. Solving the LP program results in an optimal solution that is a lower bound for the original integer program, since the minimization over the larger LP feasible region finds a feasible solution of lowest possible value. Moreover, we use LP rounding algorithms to directly round the fractional LP solution to an integral combinatorial solution as follows:<br />
<br><br />
<br />
<br />
'''Deterministic rounding algorithm''' <br />
<br><br />
<br />
Suppose we have an optimal solution <math> z^* </math> for the linear programming relaxation of the set cover problem. We round the fractional solution <math> z^* </math> to an integer solution <math> z </math> using an LP rounding algorithm. In general, there are two approaches for rounding algorithms, deterministic and randomized. In this section, we will explain the deterministic algorithm. In this approach, we include subset <math> s_i </math> in our solution if <math> z_i^* \geq 1/d </math>, where <math> d </math> is the maximum number of sets in which any element appears. In practice, we set <math> z_i </math> as follows:<ref name="twelve"> Williamson, David P., and David B. Shmoys. “The Design of Approximation Algorithms” [https://www.designofapproxalgs.com/book.pdf]. “Cambridge University Press”, 2011. </ref><br />
<br />
<math> z_i = \begin{cases} 1, & \text{if } z_i^*\geq 1/d \\ 0, & \text{otherwise } \end{cases}</math><br />
<br />
The rounding algorithm is an approximation algorithm for the set cover problem. It is clear that the algorithm converges in polynomial time and that <math> z </math> is a feasible solution to the integer program.<br />
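The rounding step itself is a one-liner once a fractional solution is available. A minimal sketch, assuming a hypothetical LP optimum <math>z^*</math> (the values below are invented; a real run would obtain them from an LP solver):<br />

```python
# Deterministic rounding of a fractional LP set-cover solution.

def round_lp_solution(z_star, d):
    """Round each fractional z_i* up to 1 if it is at least 1/d,
    where d is the max number of sets containing any element."""
    return [1 if zi >= 1.0 / d else 0 for zi in z_star]

z_star = [0.5, 0.0, 0.25, 0.5]   # hypothetical LP optimum
d = 2                            # max frequency of any element
print(round_lp_solution(z_star, d))   # [1, 0, 0, 1]
```

Feasibility follows because every covering constraint has at most <math>d</math> terms, so at least one variable in each constraint must be <math>\geq 1/d</math> and therefore gets rounded up.<br />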
<br />
== Greedy approximation algorithm ==<br />
Greedy algorithms can be used to approximate optimal or near-optimal solutions for large-scale set covering instances in polynomial time. <ref name="seven" /> <ref name="nine" /> The greedy heuristic applies an iterative process that, at each stage, selects the set covering the largest number of still-uncovered elements of the universe <math> U </math> and removes those elements from the uncovered set, until all elements are covered. <ref name="ten"> V. Chvatal, [https://pubsonline.informs.org/doi/abs/10.1287/moor.4.3.233 "Greedy Heuristic for the Set-Covering Problem]," ''Mathematics of Operations Research'', vol. 4, pp. 233-235, 1979. </ref> Let <math> T </math> be the set that contains the covered elements, and <math> U </math> be the set that contains the elements that are still uncovered. At the beginning of the algorithm, <math> T </math> is empty and all elements are in <math> U </math>. We iteratively select the set in <math> S </math> that covers the largest number of elements in <math> U </math> and add it to the covered elements in <math> T </math>. An example of this algorithm is presented below. <br />
<br />
'''Greedy algorithm for minimum set cover example: '''<br />
<br />
Step 0: <math> \quad </math> <math> T \leftarrow \Phi </math> <math> \quad \quad \quad \quad \quad </math> { <math> T </math> stores the covered elements }<br />
<br />
Step 1: <math> \quad </math> '''While''' <math> U \neq \Phi </math> '''do:''' <math> \quad </math> { <math> U </math> stores the uncovered elements }<br />
<br />
Step 2: <math> \quad \quad \quad </math> select <math> s_i \in S </math> that covers the highest number of elements in <math> U </math><br />
<br />
Step 3: <math> \quad \quad \quad </math> add <math> s_i </math> to <math> T </math><br />
<br />
Step 4: <math> \quad \quad \quad </math> remove the elements of <math> s_i </math> from <math> U </math><br />
<br />
Step 5: <math> \quad </math> '''End while''' <br />
<br />
Step 6: <math> \quad </math> '''Return''' <math> T </math><br />
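The pseudocode above can be sketched in a few lines of Python; the tiny instance below is invented for illustration (ties are broken by dictionary order).<br />

```python
# Greedy set-cover heuristic (unit costs), following the steps above.

def greedy_set_cover(universe, subsets):
    """subsets: dict mapping a set name to its elements.
    Returns the names chosen, in the order they were picked."""
    uncovered = set(universe)   # U: still-uncovered elements
    chosen = []                 # names of the sets added to the cover T
    while uncovered:
        # Step 2: pick the set covering the most uncovered elements
        best = max(subsets, key=lambda name: len(subsets[name] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("remaining elements cannot be covered")
        chosen.append(best)            # Step 3: add s_i to the cover
        uncovered -= subsets[best]     # Step 4: remove its elements from U
    return chosen

subsets = {"A": {1, 2, 3}, "B": {2, 4}, "C": {3, 4, 5}}
print(greedy_set_cover({1, 2, 3, 4, 5}, subsets))   # ['A', 'C']
```
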
<br />
==Numerical Example==<br />
Let’s consider a simple example where we assign cameras at different locations. Each location covers some areas of stadiums, and our goal is to install the fewest cameras such that all areas of stadiums are covered. We have stadium areas from 1 to 15, and possible camera locations from 1 to 8.<br />
<br />
We are given that camera location 1 covers stadium areas {1,3,4,6,7}, camera location 2 covers stadium areas {4,7,8,12}, while the remaining camera locations and the stadium areas that the cameras can cover are given in table 1 below:<br />
{| class="wikitable"<br />
|+Table 1 Camera Location vs Stadium Area<br />
|-<br />
!camera Location<br />
|1<br />
|2<br />
|3<br />
|4<br />
|5<br />
|6<br />
|7<br />
|8<br />
|-<br />
!stadium area<br />
|1,3,4,6,7<br />
|4,7,8,12<br />
|2,5,9,11,13<br />
|1,2,14,15<br />
|3,6,10,12,14<br />
|8,14,15<br />
|1,2,6,11<br />
|1,2,4,6,8,12<br />
|}<br />
<br />
We can then represent the above information using binary values. If the stadium area <math>i</math> can be covered with camera location <math>j</math>, then we have <math>y_{ij} = 1</math>. If not, <math>y_{ij} = 0</math>. For instance, stadium area 1 is covered by camera location 1, so <math>y_{11} = 1</math>, while stadium area 1 is not covered by camera location 2, so <math>y_{12} = 0</math>. The values of the binary variables <math>y_{ij}</math> are given in the table below: <br />
{| class="wikitable"<br />
|+Table 2 Binary Table (All Camera Locations and Stadium Areas)<br />
!<br />
!Camera1<br />
!Camera2<br />
!Camera3<br />
!Camera4<br />
!Camera5<br />
!Camera6<br />
!Camera7<br />
!Camera8<br />
|-<br />
!Stadium1<br />
|1<br />
|<br />
|<br />
|1<br />
|<br />
|<br />
|1<br />
|1<br />
|-<br />
!Stadium2<br />
|<br />
|<br />
|1<br />
|1<br />
|<br />
|<br />
|1<br />
|1<br />
|-<br />
!Stadium3<br />
|1<br />
|<br />
|<br />
|<br />
|1<br />
|<br />
|<br />
|<br />
|-<br />
!Stadium4<br />
|1<br />
|1<br />
|<br />
|<br />
|<br />
|<br />
|<br />
|1<br />
|-<br />
!Stadium5<br />
|<br />
|<br />
|1<br />
|<br />
|<br />
|<br />
|<br />
|<br />
|-<br />
!Stadium6<br />
|1<br />
|<br />
|<br />
|<br />
|1<br />
|<br />
|1<br />
|1<br />
|-<br />
!Stadium7<br />
|1<br />
|1<br />
|<br />
|<br />
|<br />
|<br />
|<br />
|<br />
|-<br />
!Stadium8<br />
|<br />
|1<br />
|<br />
|<br />
|<br />
|1<br />
|<br />
|1<br />
|-<br />
!Stadium9<br />
|<br />
|<br />
|1<br />
|<br />
|<br />
|<br />
|<br />
|<br />
|-<br />
!Stadium10<br />
|<br />
|<br />
|<br />
|<br />
|1<br />
|<br />
|<br />
|<br />
|-<br />
!Stadium11<br />
|<br />
|<br />
|1<br />
|<br />
|<br />
|<br />
|1<br />
|<br />
|-<br />
!Stadium12<br />
|<br />
|1<br />
|<br />
|<br />
|1<br />
|<br />
|<br />
|1<br />
|-<br />
!Stadium13<br />
|<br />
|<br />
|1<br />
|<br />
|<br />
|<br />
|<br />
|<br />
|-<br />
!Stadium14<br />
|<br />
|<br />
|<br />
|1<br />
|1<br />
|1<br />
|<br />
|<br />
|-<br />
!Stadium15<br />
|<br />
|<br />
|<br />
|1<br />
|<br />
|1<br />
|<br />
|<br />
|}<br />
<br />
<br />
<br />
We introduce another binary variable <math>z_j</math> to indicate if a camera is installed at location <math>j</math>. <math>z_j = 1</math> if camera is installed at location <math>j</math>, while <math>z_j = 0</math> if not. <br />
<br />
Our objective is to minimize <math>\sum_{j=1}^8 z_j</math>. For each stadium, there’s a constraint that stadium area <math>i</math> has to be covered by at least one camera location. For instance, for stadium area 1, we have <math>z_1 + z_4 + z_7 + z_8 \geq 1</math>, while for stadium 2, we have <math>z_3 + z_4 + z_7 + z_8 \geq 1</math>. All 15 constraints, corresponding to the 15 stadium areas, are listed below:<br />
<br />
<br />
<br />
minimize <math>\sum_{j=1}^8 z_j</math> <br />
<br />
''s.t. Constraints 1 to 15 are satisfied:''<br />
<br />
<math> z_1 + z_4 + z_7 + z_8 \geq 1 \quad (1)</math><br />
<br />
<math> z_3 + z_4 + z_7 + z_8 \geq 1 \quad (2)</math><br />
<br />
<math> z_1 + z_5 \geq 1 \quad (3)</math><br />
<br />
<math> z_1 + z_2 + z_8 \geq 1 \quad (4)</math><br />
<br />
<math> z_3 \geq 1 \quad (5)</math><br />
<br />
<math>z_1 + z_5 + z_7 + z_8 \geq 1 \quad (6)</math><br />
<br />
<math>z_1 + z_2 \geq 1 \quad (7)</math><br />
<br />
<math>z_2 + z_6 + z_8 \geq 1 \quad (8)</math><br />
<br />
<math>z_3 \geq 1 \quad (9)</math><br />
<br />
<math>z_5 \geq 1 \quad (10)</math><br />
<br />
<math>z_3 + z_7 \geq 1 \quad (11)</math><br />
<br />
<math>z_2 + z_5 + z_8 \geq 1 \quad (12)</math><br />
<br />
<math>z_3 \geq 1 \quad (13)</math><br />
<br />
<math>z_4 + z_5 + z_6 \geq 1 \quad (14)</math><br />
<br />
<math>z_4 + z_6 \geq 1 \quad (15)</math><br />
<br />
<br />
From constraints (5), (9), and (13), we obtain <math>z_3 = 1</math>. Thus we no longer need constraints 2 and 11, as they are satisfied when <math>z_3 = 1</math>. With <math>z_3 = 1</math> determined, the constraints left are:<br />
<br />
<br />
minimize <math>\sum_{j=1}^8 z_j</math>, <br />
<br />
s.t.:<br />
<br />
<math>z_1 + z_4 + z_7 + z_8 \geq 1 \quad (1)</math><br />
<br />
<math>z_1 + z_5 \geq 1 \quad (3)</math><br />
<br />
<math>z_1 + z_2 + z_8 \geq 1 \quad (4)</math><br />
<br />
<math>z_1 + z_5 + z_7 + z_8 \geq 1 \quad (6)</math><br />
<br />
<math>z_1 + z_2 \geq 1 \quad (7)</math><br />
<br />
<math>z_2 + z_6 + z_8 \geq 1 \quad (8)</math><br />
<br />
<math>z_5 \geq 1 \quad (10)</math><br />
<br />
<math>z_2 + z_5 + z_8 \geq 1 \quad (12)</math><br />
<br />
<math>z_4 + z_5 + z_6 \geq 1 \quad (14)</math><br />
<br />
<math>z_4 + z_6 \geq 1 \quad (15)</math><br />
<br />
<br />
Now consider constraint (10): <math>z_5 \geq 1</math>, so <math>z_5</math> must equal 1. As <math>z_5 = 1</math>, constraints {3,6,12,14} are satisfied no matter what other <math>z</math> values are taken. Looking at constraints 7 and 4, constraint 4 is satisfied whenever constraint 7 is satisfied, since the <math>z</math> values are nonnegative, so constraint 4 is no longer needed. The remaining constraints are:<br />
<br />
<br />
minimize <math>\sum_{j=1}^8 z_j</math><br />
<br />
s.t.:<br />
<br />
<math>z_1 + z_4 + z_7 + z_8 \geq 1 \quad (1)</math><br />
<br />
<math>z_1 + z_2 \geq 1 \quad (7)</math><br />
<br />
<math>z_2 + z_6 + z_8 \geq 1 \quad (8)</math><br />
<br />
<math>z_4 + z_6 \geq 1 \quad (15)</math><br />
<br />
<br />
The next step is to focus on constraints 7 and 15. Choosing one variable from each constraint gives four minimal combinations of <math>z_1, z_2, z_4, z_6</math> values.<br />
<br />
<br />
<math>A: z_1 = 1, z_2 = 0, z_4 = 1, z_6 = 0</math><br />
<br />
<math>B: z_1 = 1, z_2 = 0, z_4 = 0, z_6 = 1</math><br />
<br />
<math>C: z_1 = 0, z_2 = 1, z_4 = 1, z_6 = 0</math><br />
<br />
<math>D: z_1 = 0, z_2 = 1, z_4 = 0, z_6 = 1</math><br />
<br />
<br />
We can then discuss each combination and determine <math>z_7, z_8</math> values for constraints 1 and 8 to be satisfied.<br />
<br />
<br />
Combination <math>A</math>: constraint 1 already satisfied, we need <math>z_8 = 1</math> to satisfy constraint 8.<br />
<br />
Combination <math>B</math>: constraint 1 already satisfied, constraint 8 already satisfied.<br />
<br />
Combination <math>C</math>: constraint 1 already satisfied, constraint 8 already satisfied.<br />
<br />
Combination <math>D</math>: we need <math>z_7 = 1</math> or <math>z_8 = 1</math> to satisfy constraint 1, while constraint 8 already satisfied.<br />
<br />
Our final step is to compare the four combinations. Since our objective is to minimize <math>\sum_{j=1}^8 z_j</math> and combinations <math>B</math> and <math>C</math> require the fewest <math>z_j</math> to be 1, they are the optimal solutions.<br />
<br />
To conclude, our two solutions are:<br />
<br />
Solution 1: <math>z_1 = 1, z_3 = 1, z_5 = 1, z_6 = 1</math><br />
<br />
Solution 2: <math>z_2 = 1, z_3 = 1, z_4 = 1, z_5 = 1</math><br />
<br />
The minimum number of cameras that we need to install is 4.<br />
<br />
<br />
<br />
<br />
'''Let's now consider solving the problem using the greedy algorithm.''' <br />
<br />
We have a set <math>U</math> (stadium areas) that needs to be covered with <math>C</math> (camera locations). <br />
<br />
<br />
<math>U = \{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15\}</math><br />
<br />
<math>C = \{C_1,C_2,C_3,C_4,C_5,C_6,C_7,C_8\}</math><br />
<br />
<math>C_1 = \{1,3,4,6,7\} </math><br />
<br />
<math>C_2 = \{4,7,8,12\}</math><br />
<br />
<math>C_3 = \{2,5,9,11,13\}</math><br />
<br />
<math>C_4 = \{1,2,14,15\}</math><br />
<br />
<math>C_5 = \{3,6,10,12,14\}</math><br />
<br />
<math>C_6 = \{8,14,15\}</math><br />
<br />
<math>C_7 = \{1,2,6,11\}</math><br />
<br />
<math>C_8 = \{1,2,4,6,8,12\} </math><br />
<br />
<br />
The cost of each camera location is the same in this case; since we simply want to minimize the total number of cameras used, we can take the cost of each <math>C</math> to be 1.<br />
<br />
Let <math>I</math> represent the set of elements covered so far. Initialize <math>I</math> to be empty.<br />
<br />
First Iteration: <br />
<br />
The per new element cost for <math>C_1 = 1/5</math>, for <math>C_2 = 1/4</math>, for <math>C_3 = 1/5</math>, for <math>C_4 = 1/4</math>, for <math>C_5 = 1/5</math>, for <math>C_6 = 1/3</math>, for <math>C_7 = 1/4</math>, for <math>C_8 = 1/6</math><br />
<br />
Since <math>C_8</math> has minimum value, <math>C_8</math> is added, and <math>I</math> becomes <math>\{1,2,4,6,8,12\}</math>.<br />
<br />
Second Iteration: <br />
<br />
<math>I</math> = <math>\{1,2,4,6,8,12\}</math><br />
<br />
The per new element cost for <math>C_1 = 1/2</math>, for <math>C_2 = 1/1</math>, for <math>C_3 = 1/4</math>, for <math>C_4 = 1/2</math>, for <math>C_5 = 1/3</math>, for <math>C_6 = 1/2</math>, for <math>C_7 = 1/1</math><br />
<br />
Since <math>C_3</math> has minimum value, <math>C_3</math> is added, and <math>I</math> becomes <math>\{1,2,4,5,6,8,9,11,12,13\}</math>.<br />
<br />
Third Iteration:<br />
<br />
<math>I</math> = <math>\{1,2,4,5,6,8,9,11,12,13\}</math><br />
<br />
The per new element cost for <math>C_1 = 1/2</math>, for <math>C_2 = 1/1</math>, for <math>C_4 = 1/2</math>, for <math>C_5 = 1/3</math>, for <math>C_6 = 1/2</math>, while <math>C_7</math> covers no new elements, so its per new element cost is undefined.<br />
<br />
Since <math>C_5</math> has minimum value, <math>C_5</math> is added, and <math>I</math> becomes <math>\{1,2,3,4,5,6,8,9,10,11,12,13,14\}</math>.<br />
<br />
Fourth Iteration:<br />
<br />
<math>I</math> = <math>\{1,2,3,4,5,6,8,9,10,11,12,13,14\}</math><br />
<br />
The per new element cost for <math>C_1 = 1/1</math>, for <math>C_2 = 1/1</math>, for <math>C_4 = 1/1</math>, for <math>C_6 = 1/1</math>, while <math>C_7</math> covers no new elements, so its per new element cost is undefined.<br />
<br />
Elements <math>7</math> and <math>15</math> are still uncovered, and no single remaining set covers both: <math>C_1</math> or <math>C_2</math> adds element <math>7</math> to <math>I</math>, and <math>C_4</math> or <math>C_6</math> adds element <math>15</math>. The greedy algorithm therefore needs two more sets; we can choose, for example, <math>C_1</math> and <math>C_6</math> or <math>C_2</math> and <math>C_6</math>.<br />
<br />
<math>I</math> becomes <math>\{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15\}</math>.<br />
<br />
The solution we obtained is: <br />
<br />
Option 1: <math>C_8</math> + <math>C_3</math> + <math>C_5</math> + <math>C_6</math> + <math>C_1</math><br />
<br />
Option 2: <math>C_8</math> + <math>C_3</math> + <math>C_5</math> + <math>C_6</math> + <math>C_2</math><br />
<br />
The greedy algorithm does not provide the optimal solution in this case.<br />
<br />
The exhaustive elimination approach above gives a minimum of 4 cameras, but the greedy algorithm returns a cover that uses 5 cameras.<br />
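This gap can be checked mechanically. The following sketch, using only the Python standard library, enumerates all <math>2^8</math> subsets of camera locations to confirm that the optimal cover uses 4 cameras, and runs the per-new-element greedy rule to confirm that it selects 5:<br />

```python
from itertools import combinations

# Camera coverage sets from Table 1.
cams = {
    1: {1, 3, 4, 6, 7},
    2: {4, 7, 8, 12},
    3: {2, 5, 9, 11, 13},
    4: {1, 2, 14, 15},
    5: {3, 6, 10, 12, 14},
    6: {8, 14, 15},
    7: {1, 2, 6, 11},
    8: {1, 2, 4, 6, 8, 12},
}
U = set(range(1, 16))

def min_cover_size():
    # Exhaustive search: feasible here since there are only 2^8 candidates.
    for k in range(1, len(cams) + 1):
        for combo in combinations(cams, k):
            if set().union(*(cams[c] for c in combo)) == U:
                return k

def greedy_cover():
    uncovered, chosen = set(U), []
    while uncovered:
        # Pick the set covering the most uncovered elements
        # (lowest cost per new element, all costs being 1).
        best = max(cams, key=lambda c: len(cams[c] & uncovered))
        chosen.append(best)
        uncovered -= cams[best]
    return chosen

print(min_cover_size())     # 4 with the data above
print(len(greedy_cover()))  # 5 with the data above
```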
<br />
== Applications==<br />
<br />
The set covering problem spans a wide range of applications, but its usefulness is especially evident in industrial and governmental planning. Variations of the set covering problem that are of practical significance include the following.<br />
;The optimal location problem<br />
<br />
This set covering problem is concerned with maximizing the coverage of public facilities placed at different locations. <ref name="three"> R. Church and C. ReVelle, [https://link.springer.com/article/10.1007/BF01942293 "The maximal covering location problem]," ''Papers of the Regional Science Association'', vol. 32, pp. 101-118, 1974. </ref> Consider the problem of placing fire stations to serve the towns of some city. <ref name="four"> E. Aktaş, Ö. Özaydın, B. Bozkaya, F. Ülengin, and Ş. Önsel, [https://pubsonline.informs.org/doi/10.1287/inte.1120.0671 "Optimizing Fire Station Locations for the Istanbul Metropolitan Municipality]," ''Interfaces'', vol. 43, pp. 240-255, 2013. </ref> If each fire station can serve its own town and all adjacent towns, we can formulate a set covering problem where each subset consists of a town together with its adjacent towns. The problem is then solved to minimize the number of fire stations required to serve the whole city. <br />
<br />
Let <math> y_i </math> be the decision variable corresponding to choosing to build a fire station at town <math> i </math>. Let <math> S_i </math> be a subset of towns including town <math> i </math> and all its neighbors. The problem is then formulated as follows.<br />
<br />
minimize <math>\sum_{i=1}^n y_i</math> <br />
<br />
such that <math> \sum_{j\in S_i} y_j \geq 1, \forall i</math> <br />
<br />
A real-world case study involving optimizing fire station locations in Istanbul is analyzed in this reference. <ref name="four" /> The Istanbul municipality serves 790 subdistricts, which should all be covered by a fire station. Each subdistrict is considered covered if it has a neighboring district (a district at most 5 minutes away) that has a fire station. For detailed computational analysis, we refer the reader to the mentioned academic paper.<br />
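As a minimal sketch of this formulation (the town graph below is hypothetical, chosen only for illustration), we can build each <math> S_i </math> from an adjacency list and find the fewest stations by exhaustive search, which is exact for small instances:<br />

```python
from itertools import combinations

def min_stations(adj):
    # S_i: a station at town i serves town i and all its neighbors.
    S = {t: {t} | adj[t] for t in adj}
    towns = set(adj)
    # Try placements of increasing size; return the first full cover.
    for k in range(1, len(towns) + 1):
        for combo in combinations(sorted(towns), k):
            if set().union(*(S[t] for t in combo)) == towns:
                return k, combo

# Hypothetical four-town line graph: 1 - 2 - 3 - 4.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(min_stations(adj))  # (2, (1, 3)): stations at towns 1 and 3
```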
; The optimal route selection problem<br />
<br />
Consider the problem of selecting the optimal bus routes to place pothole detectors. Due to the scarcity of the physical sensors, the problem does not allow for placing a detector on every road. The task of finding the maximum coverage using a limited number of detectors can be formulated as a set covering problem. <ref name="five"> J. Ali and V. Dyo, [https://www.scitepress.org/Link.aspx?doi=10.5220/0006469800830088 "Coverage and Mobile Sensor Placement for Vehicles on Predetermined Routes: A Greedy Heuristic Approach]," ''Proceedings of the 14th International Joint Conference on E-Business and Telecommunications'', pp. 83-88, 2017. </ref> <ref name="eleven"> P.H. Cruz Caminha , R. De Souza Couto , L.H. Maciel Kosmalski Costa , A. Fladenmuller , and M. Dias de Amorim, [https://www.mdpi.com/1424-8220/18/6/1976 "On the Coverage of Bus-Based Mobile Sensing]," ''Sensors'', 2018. </ref> Specifically, we are given a collection of bus routes '''''R''''', where each route is divided into segments. Route <math> i </math> is denoted by <math> R_i </math>, and segment <math> j </math> is denoted by <math> S_j </math>. The segments of two different routes can overlap, and each segment is associated with a length <math> a_j </math>. The goal is then to select the routes that maximize the total covered distance.<br />
<br />
This is quite different from other applications because it results in a maximization formulation rather than a minimization formulation. Suppose we want to use at most <math> k </math> different routes. We want to find <math> k </math> routes that maximize the total length of covered segments. Let <math> x_i </math> be the binary decision variable corresponding to selecting route <math> R_i </math>, and let <math> y_j </math> be the decision variable associated with covering segment <math> S_j </math>. Let us also denote the set of routes that cover segment <math> j </math> by <math> C_j </math>. The problem is then formulated as follows.<br />
<br />
<math><br />
\begin{align}<br />
\text{max} & ~~ \sum_{j} a_jy_j\\<br />
\text{s.t} & ~~ \sum_{i\in C_j} x_i \geq y_j \quad \forall j \\<br />
& ~~ \sum_{i} x_i = k \\ <br />
& ~~ x_i,y_{j} \in \{0,1\} \\<br />
\end{align}<br />
</math><br />
<br />
The work by Ali and Dyo explores a greedy approximation algorithm to solve an optimal selection problem including 713 bus routes in Greater London. <ref name="five" /> Using only 14% of the routes (100 routes), the greedy algorithm returns a solution that covers 25% of the segments in Greater London. For details of the approximation algorithm and the real-world case study, we refer the reader to this reference. <ref name="five" /> For a significantly larger case study involving 5747 buses covering 5060km, we refer the reader to this academic article. <ref name="eleven" /><br />
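A toy version of this maximization can be solved exactly by enumerating all <math> k </math>-subsets of routes. The route and segment data below are made up for illustration and are not taken from the cited studies:<br />

```python
from itertools import combinations

# Hypothetical routes, each a set of segment ids (toy data).
routes = {
    "R1": {1, 2, 3},
    "R2": {3, 4},
    "R3": {4, 5, 6},
    "R4": {1, 6},
}
# Segment lengths a_j (toy data).
seg_len = {1: 2.0, 2: 1.5, 3: 3.0, 4: 1.0, 5: 2.5, 6: 0.5}

def best_k_routes(routes, seg_len, k):
    # Enumerate every k-subset of routes and keep the one whose union
    # of segments has the greatest total length. Exact but exponential,
    # so only suitable for small instances.
    best, best_len = None, -1.0
    for combo in combinations(routes, k):
        covered = set().union(*(routes[r] for r in combo))
        length = sum(seg_len[s] for s in covered)
        if length > best_len:
            best, best_len = combo, length
    return best, best_len

print(best_k_routes(routes, seg_len, 2))  # (('R1', 'R3'), 10.5)
```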
;The airline crew scheduling problem<br />
<br />
An important application of large-scale set covering is the airline crew scheduling problem, which pertains to assigning airline staff to work shifts. <ref name="two" /> <ref name="six"> E. Marchiori and A. Steenbeek, [https://link.springer.com/chapter/10.1007/3-540-45561-2_36 "An Evolutionary Algorithm for Large Scale Set Covering Problems with Application to Airline Crew Scheduling]," ''Real-World Applications of Evolutionary Computing. EvoWorkshops 2000. Lecture Notes in Computer Science'', 2000. </ref> Thinking of the collection of flights as a universal set to be covered, we can formulate a set covering problem to search for the optimal assignment of employees to flights. Due to the complexity of airline schedules, this problem is usually divided into two subproblems: crew pairing and crew assignment. We refer the interested reader to this survey, which contains several problem instances with the number of flights ranging from 1013 to 7765 flights, for a detailed analysis of the formulation and algorithms that pertain to this significant application. <ref name="two" /> <ref name="eight"> A. Kasirzadeh, M. Saddoune, and F. Soumis [https://www.sciencedirect.com/science/article/pii/S2192437620300820?via%3Dihub "Airline crew scheduling: models, algorithms, and data sets]," ''EURO Journal on Transportation and Logistics'', vol. 6, pp. 111-137, 2017. </ref><br />
<br />
==Conclusion ==<br />
<br />
The set covering problem, which aims to find the least number of subsets that cover some universal set, is a widely known NP-hard combinatorial problem. Due to its applicability to route planning and airline crew scheduling, several methods have been proposed to solve it. Its straightforward formulation allows for the use of off-the-shelf optimizers to solve it. Moreover, heuristic techniques and greedy algorithms can be used to solve large-scale set covering problems for industrial applications. <br />
<br />
== References ==<br />
<references /></div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Set_covering_problem&diff=2704Set covering problem2020-12-21T10:37:02Z<p>Wc593: /* Integer linear program formulation */</p>
<hr />
<div>Authors: Sherry Liang, Khalid Alanazi, Kumail Al Hamoud<br />
<br><br />
Steward: Allen Yang, Fengqi You<br />
<br />
== Introduction ==<br />
<br />
The set covering problem is a significant NP-hard problem in combinatorial optimization. Given a collection of elements, the set covering problem aims to find the minimum number of sets that incorporate (cover) all of these elements. <ref name="one"> T. Grossman and A. Wool, [https://www.sciencedirect.com/science/article/abs/pii/S0377221796001610 "Computational experience with approximation algorithms for the set covering problem]," ''European Journal of Operational Research'', vol. 101, pp. 81-92, 1997. </ref><br />
<br />
The importance of the set covering problem has two main aspects: one is pedagogical, and the other is practical. <br />
<br />
First, because many greedy approximation methods have been proposed for this combinatorial problem, studying it gives insight into the use of approximation algorithms in solving NP-hard problems. Thus, it is a prime example in teaching computational algorithms. We present a preview of these methods in a later section, and we refer the interested reader to these references for a deeper discussion. <ref name="one" /> <ref name="seven"> P. Slavı́k, [https://www.sciencedirect.com/science/article/abs/pii/S0196677497908877 "A Tight Analysis of the Greedy Algorithm for Set Cover]," ''Journal of Algorithms'', vol. 25, pp. 237-245, 1997. </ref> <ref name="nine"> T. Grossman and A. Wool, [https://www.sciencedirect.com/science/article/abs/pii/S0377221796001610 "What Is the Best Greedy-like Heuristic for the Weighted Set Covering Problem?]," ''Operations Research Letters'', vol. 44, pp. 366-369, 2016. </ref><br />
<br />
Second, many problems in different industries can be formulated as set covering problems. For example, scheduling machines to perform certain jobs can be thought of as covering the jobs. Picking the optimal location for a cell tower so that it covers the maximum number of customers is another set covering application. Moreover, this problem has many applications in the airline industry, and it was explored on an industrial scale as early as the 1970s. <ref name="two"> J. Rubin, [https://www.jstor.org/stable/25767684?seq=1 "A Technique for the Solution of Massive Set Covering Problems, with Application to Airline Crew Scheduling]," ''Transportation Science'', vol. 7, pp. 34-48, 1973. </ref><br />
<br />
== Problem formulation ==<br />
In the set covering problem, two sets are given: a set <math> U </math> of elements and a set <math> S </math> of subsets of the set <math> U </math>. Each subset in <math> S </math> is associated with a predetermined cost, and the union of all the subsets covers the set <math> U </math>. This combinatorial problem then concerns finding the optimal number of subsets whose union covers the universal set while minimizing the total cost.<ref name="one"> T. Grossman and A. Wool, [https://www.sciencedirect.com/science/article/abs/pii/S0377221796001610 "Computational experience with approximation algorithms for the set covering problem]," ''European Journal of Operational Research'', vol. 101, pp. 81-92, 1997. </ref> <ref name="twelve"> Williamson, David P., and David B. Shmoys. “The Design of Approximation Algorithms” [https://www.designofapproxalgs.com/book.pdf]. “Cambridge University Press”, 2011. </ref><br />
<br />
The mathematical formulation of the set covering problem is defined as follows. We define <math> U </math> = { <math> u_1,..., u_m </math>} as the universe of elements and <math> S </math> = { <math> s_1,..., s_n </math>} as a collection of subsets such that <math> s_i \subset U </math> and the union of the <math> s_i</math> covers all elements in <math> U </math> (i.e. <math>\cup</math><math> s_i</math> = <math> U </math>). Additionally, each set <math> s_i</math> must cover at least one element of <math> U </math> and has an associated cost <math> c_i</math> such that <math> c_i > 0</math>. The objective is to find the minimum cost sub-collection of sets <math> X \subset S </math> that covers all the elements in the universe <math> U </math>.<br />
<br />
== Integer linear program formulation ==<br />
An integer linear program (ILP) model can be formulated for the minimum set covering problem as follows:<ref name="one"> T. Grossman and A. Wool, [https://www.sciencedirect.com/science/article/abs/pii/S0377221796001610 "Computational experience with approximation algorithms for the set covering problem]," ''European Journal of Operational Research'', vol. 101, pp. 81-92, 1997. </ref><br />
<br />
'''Decision variables'''<br />
<br />
<math> y_i = \begin{cases} 1, & \text{if subset }i\text{ is selected} \\ 0, & \text{otherwise } \end{cases}</math><br />
<br />
'''Objective function'''<br />
<br />
minimize <math>\sum_{i=1}^n c_i y_i</math> <br />
<br />
'''Constraints '''<br />
<br />
<math> \sum_{i:u_j \in s_i} y_i \geq 1, \forall j= 1,....,m</math> <br />
<br />
<math> y_i \in \{0, 1\}, \forall i = 1,....,n</math> <br />
<br />
The objective function <math>\sum_{i=1}^n c_i y_i</math> minimizes the total cost of the subsets <math> s_i</math> selected to cover all elements in the universe. The first constraint implies that every element <math> u_j </math> in the universe <math> U </math> must be covered by at least one selected subset, and the second constraint <math> y_i \in \{0, 1\} </math> indicates that the decision variables are binary, which means that every set is either in the set cover or not.<br />
<br />
Set covering problems are significant NP-hard optimization problems, which implies that as the size of the problem increases, the computational time to solve it increases exponentially. Therefore, approximation algorithms exist that can solve large scale problems in polynomial time with optimal or near-optimal solutions. In subsequent sections, we cover two of the most widely used approximation methods for solving the set cover problem in polynomial time: linear program relaxation methods and classical greedy algorithms. <ref name="seven" /><br />
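To make the ILP's semantics concrete, the following sketch (with illustrative toy data, not an official implementation) checks whether a candidate assignment of the <math> y_i </math> variables is feasible and computes its objective value:<br />

```python
def ilp_check(y, subsets, universe, costs):
    """Evaluate a candidate assignment y (dict: subset name -> 0/1)
    against the ILP: feasible iff the chosen subsets cover the
    universe; the objective value is the total cost of chosen sets."""
    chosen = [s for s, v in y.items() if v == 1]
    covered = set().union(*(subsets[s] for s in chosen)) if chosen else set()
    feasible = covered >= set(universe)
    cost = sum(costs[s] for s in chosen)
    return feasible, cost

# Toy instance (illustrative data, not from the article).
subsets = {"A": {1, 2}, "B": {2, 3}, "C": {3}}
costs = {"A": 2, "B": 1, "C": 1}
print(ilp_check({"A": 1, "B": 1, "C": 0}, subsets, {1, 2, 3}, costs))  # (True, 3)
```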
<br />
== Approximation via LP relaxation and rounding ==<br />
Set covering is a classical integer programming problem, and solving integer programs in general is NP-hard. Therefore, one approach to achieve an <math> O</math>(log<math>n</math>) approximation to the set covering problem in polynomial time is solving via linear programming (LP) relaxation algorithms <ref name="one"> T. Grossman and A. Wool, [https://www.sciencedirect.com/science/article/abs/pii/S0377221796001610 "Computational experience with approximation algorithms for the set covering problem]," ''European Journal of Operational Research'', vol. 101, pp. 81-92, 1997. </ref> <ref name="twelve"> Williamson, David P., and David B. Shmoys. “The Design of Approximation Algorithms” [https://www.designofapproxalgs.com/book.pdf]. “Cambridge University Press”, 2011. </ref>. In LP relaxation, we relax the integrality requirement into linear constraints. For instance, if we replace the constraints <math> y_i \in \{0, 1\}</math> with the constraints <math> 0 \leq y_i \leq 1 </math>, we obtain the following LP problem that can be solved in polynomial time:<br />
<br />
minimize <math>\sum_{i=1}^n c_i y_i</math> <br />
<br />
subject to <math> \sum_{i:u_j \in s_i} y_i \geq 1, \forall j= 1,....,m</math> <br />
<br />
<math> 0 \leq y_i \leq 1, \forall i = 1,....,n</math><br />
<br />
The above LP formulation is a relaxation of the original ILP set cover problem. This means that every feasible solution of the integer program is also feasible for this LP. Additionally, any feasible solution of the integer program has the same objective value in the LP, since both programs share the same objective function. Solving the LP therefore yields an optimal value that is a lower bound for the original integer program, since the LP minimizes over a larger feasible region. Moreover, we can use LP rounding algorithms to directly round the fractional LP solution to an integral combinatorial solution as follows:<br />
<br><br />
<br />
<br />
'''Deterministic rounding algorithm''' <br />
<br><br />
<br />
Suppose we have an optimal solution <math> z^* </math> for the linear programming relaxation of the set cover problem. We round the fractional solution <math> z^* </math> to an integer solution <math> z </math> using an LP rounding algorithm. In general, there are two approaches for rounding algorithms: deterministic and randomized. In this section, we explain the deterministic algorithm. In this approach, we include subset <math> s_i </math> in our solution if <math> z_i^* \geq 1/d </math>, where <math> d </math> is the maximum number of sets in which any element appears. In practice, we set <math> z </math> to be as follows:<ref name="twelve"> Williamson, David P., and David B. Shmoys. “The Design of Approximation Algorithms” [https://www.designofapproxalgs.com/book.pdf]. “Cambridge University Press”, 2011. </ref><br />
<br />
<math> z_i = \begin{cases} 1, & \text{if } z_i^* \geq 1/d \\ 0, & \text{otherwise } \end{cases}</math><br />
<br />
The rounding algorithm is a <math> d </math>-approximation algorithm for the set cover problem. It is clear that the algorithm converges in polynomial time and that <math> z </math> is a feasible solution to the integer program.<br />
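The rounding rule itself is a one-liner. The snippet below applies it to a hypothetical fractional solution <math> z^* </math>; the numbers are illustrative, not the output of an actual LP solve:<br />

```python
# Hypothetical fractional LP solution z* for five sets (illustrative
# values, not computed by an LP solver).
z_star = [0.6, 0.1, 0.45, 0.0, 0.3]
d = 3  # assumed max number of sets any single element appears in

# Deterministic rounding: include set i whenever z*_i >= 1/d.
z = [1 if zi >= 1 / d else 0 for zi in z_star]
print(z)  # [1, 0, 1, 0, 0]  (0.3 < 1/3, so the last set is dropped)
```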
<br />
== Greedy approximation algorithm ==<br />
Greedy algorithms can be used to find optimal or near-optimal solutions for large scale set covering instances in polynomial time. <ref name="seven" /> <ref name="nine" /> The greedy heuristic applies an iterative process that, at each stage, selects the set covering the largest number of uncovered elements in the universe <math> U </math> and removes those elements from the set of uncovered elements, until all elements are covered. <ref name="ten"> V. Chvatal, [https://pubsonline.informs.org/doi/abs/10.1287/moor.4.3.233 "Greedy Heuristic for the Set-Covering Problem]," ''Mathematics of Operations Research'', vol. 4, pp. 233-235, 1979. </ref> Let <math> T </math> be the collection of selected sets and <math> U </math> the set of elements that remain uncovered. At the beginning, <math> T </math> is empty and every element is in <math> U </math>. We iteratively select the set in <math> S </math> that covers the largest number of elements in <math> U </math>, add it to <math> T </math>, and remove the newly covered elements from <math> U </math>. An example of this algorithm is presented below. <br />
<br />
'''Greedy algorithm for minimum set cover example: '''<br />
<br />
Step 0: <math> \quad </math> <math> T \leftarrow \Phi </math> <math> \quad \quad \quad \quad \quad </math> { <math> T </math> stores the selected sets }<br />
<br />
Step 1: <math> \quad </math> '''While''' <math> U \neq \Phi </math> '''do:''' <math> \quad </math> { <math> U </math> stores the uncovered elements }<br />
<br />
Step 2: <math> \quad \quad \quad </math> select <math> s_i \in S </math> that covers the highest number of elements in <math> U </math><br />
<br />
Step 3: <math> \quad \quad \quad </math> add <math> s_i </math> to <math> T </math><br />
<br />
Step 4: <math> \quad \quad \quad </math> remove the elements of <math> s_i </math> from <math> U </math><br />
<br />
Step 5: <math> \quad </math> '''End while''' <br />
<br />
Step 6: <math> \quad </math> '''Return''' <math> T </math><br />
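The pseudocode above can be sketched in Python as follows. This version also handles nonuniform costs by picking, at each step, the set with the lowest cost per newly covered element (a sketch, not a reference implementation):<br />

```python
def greedy_set_cover(universe, subsets, costs):
    """Greedy heuristic: repeatedly pick the subset with the lowest
    cost per newly covered element until the universe is covered.

    subsets: dict mapping a set name to a set of elements.
    costs:   dict mapping the same names to positive costs.
    Raises ValueError if the subsets cannot cover the universe.
    """
    uncovered = set(universe)
    cover = []
    while uncovered:
        # Lowest cost per new element; skip sets adding nothing.
        best = min(
            (s for s in subsets if subsets[s] & uncovered),
            key=lambda s: costs[s] / len(subsets[s] & uncovered),
        )
        cover.append(best)
        uncovered -= subsets[best]
    return cover

# Tiny example with unit costs and universe {1,...,5}.
subsets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5}}
costs = {"A": 1, "B": 1, "C": 1}
print(greedy_set_cover({1, 2, 3, 4, 5}, subsets, costs))  # ['A', 'C']
```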
<br />
==Numerical Example==<br />
Let’s consider a simple example where we assign cameras at different locations. Each location covers some areas of stadiums, and our goal is to install the fewest cameras such that all stadium areas are covered. We have stadium areas numbered 1 to 15, and possible camera locations numbered 1 to 8.<br />
<br />
We are given that camera location 1 covers stadium areas {1,3,4,6,7}, camera location 2 covers stadium areas {4,7,8,12}, while the remaining camera locations and the stadium areas that the cameras can cover are given in table 1 below:<br />
{| class="wikitable"<br />
|+Table 1 Camera Location vs Stadium Area<br />
|-<br />
!camera Location<br />
|1<br />
|2<br />
|3<br />
|4<br />
|5<br />
|6<br />
|7<br />
|8<br />
|-<br />
!stadium area<br />
|1,3,4,6,7<br />
|4,7,8,12<br />
|2,5,9,11,13<br />
|1,2,14,15<br />
|3,6,10,12,14<br />
|8,14,15<br />
|1,2,6,11<br />
|1,2,4,6,8,12<br />
|}<br />
<br />
We can then represent the above information using binary values. If stadium area <math>i</math> can be covered by camera location <math>j</math>, then we have <math>y_{ij} = 1</math>. If not, <math>y_{ij} = 0</math>. For instance, stadium area 1 is covered by camera location 1, so <math>y_{11} = 1</math>, while stadium area 1 is not covered by camera location 2, so <math>y_{12} = 0</math>. The values of the binary variables <math>y_{ij}</math> are given in the table below: <br />
{| class="wikitable"<br />
|+Table 2 Binary Table (All Camera Locations and Stadium Areas)<br />
!<br />
!Camera1<br />
!Camera2<br />
!Camera3<br />
!Camera4<br />
!Camera5<br />
!Camera6<br />
!Camera7<br />
!Camera8<br />
|-<br />
!Stadium1<br />
|1<br />
|<br />
|<br />
|1<br />
|<br />
|<br />
|1<br />
|1<br />
|-<br />
!Stadium2<br />
|<br />
|<br />
|1<br />
|1<br />
|<br />
|<br />
|1<br />
|1<br />
|-<br />
!Stadium3<br />
|1<br />
|<br />
|<br />
|<br />
|1<br />
|<br />
|<br />
|<br />
|-<br />
!Stadium4<br />
|1<br />
|1<br />
|<br />
|<br />
|<br />
|<br />
|<br />
|1<br />
|-<br />
!Stadium5<br />
|<br />
|<br />
|1<br />
|<br />
|<br />
|<br />
|<br />
|<br />
|-<br />
!Stadium6<br />
|1<br />
|<br />
|<br />
|<br />
|1<br />
|<br />
|1<br />
|1<br />
|-<br />
!Stadium7<br />
|1<br />
|1<br />
|<br />
|<br />
|<br />
|<br />
|<br />
|<br />
|-<br />
!Stadium8<br />
|<br />
|1<br />
|<br />
|<br />
|<br />
|1<br />
|<br />
|1<br />
|-<br />
!Stadium9<br />
|<br />
|<br />
|1<br />
|<br />
|<br />
|<br />
|<br />
|<br />
|-<br />
!Stadium10<br />
|<br />
|<br />
|<br />
|<br />
|1<br />
|<br />
|<br />
|<br />
|-<br />
!Stadium11<br />
|<br />
|<br />
|1<br />
|<br />
|<br />
|<br />
|1<br />
|<br />
|-<br />
!Stadium12<br />
|<br />
|1<br />
|<br />
|<br />
|1<br />
|<br />
|<br />
|1<br />
|-<br />
!Stadium13<br />
|<br />
|<br />
|1<br />
|<br />
|<br />
|<br />
|<br />
|<br />
|-<br />
!Stadium14<br />
|<br />
|<br />
|<br />
|1<br />
|1<br />
|1<br />
|<br />
|<br />
|-<br />
!Stadium15<br />
|<br />
|<br />
|<br />
|1<br />
|<br />
|1<br />
|<br />
|<br />
|}<br />
<br />
<br />
<br />
We introduce another binary variable <math>z_j</math> to indicate if a camera is installed at location <math>j</math>. <math>z_j = 1</math> if camera is installed at location <math>j</math>, while <math>z_j = 0</math> if not. <br />
<br />
Our objective is to minimize <math>\sum_{j=1}^8 z_j</math>. For each stadium area <math>i</math>, there is a constraint that the area has to be covered by at least one camera location. For instance, for stadium area 1 we have <math>z_1 + z_4 + z_7 + z_8 \geq 1</math>, while for stadium area 2 we have <math>z_3 + z_4 + z_7 + z_8 \geq 1</math>. All 15 constraints corresponding to the 15 stadium areas are listed below:<br />
<br />
<br />
<br />
minimize <math>\sum_{j=1}^8 z_j</math> <br />
<br />
''s.t. Constraints 1 to 15 are satisfied:''<br />
<br />
<math> z_1 + z_4 + z_7 + z_8 \geq 1 (1)</math><br />
<br />
<math> z_3 + z_4 + z_7 + z_8 \geq 1 (2)</math><br />
<br />
<math> z_1 + z_5 \geq 1 (3)</math><br />
<br />
<math> z_1 + z_2 + z_8 \geq 1 (4)</math><br />
<br />
<math> z_3 \geq 1 (5)</math><br />
<br />
<math>z_1 + z_5 + z_7 + z_8 \geq 1 (6)</math><br />
<br />
<math>z_1 + z_2 \geq 1 (7)</math><br />
<br />
<math>z_2 + z_6 + z_8 \geq 1 (8)</math><br />
<br />
<math>z_3 \geq 1 (9)</math><br />
<br />
<math>z_5 \geq 1 (10)</math><br />
<br />
<math>z_3 + z_7 \geq 1 (11)</math><br />
<br />
<math>z_2 + z_5 + z_8 \geq 1 (12)</math><br />
<br />
<math>z_3 \geq 1 (13)</math><br />
<br />
<math>z_4 + z_5 + z_6 \geq 1 (14)</math><br />
<br />
<math>z_4 + z_6 \geq 1 (15)</math><br />
<br />
<br />
From constraints {5,9,13}, we obtain <math>z_3 = 1</math>. Constraints 2 and 11 are satisfied whenever <math>z_3 = 1</math>, so they are no longer needed. With <math>z_3 = 1</math> determined, the constraints left are:<br />
<br />
<br />
minimize <math>\sum_{j=1}^8 z_j</math>, <br />
<br />
s.t.:<br />
<br />
<math>z_1 + z_4 + z_7 + z_8 \geq 1 (1)</math><br />
<br />
<math>z_1 + z_5 \geq 1 (3)</math><br />
<br />
<math>z_1 + z_2 + z_8 \geq 1 (4)</math><br />
<br />
<math>z_1 + z_5 + z_7 + z_8 \geq 1 (6)</math><br />
<br />
<math>z_1 + z_2 \geq 1 (7)</math><br />
<br />
<math>z_2 + z_6 + z_8 \geq 1 (8)</math><br />
<br />
<math>z_5 \geq 1 (10)</math><br />
<br />
<math>z_2 + z_5 + z_8 \geq 1 (12)</math><br />
<br />
<math>z_4 + z_5 + z_6 \geq 1 (14)</math><br />
<br />
<math>z_4 + z_6 \geq 1 (15)</math><br />
<br />
<br />
Now consider constraint 10: <math>z_5 \geqslant 1</math>, so <math>z_5</math> must equal 1. With <math>z_5 = 1</math>, constraints {3,6,12,14} are satisfied no matter what values the other <math>z</math> variables take. Looking next at constraints 4 and 7, constraint 4 is satisfied whenever constraint 7 is satisfied, since the <math>z</math> values are nonnegative, so constraint 4 is no longer needed. The remaining constraints are:<br />
<br />
<br />
minimize <math>\sum_{j=1}^8 z_j</math><br />
<br />
s.t.:<br />
<br />
<math>z_1 + z_4 + z_7 + z_8 \geq 1 (1)</math><br />
<br />
<math>z_1 + z_2 \geq 1 (7)</math><br />
<br />
<math>z_2 + z_6 + z_8 \geq 1 (8)</math><br />
<br />
<math>z_4 + z_6 \geq 1 (15)</math><br />
<br />
<br />
The next step is to focus on constraints 7 and 15. Four candidate combinations of <math>z_1, z_2, z_4, z_6</math> values satisfy them:<br />
<br />
<br />
<math>A: z_1 = 1, z_2 = 0, z_4 = 1, z_6 = 0</math><br />
<br />
<math>B: z_1 = 1, z_2 = 0, z_4 = 0, z_6 = 1</math><br />
<br />
<math>C: z_1 = 0, z_2 = 1, z_4 = 1, z_6 = 0</math><br />
<br />
<math>D: z_1 = 0, z_2 = 1, z_4 = 0, z_6 = 1</math><br />
<br />
<br />
We can then examine each combination and determine the <math>z_7, z_8</math> values needed for constraints 1 and 8 to be satisfied.<br />
<br />
<br />
Combination <math>A</math>: constraint 1 is already satisfied; we need <math>z_8 = 1</math> to satisfy constraint 8.<br />
<br />
Combination <math>B</math>: constraints 1 and 8 are both already satisfied.<br />
<br />
Combination <math>C</math>: constraints 1 and 8 are both already satisfied.<br />
<br />
Combination <math>D</math>: we need <math>z_7 = 1</math> or <math>z_8 = 1</math> to satisfy constraint 1; constraint 8 is already satisfied.<br />
<br />
Our final step is to compare the four combinations. Since our objective is to minimize <math>\sum_{j=1}^8 z_j</math>, and combinations <math>B</math> and <math>C</math> require the fewest <math>z_j</math> to be set to 1, they are the optimal solutions.<br />
<br />
To conclude, our two solutions are:<br />
<br />
Solution 1: <math>z_1 = 1, z_3 = 1, z_5 = 1, z_6 = 1</math><br />
<br />
Solution 2: <math>z_2 = 1, z_3 = 1, z_4 = 1, z_5 = 1</math><br />
<br />
The minimum number of cameras that we need to install is 4.<br />
<br />
<br />
<br />
<br />
'''Let's now consider solving the problem using the greedy algorithm.''' <br />
<br />
We have a set <math>U</math> (stadium areas) that needs to be covered with <math>C</math> (camera locations). <br />
<br />
<br />
<math>U = \{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15\}</math><br />
<br />
<math>C = \{C_1,C_2,C_3,C_4,C_5,C_6,C_7,C_8\}</math><br />
<br />
<math>C_1 = \{1,3,4,6,7\} </math><br />
<br />
<math>C_2 = \{4,7,8,12\}</math><br />
<br />
<math>C_3 = \{2,5,9,11,13\}</math><br />
<br />
<math>C_4 = \{1,2,14,15\}</math><br />
<br />
<math>C_5 = \{3,6,10,12,14\}</math><br />
<br />
<math>C_6 = \{8,14,15\}</math><br />
<br />
<math>C_7 = \{1,2,6,11\}</math><br />
<br />
<math>C_8 = \{1,2,4,6,8,12\} </math><br />
<br />
<br />
The cost of each camera location is the same in this case; since we simply want to minimize the total number of cameras used, we can take the cost of each <math>C</math> to be 1.<br />
<br />
Let <math>I</math> represent the set of elements covered so far. Initialize <math>I</math> to be empty.<br />
<br />
First Iteration: <br />
<br />
The per new element cost for <math>C_1 = 1/5</math>, for <math>C_2 = 1/4</math>, for <math>C_3 = 1/5</math>, for <math>C_4 = 1/4</math>, for <math>C_5 = 1/5</math>, for <math>C_6 = 1/3</math>, for <math>C_7 = 1/4</math>, for <math>C_8 = 1/6</math><br />
<br />
Since <math>C_8</math> has minimum value, <math>C_8</math> is added, and <math>I</math> becomes <math>\{1,2,4,6,8,12\}</math>.<br />
<br />
Second Iteration: <br />
<br />
<math>I</math> = <math>\{1,2,4,6,8,12\}</math><br />
<br />
The per new element cost for <math>C_1 = 1/2</math>, for <math>C_2 = 1/1</math>, for <math>C_3 = 1/4</math>, for <math>C_4 = 1/2</math>, for <math>C_5 = 1/3</math>, for <math>C_6 = 1/2</math>, for <math>C_7 = 1/1</math><br />
<br />
Since <math>C_3</math> has minimum value, <math>C_3</math> is added, and <math>I</math> becomes <math>\{1,2,4,5,6,8,9,11,12,13\}</math>.<br />
<br />
Third Iteration:<br />
<br />
<math>I</math> = <math>\{1,2,4,5,6,8,9,11,12,13\}</math><br />
<br />
The per new element cost for <math>C_1 = 1/2</math>, for <math>C_2 = 1/1</math>, for <math>C_4 = 1/2</math>, for <math>C_5 = 1/3</math>, for <math>C_6 = 1/2</math>, for <math>C_7 = 1/0</math> (<math>C_7</math> adds no new elements)<br />
<br />
Since <math>C_5</math> has minimum value, <math>C_5</math> is added, and <math>I</math> becomes <math>\{1,2,3,4,5,6,8,9,10,11,12,13,14\}</math>.<br />
<br />
Fourth Iteration:<br />
<br />
<math>I</math> = <math>\{1,2,3,4,5,6,8,9,10,11,12,13,14\}</math><br />
<br />
The per new element cost for <math>C_1 = 1/1</math>, for <math>C_2 = 1/1</math>, for <math>C_4 = 1/1</math>, for <math>C_6 = 1/1</math>, for <math>C_7 = 1/0</math><br />
<br />
Since <math>C_1</math>, <math>C_2</math>, <math>C_4</math>, and <math>C_6</math> all have the same finite cost, and each adds exactly one new element, we need one set that adds element <math>7</math> (<math>C_1</math> or <math>C_2</math>) and one set that adds element <math>15</math> (<math>C_4</math> or <math>C_6</math>); for example, <math>C_1</math> and <math>C_6</math>, or <math>C_2</math> and <math>C_6</math>.<br />
<br />
<math>I</math> becomes <math>\{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15\}</math>.<br />
<br />
The solution we obtained is: <br />
<br />
Option 1: <math>C_8</math> + <math>C_3</math> + <math>C_5</math> + <math>C_6</math> + <math>C_1</math><br />
<br />
Option 2: <math>C_8</math> + <math>C_3</math> + <math>C_5</math> + <math>C_6</math> + <math>C_2</math><br />
<br />
The greedy algorithm does not provide the optimal solution in this case.<br />
<br />
The exhaustive approach above shows that the minimum number of cameras we need to install is 4, but the greedy algorithm installs 5: greedy is fast, but not guaranteed to be optimal.<br />
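The greedy iterations above can be sketched in a few lines of Python (the instance data from the problem statement are repeated so the snippet is self-contained):<br />

```python
# Stadium areas and camera coverage sets from the example.
U = set(range(1, 16))
C = {
    1: {1, 3, 4, 6, 7},
    2: {4, 7, 8, 12},
    3: {2, 5, 9, 11, 13},
    4: {1, 2, 14, 15},
    5: {3, 6, 10, 12, 14},
    6: {8, 14, 15},
    7: {1, 2, 6, 11},
    8: {1, 2, 4, 6, 8, 12},
}

def greedy_cover(universe, subsets):
    """Repeatedly pick the subset with the lowest cost per newly covered element
    (equivalently, the most new elements, since every set costs 1)."""
    covered, chosen = set(), []
    while covered < universe:
        best = max(subsets, key=lambda i: len(subsets[i] - covered))
        if not subsets[best] - covered:
            return None  # the universe cannot be covered
        chosen.append(best)
        covered |= subsets[best]
    return chosen

print(greedy_cover(U, C))  # five sets are chosen, one more than optimal
```

Ties (such as the fourth iteration above) are broken here by dictionary order, so the exact sets chosen can differ from the hand-worked options while the count stays the same.<br />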
<br />
== Applications==<br />
<br />
The set covering problem spans a wide range of applications, but its usefulness is especially evident in industrial and governmental planning. Variations of the set covering problem that are of practical significance include the following.<br />
;The optimal location problem<br />
<br />
This set covering problem is concerned with maximizing the coverage of some public facilities placed at different locations. <ref name="three"> R. Church and C. ReVelle, [https://link.springer.com/article/10.1007/BF01942293 "The maximal covering location problem]," ''Papers of the Regional Science Association'', vol. 32, pp. 101-118, 1974. </ref> Consider the problem of placing fire stations to serve the towns of some city. <ref name="four"> E. Aktaş, Ö. Özaydın, B. Bozkaya, F. Ülengin, and Ş. Önsel, [https://pubsonline.informs.org/doi/10.1287/inte.1120.0671 "Optimizing Fire Station Locations for the Istanbul Metropolitan Municipality]," ''Interfaces'', vol. 43, pp. 240-255, 2013. </ref> If each fire station can serve its town and all adjacent towns, we can formulate a set covering problem where each subset consists of a set of adjacent towns. The problem is then solved to minimize the required number of fire stations to serve the whole city. <br />
<br />
Let <math> y_i </math> be the decision variable corresponding to choosing to build a fire station at town <math> i </math>. Let <math> S_i </math> be a subset of towns including town <math> i </math> and all its neighbors. The problem is then formulated as follows.<br />
<br />
minimize <math>\sum_{i=1}^n y_i</math> <br />
<br />
such that <math> \sum_{j\in S_i} y_j \geq 1, \forall i</math> <br />
<br />
A real-world case study involving optimizing fire station locations in Istanbul is analyzed in this reference. <ref name="four" /> The Istanbul municipality serves 790 subdistricts, which should all be covered by a fire station. Each subdistrict is considered covered if it has a neighboring district (a district at most 5 minutes away) that has a fire station. For detailed computational analysis, we refer the reader to the mentioned academic paper.<br />
; The optimal route selection problem<br />
<br />
Consider the problem of selecting the optimal bus routes to place pothole detectors. Due to the scarcity of physical sensors, the problem does not allow for placing a detector on every road. The task of finding the maximum coverage using a limited number of detectors can be formulated as a set covering problem. <ref name="five"> J. Ali and V. Dyo, [https://www.scitepress.org/Link.aspx?doi=10.5220/0006469800830088 "Coverage and Mobile Sensor Placement for Vehicles on Predetermined Routes: A Greedy Heuristic Approach]," ''Proceedings of the 14th International Joint Conference on E-Business and Telecommunications'', pp. 83-88, 2017. </ref> <ref name="eleven"> P.H. Cruz Caminha , R. De Souza Couto , L.H. Maciel Kosmalski Costa , A. Fladenmuller , and M. Dias de Amorim, [https://www.mdpi.com/1424-8220/18/6/1976 "On the Coverage of Bus-Based Mobile Sensing]," ''Sensors'', 2018. </ref> Specifically, given a collection of bus routes '''''R''''', each route is itself divided into segments. Route <math> i </math> is denoted by <math> R_i </math>, and segment <math> j </math> is denoted by <math> S_j </math>. The segments of two different routes can overlap, and each segment is associated with a length <math> a_j </math>. The goal is then to select the routes that maximize the total covered distance.<br />
<br />
This is quite different from other applications because it results in a maximization formulation, rather than a minimization formulation. Suppose we want to use at most <math> k </math> different routes. We want to find <math> k </math> routes that maximize the length of covered segments. Let <math> x_i </math> be the binary decision variable corresponding to selecting route <math> R_i </math>, and let <math> y_j </math> be the decision variable associated with covering segment <math> S_j </math>. Let us also denote the set of routes that cover segment <math> j </math> by <math> C_j </math>. The problem is then formulated as follows.<br />
<br />
<math><br />
\begin{align}<br />
\text{max} & ~~ \sum_{j} a_jy_j\\<br />
\text{s.t} & ~~ \sum_{i\in C_j} x_i \geq y_j \quad \forall j \\<br />
& ~~ \sum_{i} x_i = k \\ <br />
& ~~ x_i,y_{j} \in \{0,1\} \\<br />
\end{align}<br />
</math><br />
<br />
The work by Ali and Dyo explores a greedy approximation algorithm to solve an optimal selection problem including 713 bus routes in Greater London. <ref name="five" /> Using only 14% of the routes (100 routes), the greedy algorithm returns a solution that covers 25% of the segments in Greater London. For details of the approximation algorithm and the real-world case study, we refer the reader to this reference. <ref name="five" /> For a significantly larger case study involving 5747 buses covering 5060km, we refer the reader to this academic article. <ref name="eleven" /><br />
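For a toy instance, the maximum-coverage formulation above can be solved exactly by enumerating all <math>k</math>-route combinations. A sketch with hypothetical segment lengths and route-to-segment assignments (not data from the cited studies):<br />

```python
from itertools import combinations

# Hypothetical toy instance: segment lengths a_j and the segments each route covers.
lengths = {"s1": 5.0, "s2": 3.0, "s3": 2.0, "s4": 4.0}
routes = {"R1": {"s1"}, "R2": {"s2", "s3"}, "R3": {"s1", "s2"}, "R4": {"s3", "s4"}}

def best_k_routes(k):
    """Enumerate every choice of k routes and return the one maximizing covered length."""
    return max(
        combinations(routes, k),
        key=lambda sel: sum(lengths[s] for s in set().union(*(routes[r] for r in sel))),
    )

sel = best_k_routes(2)
covered = set().union(*(routes[r] for r in sel))
print(sel, sum(lengths[s] for s in covered))  # R3 and R4 cover all 14.0 units
```

Enumeration grows combinatorially in <math>k</math>, which is why the cited work uses a greedy heuristic for the 713-route London instance.<br />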
;The airline crew scheduling problem<br />
<br />
An important application of large-scale set covering is the airline crew scheduling problem, which pertains to assigning airline staff to work shifts. <ref name="two" /> <ref name="six"> E. Marchiori and A. Steenbeek, [https://link.springer.com/chapter/10.1007/3-540-45561-2_36 "An Evolutionary Algorithm for Large Scale Set Covering Problems with Application to Airline Crew Scheduling]," ''Real-World Applications of Evolutionary Computing. EvoWorkshops 2000. Lecture Notes in Computer Science'', 2000. </ref> Thinking of the collection of flights as a universal set to be covered, we can formulate a set covering problem to search for the optimal assignment of employees to flights. Due to the complexity of airline schedules, this problem is usually divided into two subproblems: crew pairing and crew assignment. We refer the interested reader to this survey, which contains several problem instances with the number of flights ranging from 1013 to 7765 flights, for a detailed analysis of the formulation and algorithms that pertain to this significant application. <ref name="two" /> <ref name="eight"> A. Kasirzadeh, M. Saddoune, and F. Soumis [https://www.sciencedirect.com/science/article/pii/S2192437620300820?via%3Dihub "Airline crew scheduling: models, algorithms, and data sets]," ''EURO Journal on Transportation and Logistics'', vol. 6, pp. 111-137, 2017. </ref><br />
<br />
==Conclusion ==<br />
<br />
The set covering problem, which aims to find the least number of subsets that cover some universal set, is a widely known NP-hard combinatorial problem. Due to its applicability to route planning and airline crew scheduling, several methods have been proposed to solve it. Its straightforward formulation allows for the use of off-the-shelf optimizers to solve it. Moreover, heuristic techniques and greedy algorithms can be used to solve large-scale set covering problems for industrial applications. <br />
<br />
== References ==<br />
<references /></div>
Wc593
https://optimization.cbe.cornell.edu/index.php?title=2020_Cornell_Optimization_Open_Textbook_Feedback&diff=2703
2020 Cornell Optimization Open Textbook Feedback
2020-12-21T10:33:48Z
<p>Wc593: /* Facility location problem */</p>
<hr />
<div>==[[Computational complexity]]==<br />
<br />
* Numerical Example<br />
*# Finding subsets of a set is NOT O(2<sup>n</sup>).<br />
* Application<br />
*# The applications mentioned need to be discussed further.<br />
<br />
==[[Network flow problem]]==<br />
<br />
* Real Life Applications<br />
*# There is NO need to include code. Simply mention how the problem was coded along with details on the LP solver used.<br />
<br />
==[[Interior-point method for LP]]==<br />
<br />
* Introduction<br />
*# Please type “minimize” and “subject to” in formal optimization problem form throughout the whole page.<br />
* A section to discuss and/or illustrate the applications<br />
*# Please type optimization problem in the formal form.<br />
<br />
==[[Optimization with absolute values]]==<br />
<br />
* An introduction of the topic<br />
*# Add few sentences on how absolute values convert optimization problem into a nonlinear optimization problem<br />
* Applications<br />
*# Inline equations at the beginning of this section are not formatted properly. Please fix the notation for expected return throughout the section.<br />
<br />
==[[Matrix game (LP for game theory)]]==<br />
<br />
* Theory and Algorithmic Discussion<br />
*# aij are not defined in this section.<br />
<br />
==[[Quasi-Newton methods]]==<br />
<br />
* Theory and Algorithm<br />
*# Please ensure that few spaces are kept between the equations and equation numbers.<br />
<br />
==[[Eight step procedures]]==<br />
<br />
* Numerical Example<br />
*# Data for the example Knapsack problem (b,w) are missing.<br />
*# How to arrive at optimal solutions is missing.<br />
<br />
==[[Set covering problem]]==<br />
<br />
* Integer linear program formulation & Approximation via LP relaxation and rounding<br />
*# Use proper math notations for “greater than equal to”.<br />
* Numerical Example<br />
*# Please leave some space between equation and equation number.<br />
<br />
==[[Quadratic assignment problem]]==<br />
<br />
* Theory, methodology, and/or algorithmic discussions<br />
*# Discuss dynamic programming and cutting plane solution techniques briefly.<br />
<br />
==[[Newsvendor problem]]==<br />
<br />
* Formulation<br />
*# A math programming formulation of the optimization problem with objective function and constraints is expected for the formulation. Please add any variant of the newsvendor problem along with some operational constraints.<br />
*# A mathematical presentation of the solution technique is expected. Please consider any distribution for R and present a solution technique for that specific problem. <br />
<br />
==[[Mixed-integer cuts]]==<br />
<br />
* Applications<br />
*# MILP and their solution techniques involving cuts are extremely versatile. Yet, only two sentences are added to describe their applications. Please discuss their applications, preferably real-world applications, in brief. Example Wikis provided on the website could be used as a reference to do so.<br />
<br />
==[[Column generation algorithms]]==<br />
<br />
* Introduction<br />
*# References at the end of the sentence should be placed after the period.<br />
* Theory, methodology and algorithmic discussions<br />
*# Some minor typos/article agreement issues exist “is not partical in real-world”.<br />
<br />
==[[Heuristic algorithms]]==<br />
<br />
* Methodology<br />
*# Please use proper symbol for "greater than or equal to".<br />
*# Greedy method to solve minimum spanning tree seems to be missing.<br />
<br />
==[[Branch and cut]]==<br />
<br />
* Methodology & Algorithm<br />
*# Equation in most infeasible branching section is not properly formatted.<br />
*# Step 2 appears abruptly in the algorithm and does not explain much. Please add more information regarding the same.<br />
*# Step 5 contains latex code terms that are not properly formatted. Please fix the same.<br />
*# Fix typos: e.g., repeated “for the current”.<br />
<br />
== [[Mixed-integer linear fractional programming (MILFP)]] ==<br />
<br />
* Application and Modeling for Numerical Examples<br />
*# Please check the index notation in Mass Balance Constraint<br />
<br />
==[[Fuzzy programming]]==<br />
<br />
* Applications<br />
*# Applications of fuzzy programming are quite versatile. Please discuss few of the mentioned applications briefly. The provided example Wikis can be used as a reference to write this section.<br />
<br />
==[[Adaptive robust optimization]]==<br />
<br />
* Problem Formulation<br />
*# Please check typos such as "Let ''u'' bee a vector".<br />
*# The abbreviation KKT is not previously defined.<br />
<br />
== [[Stochastic gradient descent]] ==<br />
* Numerical Example<br />
*# Amount of whitespace can be reduced by changing orientation of example dataset by converting it into a table containing 3 rows and 6 columns.<br />
<br />
==[[RMSProp]]==<br />
<br />
* Introduction<br />
*# References at the end of the sentence should be placed after the period.<br />
* Theory and Methodology<br />
*# Please check grammar in this section.<br />
* Applications and Discussion<br />
*# The applications section does not contain any discussion on applications. Please mention a few applications of the widely used RMSprop and discuss them briefly.<br />
<br />
==[[Adam]]==<br />
<br />
* Background<br />
*# References at the end of the sentence should be placed after the period.</div>
Wc593
https://optimization.cbe.cornell.edu/index.php?title=Facility_location_problem&diff=2702
Facility location problem
2020-12-21T10:33:29Z
<p>Wc593: </p>
<hr />
<div>Authors: Liz Cantlebary, Lawrence Li (CHEME 6800 Fall 2020)<br />
<br />
== Introduction ==<br />
The Facility Location Problem (FLP) is a classic optimization problem that determines the best location for a factory or warehouse to be placed based on geographical demands, facility costs, and transportation distances. These problems generally aim to maximize the supplier's profit based on the given customer demand and location<sup>(1)</sup>. FLP can be further broken down into capacitated and uncapacitated problems, depending on whether the facilities in question have a maximum capacity or not<sup>(2)</sup>. <br />
<br />
== Theory and Formulation ==<br />
<br />
=== Weber Problem and Single Facility FLPs ===<br />
The Weber Problem is a simple FLP that consists of locating the geometric median between three points with different weights. The geometric median is a point between three given points in space such that the sum of the distances between the median and the other three points is minimized. It is based on the premise of minimizing transportation costs from one point to various destinations, where each destination has a different associated cost per unit distance. <br />
<br />
Given <math>N</math> points <math>(a_1,b_1)...(a_N,b_N)</math> on a plane with associated weights <math>w_1...w_N</math>, the 2-dimensional Weber problem to find the geometric median <math>(x,y)</math> is formulated as<sup>(1)</sup><br />
<br />
<math>\min_{x,y}\ W(x,y) = \sum_{i=1}^Nw_id_i(x,y,a_i,b_i)</math><br />
<br />
where<br />
<br />
<math>d_i(x,y,a_i,b_i)=\sqrt{(x-a_i)^2+(y-b_i)^2}</math><br />
<br />
The above formulation serves as a foundation for many basic single facility FLPs. For example, the minisum problem aims to locate a facility at the point that minimizes the sum of the weighted distances to the given set of existing facilities, while the minimax problem consists of placing the facility at the point that minimizes the maximum weighted distance to the existing facilities<sup>(3)</sup>. Additionally, in contrast to the minimax problem, the maximin facility problem maximizes the minimum weighted distance to the given facilities.<br />
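The geometric median has no closed-form solution in general, but Weiszfeld's iteratively re-weighted averaging scheme converges to it under mild conditions. A minimal Python sketch (not from the original article; the early return when an iterate lands on a data point is a simplification of the usual safeguard):<br />

```python
import math

def weiszfeld(points, weights, iters=200, eps=1e-9):
    """Iteratively re-weighted average converging to the weighted geometric median."""
    # Start from the weighted centroid.
    x = sum(w * px for (px, _), w in zip(points, weights)) / sum(weights)
    y = sum(w * py for (_, py), w in zip(points, weights)) / sum(weights)
    for _ in range(iters):
        num_x = num_y = den = 0.0
        for (px, py), w in zip(points, weights):
            d = math.hypot(x - px, y - py)
            if d < eps:  # iterate coincides with a data point; stop there (simplified)
                return px, py
            num_x += w * px / d
            num_y += w * py / d
            den += w / d
        x, y = num_x / den, num_y / den
    return x, y

# Four equally weighted corners of the unit square: the median is the center.
print(weiszfeld([(0, 0), (1, 0), (0, 1), (1, 1)], [1, 1, 1, 1]))
```

With unequal weights the solution is pulled toward the heavier points, which is the behavior the minisum facility problems below exploit.<br />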
<br />
=== Capacitated and Uncapacitated FLPs ===<br />
FLPs can often be formulated as mixed-integer programs (MIPs), with a fixed set of facility and customer locations. Binary variables are used in these problems to represent whether a certain facility is open or closed and whether that facility can supply a certain customer. Capacitated and uncapacitated FLPs can be solved this way by defining them as integer programs. <br />
<br />
A capacitated facility problem applies constraints to the production and transportation capacity of each facility. As a result, customers may not be supplied by the most immediate facility, since this facility may not be able to satisfy the given customer demand. <br />
<br />
In a problem with <math>N</math> facilities and <math>M</math> customers, the capacitated formulation defines a binary variable <math>x_i</math> and a variable <math>y_{ij}</math> for each facility <math>i</math> and each customer <math>j</math>. If facility <math>i</math> is open, <math>x_i=1</math>; otherwise <math>x_i=0</math>. Open facilities have an associated fixed cost <math>f_i</math> and a maximum capacity <math>k_i</math>. <math>y_{ij}</math> is the fraction of the total demand <math>d_j</math> of customer <math>j</math> that facility <math>i</math> has satisfied and the transportation cost between facility <math>i</math> and customer <math>j</math> is represented as <math>t_{ij}</math>. The capacitated FLP is therefore defined as<sup>(2)</sup><br />
<br />
<math>\min\ \sum_{i=1}^N\sum_{j=1}^Md_jt_{ij}y_{ij}+\sum_{i=1}^Nf_ix_i</math><br />
<br />
<math>s.t.\ \sum_{i=1}^Ny_{ij}=1\ \ \forall\, j\in\{1,...,M\}</math><br />
<br />
<math>\quad \quad \sum_{j=1}^Md_jy_{ij}\leq k_ix_i\ \ \forall\, i\in\{1,...,N\}</math><br />
<br />
<math>\quad \quad y_{ij}\geq0\ \ \forall\, i\in\{1,...,N\},\ \forall\, j\in\{1,...,M\}</math><br />
<br />
<math>\quad \quad x_i\in\{0,1\}\ \ \forall\, i\in\{1,...,N\}</math><br />
<br />
In an uncapacitated facility problem, the amount of product each facility can produce and transport is assumed to be unlimited, and the optimal solution results in customers being supplied by the lowest-cost, and usually the nearest, facility. Using the above formulation, the unlimited capacity means <math>k_i</math> can be assumed to be a sufficiently large constant, while <math>y_{ij}</math> is now a binary variable, because the demand of each customer can be fully met with the nearest facility<sup>(2)</sup>. If facility <math>i</math> supplies customer <math>j</math>, then <math>y_{ij}=1</math>; otherwise <math>y_{ij}=0</math>.<br />
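Because <math>y_{ij}</math> becomes binary and each customer simply takes the cheapest open facility, a small uncapacitated instance can be solved exactly by enumerating the subsets of open facilities. A brute-force sketch with hypothetical costs (exponential in the number of facilities, so only for illustration):<br />

```python
from itertools import chain, combinations

def solve_uflp(fixed, trans):
    """Exact uncapacitated FLP: try every nonempty set of open facilities;
    each customer is then served by its cheapest open facility."""
    n, m = len(fixed), len(trans[0])
    best_cost, best_open = float("inf"), None
    subsets = chain.from_iterable(combinations(range(n), k) for k in range(1, n + 1))
    for open_set in subsets:
        cost = sum(fixed[i] for i in open_set)                      # fixed costs f_i
        cost += sum(min(trans[i][j] for i in open_set) for j in range(m))  # service costs
        if cost < best_cost:
            best_cost, best_open = cost, open_set
    return best_cost, best_open

# Two candidate facilities, two customers (hypothetical costs, demand folded into t_ij).
fixed = [10, 10]
trans = [[1, 10], [10, 1]]
print(solve_uflp(fixed, trans))  # opening a single facility costs 21, both cost 22
```

Here opening either facility alone (cost 21) beats opening both (cost 22), illustrating the fixed-cost versus transport-cost trade-off in the objective above.<br />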
<br />
=== Approximate and Exact Algorithms ===<br />
A variety of approximate algorithms can be used to solve facility location problems. These algorithms terminate after a given number of steps based on the size of the problem, yielding a feasible solution with an error that does not exceed a constant approximation ratio<sup>(4)</sup>. This ratio <math>r</math> indicates that the approximate solution is no greater than the exact solution by a factor of <math>r</math>. <br />
<br />
While greedy algorithms generally do not perform well on FLPs, the primal-dual greedy algorithm presented by Jain and Vazirani tends to be faster in solving the uncapacitated FLP than LP-rounding algorithms, which solve the LP relaxation of the integer formulation and round the fractional results<sup>(4)</sup>. The Jain-Vazirani algorithm computes the primal and the dual to the LP relaxation simultaneously and guarantees a constant approximation ratio of 1.861<sup>(5)</sup>. This solver has a running time complexity of <math>O(m\log m)</math>, where <math>m</math> corresponds to the number of edges between facilities and cities. Improving upon this primal-dual approach, the modified Jain-Mahdian-Saberi algorithm guarantees a better approximation ratio for the uncapacitated problem<sup>(5)</sup>. <br />
<br />
To solve the capacitated FLP, which often contains more complex constraints, many algorithms utilize a Lagrangian decomposition<sup>(6)</sup>, first introduced by Held and Karp in the traveling salesman problem<sup>(7)</sup>. This approach allows constraints to be relaxed by penalizing this relaxation while solving a simplified problem. The capacitated problem has been effectively solved using this Lagrangian relaxation in conjunction with the volume algorithm, which is a variation of subgradient optimization presented by Barahona and Anbil<sup>(8)</sup>.<br />
<br />
Exact methods have also been presented for solving FLPs. To solve the <math>p</math>-median capacitated facility location problem, Ceselli introduces a branch-and-bound method that solves a Lagrangian relaxation with subgradient optimization, as well as a separate branch-and-price algorithm that utilizes column generation<sup>(9)</sup>. Ceselli's work indicates that branch-and-bound works well when the ratio of <math>p</math> sites to <math>N</math> customers is low, but the performance and run-time worsen significantly as this ratio increases. In comparison, the branch-and-price method demonstrates much more stable performance across various problem sizes and is generally faster overall.<br />
<br />
== Numerical Example ==<br />
Suppose a paper products manufacturer has enough capital to build and manage an additional manufacturing plant in the United States in order to meet increased demand in three cities: New York City, NY, Los Angeles, CA, and Topeka, KS. The company already has distribution facilities in Denver, CO, Seattle, WA, and St. Louis, MO, and due to limited capital, cannot build an additional distribution facility. So, they must choose to build their new plant in one of these three locations. Due to geographic constraints, plants in Denver, Seattle, and St. Louis would have a maximum operating capacity of 400 tons/day, 700 tons/day, and 600 tons/day, respectively. The cost of transporting the products from the plant to the city is directly proportional to the amount shipped, and an outline of the supply, demand, and cost of transportation is shown in the figure below. Regardless of where the plant is built, the selling price of the product is $100/ton. <br />
[[File:Example.png|center|780x780px]]<br />
'''Exact Solution''' <br />
<br />
To solve this problem, we will assign the following variables: <br />
<br />
<math>i</math> is the factory location<br />
<br />
<math>j</math> is the city destination<br />
<br />
<math>C_{ij}</math> is the cost of transporting one ton of product from the factory to the city<br />
<br />
<math>x_{ij}</math> is the amount of product transported from the factory to the city in tons<br />
<br />
<math>A_i</math> is the maximum operating capacity at the factory <br />
<br />
<math>D_j</math> is the amount of unmet demand in the city <br />
<br />
<br />
To determine where the company should build the factory, we will carry out the following optimization problem for each location to maximize the profit from each ton sold:<br />
<br />
max <math>\sum_{j\in J}x_{ij}(100-C_{ij}) </math><br />
<br />
subject to<br />
<br />
<math>\sum_{j\in J}x_{ij} \leq A_i </math> <math>\forall i\in I</math><br />
<br />
<math>\sum_{i\in I}x_{ij} \leq D_j</math> <math>\forall j\in J</math><br />
<br />
<math>x_{ij} \geq 0 </math> <math>\forall i \in I,</math> <math>\forall j \in J</math><br />
<br />
<br />
The problem is solved in GAMS (General Algebraic Modeling System).<br />
<br />
If the factory is built in Denver, 300 tons/day of product go to Los Angeles and 100 tons/day go to Topeka, for a total profit of $36,300/day.<br />
<br />
If the factory is built in Seattle, 300 tons/day of product go to Los Angeles, 100 tons/day of product go to Topeka, and 300 tons/day go to New York City, for a total profit of $56,500/day.<br />
<br />
If the factory is built in St. Louis, 100 tons/day of product go to Topeka and 500 tons/day go to New York City, for a total profit of $55,200/day.<br />
<br />
Therefore, to maximize profit, the factory should be built in Seattle.<br />
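For a fixed plant location, the subproblem above reduces to a small allocation problem: with a single capacitated facility, filling the highest-margin city first is optimal (a fractional-knapsack argument). The sketch below uses hypothetical transport costs, since the actual figures appear only in the figure above:<br />

```python
def allocate(capacity, demand, cost, price=100.0):
    """Greedy fill by profit margin; optimal for a single capacitated facility."""
    plan, profit = {}, 0.0
    # Serve cities in decreasing order of margin (price minus transport cost).
    for city in sorted(cost, key=lambda c: price - cost[c], reverse=True):
        ship = min(demand[city], capacity)
        plan[city] = ship
        profit += ship * (price - cost[city])
        capacity -= ship
        if capacity <= 0:
            break
    return plan, profit

# Hypothetical transport costs ($/ton) for a 400 ton/day plant; not the figure's data.
demand = {"NYC": 300, "LA": 300, "Topeka": 100}
cost = {"NYC": 60, "LA": 10, "Topeka": 20}
print(allocate(400, demand, cost))
```

Running this once per candidate site and comparing profits mirrors the three GAMS solves reported above.<br />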
<br />
<br />
'''Approximate Solution'''<br />
<br />
<br />
This example can also be solved approximately through the branch and bound method. The tree diagram showing the optimization is shown below. <br />
<br />
[[File:Branch and bound.png|center|frame|Branch and bound approach]]<br />
As shown in the tree diagram, building factories in both Denver and St. Louis would yield the highest profit of $82,200/day. Unfortunately, the company only has enough capital to build one facility. As a result, the only acceptable values are those in which one value is "1" and two are "0". Based on this constraint, it is clear that the company should build the factory in Seattle, as shown in the exact solution above. However, this also yields valuable information if the company hopes to expand again in the near future, because building factories in St. Louis and Denver is more profitable than building factories in Seattle and Denver or in Seattle and St. Louis. Depending on company projections, it may be a better decision to build the first factory in St. Louis and aim to build an additional factory in Denver as soon as possible. <br />
<br />
== Applications ==<br />
[[File:BadranElHaggarFacilityLocation.jpg|thumb|321x321px|Map of optimal collection stations in Port Said, Egypt<sup>(12)</sup>.]]<br />
Facility location problems are utilized in many industries to find the optimal placement of various facilities, including warehouses, power plants, public transportation terminals, polling locations, and cell towers, to maximize efficiency, impact, and profit. In more unique applications, extensive research has been done in applying FLPs to humanitarian efforts, such as identifying disaster management sites to maximize accessibility to healthcare and treatment<sup>(10)</sup>. A case study by researchers in Nigeria explored the application of mixed-integer FLPs in optimizing the locations of waste collection centers to provide sanitation services in crucial communities. More effective waste collection systems could combat unsanitary practices and environmental pollution, which are major concerns in many developing nations<sup>(11)</sup>. For example, Badran and El-Haggar proposed a solid waste management system for Port Said, Egypt, implementing a mixed-integer program to optimally place waste collection stations and minimize cost<sup>(12)</sup>. This program was formulated to select collection stations from a set of locations such that the sum of the fixed cost of opening collections stations, the operating costs of the collection stations, and the transportation costs from the collection stations to the composting plants is minimized. <br />
<br />
FLPs have also been used in clustering analysis, which involves partitioning a given set of elements (e.g. data points) into different groups based on the similarity of the elements. The elements can be placed into groups by identifying the locations of center points that effectively partition the set into clusters, based on the distances from the center points to each element<sup>(13)</sup>. For example, the <math>k</math>-median clustering problem can be formulated as a FLP that selects a set of <math>k</math> cluster centers to minimize the cost between each point and its closest center. The cost in this problem is represented as the Euclidean distance <math>d(i,j)</math> between a point <math>i</math> and a proposed cluster center <math>j</math>. The problem can be formulated as the following integer program, which selects <math>k</math> centers from a set of <math>N</math> points<sup>(13)</sup>. <br />
<br />
<math>\min\ \sum_{i=1}^N\sum_{j=1}^N x_{ij}d(i,j)</math> <br />
<br />
<math>s.t.\ \sum_{j=1}^Ny_j\leq k</math> <br />
<br />
<math>\quad \quad \sum_{j=1}^Nx_{ij}=1\ \ \forall\, i\in\{1,...,N\}</math> <br />
<br />
<math>\quad \quad x_{ij}\leq y_j\ \ \forall\, i,j\in\{1,...,N\}</math> <br />
<br />
<math>\quad \quad x_{ij}, y_j\in\{0,1\}</math> <br />
<br />
In this formulation, the binary variables <math>y_j</math> and <math>x_{ij}</math> represent whether <math>j</math> is used as a center point and whether <math>j</math> is the optimal center for <math>i</math>, respectively. The <math>k</math>-median problem is NP-hard and is commonly solved using approximation algorithms. One of the most effective algorithms to date, proposed by Byrka et al., has an approximation factor of 2.611<sup>(13)</sup>. <br />
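For tiny instances, the <math>k</math>-median problem can be solved exactly by enumerating every choice of <math>k</math> centers and assigning each point to its nearest one; the approximation algorithms cited above are needed only at scale. A 1-dimensional sketch with illustrative data (not from the cited work):<br />

```python
from itertools import combinations

def k_median(points, k):
    """Exact k-median by enumeration: each point joins its nearest chosen center."""
    best_cost, best_centers = float("inf"), None
    for centers in combinations(points, k):
        cost = sum(min(abs(p - c) for c in centers) for p in points)
        if cost < best_cost:
            best_cost, best_centers = cost, centers
    return best_cost, best_centers

# 1-D toy data forming two obvious clusters.
print(k_median([0, 1, 10, 11], 2))
```

The two chosen centers land one in each cluster, which is exactly the partitioning behavior the integer program above encodes through the <math>x_{ij}\leq y_j</math> constraints.<br />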
<br />
== Conclusion ==<br />
The facility location problem is an important application of computational optimization. The uses of this optimization technique are far-reaching, and can be used to determine anything from where a family should live based on the location of their workplaces and school to where a Fortune 500 company should put a new manufacturing plant or distribution facility to maximize their return on investment. <br />
<br />
== References ==<br />
<br />
# Drezner, Z; Hamacher. H. W. (2004), ''Facility Location Applications and Theory''. New York, NY: Springer.<br />
# Francis, R. L.; Mirchandani, P. B. (1990), ''Discrete Location Theory''. New York, NY: Wiley.<br />
# Hansen, P., et al. (1985), [https://pubsonline.informs.org/doi/abs/10.1287/opre.33.6.1251 The Minisum and Minimax Location Problems Revisited.] ''Operations Research, 33'', 6, 1251-1265.<br />
# Vygen, J. (2005), ''Approximation Algorithms for Facility Location Problems''. Research Institute for Discrete Mathematics, University of Bonn.<br />
# Jain, K., et al. (2003), [https://dl.acm.org/doi/10.1145/950620.950621 A Greedy Facility Location Algorithm Analyzed Using Dual Fitting with Factor-Revealing LP.] ''Journal of the ACM, 50'', 6, 795-824.<br />
# Alenezy, E. J. (2020), [https://www.hindawi.com/journals/aor/2020/5239176/ Solving Capacitated Facility Location Problem Using Lagrangian Decomposition and Volume Algorithm.] ''Advances in Operations Research,'' ''2020'', 5239176, 2020.<br />
# Held, M.; Karp, R. M. (1970), [https://pubsonline.informs.org/doi/abs/10.1287/opre.18.6.1138 The Traveling-Salesman Problem and Minimum Spanning Trees.] ''Operations Research, 18,'' 6, 1138-1162.<br />
# Barahona, F.; Anbil, R. (2000), [https://link.springer.com/article/10.1007%2Fs101070050002 The Volume Algorithm: Producing Primal solutions with a Subgradient Method.] ''Mathematical Programming, 87,'' 3, 385–399.<br />
# Ceselli, A. (2003), [https://link.springer.com/article/10.1007/s10288-003-0023-5 Two Exact Algorithms for the Capacitated p-Median Problem.] ''Quarterly Journal of the Belgian, French and Italian Operations Research Societies, 4'', 1, 319-340.<br />
# Daskin, M. S.; Dean, L. K. (2004), [https://link.springer.com/chapter/10.1007/1-4020-8066-2_3 Location of Health Care Facilities.] ''Handbook of OR/MS in Health Care: A Handbook of Methods and Applications'', 43-76.<br />
# Adeleke, O. J.; Olukanni, D. O. (2020), [https://www.mdpi.com/2313-4321/5/2/10 Facility Location Problems: Models, Techniques, and Applications in Waste Management.] ''Recycling, 5'', 10.<br />
# Badran, M.F.; El-Haggar, S.M. (2006), [https://www.sciencedirect.com/science/article/abs/pii/S0956053X05001534 Optimization of Municipal Solid Waste Management in Port Said – Egypt.] ''Waste Management, 26'', 5, 534-545.<br />
# Meira, L. A. A., et al. (2017), [https://www.sciencedirect.com/science/article/abs/pii/S030439751630514X Clustering through Continuous Facility Location Problems.] ''Theoretical Computer Science, 657'', 137-145.<br />
# Balcik, B.; Beamon, B. M. (2008), [https://www.tandfonline.com/doi/full/10.1080/13675560701561789 Facility Location in Humanitarian Relief.] ''International Journal of Logistics Research and Applications, 11'', 101-121.<br />
# Eiselt, H. A.; Marianov, V. (2019), ''Contributions to Location Analysis''. Cham, Switzerland: Springer.</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=2020_Cornell_Optimization_Open_Textbook_Feedback&diff=27012020 Cornell Optimization Open Textbook Feedback2020-12-21T10:29:39Z<p>Wc593: /* Markov decision process */</p>
<hr />
<div>==[[Computational complexity]]==<br />
<br />
* Numerical Example<br />
*# Finding subsets of a set is NOT O(2<sup>n</sup>).<br />
* Application<br />
*# The applications mentioned need to be discussed further.<br />
<br />
==[[Network flow problem]]==<br />
<br />
* Real Life Applications<br />
*# There is NO need to include code. Simply mention how the problem was coded along with details on the LP solver used.<br />
<br />
==[[Interior-point method for LP]]==<br />
<br />
* Introduction<br />
*# Please type “minimize” and “subject to” in formal optimization problem form throughout the whole page.<br />
* A section to discuss and/or illustrate the applications<br />
*# Please type optimization problem in the formal form.<br />
<br />
==[[Optimization with absolute values]]==<br />
<br />
* An introduction of the topic<br />
*# Add a few sentences on how absolute values convert an optimization problem into a nonlinear optimization problem.<br />
* Applications<br />
*# Inline equations at the beginning of this section are not formatted properly. Please fix the notation for expected return throughout the section.<br />
<br />
==[[Matrix game (LP for game theory)]]==<br />
<br />
* Theory and Algorithmic Discussion<br />
*# The coefficients a<sub>ij</sub> are not defined in this section.<br />
<br />
==[[Quasi-Newton methods]]==<br />
<br />
* Theory and Algorithm<br />
*# Please ensure that a few spaces are kept between the equations and equation numbers.<br />
<br />
==[[Eight step procedures]]==<br />
<br />
* Numerical Example<br />
*# Data for the example Knapsack problem (b,w) are missing.<br />
*# How to arrive at optimal solutions is missing.<br />
<br />
==[[Facility location problem]]==<br />
<br />
* Numerical Example<br />
*# Mention how the formulated problem is coded and solved. No need to provide GAMS code.<br />
<br />
==[[Set covering problem]]==<br />
<br />
* Integer linear program formulation & Approximation via LP relaxation and rounding<br />
*# Use proper math notation for “greater than or equal to”.<br />
* Numerical Example<br />
*# Please leave some space between equation and equation number.<br />
<br />
==[[Quadratic assignment problem]]==<br />
<br />
* Theory, methodology, and/or algorithmic discussions<br />
*# Discuss dynamic programming and cutting plane solution techniques briefly.<br />
<br />
==[[Newsvendor problem]]==<br />
<br />
* Formulation<br />
*# A math programming formulation of the optimization problem with objective function and constraints is expected for the formulation. Please add any variant of the newsvendor problem along with some operational constraints.<br />
*# A mathematical presentation of the solution technique is expected. Please consider any distribution for R and present a solution technique for that specific problem. <br />
<br />
==[[Mixed-integer cuts]]==<br />
<br />
* Applications<br />
*# MILP and their solution techniques involving cuts are extremely versatile. Yet, only two sentences are added to describe their applications. Please discuss their applications, preferably real-world applications, in brief. Example Wikis provided on the website could be used as a reference to do so.<br />
<br />
==[[Column generation algorithms]]==<br />
<br />
* Introduction<br />
*# References at the end of the sentence should be placed after the period.<br />
* Theory, methodology and algorithmic discussions<br />
*# Some minor typos/article agreement issues exist, e.g., “is not partical in real-world”.<br />
<br />
==[[Heuristic algorithms]]==<br />
<br />
* Methodology<br />
*# Please use proper symbol for "greater than or equal to".<br />
*# Greedy method to solve minimum spanning tree seems to be missing.<br />
<br />
==[[Branch and cut]]==<br />
<br />
* Methodology & Algorithm<br />
*# Equation in most infeasible branching section is not properly formatted.<br />
*# Step 2 appears abruptly in the algorithm and does not explain much. Please add more information regarding the same.<br />
*# Step 5 contains latex code terms that are not properly formatted. Please fix the same.<br />
*# Fix typos: e.g., repeated “for the current”.<br />
<br />
== [[Mixed-integer linear fractional programming (MILFP)]] ==<br />
<br />
* Application and Modeling for Numerical Examples<br />
*# Please check the index notation in the Mass Balance Constraint.<br />
<br />
==[[Fuzzy programming]]==<br />
<br />
* Applications<br />
*# Applications of fuzzy programming are quite versatile. Please discuss a few of the mentioned applications briefly. The provided example Wikis can be used as a reference to write this section.<br />
<br />
==[[Adaptive robust optimization]]==<br />
<br />
* Problem Formulation<br />
*# Please check typos such as "Let ''u'' bee a vector".<br />
*# The abbreviation KKT is not previously defined.<br />
<br />
== [[Stochastic gradient descent]] ==<br />
* Numerical Example<br />
*# The amount of whitespace can be reduced by changing the orientation of the example dataset, converting it into a table with 3 rows and 6 columns.<br />
<br />
==[[RMSProp]]==<br />
<br />
* Introduction<br />
*# References at the end of the sentence should be placed after the period.<br />
* Theory and Methodology<br />
*# Please check grammar in this section.<br />
* Applications and Discussion<br />
*# The applications section does not contain any discussion on applications. Please mention a few applications of the widely used RMSprop and discuss them briefly.<br />
<br />
==[[Adam]]==<br />
<br />
* Background<br />
*# References at the end of the sentence should be placed after the period.</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Markov_decision_process&diff=2700Markov decision process2020-12-21T10:28:56Z<p>Wc593: /* Introduction */</p>
<hr />
<div>Author: Eric Berg (eb645), Fall 2020<br />
<br />
= Introduction =<br />
A Markov Decision Process (MDP) is a stochastic sequential decision making method.<math>^1</math> Sequential decision making is applicable any time there is a dynamic system that is controlled by a decision maker where decisions are made sequentially over time. MDPs can be used to determine what action the decision maker should make given the current state of the system and its environment. This decision making process takes into account information from the environment, actions performed by the agent, and rewards in order to decide the optimal next action. MDPs can be characterized as both finite or infinite and continuous or discrete depending on the set of actions and states available and the decision making frequency.<math>^1</math> This article will focus on discrete MDPs with finite states and finite actions for the sake of simplified calculations and numerical examples. The name Markov refers to the Russian mathematician Andrey Markov, since the MDP is based on the Markov Property. In the past, MDPs have been used to solve problems like inventory control, queuing optimization, and routing problems.<math>^2</math> Today, MDPs are often used as a method for decision making in reinforcement learning applications, serving as the framework that guides the machine to make decisions and "learn" how to behave in order to achieve its goal.<br />
<br />
= Theory and Methodology =<br />
An MDP makes decisions using information about the system's current state, the actions being performed by the agent, and the rewards earned based on states and actions.<br />
<br />
The MDP is made up of multiple fundamental elements: the agent, states, a model, actions, rewards, and a policy.<math>^1</math> The agent is the object or system being controlled that has to make decisions and perform actions. The agent lives in an environment that can be described using states, which contain information about the agent and the environment. The model determines the rules of the world in which the agent lives, in other words, how certain states and actions lead to other states. The agent can perform a fixed set of actions in any given state. The agent receives rewards based on its current state. A policy is a function that determines the agent's next action based on its current state. [[File:Reinforcement Learning.png|thumb|Reinforcement Learning framework used in Markov Decision Processes]]'''MDP Framework:'''<br />
<br />
*<math>S</math> : States (<math>s \in S</math>)<br />
*<math>A</math> : Actions (<math>a \in A</math>)<br />
*<math>P(s_{t+1} | s_t, a_t)</math> : Model determining transition probabilities<br />
*<math>R(s)</math>: Reward<br /><br />
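The framework above can be sketched as a small data structure. The following Python snippet is a minimal illustration; the two-state "battery" MDP it encodes is invented for this sketch and is not part of the article:

```python
# A minimal MDP encoded as plain dictionaries: states S, actions A,
# transition model P(s' | s, a), and reward R(s).
# The two-state "battery" example here is invented for illustration.
S = ["high", "low"]
A = ["work", "recharge"]

# P[s][a] maps each possible next state s' to its transition probability.
P = {
    "high": {"work": {"high": 0.7, "low": 0.3}, "recharge": {"high": 1.0}},
    "low":  {"work": {"high": 0.1, "low": 0.9}, "recharge": {"high": 1.0}},
}

# R[s] is the reward earned in state s.
R = {"high": 1.0, "low": -1.0}

# Sanity check: each transition distribution sums to 1.
for s in S:
    for a in P[s]:
        assert abs(sum(P[s][a].values()) - 1.0) < 1e-9
```

Representations like this (or their matrix equivalents) are what the algorithms later in the article operate on.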
In order to understand how the MDP works, first the Markov Property must be defined. The Markov Property states that the future is independent of the past given the present.<math>^4</math> In other words, only the present is needed to determine the future, since the present contains all necessary information from the past. The Markov Property can be described in mathematical terms below:<br />
<br />
<math display="inline">P[S_{t+1} | S_t] = P[S_{t+1} | S_1, S_2, S_3... S_t]</math><br />
<br />
The above notation conveys that the probability of the next state given the current state is equal to the probability of the next state given all previous states. The Markov Property is relevant to the MDP because only the current state is used to determine the next action; the previous states and actions are not needed. <br />
<br />
'''The Policy and Value Function'''<br />
<br />
The policy, <math>\Pi</math>, is a function that maps states to actions. The policy determines the optimal action to take in the current state in order to achieve the maximum total reward. <br />
<br />
<math>\Pi : S \rightarrow A </math><br />
<br />
Before the best policy can be determined, a goal or return must be defined to quantify rewards at every state. There are various ways to define the return. Each variation of the return function tries to maximize rewards in some way, but differs in which accumulation of rewards should be maximized. The first method is to choose the action that maximizes the expected reward given the current state. This is the myopic method, which weighs each time-step decision equally.<math>^2</math> Next is the finite-horizon method, which tries to maximize the accumulated reward over a fixed number of time steps.<math>^2</math> But because many applications may have infinite horizons, meaning the agent will always have to make decisions and continuously try to maximize its reward, another method is commonly used, known as the infinite-horizon method. In the infinite-horizon method, the goal is to maximize the expected sum of rewards over all steps in the future.<math>^2</math> When performing an infinite sum of rewards that are all weighed equally, the results may not converge and the policy algorithm may get stuck in a loop. In order to avoid this, and to be able to prioritize short-term or long-term rewards, a discount factor, <math>\gamma</math>, is added.<math>^3</math> If <math>\gamma</math> is closer to 0, the policy will choose actions that prioritize more immediate rewards; if <math>\gamma</math> is closer to 1, long-term rewards are prioritized.<br />
<br />
Return/Goal Variations:<br />
<br />
* Myopic: Maximize <math>E[ r_t | \Pi , s_t ] <br />
</math> , maximize expected reward for each state<br />
* Finite-horizon: Maximize <math>E[ \textstyle \sum_{t=0}^k \displaystyle r_t | \Pi , s_t ] <br />
</math> , maximize sum of expected reward over finite horizon<br />
* Discounted Infinite-horizon: Maximize <math>E[ \textstyle \sum_{t=0}^\infty \displaystyle \gamma^t r_t | \Pi , s_t ] <br />
</math>, <math>\gamma \in [0,1]</math>, maximize sum of discounted expected reward over an infinite horizon<br />
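To make the role of the discount factor concrete, the snippet below computes the discounted return for a constant reward stream (a small illustration with invented numbers; for a constant reward <math>r</math> and <math>0 \le \gamma < 1</math>, the infinite sum converges to <math>r/(1-\gamma)</math>):

```python
# Discounted return: sum over t of gamma^t * r_t, truncated at a long horizon.
def discounted_return(rewards, gamma):
    return sum(gamma**t * r for t, r in enumerate(rewards))

rewards = [1.0] * 1000                 # a long stream of constant unit rewards
print(discounted_return(rewards, 0.9))  # approaches 1 / (1 - 0.9) = 10
print(discounted_return(rewards, 0.5))  # approaches 1 / (1 - 0.5) = 2
```

A smaller <math>\gamma</math> shrinks the effective horizon: with <math>\gamma = 0.5</math> rewards more than a few steps ahead contribute almost nothing to the return.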
The value function, <math>V(s) <br />
</math>, characterizes the return at a given state. Most commonly, the discounted infinite horizon return method is used to determine the best policy. Below the value function is defined as the expected sum of discounted future rewards.<br />
<br />
<math>V(s) = E[ \sum_{t=0}^\infty \gamma^t r_t | s_t ] <br />
</math><br />
<br />
The value function can be decomposed into two parts: the immediate reward of the current state, and the discounted value of the next state. This decomposition leads to the derivation of the [[Bellman equation|Bellman Equation]], shown in equations (1) and (2). Because the actions and rewards are dependent on the policy, the value function of an MDP is associated with a given policy.<br />
<br />
<math>V(s) = E[ r_{t+1} + \gamma V(s_{t+1}) | s_t] <br />
</math> , <math>s_{t+1} = s' <br />
</math><br />
<br />
<math>V(s) = R(s) + \gamma \sum_{s' \in S}P_{ss'}V(s') <br />
</math><br />
<br />
<math>V^{\Pi}(s) = R(s,\Pi(s)) + \gamma \sum_{s' \in S}P(s' | s,\Pi(s))V(s') <br />
</math> (1)<br />
<br />
<math>V^{*}(s) = max_a [R(s, a) + \gamma \sum_{s' \in S}P(s' | s, a)V^*(s')] <br />
</math> (2)<br />
<br />
The optimal value function can be solved using iterative methods such as dynamic programming, Monte Carlo evaluation, or temporal-difference learning.<math>^5</math> <br />
<br />
The optimal policy is one that chooses the action with the largest optimal value given the current state:<br />
<br />
<math>\Pi^*(s) = argmax_a [R(s,a) + \gamma \sum_{s' \in S}P_{ss'}^aV(s')] <br />
</math> (3)<br />
<br />
The policy is a function of the current state, meaning at each time step a new policy is calculated considering the present information. The optimal policy function can be solved using methods such as value iteration, policy iteration, Q-learning, or linear programming. <math>^{5,6}</math><br />
<br />
'''Algorithms'''<br />
<br />
The first method for solving the optimality equation (2) is using value iteration, also known as successive approximation, backwards induction, or dynamic programming. <math>^{1,6}</math><br />
<br />
Value Iteration Algorithm:<br />
<br />
# Initialization: Set <math>V^{*}_0(s) = 0</math> for all <math>s \in S</math>, choose <math>\varepsilon > 0</math>, n=1<br />
# Value Update: For each <math>s \in S</math>, compute: <math>V^{*}_{n+1}(s) = max_a [R(s, a) + \gamma \sum_{s' \in S}P(s' | s, a)V^*_n(s')]</math><br />
# If <math>| V_{n+1} - V_n | < \varepsilon</math>, the algorithm has converged and the optimal value function, <math>V^*</math>, has been determined; otherwise return to step 2 and increment n by 1.<br />
The value function approximation becomes more accurate at each iteration because more future states are considered. The value iteration algorithm can be slow to converge in certain situations, so an alternative algorithm can be used which converges more quickly.<br />
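The value iteration algorithm can be implemented in a few lines. The sketch below follows the three steps above; the two-state MDP it is run on is invented for illustration and is not from the article:

```python
# Value iteration: V_{n+1}(s) = max_a [ R(s,a) + gamma * sum_{s'} P(s'|s,a) V_n(s') ].
# The toy MDP below (two states, two actions) is invented for this sketch.
P = {
    "s1": {"stay": {"s1": 1.0}, "go": {"s2": 1.0}},
    "s2": {"stay": {"s2": 1.0}, "go": {"s1": 1.0}},
}
R = {"s1": {"stay": 0.0, "go": 1.0}, "s2": {"stay": 2.0, "go": 0.0}}

def value_iteration(P, R, gamma=0.9, eps=1e-8):
    V = {s: 0.0 for s in P}                       # step 1: V_0(s) = 0
    while True:
        V_new = {                                 # step 2: value update
            s: max(R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a].items())
                   for a in P[s])
            for s in P
        }
        if max(abs(V_new[s] - V[s]) for s in P) < eps:
            return V_new                          # step 3: converged
        V = V_new

V = value_iteration(P, R)
```

For this toy problem the fixed point can be checked by hand: staying in s2 forever gives <math>2/(1-0.9) = 20</math>, and going there from s1 gives <math>1 + 0.9 \cdot 20 = 19</math>.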
<br />
Policy Iteration Algorithm:<br />
<br />
# Initialization: Set an arbitrary policy <math>\Pi(s)</math> and <math>V(s)</math> for all <math>s \in S</math>, choose <math>\varepsilon > 0</math>, n=1<br />
# Policy Evaluation: For each <math>s \in S</math>, compute: <math>V^{\Pi}_{n+1}(s) = R(s,\Pi(s)) + \gamma \sum_{s' \in S}P(s' | s,\Pi(s))V^{\Pi}_n(s')</math><br />
# If <math>| V_{n+1} - V_n | < \varepsilon</math>, the value function of the current policy, <math>V^{\Pi}</math>, has been determined; continue to the next step. Otherwise return to step 2 and increment n by 1.<br />
# Policy Update: For each <math>s \in S</math>, compute: <math>\Pi_{n+1}(s) = argmax_a [R(s,a) + \gamma \sum_{s' \in S}P(s' | s,a)V^{\Pi}_{n+1}(s')]</math><br />
# If <math>\Pi_{n+1} = \Pi_n</math>, the algorithm has converged and the optimal policy, <math>\Pi^*</math>, has been determined; otherwise return to step 2 and increment n by 1.<br />
<br />
With each iteration the optimal policy is improved using the previous policy and value function until the algorithm converges and the optimal policy is found.<br />
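Policy iteration can be sketched the same way: evaluate the current policy, then act greedily with respect to its value function, repeating until the policy stops changing. The two-state MDP here is again invented for illustration:

```python
# Policy iteration: alternate policy evaluation and greedy policy update.
# The toy MDP below (two states, two actions) is invented for this sketch.
P = {
    "s1": {"stay": {"s1": 1.0}, "go": {"s2": 1.0}},
    "s2": {"stay": {"s2": 1.0}, "go": {"s1": 1.0}},
}
R = {"s1": {"stay": 0.0, "go": 1.0}, "s2": {"stay": 2.0, "go": 0.0}}

def policy_iteration(P, R, gamma=0.9, eps=1e-8):
    def q(s, a, V):   # one-step lookahead value of action a in state s
        return R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a].items())

    pi = {s: next(iter(P[s])) for s in P}        # arbitrary initial policy
    while True:
        V = {s: 0.0 for s in P}
        while True:                               # policy evaluation
            V_new = {s: q(s, pi[s], V) for s in P}
            if max(abs(V_new[s] - V[s]) for s in P) < eps:
                V = V_new
                break
            V = V_new
        pi_new = {s: max(P[s], key=lambda a: q(s, a, V)) for s in P}
        if pi_new == pi:                          # policy stable: optimal
            return pi, V
        pi = pi_new

pi, V = policy_iteration(P, R)
```

On this toy problem the stable policy is to go from s1 to s2 and then stay, matching the values found by value iteration.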
<br />
= Numerical Example =<br />
[[File:Markov Decision Process Example 2.png|alt=|thumb|499x499px|A Markov Decision Process describing a college student's hypothetical situation.]]<br />
As an example, the MDP can be applied to a college student, depicted to the right. In this case, the agent is the student. The states are the circles and squares in the diagram, and the arrows are the actions. The action between Work and School is to leave work and go to school. In the state where the student is at school, the allowable actions are to go to the bar, enjoy their hobby, or sleep. In this example, the probability assigned to each state given the previous state and action is 1. The rewards associated with each state are written in red.<br />
<br />
Assume <math>P(s'|s) = 1.0</math>, <math>\gamma = 1</math>.<br />
<br />
First, the optimal value functions must be calculated for each state.<br />
<br />
<math>V^{*}(s) = max_a [R(s, a) + \gamma \sum_{s' \in S}P(s' | s, a)V^*(s')] <br />
</math><br />
<br />
<math>V^{*}(Hobby) = max_a [3 + (1)(1.0*0)] = 3 <br />
</math><br />
<br />
<math>V^{*}(Bar) = max_a [2 + 1(1.0*0)] = 2 <br />
</math> <br />
<br />
<math>V^*(Sleep) = max_a[0 + 1(1.0*0)] = 0 <br />
</math><br />
<br />
<math>V^*(School) = max_a[ -2 + 1(1.0*2) , -2 + 1(1.0*0) , -2 + 1(1.0*3)] = 1 <br />
</math><br />
<br />
<math>V^*(YouTube) = max_a[-1 + 1(1.0*-1) , -1 +1(1.0*1)]= 0 <br />
</math><br />
<br />
<math>V^*(Work) = max_a[1 + 1(1.0*0) , 1 + 1(1.0*1)] = 2 <br />
</math><br />
<br />
Then, the optimal policy at each state will choose the action that generates the highest value function.<br />
<br />
<math>\Pi^*(s) = argmax_a [R(s,a) + \gamma \sum_{s' \in S}P_{ss'}^aV(s')] <br />
</math><br />
<br />
<math>\Pi^*(YouTube) = argmax_a [0,2] \rightarrow a = <br />
</math> Work<br />
<br />
<math>\Pi^*(Work) = argmax_a [0,1] \rightarrow a = <br />
</math> School<br />
<br />
<math>\Pi^*(School) = argmax_a [0,2,3] \rightarrow a = <br />
</math> Hobby<br />
<br />
Therefore, the optimal policy in each state provides a sequence of decisions that generates the optimal path through this decision process. As a result, if the student starts in state Work, they should choose to go to school, then enjoy their hobby, then go to sleep.<br />
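The greedy policy extraction in this example can be checked with a short script. The successor lists below are read off the example diagram as described in the text (treated as an assumption here, since the figure itself is not reproduced), and the state values are the ones computed above:

```python
# Deterministic student MDP from the example (gamma = 1, all transitions certain).
# successors[s] lists the states reachable from s, as read from the diagram
# (an assumption in this sketch).
R = {"YouTube": -1, "Work": 1, "School": -2, "Bar": 2, "Hobby": 3, "Sleep": 0}
successors = {
    "YouTube": ["YouTube", "Work"],
    "Work": ["YouTube", "School"],
    "School": ["Bar", "Sleep", "Hobby"],
}
# Optimal state values computed in the example above.
V = {"YouTube": 0, "Work": 2, "School": 1, "Bar": 2, "Hobby": 3, "Sleep": 0}

# Greedy policy: pi*(s) = argmax over successors s' of [ R(s) + V(s') ].
pi = {s: max(succ, key=lambda t: R[s] + V[t]) for s, succ in successors.items()}
```

Running this recovers the same decisions as the example: Work leads to School, School to Hobby, and YouTube to Work.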
<br />
= Applications =<br />
[[File:Pong.jpg|thumb|Computer playing Pong arcade game by Atari using reinforcement learning]]<br />
MDPs have been applied in various fields including operations research, electrical engineering, computer science, manufacturing, economics, finance, and telecommunication.<math>^2</math> For example, the sequential decision making process described by an MDP can be used to solve routing problems such as the [[Traveling salesman problem]]. In this case, the agent is the salesman, the actions available are the routes that can be taken from the current state, the rewards are the costs of each route, and the goal is to determine the optimal policy that minimizes the cost function over the duration of the trip. Another application example is maintenance and repair problems, in which a dynamic system such as a vehicle will deteriorate over time due to its actions and the environment, and the available decisions at every time epoch are to do nothing, repair, or replace a certain component of the system.<math>^2</math> This problem can be formulated as an MDP to choose the actions that minimize the cost of maintenance over the life of the vehicle. MDPs have also been applied to optimize telecommunication protocols, stock trading, and queue control in manufacturing environments.<math>^2</math> <br />
<br />
Given the significant advancements in artificial intelligence and machine learning over the past decade, MDPs are being applied in fields such as robotics, automated systems, autonomous vehicles, and other complex autonomous systems. MDPs have been used widely within reinforcement learning to teach robots or other computer-based systems how to do something they were previously unable to do. For example, MDPs have been used to teach a computer how to play games like Pong, Pac-Man, and Go.<math>^{7,8}</math> DeepMind Technologies, owned by Google, used the MDP framework in conjunction with neural networks to play Atari games better than human experts.<math>^7</math> In this application, only the raw pixel input of the game screen was used as input, and a neural network was used to estimate the value function for each state and choose the next action.<math>^7</math> MDPs have been used in more advanced applications to teach a simulated human robot how to walk and run, and a real legged robot how to walk.<math>^9</math> <br />
[[File:Google Deepmind.jpg|thumb|Google's DeepMind uses reinforcement learning to teach AI how to walk]]<br />
<br />
= Conclusion =<br />
<br />
An MDP is a stochastic, sequential decision-making method based on the Markov Property. MDPs can be used to make optimal decisions for a dynamic system given information about its current state and its environment. This process is fundamental in reinforcement learning applications and a core method for developing artificially intelligent systems. MDPs have been applied to a wide variety of industries and fields including robotics, operations research, manufacturing, economics, and finance.<br />
<br />
= References =<br />
<br />
<references /><br />
<br />
# Puterman, M. L. (1990). Chapter 8 Markov decision processes. In ''Handbooks in Operations Research and Management Science'' (Vol. 2, pp. 331–434). Elsevier. <nowiki>https://doi.org/10.1016/S0927-0507(05)80172-0</nowiki><br />
# Feinberg, E. A., & Shwartz, A. (2012). ''Handbook of Markov Decision Processes: Methods and Applications''. Springer Science & Business Media.<br />
# Howard, R. A. (1960). ''Dynamic programming and Markov processes.'' John Wiley.<br />
# Ashraf, M. (2018, April 11). ''Reinforcement Learning Demystified: Markov Decision Processes (Part 1)''. Medium. <nowiki>https://towardsdatascience.com/reinforcement-learning-demystified-markov-decision-processes-part-1-bf00dda41690</nowiki><br />
# Bertsekas, D. P. (2011). Dynamic Programming and Optimal Control 3rd Edition, Volume II. ''Massachusetts Institute of Technology'', 233.<br />
# Littman, M. L. (2001). Markov Decision Processes. In N. J. Smelser & P. B. Baltes (Eds.), ''International Encyclopedia of the Social & Behavioral Sciences'' (pp. 9240–9242). Pergamon. <nowiki>https://doi.org/10.1016/B0-08-043076-7/00614-8</nowiki><br />
# Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. (2013). Playing Atari with Deep Reinforcement Learning. ''ArXiv:1312.5602 [Cs]''. <nowiki>http://arxiv.org/abs/1312.5602</nowiki><br />
# Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., Lillicrap, T., Simonyan, K., & Hassabis, D. (2018). A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. ''Science'', ''362''(6419), 1140–1144. <nowiki>https://doi.org/10.1126/science.aar6404</nowiki><br />
# Ha, S., Xu, P., Tan, Z., Levine, S., & Tan, J. (2020). Learning to Walk in the Real World with Minimal Human Effort. ''ArXiv:2002.08550 [Cs]''. <nowiki>http://arxiv.org/abs/2002.08550</nowiki><br />
# Bellman, R. (1966). Dynamic Programming. ''Science'', ''153''(3731), 34–37. <nowiki>https://doi.org/10.1126/science.153.3731.34</nowiki><br />
# Abbeel, P. (2016). ''Markov Decision Processes and Exact Solution Methods:'' 34.<br />
# Silver, D. (2015). Markov Decision Processes. ''Markov Processes'', 57.<br />
</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Markov_decision_process&diff=2699Markov decision process2020-12-21T10:26:26Z<p>Wc593: </p>
<hr />
<div>Author: Eric Berg (eb645), Fall 2020<br />
<br />
= Introduction =<br />
A Markov Decision Process (MDP) is a stochastic sequential decision making method.<math>^1</math> Sequential decision making is applicable any time there is a dynamic system that is controlled by a decision maker where decisions are made sequentially over time. MDPs can be used to determine what action the decision maker should make given the current state of the system and its environment. This decision making process takes into account information from the environment, actions performed by the agent, and rewards in order to decide the optimal next action. MDPs can be characterized as both finite or infinite and continuous or discrete depending on the set of actions and states available and the decision making frequency.<math>^1</math> This article will focus on discrete MDPs with finite states and finite actions for the sake of simplified calculations and numerical examples. The name Markov refers to the Russian mathematician Andrey Markov, since the MDP is based on the Markov Property. In the past, MDPs have been used to solve problems like inventory control, queuing optimization, and routing problems.<math>^2</math> Today, MDPs are often used as a method for decision making in reinforcement learning applications, serving as the framework guiding the machine to make decisions and "learn" how to behave in order to achieve its goal.<br />
<br />
= Theory and Methodology =<br />
An MDP makes decisions using information about the system's current state, the actions being performed by the agent, and the rewards earned based on states and actions.<br />
<br />
The MDP is made up of multiple fundamental elements: the agent, states, a model, actions, rewards, and a policy.<math>^1</math> The agent is the object or system being controlled that has to make decisions and perform actions. The agent lives in an environment that can be described using states, which contain information about the agent and the environment. The model determines the rules of the world in which the agent lives, in other words, how certain states and actions lead to other states. The agent can perform a fixed set of actions in any given state. The agent receives rewards based on its current state. A policy is a function that determines the agent's next action based on its current state. [[File:Reinforcement Learning.png|thumb|Reinforcement Learning framework used in Markov Decision Processes]]'''MDP Framework:'''<br />
<br />
*<math>S</math> : States (<math>s \in S</math>)<br />
*<math>A</math> : Actions (<math>a \in A</math>)<br />
*<math>P(s_{t+1} | s_t, a_t)</math> : Model determining transition probabilities<br />
*<math>R(s)</math>: Reward<br /><br />
In order to understand how the MDP works, first the Markov Property must be defined. The Markov Property states that the future is independent of the past given the present.<math>^4</math> In other words, only the present is needed to determine the future, since the present contains all necessary information from the past. The Markov Property can be described in mathematical terms below:<br />
<br />
<math display="inline">P[S_{t+1} | S_t] = P[S_{t+1} | S_1, S_2, S_3... S_t]</math><br />
<br />
The above notation conveys that the probability of the next state given the current state is equal to the probability of the next state given all previous states. The Markov Property is relevant to the MDP because only the current state is used to determine the next action; the previous states and actions are not needed. <br />
<br />
'''The Policy and Value Function'''<br />
<br />
The policy, <math>\Pi</math>, is a function that maps states to actions. The policy determines the optimal action to take in the current state in order to achieve the maximum total reward. <br />
<br />
<math>\Pi : S \rightarrow A </math><br />
<br />
Before the best policy can be determined, a goal or return must be defined to quantify rewards at every state. There are various ways to define the return. Each variation of the return function tries to maximize rewards in some way, but differs in which accumulation of rewards should be maximized. The first method is to choose the action that maximizes the expected reward given the current state. This is the myopic method, which weighs each time-step decision equally.<math>^2</math> Next is the finite-horizon method, which tries to maximize the accumulated reward over a fixed number of time steps.<math>^2</math> But because many applications may have infinite horizons, meaning the agent will always have to make decisions and continuously try to maximize its reward, another method is commonly used, known as the infinite-horizon method. In the infinite-horizon method, the goal is to maximize the expected sum of rewards over all steps in the future.<math>^2</math> When performing an infinite sum of rewards that are all weighed equally, the results may not converge and the policy algorithm may get stuck in a loop. In order to avoid this, and to be able to prioritize short-term or long-term rewards, a discount factor, <math>\gamma</math>, is added.<math>^3</math> If <math>\gamma</math> is closer to 0, the policy will choose actions that prioritize more immediate rewards; if <math>\gamma</math> is closer to 1, long-term rewards are prioritized.<br />
<br />
Return/Goal Variations:<br />
<br />
* Myopic: Maximize <math>E[ r_t | \Pi , s_t ] <br />
</math> , maximize expected reward for each state<br />
* Finite-horizon: Maximize <math>E[ \textstyle \sum_{t=0}^k \displaystyle r_t | \Pi , s_t ] <br />
</math> , maximize sum of expected reward over finite horizon<br />
* Discounted Infinite-horizon: Maximize <math>E[ \textstyle \sum_{t=0}^\infty \displaystyle \gamma^t r_t | \Pi , s_t ] <br />
</math>, <math>\gamma \in [0,1]</math>, maximize sum of discounted expected reward over an infinite horizon<br />
The value function, <math>V(s) <br />
</math>, characterizes the return at a given state. Most commonly, the discounted infinite horizon return method is used to determine the best policy. Below the value function is defined as the expected sum of discounted future rewards.<br />
<br />
<math>V(s) = E[ \sum_{t=0}^\infty \gamma^t r_t | s_t ] <br />
</math><br />
<br />
The value function can be decomposed into two parts: the immediate reward of the current state, and the discounted value of the next state. This decomposition leads to the derivation of the [[Bellman equation|Bellman Equation]], shown in equations (1) and (2). Because the actions and rewards are dependent on the policy, the value function of an MDP is associated with a given policy.<br />
<br />
<math>V(s) = E[ r_{t+1} + \gamma V(s_{t+1}) | s_t] <br />
</math> , <math>s_{t+1} = s' <br />
</math><br />
<br />
<math>V(s) = R(s) + \gamma \sum_{s' \epsilon S}P_{ss'}V(s') <br />
</math><br />
<br />
<math>V^{\Pi}(s) = R(s,\Pi(s)) + \gamma \sum_{s' \epsilon S}P(s' | s,\Pi(s))V(s') <br />
</math> (1)<br />
<br />
<math>V^{*}(s) = max_a [R(s, a) + \gamma \sum_{s' \epsilon S}P(s' | s, a)V^*(s')] <br />
</math> (2)<br />
<br />
The optimal value function can be solved using iterative methods such as dynamic programming, Monte-Carlo evaluation, or temporal-difference learning.<math>^5</math> <br />
<br />
The optimal policy is one that chooses the action with the largest optimal value given the current state:<br />
<br />
<math>\Pi^*(s) = argmax_a [R(s,a) + \gamma \sum_{s' \epsilon S}P_{ss'}^aV(s')] <br />
</math> (3)<br />
<br />
The policy is a function of the current state, meaning at each time step a new policy is calculated considering the present information. The optimal policy function can be solved using methods such as value iteration, policy iteration, Q-learning, or linear programming. <math>^{5,6}</math><br />
<br />
'''Algorithms'''<br />
<br />
The first method for solving the optimality equation (2) is using value iteration, also known as successive approximation, backwards induction, or dynamic programming. <math>^{1,6}</math><br />
<br />
Value Iteration Algorithm:<br />
<br />
# Initialization: Set <math>V^{*}_0(s) = 0 <br />
</math> for all <math>s \epsilon S</math> , choose <math>\varepsilon >0 <br />
</math>, n=1<br />
# Value Update: For each <math>s \epsilon S</math>, compute: <math>V^{*}_{n+1}(s) = max_a [R(s, a) + \gamma \sum_{s' \epsilon S}P(s' | s, a)V^*_n(s')] <br />
</math><br />
# If <math>| V_{n+1} - V_n | < \varepsilon <br />
</math>, the algorithm has converged and the optimal value function, <math>V^* <br />
</math>, has been determined, otherwise return to step 2 and increment n by 1.<br />
The value function approximation becomes more accurate at each iteration because more future states are considered. The value iteration algorithm can be slow to converge in certain situations, so an alternative algorithm can be used which converges more quickly.<br />
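As a concrete sketch, the value-update and convergence-check steps above can be implemented in a few lines of Python. The toy MDP below (its states, actions, rewards, and transitions) is a hypothetical example constructed for illustration, not taken from this article:<br />

```python
# Value iteration on a tiny hypothetical MDP.
# mdp maps each state to its actions; each action is (reward, {successor: probability}).
mdp = {
    'A': {'finish': (1.0, {'T': 1.0}), 'defer': (0.0, {'B': 1.0})},
    'B': {'finish': (2.0, {'T': 1.0})},
    'T': {},                          # terminal state, V(T) = 0
}

def value_iteration(mdp, gamma, eps=1e-6):
    V = {s: 0.0 for s in mdp}         # step 1: initialize V*_0(s) = 0
    while True:
        delta = 0.0
        for s, actions in mdp.items():
            if not actions:           # terminal states keep value 0
                continue
            # step 2: V*_{n+1}(s) = max_a [R(s,a) + gamma * sum_s' P(s'|s,a) V*_n(s')]
            v = max(r + gamma * sum(p * V[s2] for s2, p in trans.items())
                    for r, trans in actions.values())
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < eps:               # step 3: convergence check
            return V

V = value_iteration(mdp, gamma=0.9)
# Here V['B'] = 2.0 and V['A'] = max(1.0, 0.0 + 0.9 * 2.0) = 1.8
```

The discount factor makes deferring attractive in state A: the immediate reward 1.0 loses to the discounted reward 0.9 × 2.0 reachable through B.<br />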
<br />
Policy Iteration Algorithm:<br />
<br />
# Initialization: Set an arbitrary policy <math>\Pi(s) <br />
</math> and <math>V(s) <br />
</math> for all <math>s \epsilon S</math>, choose <math>\varepsilon >0 <br />
</math>, n=1<br />
# Policy Evaluation: For each <math>s \epsilon S</math>, compute: <math>V^{\Pi}_{n+1}(s) = R(s,\Pi(s)) + \gamma \sum_{s' \epsilon S}P(s' | s,\Pi(s))V^{\Pi}_n(s') <br />
</math><br />
# If <math>| V_{n+1} - V_n | < \varepsilon <br />
</math>, the optimal value function, <math>V^* <br />
</math> has been determined; continue to the next step, otherwise return to step 2 and increment n by 1.<br />
# Policy Update: For each <math>s \epsilon S</math>, compute: <math>\Pi_{n+1}(s) = argmax_a [R(s,\Pi_n(s)) + \gamma \sum_{s' \epsilon S}P(s' | s,\Pi_n(s))V^{\Pi}_n(s')] <br />
</math><br />
# If <math>\Pi_{n+1} = \Pi_n <br />
</math>, the algorithm has converged and the optimal policy, <math>\Pi^* <br />
</math> has been determined, otherwise return to step 2 and increment n by 1.<br />
<br />
With each iteration the optimal policy is improved using the previous policy and value function until the algorithm converges and the optimal policy is found.<br />
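The evaluation/update loop above can be sketched in Python as well. The deterministic MDP used here (each action leads to exactly one successor) is a hypothetical toy example, not data from the article:<br />

```python
# Policy iteration on a tiny hypothetical deterministic MDP.
# Each action maps to (reward, successor state).
mdp = {
    'A': {'finish': (1.0, 'T'), 'defer': (0.0, 'B')},
    'B': {'finish': (2.0, 'T')},
    'T': {},                                        # terminal state
}

def policy_iteration(mdp, gamma, eps=1e-8):
    pi = {s: next(iter(a)) for s, a in mdp.items() if a}   # step 1: arbitrary policy
    V = {s: 0.0 for s in mdp}
    while True:
        # step 2: policy evaluation -- V(s) = R(s, pi(s)) + gamma * V(s')
        while True:
            delta = 0.0
            for s, a in pi.items():
                r, s2 = mdp[s][a]
                v = r + gamma * V[s2]
                delta = max(delta, abs(v - V[s]))
                V[s] = v
            if delta < eps:
                break
        # step 4: policy update -- act greedily with respect to the current V
        new_pi = {s: max(acts, key=lambda a: acts[a][0] + gamma * V[acts[a][1]])
                  for s, acts in mdp.items() if acts}
        if new_pi == pi:                            # step 5: policy is stable
            return pi, V
        pi = new_pi

pi, V = policy_iteration(mdp, gamma=0.9)
# pi['A'] is 'defer': deferring then finishing earns 0 + 0.9 * 2 = 1.8 > 1.0
```

Note that the policy stabilizes after a single improvement step here; in larger problems the evaluation/update cycle typically repeats several times.<br />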
<br />
= Numerical Example =<br />
[[File:Markov Decision Process Example 2.png|alt=|thumb|499x499px|A Markov Decision Process describing a college student's hypothetical situation.]]<br />
As an example, the MDP can be applied to a college student, depicted to the right. In this case, the agent is the student. The states are the circles and squares in the diagram, and the arrows are the actions. The action between Work and School is to leave work and go to school. In the state in which the student is at school, the allowable actions are to go to the bar, enjoy their hobby, or sleep. The probabilities assigned to each state given the previous state and action in this example are all 1. The rewards associated with each state are written in red.<br />
<br />
Assume <math>P(s'|s) = 1.0<br />
<br />
</math> , <math>\gamma <br />
</math> =1.<br />
<br />
First, the optimal value functions must be calculated for each state.<br />
<br />
<math>V^{*}(s) = max_a [R(s, a) + \gamma \sum_{s' \epsilon S}P(s' | s, a)V^*(s')] <br />
</math><br />
<br />
<math>V^{*}(Hobby) = max_a [3 + (1)(1.0*0)] = 3 <br />
</math><br />
<br />
<math>V^{*}(Bar) = max_a [2 + 1(1.0*0)] = 2 <br />
</math> <br />
<br />
<math>V^*(Sleep) = max_a[0 + 1(1.0*0)] = 0 <br />
</math><br />
<br />
<math>V^*(School) = max_a[ -2 + 1(1.0*2) , -2 + 1(1.0*0) , -2 + 1(1.0*3)] = 1 <br />
</math><br />
<br />
<math>V^*(YouTube) = max_a[-1 + 1(1.0*-1) , -1 +1(1.0*1)]= 0 <br />
</math><br />
<br />
<math>V^*(Work) = max_a[1 + 1(1.0*0) , 1 + 1(1.0*1)] = 2 <br />
</math><br />
<br />
Then, the optimal policy at each state will choose the action that generates the highest value function.<br />
<br />
<math>\Pi^*(s) = argmax_a [R(s,a) + \gamma \sum_{s' \epsilon S}P_{ss'}^aV(s')] <br />
</math><br />
<br />
<math>\Pi^*(YouTube) = argmax_a [0,2] \rightarrow a = <br />
</math> Work<br />
<br />
<math>\Pi^*(Work) = argmax_a [0,1] \rightarrow a = <br />
</math> School<br />
<br />
<math>\Pi^*(School) = argmax_a [0,2,3] \rightarrow a = <br />
</math> Hobby<br />
<br />
Therefore, the optimal policy in each state provides a sequence of decisions that generates the optimal path sequence in this decision process. As a result, if the student starts in the Work state, they should choose to go to school, then to enjoy their hobby, then to go to sleep.<br />
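The hand calculation for the unambiguous part of the diagram (School, Hobby, Bar, and Sleep) can be checked directly in Python. The transition structure below is an assumption reconstructed from the text: Hobby and Bar are taken to lead to Sleep, and School may lead to Bar, Sleep, or Hobby:<br />

```python
# Bellman check for part of the student MDP (deterministic transitions, gamma = 1).
# The successor structure is an assumption reconstructed from the diagram description.
gamma = 1.0
R = {'School': -2.0, 'Hobby': 3.0, 'Bar': 2.0, 'Sleep': 0.0}
succ = {'Hobby': ['Sleep'], 'Bar': ['Sleep'], 'School': ['Bar', 'Sleep', 'Hobby']}

V = {'Sleep': 0.0}                                 # Sleep is terminal
V['Hobby'] = R['Hobby'] + gamma * V['Sleep']       # 3
V['Bar'] = R['Bar'] + gamma * V['Sleep']           # 2
V['School'] = max(R['School'] + gamma * V[s] for s in succ['School'])  # -2 + 3 = 1

# Optimal action at School: the successor maximizing R(School) + gamma * V(s')
best = max(succ['School'], key=lambda s: R['School'] + gamma * V[s])
# best == 'Hobby', matching the optimal policy computed above
```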
<br />
= Applications =<br />
[[File:Pong.jpg|thumb|Computer playing Pong arcade game by Atari using reinforcement learning]]<br />
MDPs have been applied in various fields including operations research, electrical engineering, computer science, manufacturing, economics, finance, and telecommunication.<math>^2</math> For example, the sequential decision making process described by MDP can be used to solve routing problems such as the [[Traveling salesman problem]]. In this case, the agent is the salesman, the actions available are the routes available to take from the current state, the rewards in this case are the costs of taking each route, and the goal is to determine the optimal policy that minimizes the cost function over the duration of the trip. Another application example is maintenance and repair problems, in which a dynamic system such as a vehicle will deteriorate over time due to its actions and the environment, and the available decisions at every time epoch are to do nothing, repair, or replace a certain component of the system.<math>^2</math> This problem can be formulated as an MDP to choose the actions that minimize the cost of maintenance over the life of the vehicle. MDPs have also been applied to optimize telecommunication protocols, stock trading, and queue control in manufacturing environments. <math>^2</math> <br />
<br />
Given the significant advancements in artificial intelligence and machine learning over the past decade, MDPs are being applied in fields such as robotics, automated systems, autonomous vehicles, and other complex autonomous systems. MDPs have been used widely within reinforcement learning to teach robots or other computer-based systems how to do something they were previously unable to do. For example, MDPs have been used to teach a computer how to play games like Pong, Pacman, and Go.<math>^{7,8}</math> DeepMind Technologies, owned by Google, used the MDP framework in conjunction with neural networks to play Atari games better than human experts. <math>^7</math> In this application, only the raw pixel input of the game screen was used as input, and a neural network was used to estimate the value function for each state and choose the next action.<math>^7</math> MDPs have been used in more advanced applications to teach a simulated humanoid robot how to walk and run and a real legged robot how to walk.<math>^9</math> <br />
[[File:Google Deepmind.jpg|thumb|Google's DeepMind uses reinforcement learning to teach AI how to walk]]<br />
<br />
= Conclusion =<br />
<br />
An MDP is a stochastic, sequential decision-making method based on the Markov Property. MDPs can be used to make optimal decisions for a dynamic system given information about its current state and its environment. This process is fundamental in reinforcement learning applications and a core method for developing artificially intelligent systems. MDPs have been applied to a wide variety of industries and fields including robotics, operations research, manufacturing, economics, and finance.<br />
<br />
= References =<br />
<br />
<references /><br />
<br />
# Puterman, M. L. (1990). Chapter 8 Markov decision processes. In ''Handbooks in Operations Research and Management Science'' (Vol. 2, pp. 331–434). Elsevier. <nowiki>https://doi.org/10.1016/S0927-0507(05)80172-0</nowiki><br />
# Feinberg, E. A., & Shwartz, A. (2012). ''Handbook of Markov Decision Processes: Methods and Applications''. Springer Science & Business Media.<br />
# Howard, R. A. (1960). ''Dynamic programming and Markov processes.'' John Wiley.<br />
# Ashraf, M. (2018, April 11). ''Reinforcement Learning Demystified: Markov Decision Processes (Part 1)''. Medium. <nowiki>https://towardsdatascience.com/reinforcement-learning-demystified-markov-decision-processes-part-1-bf00dda41690</nowiki><br />
# Bertsekas, D. P. (2011). Dynamic Programming and Optimal Control 3rd Edition, Volume II. ''Massachusetts Institute of Technology'', 233.<br />
# Littman, M. L. (2001). Markov Decision Processes. In N. J. Smelser & P. B. Baltes (Eds.), ''International Encyclopedia of the Social & Behavioral Sciences'' (pp. 9240–9242). Pergamon. <nowiki>https://doi.org/10.1016/B0-08-043076-7/00614-8</nowiki><br />
# Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. (2013). Playing Atari with Deep Reinforcement Learning. ''ArXiv:1312.5602 [Cs]''. <nowiki>http://arxiv.org/abs/1312.5602</nowiki><br />
# Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., Lillicrap, T., Simonyan, K., & Hassabis, D. (2018). A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. ''Science'', ''362''(6419), 1140–1144. <nowiki>https://doi.org/10.1126/science.aar6404</nowiki><br />
# Ha, S., Xu, P., Tan, Z., Levine, S., & Tan, J. (2020). Learning to Walk in the Real World with Minimal Human Effort. ''ArXiv:2002.08550 [Cs]''. <nowiki>http://arxiv.org/abs/2002.08550</nowiki><br />
# Bellman, R. (1966). Dynamic Programming. ''Science'', ''153''(3731), 34–37. <nowiki>https://doi.org/10.1126/science.153.3731.34</nowiki><br />
# Abbeel, P. (2016). ''Markov Decision Processes and Exact Solution Methods:'' 34.<br />
# Silver, D. (2015). Markov Decision Processes. ''Markov Processes'', 57.<br />
</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Quasi-Newton_methods&diff=2698Quasi-Newton methods2020-12-21T10:21:57Z<p>Wc593: </p>
<hr />
<div>Author: Jianmin Su (ChemE 6800 Fall 2020)<br />
<br />
'''Quasi-Newton Methods''' are a class of methods used to solve nonlinear optimization problems. They are based on Newton's method, yet can serve as an alternative to it when the objective function is not twice-differentiable (so the Hessian matrix is unavailable) or when computing the Hessian matrix and its inverse is too expensive.<br />
<br />
== Introduction ==<br />
The first quasi-Newton algorithm was developed by [[Wikipedia: William_C._Davidon|W.C. Davidon]] in the mid-1950s, and it turned out to be a milestone in nonlinear optimization. Davidon was trying to solve a long optimization calculation, but the low performance of the computers of the time prevented the existing methods from finishing it, so he devised the quasi-Newton method to solve it. Later, Fletcher and Powell proved that the new algorithm was more efficient and more reliable than the other existing methods. <br />
<br />
During the following years, numerous variants were proposed, including '''[https://en.wikipedia.org/wiki/Broyden%27s_method Broyden's method]''' (1965), the '''[https://en.wikipedia.org/wiki/Symmetric_rank-one SR1 formula]''' (Davidon 1959, Broyden 1967), the '''[https://en.wikipedia.org/wiki/Davidon%E2%80%93Fletcher%E2%80%93Powell_formula DFP method]''' (Davidon, 1959; Fletcher and Powell, 1963), and the '''[https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm BFGS method]''' (Broyden, 1969; Fletcher, 1970; Goldfarb, 1970; Shanno, 1970).<ref>Hennig, Philipp, and Martin Kiefel. "Quasi-Newton method: A new direction." Journal of Machine Learning Research 14.Mar (2013): 843-865.</ref><br />
<br />
In optimization problems, Newton's method uses the first and second derivatives (the gradient and the Hessian in multivariate scenarios) to find the optimal point. It is applied to a twice-differentiable function <math>f(x)</math> to find the roots of the first derivative (solutions to <math>f'(x)=0</math>), also known as the stationary points of <math>f(x)</math>.<ref>''Newton’s Method'', 8.Dec (2020). Retrieved from: https://en.wikipedia.org/wiki/Quasi-Newton_method</ref> <br />
<br />
The iteration of Newton's method is usually written as: <math>x_{k+1}=x_k-H^{-1}\cdot\bigtriangledown f(x_k) </math>, where <math>k </math> is the iteration number, <math>H</math> is the Hessian matrix and <math>H=[\bigtriangledown ^2 f(x_k)]</math><br />
<br />
Iteration stops when a convergence criterion is satisfied, such as <math>{df \over dx}=0, ||\bigtriangledown f(x)||<\epsilon \text{ or } |f(x_{k+1})-f(x_k)|<\epsilon </math><br />
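For a quadratic objective the Newton iteration converges in a single step, since the quadratic model is exact. A minimal sketch on the hypothetical function <math>f(x_1,x_2)=x_1^2+\frac{1}{2}x_2^2+3</math> (the same quadratic used in the numerical example later on this page):<br />

```python
# One Newton step on f(x1, x2) = x1^2 + 0.5*x2^2 + 3.
# The Hessian is the constant matrix diag(2, 1), so its inverse is diag(0.5, 1)
# and a single step x - H^{-1} grad f(x) lands exactly on the minimizer.
x = [1.0, 2.0]
g = [2.0 * x[0], x[1]]                      # gradient of f at x
x = [x[0] - 0.5 * g[0], x[1] - 1.0 * g[1]]  # apply the inverse Hessian to g
# x is now [0.0, 0.0], the minimizer; f there equals the minimum value 3
```

With a non-quadratic objective the Hessian changes between iterates, and this is where the cost of recomputing and inverting it motivates the quasi-Newton approximations below.<br />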
<br />
Though we can solve an optimization problem quickly with Newton's method, it has two obvious disadvantages:<br />
<br />
# The objective function must be twice-differentiable, and the Hessian matrix must be positive definite.<br />
# The calculation is costly because it requires computing the Jacobian matrix, the Hessian matrix, and its inverse, which is time-consuming when dealing with large-scale optimization problems. <br />
<br />
However, we can use Quasi-Newton methods to avoid these two disadvantages.<br />
<br />
Quasi-Newton methods are similar to Newton's method, but with one key difference: they do not calculate the Hessian matrix. Instead, they introduce a matrix '''<math>B</math>''' to estimate the Hessian, so that they avoid the time-consuming calculations of the Hessian matrix and its inverse. The many variants of quasi-Newton methods differ simply in how they estimate the Hessian matrix.<br />
<br />
== Theory and Algorithm ==<br />
To illustrate the basic idea behind quasi-Newton methods, we start with building a quadratic model of the objective function at the current iterate <math>x_k</math>:<br />
<br />
<math>m_k(p)=f_k+\bigtriangledown f_k^Tp+\frac{1}{2}p^TB_kp</math> (1.1), <br />
<br />
where <math>B_k </math> is an <math>n\times n </math> symmetric positive definite matrix that will be updated at every iteration.<br />
<br />
The minimizer of this convex quadratic model is:<br />
<br />
<math>p_k=-B_k^{-1}\bigtriangledown f_k </math> (1.2), <br />
<br />
which is also used as the search direction.<br />
<br />
Then the new iterate could be written as: <math> x_{k+1}=x_{k}+\alpha _kp_k</math> (1.3), <br />
<br />
where <math>\alpha _k<br />
</math> is the step length that should satisfy the Wolfe conditions. The iteration is similar to Newton's method, but we use the approximate Hessian <math>B_{k}</math> instead of the true Hessian.<br />
<br />
To maintain the curvature information obtained in the previous iteration in <math>B_{k+1}</math>, we generate a new iterate <math>x_{k+1}</math> and a new quadratic model of the form:<br />
<br />
<math>m_{k+1}(p)=f_{k+1}+\bigtriangledown f_{k+1}^Tp+\frac{1}{2}p^TB_{k+1}p</math> (1.4).<br />
<br />
To connect (1.1) and (1.4), note that <math>m_{k+1}</math> automatically matches the function value <math>f_{k+1}</math> and gradient <math>\bigtriangledown f_{k+1}</math> at <math>p=0</math>. We additionally require that the gradient of <math>m_{k+1}</math> match the gradient of the objective function at the two latest iterates <math>x_k</math> and <math>x_{k+1}</math>; imposing this at <math>x_k</math> gives:<br />
<br />
<math>\bigtriangledown m_{k+1}(-\alpha _kp_k)=\bigtriangledown f_{k+1}-\alpha _kB_{k+1}p_k=\bigtriangledown f_k </math> (1.5) <br />
<br />
and with some arrangements:<br />
<br />
<math>B_{k+1}\alpha _k p_k=\bigtriangledown f_{k+1}-\bigtriangledown f_k </math> (1.6)<br />
<br />
Define: <br />
<br />
<math> s_k=x_{k+1}-x_k</math>, <math> y_k=\bigtriangledown f_{k+1}-\bigtriangledown f_k</math> (1.7)<br />
<br />
So that (1.6) becomes: <math>B_{k+1}s_k=y_k </math> (1.8), which is the '''secant equation.'''<br />
<br />
To make sure <math>B_{k+1}</math> is still a symmetric positive definite matrix, we need the curvature condition <math>s_k^Ty_k>0</math> (1.9).<br />
<br />
To further preserve properties of <math>B_{k+1}</math> and determine <math>B_{k+1}</math> uniquely, we assume that among all symmetric matrices satisfying the secant equation, <math> B_{k+1}</math> is closest to the current matrix <math> B_k</math>, which leads to a minimization problem: <br />
<br />
<math>B_{k+1}=\underset{B}{min}||B-B_k|| </math> (1.10)<br />
s.t. <math> B=B^T</math>, <math> Bs_k=y_k</math>, <br />
<br />
where <math> s_k</math> and <math> y_k</math> satisfy (1.9) and <math> B_k</math> is symmetric and positive definite.<br />
<br />
Different matrix norms applied in (1.10) result in different quasi-Newton methods. The weighted Frobenius norm gives an easy solution to the minimization problem: <math> ||A||_W=||W^\frac{1}{2}AW^\frac{1}{2}|| _F</math> (1.11).<br />
<br />
The weight matrix <math> W</math> can be any matrix that satisfies the relation <math> Wy_k=s_k</math>.<br />
<br />
We skip the procedure of solving the minimization problem (1.10); its unique solution is:<br />
<br />
<math> B_{k+1}=(I-\rho y_ks_k^T)B_k(I-\rho s_ky_k^T)+\rho y_ky_k^T</math> (1.12)<br />
<br />
where <math>\rho=\frac{1}{y_k^Ts_k}</math> (1.13)<br />
<br />
Finally, we get the updated <math>B_{k+1}</math>. However, according to (1.2) and (1.3), we also need the inverse of <math>B_{k+1}</math> in the next iteration.<br />
<br />
To get the inverse of <math>B_{k+1}</math>, we can apply the Sherman-Morrison formula to avoid complicated calculation of the inverse. <br />
<br />
Set <math>M_k=B_k^{-1} </math>, with Sherman-Morrison formula we can get:<br />
<br />
<math>M_{k+1}=M_k+\frac{s_k s_k^T}{s_k^T y_k}-\frac{M_k y_k y_k^T M_k}{y_k^T M_k y_k} </math> (1.14)<br />
<br />
With the derivation<ref>Nocedal, Jorge, and Stephen Wright. Numerical optimization. Springer Science & Business Media, 2006.</ref> above, we can now understand how quasi-Newton methods avoid calculating the Hessian matrix and its inverse. We can directly estimate the inverse of the Hessian and use (1.14) to update that approximation, which leads to the DFP method, or we can directly estimate the Hessian matrix itself, which is the main idea of the BFGS method.<br />
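As a quick numerical sanity check of this equivalence, the sketch below applies the direct update (1.12) and the inverse update (1.14) to <math>B_0=M_0=I</math> with an arbitrary pair <math>s, y</math> satisfying <math>s^Ty>0</math>, and verifies that the two resulting matrices are inverses of each other. The vectors chosen are illustrative, not from the article:<br />

```python
# Verify that update (1.12) for B and update (1.14) for M = B^{-1}
# stay consistent: starting from B0 = M0 = I, the product B1 M1 should be I.
s = [1.0, 2.0]
y = [3.0, 1.0]                       # chosen so that y^T s = 5 > 0
rho = 1.0 / (y[0] * s[0] + y[1] * s[1])

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B, sign=1.0):
    return [[A[i][j] + sign * B[i][j] for j in range(2)] for i in range(2)]

def outer(a, b, scale=1.0):
    return [[scale * a[i] * b[j] for j in range(2)] for i in range(2)]

I2 = [[1.0, 0.0], [0.0, 1.0]]

# (1.12): B1 = (I - rho y s^T) B0 (I - rho s y^T) + rho y y^T, with B0 = I
left = mat_add(I2, outer(y, s, rho), -1.0)
right = mat_add(I2, outer(s, y, rho), -1.0)
B1 = mat_add(mat_mul(left, right), outer(y, y, rho))

# (1.14): M1 = M0 + s s^T / (s^T y) - M0 y y^T M0 / (y^T M0 y), with M0 = I
sy = s[0] * y[0] + s[1] * y[1]
yy = y[0] * y[0] + y[1] * y[1]       # y^T I y
M1 = mat_add(mat_add(I2, outer(s, s, 1.0 / sy)), outer(y, y, 1.0 / yy), -1.0)

P = mat_mul(B1, M1)                  # should be (numerically) the identity
```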
<br />
<br />
=== DFP method ===<br />
<br />
The DFP method, also known as the Davidon–Fletcher–Powell formula, is named after W.C. Davidon, Roger Fletcher, and Michael J.D. Powell. It was first proposed by Davidon in 1959 and later improved by Fletcher and Powell. The DFP method uses an <math>n\times n </math> symmetric positive definite matrix to estimate the inverse of the Hessian matrix, and its algorithm is shown below<ref>''Davidon–Fletcher–Powell formula'', 7.June (2020). Retrieved from: https://en.wikipedia.org/wiki/Davidon%E2%80%93Fletcher%E2%80%93Powell_formula</ref>.<br />
<br />
==== DFP Algorithm ====<br />
<br />
To avoid confusion, we use <math>D</math> to represent the approximation of the inverse of the Hessian matrix.<br />
<br />
# Given the starting point <math>x_0</math>; convergence tolerance <math>\epsilon, \epsilon>0</math>; the initial estimation of inverse Hessian matrix <math>D_0=I</math>; <math>k=0</math>.<br />
# Compute the search direction <math>d_k=-D_k\cdot \bigtriangledown f_k</math>.<br />
# Compute the step length <math>\lambda_k</math> with a line search procedure that satisfies Wolfe conditions. And then set <br /> <math>s_k={\lambda}_k d_k</math>, <br /> <math>x_{k+1}=x_k+s_k</math><br />
# If <math>||\bigtriangledown f_{k+1}||<\epsilon</math>, then end the iteration; otherwise continue to step 5.<br />
# Compute <math>y_k=\bigtriangledown f_{k+1}-\bigtriangledown f_k</math>.<br />
# Update the <math>D_{k+1}</math> with <br /> <math>D_{k+1}=D_k+\frac{s_k s_k^T}{s_k^T y_k}-\frac{D_k y_k y_k^T D_k}{y_k^T D_k y_k} </math><br />
# Update <math>k</math> with <math>k=k+1</math> and go back to step 2.<br />
<br />
<br />
=== BFGS method ===<br />
<br />
The BFGS method is named after its four discoverers Broyden, Fletcher, Goldfarb, and Shanno. It is considered the most effective quasi-Newton algorithm. Unlike the DFP method, the BFGS method uses an <math>n\times n </math> symmetric positive definite matrix <math>B_k </math> to estimate the Hessian matrix itself<ref>''Broyden–Fletcher–Goldfarb–Shanno algorithm'', 12.Dec (2020). Retrieved from: https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm</ref>.<br />
<br />
==== BFGS Algorithm ====<br />
<br />
# Given the starting point <math>x_0</math>; convergence tolerance <math>\epsilon, \epsilon>0</math>; the initial estimation of Hessian matrix <math>B_0=I</math>; <math>k=0</math>.<br />
# Compute the search direction <math>d_k=-B_k^{-1}\cdot \bigtriangledown f_k</math>.<br />
# Compute the step length <math>\lambda_k</math> with a line search procedure that satisfies Wolfe conditions. And then set <br /> <math>s_k={\lambda}_k d_k</math>, <br /> <math>x_{k+1}=x_k+s_k</math><br />
# If <math>||\bigtriangledown f_{k+1}||<\epsilon</math>, then end the iteration; otherwise continue to step 5.<br />
# Computing <math>y_k=\bigtriangledown f_{k+1}-\bigtriangledown f_k</math>.<br />
# Update <math>B_{k+1}</math> with <math>B_{k+1}=B_k+\frac{y_k y_k^T}{y_k^T s_k}-\frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} </math> <br /> Since we need <math>B_{k+1}^{-1}</math> in the next iteration, we can apply the Sherman-Morrison formula to avoid complicated calculation of the inverse. <br /> With the Sherman-Morrison formula, we can update <math>B_{k+1}^{-1}</math> with <br /> <math> B_{k+1}^{-1}=(I-\rho s_ky_k^T)B_k^{-1}(I-\rho y_ks_k^T)+\rho s_ks_k^T</math> , <math>\rho=\frac{1}{y_k^Ts_k}</math><br />
# Update <math>k</math> with <math>k=k+1</math> and go back to step 2.<br /><br />
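A compact Python sketch of these steps is given below, applied to the hypothetical quadratic <math>f(x_1,x_2)=x_1^2+\frac{1}{2}x_2^2+3</math> from the numerical example. For simplicity, a backtracking (Armijo) line search stands in for a full Wolfe-condition search, and the inverse approximation <math>B_k^{-1}</math> is updated directly via the Sherman-Morrison form:<br />

```python
# BFGS with the Sherman-Morrison inverse update, on
# f(x1, x2) = x1^2 + 0.5*x2^2 + 3 (minimum value 3 at the origin).

def f(x):
    return x[0] ** 2 + 0.5 * x[1] ** 2 + 3.0

def grad(x):
    return [2.0 * x[0], x[1]]

def bfgs(x, tol=1e-8, max_iter=100):
    H = [[1.0, 0.0], [0.0, 1.0]]              # approximation of the inverse Hessian
    for _ in range(max_iter):
        g = grad(x)
        if (g[0] ** 2 + g[1] ** 2) ** 0.5 < tol:
            break
        d = [-(H[i][0] * g[0] + H[i][1] * g[1]) for i in range(2)]
        lam = 1.0                              # backtracking (Armijo) line search
        gd = g[0] * d[0] + g[1] * d[1]
        while f([x[0] + lam * d[0], x[1] + lam * d[1]]) > f(x) + 1e-4 * lam * gd:
            lam *= 0.5
        x_new = [x[0] + lam * d[0], x[1] + lam * d[1]]
        s = [x_new[0] - x[0], x_new[1] - x[1]]
        g_new = grad(x_new)
        y = [g_new[0] - g[0], g_new[1] - g[1]]
        rho = 1.0 / (y[0] * s[0] + y[1] * s[1])
        # H <- (I - rho s y^T) H (I - rho y s^T) + rho s s^T
        A = [[(1.0 if i == j else 0.0) - rho * s[i] * y[j] for j in range(2)]
             for i in range(2)]
        HA = [[sum(H[i][k] * A[j][k] for k in range(2)) for j in range(2)]
              for i in range(2)]               # H (I - rho y s^T) = H A^T
        H = [[sum(A[i][k] * HA[k][j] for k in range(2)) + rho * s[i] * s[j]
              for j in range(2)] for i in range(2)]
        x = x_new
    return x

x_star = bfgs([1.0, 2.0])
# x_star is close to [0, 0] and f(x_star) is close to the minimum value 3
```

Because the objective is strictly convex, the curvature condition <math>s_k^Ty_k>0</math> holds automatically here, so <math>H</math> stays positive definite throughout.<br />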
<br />
== Numerical Example ==<br />
The following is an example to show how to solve an unconstrained nonlinear optimization problem with the DFP method.<br />
<br />
<math>\text{min }\begin{align} f(x_1, x_2) & = x_1^2 +\frac{1}{2}x_2^2+3\end{align}</math><br />
<br />
<math>x_0=(1,2)^T </math><br />
<br />
'''Step 1:''' <br />
<br />
Usually, we set the approximation of the inverse of the Hessian matrix as an identity matrix with the same dimension as the Hessian matrix. In this case, <math>B_0</math> is a <math>2\times2</math> identity matrix.<br />
<br />
<math>B_0</math>: <br />
<math>\begin{pmatrix}<br />
1 & 0 \\<br />
0 & 1<br />
\end{pmatrix}</math><br />
<br />
<math>\bigtriangledown f_x</math>: <br />
<math>\begin{pmatrix}<br />
2x_1 \\<br />
x_2<br />
\end{pmatrix}</math><br />
<br />
<math>\epsilon=10^{-5}</math><br />
<br />
<math>k=0</math><br />
<br />
For convenience, we can set <math>\lambda=1</math>.<br />
<br />
'''Step 2:'''<br />
<br />
<math>d_0=-B_0^{-1}\bigtriangledown f_0</math><math>=-\begin{pmatrix}<br />
1 & 0 \\<br />
0 & 1<br />
\end{pmatrix}</math><math>\begin{pmatrix}<br />
2 \\<br />
2<br />
\end{pmatrix}</math><br />
<math>=\begin{pmatrix}<br />
-2 \\<br />
-2<br />
\end{pmatrix}</math><br />
<br />
'''Step 3:'''<br />
<br />
<math>s_0=d_0</math><br />
<br />
<math>x_1=x_0+s_0</math><math>=\begin{pmatrix}<br />
1 \\<br />
2<br />
\end{pmatrix}</math><math>+\begin{pmatrix}<br />
-2 \\<br />
-2<br />
\end{pmatrix}</math><math>=\begin{pmatrix}<br />
-1 \\<br />
0<br />
\end{pmatrix}</math><br />
<br />
'''Step 4:'''<br />
<br />
<math>\bigtriangledown f_1=\begin{pmatrix} -2 \\ 0 \end{pmatrix}</math><br />
<br />
Since <math>||\bigtriangledown f_1||</math> is not less than <math>\epsilon</math>, we need to continue.<br />
<br />
'''Step 5:'''<br />
<br />
<math>y_0=\bigtriangledown f_1-\bigtriangledown f_0</math><math>=\begin{pmatrix}<br />
-4 \\<br />
-2<br />
\end{pmatrix}</math><br />
<br />
'''Step 6:'''<br />
<math>B_1=B_0+\frac{s_0 s_0^T}{s_0^T y_0}-\frac{B_0 y_0 y_0^T B_0}{y_0^T B_0 y_0} </math><math>=\begin{pmatrix}<br />
1 & 0 \\<br />
0 & 1<br />
\end{pmatrix}</math><math>+\frac{1}{12}\begin{pmatrix}<br />
4 & 4 \\<br />
4 & 4<br />
\end{pmatrix}</math><math>-\frac{1}{20}\begin{pmatrix}<br />
16 & 8 \\<br />
8 & 4<br />
\end{pmatrix}</math> <math>=\begin{pmatrix}<br />
0.53333 & -0.0667 \\<br />
-0.0667 & 1.1333<br />
\end{pmatrix}</math><br />
<br />
And then go back to Step 2 with the update <math>B_1</math> to start a new iterate until <math>|\bigtriangledown f_k|<\epsilon</math>.<br />
<br />
We continue the rest of the steps in python and the results are listed below:<br />
<br />
Iteration times: 0 Result: [-1. 0.]<br />
<br />
Iteration times: 1 Result: [ 0.06666667 -0.13333333]<br />
<br />
Iteration times: 2 Result: [0.00083175 0.01330805]<br />
<br />
Iteration times: 3 Result: [-0.00018037 -0.00016196]<br />
<br />
Iteration times: 4 Result: [ 3.74e-06 -5.60e-07]<br />
<br />
After five iterations (indexed 0 through 4), we reach the optimal solution, which can be taken as <math>x_1=0, x_2=0</math>, and the minimum of the objective function is 3.<br />
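The Python iteration sketched below reproduces the results listed above. It follows the DFP algorithm steps with <math>\lambda_k=1</math>, as in the hand calculation (a fixed unit step is a simplification of the Wolfe line search, but it suffices for this quadratic):<br />

```python
# DFP iteration on f(x1, x2) = x1^2 + 0.5*x2^2 + 3 with unit step length.

def grad(x):
    return [2.0 * x[0], x[1]]

def dfp(x, tol=1e-5, max_iter=50):
    D = [[1.0, 0.0], [0.0, 1.0]]              # approximation of the inverse Hessian
    history = []
    for _ in range(max_iter):
        g = grad(x)
        d = [-(D[i][0] * g[0] + D[i][1] * g[1]) for i in range(2)]
        x = [x[0] + d[0], x[1] + d[1]]        # lambda_k = 1, so s_k = d_k
        history.append(x)
        g_new = grad(x)
        if (g_new[0] ** 2 + g_new[1] ** 2) ** 0.5 < tol:
            break
        s, y = d, [g_new[0] - g[0], g_new[1] - g[1]]
        sy = s[0] * y[0] + s[1] * y[1]
        Dy = [D[i][0] * y[0] + D[i][1] * y[1] for i in range(2)]
        yDy = y[0] * Dy[0] + y[1] * Dy[1]
        # D <- D + s s^T / (s^T y) - (D y)(D y)^T / (y^T D y)
        D = [[D[i][j] + s[i] * s[j] / sy - Dy[i] * Dy[j] / yDy
              for j in range(2)] for i in range(2)]
    return history

history = dfp([1.0, 2.0])
# history[0] is [-1.0, 0.0] and history[1] is roughly [0.0667, -0.1333],
# matching the iterates listed above; the final entry is close to the origin.
```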
<br />
As we can see from the calculation in step 6, though the update formula for <math>B_1</math> looks complicated, it is actually not. The quantities <math>s_0^T y_0</math> and <math>y_0^T B_0 y_0</math> are scalars, while <math>s_0 s_0^T</math> and <math>B_0 y_0 y_0^T B_0</math> are matrices of the same dimension as <math>B_1</math>. Therefore, each quasi-Newton update is fast and simple, since it involves only basic matrix operations such as inner and outer products.<br />
<br />
== Application ==<br />
<br />
Quasi-Newton methods are applied in areas such as physics, biology, engineering, geophysics, chemistry, and industry to solve nonlinear systems of equations, because of their fast calculation. '''''The ICUM (Inverse Column-Updating Method)''''', one type of quasi-Newton method, is not only efficient in solving large-scale sparse nonlinear systems but also performs well in not-necessarily-large-scale systems in real applications. It is used to solve '''''the two-point ray tracing problem''''' in geophysics. A two-point ray tracing problem consists of constructing a ray that joins two given points in the domain, and it can be formulated as a nonlinear system. ICUM can also be applied to '''''estimate the transmission coefficients for AIDS and for tuberculosis''''' in biology, and in '''''multiple-target 3D location airborne ultrasonic systems'''''. <ref>Pérez, Rosana, and Véra Lucia Rocha Lopes. "Recent applications and numerical implementation of quasi-Newton methods for solving nonlinear systems of equations." Numerical Algorithms 35.2-4 (2004): 261-285.</ref> <br />
<br />
Moreover, quasi-Newton methods have been extended to the deep learning area as sampled quasi-Newton methods, which make use of more reliable information.<ref> Berahas, Albert S., Majid Jahani, and Martin Takáč. "Quasi-newton methods for deep learning: Forget the past, just sample." arXiv preprint arXiv:1901.09997 (2019). </ref> The methods proposed there sample points randomly around the current iterate at each iteration to create Hessian or inverse-Hessian approximations, which differs from the classical variants of quasi-Newton methods. As a result, the approximations constructed make use of more reliable (recent and local) information and do not depend on past iterate information that could be significantly stale. In that work, numerical tests on a toy classification problem and on popular benchmark neural-network training tasks show that the methods outperform their classical variants.<br />
<br />
Besides, to make quasi-Newton methods more accessible, they are integrated into programming environments so that nonlinear optimization problems can be solved conveniently, for example, [http://reference.wolfram.com/mathematica/tutorial/UnconstrainedOptimizationQuasiNewtonMethods.html Mathematica (quasi-Newton solvers)], [http://www.mathworks.com/help/toolbox/optim/ug/fminunc.html MATLAB (Optimization Toolbox)], [http://finzi.psych.upenn.edu/R/library/stats/html/optim.html R], and the [http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html SciPy] extension to Python.<br />
<br />
== Conclusion ==<br />
<br />
Quasi-Newton methods are a milestone in solving nonlinear optimization problems. They are more efficient than Newton's method in large-scale optimization problems because they do not need to compute second derivatives, which makes each iteration less costly. Because of their efficiency, they can be applied to many different areas and remain appealing.<br />
<br />
== References ==</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=2020_Cornell_Optimization_Open_Textbook_Feedback&diff=26972020 Cornell Optimization Open Textbook Feedback2020-12-21T10:19:12Z<p>Wc593: /* Network flow problem */</p>
<hr />
<div>==[[Computational complexity]]==<br />
<br />
* Numerical Example<br />
*# Finding subsets of a set is NOT O(2<sup>n</sup>).<br />
* Application<br />
*# The applications mentioned need to be discussed further.<br />
<br />
==[[Network flow problem]]==<br />
<br />
* Real Life Applications<br />
*# There is NO need to include code. Simply mention how the problem was coded along with details on the LP solver used.<br />
<br />
==[[Interior-point method for LP]]==<br />
<br />
* Introduction<br />
*# Please type “minimize” and “subject to” in formal optimization problem form throughout the whole page.<br />
* A section to discuss and/or illustrate the applications<br />
*# Please type optimization problem in the formal form.<br />
<br />
==[[Optimization with absolute values]]==<br />
<br />
* An introduction of the topic<br />
*# Add few sentences on how absolute values convert optimization problem into a nonlinear optimization problem<br />
* Applications<br />
*# Inline equations at the beginning of this section are not formatted properly. Please fix the notation for expected return throughout the section.<br />
<br />
==[[Matrix game (LP for game theory)]]==<br />
<br />
* Theory and Algorithmic Discussion<br />
*# aij are not defined in this section.<br />
<br />
==[[Quasi-Newton methods]]==<br />
<br />
* Theory and Algorithm<br />
*# Please ensure that few spaces are kept between the equations and equation numbers.<br />
<br />
== [[Markov decision process]] ==<br />
<br />
* Introduction<br />
*# Please fix typos such as “discreet”.<br />
* Theory and Methodology<br />
*# If abbreviations are defined like MDP, use the abbreviations throughout the Wiki<br />
<br />
==[[Eight step procedures]]==<br />
<br />
* Numerical Example<br />
*# Data for the example Knapsack problem (b,w) are missing.<br />
*# How to arrive at optimal solutions is missing.<br />
<br />
==[[Facility location problem]]==<br />
<br />
* Numerical Example<br />
*# Mention how the formulated problem is coded and solved. No need to provide GAMS code.<br />
<br />
==[[Set covering problem]]==<br />
<br />
* Integer linear program formulation & Approximation via LP relaxation and rounding<br />
*# Use proper math notations for “greater than equal to”.<br />
* Numerical Example<br />
*# Please leave some space between equation and equation number.<br />
<br />
==[[Quadratic assignment problem]]==<br />
<br />
* Theory, methodology, and/or algorithmic discussions<br />
*# Discuss dynamic programming and cutting plane solution techniques briefly.<br />
<br />
==[[Newsvendor problem]]==<br />
<br />
* Formulation<br />
*# A math programming formulation of the optimization problem with objective function and constraints is expected for the formulation. Please add any variant of the newsvendor problem along with some operational constraints.<br />
*# A mathematical presentation of the solution technique is expected. Please consider any distribution for R and present a solution technique for that specific problem. <br />
<br />
==[[Mixed-integer cuts]]==<br />
<br />
* Applications<br />
*# MILP and their solution techniques involving cuts are extremely versatile. Yet, only two sentences are added to describe their applications. Please discuss their applications, preferably real-world applications, in brief. Example Wikis provided on the website could be used as a reference to do so.<br />
<br />
==[[Column generation algorithms]]==<br />
<br />
* Introduction<br />
*# References at the end of the sentence should be placed after the period.<br />
* Theory, methodology and algorithmic discussions<br />
*# Some minor typos/article agreement issues exist “is not partical in real-world”.<br />
<br />
==[[Heuristic algorithms]]==<br />
<br />
* Methodology<br />
*# Please use proper symbol for "greater than or equal to".<br />
*# Greedy method to solve minimum spanning tree seems to be missing.<br />
<br />
==[[Branch and cut]]==<br />
<br />
* Methodology & Algorithm<br />
*# Equation in most infeasible branching section is not properly formatted.<br />
*# Step 2 appears abruptly in the algorithm and does not explain much. Please add more information regarding the same.<br />
*# Step 5 contains latex code terms that are not properly formatted. Please fix the same.<br />
*# Fix typos: e.g., repeated “for the current”.<br />
<br />
== [[Mixed-integer linear fractional programming (MILFP)]] ==<br />
<br />
* Application and Modeling for Numerical Examples<br />
*# Please check the index notation in Mass Balance Constraint<br />
<br />
==[[Fuzzy programming]]==<br />
<br />
* Applications<br />
*# Applications of fuzzy programming are quite versatile. Please discuss a few of the mentioned applications briefly. The provided example Wikis can be used as a reference to write this section.<br />
<br />
==[[Adaptive robust optimization]]==<br />
<br />
* Problem Formulation<br />
*# Please check typos such as "Let ''u'' bee a vector".<br />
*# The abbreviation KKT is not previously defined.<br />
<br />
== [[Stochastic gradient descent]] ==<br />
* Numerical Example<br />
*# Amount of whitespace can be reduced by changing orientation of example dataset by converting it into a table containing 3 rows and 6 columns.<br />
<br />
==[[RMSProp]]==<br />
<br />
* Introduction<br />
*# References at the end of the sentence should be placed after the period.<br />
* Theory and Methodology<br />
*# Please check grammar in this section.<br />
* Applications and Discussion<br />
*# The applications section does not contain any discussion on applications. Please mention a few applications of the widely used RMSprop and discuss them briefly.<br />
<br />
==[[Adam]]==<br />
<br />
* Background<br />
*# References at the end of the sentence should be placed after the period.</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Network_flow_problem&diff=2696Network flow problem2020-12-21T10:18:41Z<p>Wc593: </p>
<hr />
<div>Author: Aaron Wheeler, Chang Wei, Cagla Deniz Bahadir, Ruobing Shui, Ziqiu Zhang (CHEME 6800 Fall 2020)<br />
<br />
== Introduction ==<br />
Network flow problems arise in several key instances and applications within society and have become fundamental problems within computer science, operations research, applied mathematics, and engineering. Developments in the approach to tackle these problems resulted in algorithms that became the chief instruments for solving problems related to large-scale systems and industrial logistics. Spurred by early developments in linear programming, the methods for addressing these extensive problems date back several decades and they evolved over time as the use of digital computing became increasingly prevalent in industrial processes. Historically, the first instance of an algorithmic development for the network flow problem came in 1956, with the network simplex method formulated by George Dantzig.<sup>[1]</sup> A variation of the simplex algorithm that revolutionized linear programming, this method leveraged the combinatorial structure inherent to these types of problems and demonstrated incredibly high accuracy.<sup>[2]</sup> This method and its variations would go on to define the embodiment of the algorithms and models for the various and distinct network flow problems discussed here.<br />
<br />
== Theory, Methodology, and Algorithms ==<br />
The network flow problem can be conceptualized as a directed graph which abides by flow capacity and conservation constraints. The vertices in the graph are classified into origins (source <math>X</math>), destinations (sink <math>O</math>), and intermediate points, and are collectively referred to as nodes (<math>N</math>), each of which is distinct (<math>N_i \neq N_j</math> for <math>i \neq j</math>).<sup>[3]</sup> The edges in the directed graph are the directional links between nodes and are referred to as arcs (<math>A</math>). Each arc is defined with a specific direction <math>(i, j)</math> that corresponds to the nodes it connects, and with a specific flow capacity <math>c(A)>0</math> that cannot be exceeded. The supply and demand of units are formulated with signed flow notation, such that <math>\textstyle\sum_{i \in N} u_i=0</math>, where sources take positive values (supply) and sinks take negative values (demand). Intermediate nodes have no net supply or demand. Figure 1 illustrates this general definition of the network.<br />
[[File:Picture1.png|thumb|Figure 1. General Network Flow Problem]]<br />
<br />
Additional constraints of the network flow optimization model place limits on the solution and vary significantly based on the specific type of problem being solved. Historically, the classic network flow problems are considered to be the maximum flow problem, the minimum-cost circulation problem, the assignment problem, the bipartite matching problem, the transportation problem, and the transshipment problem.<sup>[2]</sup> The approach to these problems becomes quite specific based upon the problem's objective function, but can be generalized by the following iterative approach: 1. determine an initial basic feasible solution; 2. check the optimality conditions (i.e., whether the problem is infeasible, unbounded over the feasible region, the optimal solution has been found, etc.); and 3. construct an improved basic feasible solution if the optimal solution has not been determined.<sup>[3]</sup><br />
=== General Applications ===<br />
<br />
==== The Assignment Problem ====<br />
Various real-life instances of assignment problems exist for optimization, such as assigning a group of people to different tasks, events to halls with different capacities, rewards to a team of contributors, and vacation days to workers. At its core, the assignment problem is a bipartite matching problem. <sup>[3]</sup> In a classical setting, two sets of objects of equal size are matched bijectively (i.e. they have one-to-one matching), and this tight constraint ensures a perfect matching. The objective is to minimize the cost or maximize the profit of the matching, since different pairings have distinct affinities. [[File:Assignment.png|thumb|Figure 2. Classic model of assignment problem|alt=|267x267px]]A classic example is as follows: suppose there are <math> n </math> people (set <math> P </math>) to be assigned to <math> n </math> tasks (set <math> T </math>). Every task has to be completed, each task has to be handled by exactly one person, and <math> c_{ij} </math>, usually given by a table, measures the benefit gained by assigning person <math> i </math> (in <math> P </math>) to task <math> j </math> (in <math> T </math>). <sup>[4]</sup> The natural objective here is to maximize the overall benefit by devising the optimal assignment pattern. A graph of the general assignment problem and a table of preferences are depicted as Figure 2 and Table 1.<br />
{| class="wikitable sortable"<br />
|+Table 1. Table of preference<br />
!Benefits<br />
!Task 1<br />
! Task 2<br />
!Task 3<br />
!...<br />
!Task n<br />
|-<br />
!Person 1<br />
|0<br />
|3<br />
|5<br />
|...<br />
|2<br />
|-<br />
!Person 2<br />
|2<br />
|1<br />
|3<br />
|...<br />
|6<br />
|-<br />
!Person 3<br />
|1<br />
|4<br />
|0<br />
|...<br />
|3<br />
|-<br />
!...<br />
|...<br />
|...<br />
|...<br />
|...<br />
|...<br />
|-<br />
!Person n<br />
|0<br />
|2<br />
|3<br />
|...<br />
|3<br />
|}<br />
Figure 2 can be viewed as a network. The nodes represent people and tasks, and the edges represent potential assignments between a person and a task. Each task can be completed by any person. However, the person that actually ends up being assigned to the task will be the lone individual who is best suited to complete it. In the end, the edges with positive flow values will be the only ones represented in the finalized assignment. <sup>[5]</sup><br />
<br />
To approach this problem, the binary variable <math> x_{ij} </math> is defined as whether the person <math> i </math> is assigned to the task <math> j </math>. If so, <math> x_{ij} </math> = 1, and <math> x_{ij} </math> = 0 otherwise.<br />
<br />
The concise-form formulation of the problem is as follows <sup>[3]</sup>:<br />
<br />
max <math>z=\sum_{i=1}^n\sum_{j=1}^n c_{ij}x_{ij}</math><br />
<br />
Subject to:<br />
<br />
<math>\sum_{j=1}^n x_{ij}=1~~\forall i\in [1,n]<br />
</math><br />
<br />
<math>\sum_{i=1}^n x_{ij}=1~~\forall j\in [1,n]<br />
</math><br />
<br />
<math>x_{ij}=0~or~1~~\forall i,j\in [1,n] </math><br />
<br />
<br />
<br />
The first constraint captures the requirement of assigning each person to a single task. The second constraint indicates that each task must be done by exactly one person. The objective function sums up the overall benefits of all assignments.<br />
<br />
To see the analogy between the assignment problem and the network flow, we can describe each person supplying a flow of 1 unit and each task demanding a flow of 1 unit, with the benefits over all “channels” being maximized. <sup>[3]</sup><br />
<br />
A potential issue lies in the branching of the network, specifically an instance where a person splits their one unit of flow into multiple tasks while the objective remains maximized. Such a split is allowed by the laws that govern the network flow model but is infeasible in real-life instances. Fortunately, since the network simplex method only adds or subtracts a single edge when transferring the basis, which is represented by a spanning tree of the flow graph, if the supply (the number of people here) and the demand (the number of tasks here) in the constraints are integers, the solved variables will automatically be integers even if this is not explicitly stated in the problem. This is called the integrality of the network problem, and it certainly applies to the assignment problem. <sup>[6]</sup><br />
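<br />
The integrality argument above can be checked numerically. The following sketch (not part of the original article) solves the fully visible top-left 3x3 block of Table 1 with SciPy's linear_sum_assignment routine, a solver for exactly this kind of bipartite matching; the use of numpy and scipy here is an assumption, not something the article prescribes.<br />

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Benefit table c_ij: the fully visible top-left 3x3 block of Table 1.
benefit = np.array([
    [0, 3, 5],   # Person 1
    [2, 1, 3],   # Person 2
    [1, 4, 0],   # Person 3
])

# maximize=True searches for the assignment with the largest total benefit;
# the integrality of the network problem guarantees a one-to-one matching.
rows, cols = linear_sum_assignment(benefit, maximize=True)
total = int(benefit[rows, cols].sum())

for i, j in zip(rows, cols):
    print(f"Person {i + 1} -> Task {j + 1}")
print("Total benefit:", total)  # 11
```

The returned matching assigns each person to exactly one task, as the integrality property predicts; no fractional assignments appear.<br />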
<br />
==== The Transportation Problem ====<br />
The transportation problem was first formulated for distributing troops during World War II. <sup>[7]</sup> It has since become a useful model for solving logistics problems, and the objective is usually to minimize the cost of transportation. <br />
<br />
Consider the following scenario:<br />
<br />
There are 2 chemical plants located in 2 different places: <math> M </math> and <math> N </math>. There are 3 raw material suppliers in 3 other locations: <math> F </math>, <math> G </math>, and <math> H </math>. The amount of materials from a supplier can be arbitrarily divided into two parts and shipped to the two plants. Suppliers <math> F </math>, <math> G </math>, and <math> H </math> can provide <math> S_1 </math>, <math> S_2 </math>, and <math> S_3 </math> amounts of materials respectively. The chemical plants located at <math> M </math> and <math> N </math> have the material demands <math> D_1 </math> and <math> D_2 </math> respectively. Each transportation route, from suppliers to chemical plants, is attributed with a specific cost. This model raises the question: to keep the chemical plants running, what is the best way to arrange the material from the suppliers so that the transportation cost is minimized? <br />
[[File:Transportation problem example.png|thumb|Figure 3. Transportation problem example]]<br />
Several quantities should be defined to help formulate the frame of the solution:<br />
<br />
<math>S_{i} <br />
</math> = the amount of material provided at the supplier <math>i <br />
</math><br />
<br />
<math>D_{j} <br />
</math> = the amount of material being consumed at the chemical plant <math>j <br />
</math><br />
<br />
<math>x_{ij} <br />
</math> = the amount of material being transferred from supplier <math>i <br />
</math> to chemical plant <math display="inline">j <br />
</math><br />
<br />
<math>C_{ij} <br />
</math> = the cost of transferring 1 unit of material from supplier <math>i <br />
</math> to chemical plant <math>j <br />
</math> <br />
<br />
<math>x_{ij} <br />
</math><math>C_{ij} <br />
</math> = the cost of the material transportation from <math>i <br />
</math> to <math>j <br />
</math><br />
<br />
Here, the amount of material being delivered and being consumed is bound to the supply and demand constraints:<br />
<br />
(1): The amount of material shipping from supplier <math>i <br />
</math> cannot exceed the amount of material available at supplier <math>i <br />
</math>. <br />
<br />
<math>\sum_j^n x_{ij}\ \leq S_{i} \qquad \forall i\in I=[1,m] <br />
</math><br />
<br />
(2): The amount of material arrived at chemical plant <math>j <br />
</math> should at least fulfill the demand at chemical plant <math>j <br />
</math>. <br />
<br />
<math>\sum_i^m x_{ij}\ \geq D_{j} \qquad \forall j\in J=[1,n] <br />
</math><br />
<br />
The objective is to find the minimum cost of transportation, so the cost of each transportation line should be added up, and the total cost should be minimized. <br />
<br />
<math>\sum_i^m \sum_j^n x_{ij}\ C_{ij} <br />
</math><br />
<br />
Using the definitions above, the problem can be formulated as such:<br />
<br />
min<math> \quad z = \sum_i^m \sum_j^n x_{ij}\ C_{ij}<br />
<br />
</math><br />
<br />
<math>s.t. \quad\ \sum_j^n x_{ij}\ \leq S_{i} \qquad \forall i\in I=[1,m] <br />
</math><br />
<br />
<math>\sum_i^m x_{ij}\ \geq D_{j} \qquad \forall j\in J=[1,n] <br />
</math><br />
<br />
However, the problem is not complete at this point because there is no constraint for <math>x_{ij} <br />
</math>, and that means <math>x_{ij} <br />
</math> can be any number, even negative. In order for <math>x_{ij} <br />
</math> to make sense physically, a lower bound of zero is mandatory, which corresponds to the situation where no material was transported from <math>i <br />
</math> to <math>j <br />
</math>. Adding the last constraint will complete this formulation as such:<br />
<br />
min<math> \quad z = \sum_i^m \sum_j^n x_{ij}\ C_{ij}<br />
<br />
</math><br />
<br />
<math>s.t. \quad\ \sum_j^n x_{ij}\ \leq S_{i} \qquad \forall i\in I=[1,m] <br />
</math><br />
<br />
<math>\sum_i^m x_{ij}\ \geq D_{j} \qquad \forall j\in J=[1,n] <br />
</math><br />
<br />
<math>x_{ij}\ \geq 0 <br />
</math><br />
<br />
The problem and the formulation are adapted from Chapter 8 of the book: Applied Mathematical Programming by Bradley, Hax and Magnanti. <sup>[3]</sup><br />
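<br />
As an illustration of the formulation above, the sketch below plugs in small made-up supplies, demands, and tariffs (assumed values, since the article leaves <math>S_i</math>, <math>D_j</math>, and <math>C_{ij}</math> symbolic) and solves the resulting linear program with SciPy's linprog; scipy and numpy are assumed available.<br />

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: suppliers F, G, H with supplies S_i,
# plants M, N with demands D_j, and unit costs C_ij.
supply = np.array([40, 30, 30])            # S_1, S_2, S_3
demand = np.array([50, 40])                # D_1, D_2
cost = np.array([[4, 6], [5, 3], [7, 5]])  # C_ij from supplier i to plant j

m, n = cost.shape
c = cost.flatten()  # decision variables x_ij, flattened row-major

# Supply rows: sum_j x_ij <= S_i
A_supply = np.kron(np.eye(m), np.ones(n))
# Demand columns: sum_i x_ij >= D_j, rewritten as -sum_i x_ij <= -D_j
A_demand = -np.kron(np.ones(m), np.eye(n))

A_ub = np.vstack([A_supply, A_demand])
b_ub = np.concatenate([supply, -demand])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
print(res.fun)  # minimum total transport cost
```

The two inequality blocks are exactly the two constraints of the formulation; the lower bound of zero on each <math>x_{ij}</math> is supplied through the <code>bounds</code> argument.<br />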
<br />
==== The Shortest-Path Problem ====<br />
The shortest-path problem can be defined as finding the path that yields the shortest total distance between the origin and the destination. Each possible stop is a node and the paths between these nodes are edges incident to these nodes, where the path distance becomes the weight of the edges. In addition to being the most common and straightforward application for finding the shortest path, this model is also used in various applications depending on the definition of nodes and edges. <sup>[3]</sup> For example, when each node represents a different object and the edge specifies the cost of replacement, the equipment replacement problem is derived. Moreover, when each node represents a different project and the edge specifies the relative priority, the model becomes a project scheduling problem.<br />
[[File:Shortest-Path.png|thumb|443x443px|Figure 4. General form of shortest-path problem]]<br />
A graph of the general shortest-path problem is depicted as Figure 4:<br />
<br />
In the general form of the shortest-path problem, the variable <math> x_{ij} </math> represents whether the edge <math> (i,j) </math> is active (i.e. with a positive flow), and the parameter <math> c_{ij} </math> (e.g. <math> c_{12} </math> = 6) defines the distance of the edge <math> (i,j) </math>. The general problem is formulated as below:<br />
<br />
min <math>z=\sum_{i=1}^n \sum_{j=1}^n c_{ij}x_{ij}</math><br />
<br />
Subject to:<br />
<br />
<math>\sum_{j=1}^n x_{ij} - \sum_{k=1}^n x_{ki} = \begin{cases} 1 & \text{if }i=s\text{ (source)} \\ -1 & \text{if }i=t \text{ (sink)} \\ 0 & \text{otherwise} \end{cases}</math><br />
<br />
<math>x_{ij}\geq 0~~\forall (i,j)\in E</math><br />
<br />
<br />
The first term of the constraint is the total outflow of node i, and the second term is the total inflow. The formulation above can thus be seen as one unit of flow being supplied by the origin, one unit of flow being demanded by the destination, and no net inflow or outflow at any intermediate node. These constraints mandate a flow of one unit, amounting to the active path, from the origin to the destination. Under these constraints, the objective function minimizes the overall path distance from the origin to the destination.<br />
<br />
Similarly, the integrality of the network problem applies here, precluding fractional flows. With the supply and demand both being integers (one here), the edges can only carry an integer amount of flow in the solution found by the simplex method. <sup>[6]</sup><br />
<br />
In addition, the point-to-point model above can be further extended to other problems. A number of real life scenarios require visiting multiple places from a single starting point. This “Tree Problem” can be modeled by making small adjustments to the original model. In this case, the source node should supply more units of flow and there will be multiple sink nodes demanding one unit of flow. Overall, the objective and the constraint formulation are similar. <sup>[4]</sup><br />
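<br />
The point-to-point formulation can be solved directly as a linear program. The sketch below (not part of the original article) builds the node-arc incidence constraints for a small hypothetical graph, not the one in Figure 4, and relies on the integrality property to return a 0-1 flow; scipy and numpy are assumed available.<br />

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical directed graph: (tail, head, distance c_ij);
# node 0 is the source s and node 3 is the sink t.
edges = [(0, 1, 6), (0, 2, 2), (2, 1, 3), (1, 3, 1), (2, 3, 7)]
n_nodes = 4

c = [w for _, _, w in edges]

# Node-arc incidence matrix: +1 for outflow at the tail, -1 for inflow at the head.
A_eq = np.zeros((n_nodes, len(edges)))
for k, (i, j, _) in enumerate(edges):
    A_eq[i, k] += 1
    A_eq[j, k] -= 1
b_eq = np.array([1, 0, 0, -1])  # supply 1 at s, demand 1 at t

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.fun)  # length of the shortest s-t path
```

The optimal objective equals the shortest-path distance, and the edges with flow 1 trace the active path, exactly as described above.<br />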
<br />
==== Maximal Flow Problem ====<br />
This problem describes a situation where the material from a source node is sent to a sink node. The source and sink node are connected through multiple intermediate nodes, and the common optimization goal is to maximize the material sent from the source node to the sink node. <sup>[3]</sup><br />
<br />
Consider the following scenario:<br />
[[File:Picture2.png|thumb|Figure 5. Maximal flow problem example]]<br />
The given structure is a piping system. The water flows into the system from the source node, passes through the intermediate nodes, and flows out from the sink node. There is no limitation on the amount of water that can be used as the input for the source node, and likewise the sink node can accept an unlimited amount of water coming into it. The arrows denote the valid channels that water can flow through, and each channel has a known flow capacity. What is the maximum flow that the system can take?<br />
<br />
Several quantities should be defined to help formulate the frame of the solution: <br />
[[File:Picture3.png|thumb|Figure 6. For every intermediate node j, there is a group of node i and a group of node k.]]<br />
Any intermediate node <math display="inline">j</math> in the system receives water from adjacent node(s) <math>i</math> and sends water to adjacent node(s) <math display="inline">k</math>. The nodes <math>i</math> and <math display="inline">k</math> are defined relative to the node <math display="inline">j</math>. <br />
<br />
<math>i <br />
</math> = the node(s) that gives water to node <math display="inline">j <br />
</math><br />
<br />
<math display="inline">j <br />
</math> = the intermediate node(s) <br />
<br />
<math display="inline">k<br />
<br />
</math> = the node(s) that receives the water coming out of node <math display="inline">j <br />
</math><br />
<br />
<math>x_{ij} <br />
</math> = amount of water leaving node <math>i <br />
</math> and entering node <math display="inline">j <br />
</math> (<math>i <br />
</math> and <math display="inline">j <br />
</math> are adjacent nodes)<br />
<br />
<math>x_{jk} <br />
</math> = amount of water leaving node <math display="inline">j <br />
</math> and entering node <math display="inline">k<br />
<br />
</math> (<math>i <br />
</math> and <math display="inline">k<br />
<br />
</math> are adjacent nodes)<br />
<br />
<br />
For the source and sink node, they have net flow that is non-zero:<br />
<br />
<math display="inline">m<br />
</math> = source node<br />
<br />
<math display="inline">n<br />
</math> = sink node<br />
<br />
<math>x_{in} <br />
</math> = amount of water leaving node <math>i <br />
</math> and entering sink node <math display="inline">n<br />
</math> (<math>i <br />
</math> and <math display="inline">n<br />
</math> are adjacent nodes)<br />
<br />
<math>x_{mk} <br />
</math> = amount of water leaving source node <math display="inline">m<br />
</math> and entering node <math display="inline">k<br />
<br />
</math> (<math display="inline">m<br />
</math> and <math display="inline">k<br />
<br />
</math> are adjacent nodes)<br />
<br />
<br />
Flow capacity definition is applied to all nodes (including intermediate nodes, the sink, and the source):<br />
<br />
<math>C_{ab} <br />
</math> = transport capacity between any two nodes <math display="inline">a<br />
</math> and <math display="inline">b<br />
</math> (<math display="inline">a<br />
</math><math> \neq<br />
</math><math display="inline">b<br />
</math>)<br />
<br />
<br />
The main constraints for this problem are the transport capacity between each node and the material conservation:<br />
<br />
(1): The amount of water flowing from any node <math display="inline">a<br />
</math> to node <math display="inline">b<br />
</math> should not exceed the flow capacity between node <math display="inline">a<br />
</math> to node <math display="inline">b<br />
</math> . <br />
<br />
<math>0\leq x_{ab} \leq C_{ab} <br />
</math><br />
<br />
(2): The intermediate node <math display="inline">j <br />
</math> does not hold any water, so the amount of water that flows into node <math display="inline">j <br />
</math> has to exit the node with the exact same amount it entered with. <br />
<br />
<math>\sum_i^px_{ij}- \sum_k^r x_{jk} =0<br />
\qquad \begin{cases} \forall i\in I=[1,p] \\ \forall j\in J=[1,q]\\ \forall k\in K=[1,r] \end{cases} <br />
</math><br />
<br />
Overall, the net flow out of the source node has to be the same as the net flow into the sink node. This net flow is the amount that should be maximized. <br />
<br />
Using the definitions above:<br />
[[File:Picture4.png|thumb|Figure 7. The imaginary flow connects the sink node to the source node, creating a close loop.]]<br />
max<math> \quad z = \sum_k^r x_{mk}</math> (or <math>\sum_i^p x_{in}</math>)<br />
<br />
<math>s.t. \quad\ \sum_i^px_{ij}- \sum_k^r x_{jk} =0<br />
\qquad \begin{cases} \forall i\in I=[1,p] \\ \forall j\in J=[1,q]\\ \forall k\in K=[1,r] \end{cases} <br />
</math><br />
<br />
<math>0\leq x_{ab} \leq C_{ab} <br />
</math><br />
<br />
This expression can be further simplified by introducing an imaginary flow from the sink to the source. <br />
<br />
By introducing this imaginary flow, the piping system is now closed. The mass conservation constraint now also holds for the source and sink node, so they can be treated as the intermediate nodes. The problem can be rewritten as the following: <br />
<br />
max<math> \quad z = x_{nm}</math><br />
<br />
<math>s.t. \quad\ \sum_i^px_{ij}- \sum_k^r x_{jk} =0<br />
\qquad \begin{cases} \forall i\in I=[1,p] \\ \forall j\in J=[1,q+2]\\ \forall k\in K=[1,r] \end{cases} <br />
</math><br />
<br />
<math>0\leq x_{ab} \leq C_{ab} <br />
</math><br />
<br />
The problem and the formulation are derived from an example in Chapter 8 of the book: Applied Mathematical Programming by Bradley, Hax and Magnanti. <sup>[3]</sup><br />
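<br />
The closed-loop reformulation above translates directly into a linear program: maximize the flow on the imaginary return arc subject to conservation at every node and the capacity bounds. The sketch below (not part of the original article) uses hypothetical capacities, since the example leaves them unspecified, and solves the LP with SciPy's linprog.<br />

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical piping network: (tail, head, capacity); node 0 is the
# source and node 3 the sink.
edges = [(0, 1, 3), (0, 2, 2), (1, 2, 1), (1, 3, 2), (2, 3, 3)]
edges.append((3, 0, np.inf))  # imaginary flow from the sink back to the source
n_nodes = 4

# Objective: maximize the flow on the return arc (linprog minimizes, so negate).
c = np.zeros(len(edges))
c[-1] = -1.0

# With the loop closed, mass conservation holds at every node.
A_eq = np.zeros((n_nodes, len(edges)))
for k, (i, j, _) in enumerate(edges):
    A_eq[i, k] += 1   # outflow
    A_eq[j, k] -= 1   # inflow
b_eq = np.zeros(n_nodes)

bounds = [(0, cap) for _, _, cap in edges]  # 0 <= x_ab <= C_ab
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(-res.fun)  # maximum flow through the system
```

Appending the return arc is exactly the trick described in the text: it lets the source and sink be treated as ordinary intermediate nodes.<br />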
<br />
=== Algorithms ===<br />
<br />
==== Ford–Fulkerson Algorithm ====<br />
A broad range of network flow problems can be reduced to the max-flow problem. The most common way to approach the max-flow problem in polynomial time is the Ford-Fulkerson Algorithm (FFA). FFA is essentially a greedy algorithm: it iteratively finds an augmenting s-t path to increase the value of the flow. The pathfinding terminates when there is no s-t path present. Ultimately, the max-flow pattern in the network graph is returned. <sup>[8]</sup><br />
<br />
Typically, FFA is applied to flow networks with only one source node s and one sink node t. In addition, the capacity conditions and the conservation conditions, which are two properties defining the flow, must be satisfied.<sup>[9]</sup> The capacity conditions require that each edge carry a flow that is no more than its capacity, or <math>0\leq f(e)\leq c_{e},\forall e\in E</math>, where the function <math>f</math> returns the flow on a given edge. The conservation conditions require all nodes except the source and the sink to have a net flow of 0, or <math>\sum_{e~into~v}f(e)= \sum_{e~out~of~v}f(e),\forall v\in V\setminus \{s,t\} </math>. <br />
<br />
FFA introduces the concept of a residue graph based on the original graph <math>G</math> to allow backtracking, or pushing backward on edges that are already carrying flow.<sup>[9]</sup> The residue graph <math>G_{f} </math> is defined as follows:<br />
<br />
1. <math>G_{f}</math>has exactly the same node set as <math>G</math>.<br />
<br />
2. For each edge <math>e = (u,v)</math> with a nonnegative flow <math> f( e)</math> in <math>G</math>, <math>G_{f}</math> has the edge e with the capacity <math>c_{f}(e) = c_{e} - f(e)</math>, and <math>G_f</math> also has the edge <math>e' = (v,u)</math> with the capacity <math>c_{f}(e') = f(e)</math>.<br />
<br />
Note that initially, <math>G_{f} </math> is identical to <math>G</math> since there is no flow present in <math>G</math>.<br />
<br />
The steps of FFA are as below. <sup>[10]</sup> Essentially, the method repeatedly finds a path with positive residue capacity in the residue graph, and updates the flow graph and residue graph until <math>s</math> and <math>t</math> become disconnected in the residue graph.<br />
<br />
1. Set <math>f(e) = 0, \forall e\in E</math> in <math>G</math>, and create a copy as <math>G_{f}</math>.<br />
<br />
2. While there is still a <math>s, t</math> path <math>p</math> in <math>G_{f}</math>:<br />
<br />
a. Find <math>c_{f}(p) = min(c_{f}(e):e\in p)</math><br />
<br />
b. For each edge <math>e\in p</math>:<br />
<br />
bi. <math>f(e) = f(e) + c_{f}(p)</math> if <math>e\in E</math> in <math>G</math>, <math>f(e) = f(e) - c_{f}(p)</math> if <math>e'\in E</math> in <math>G</math><br />
<br />
bii. <math>c(e)= c(e) - c_{f}(p),c(e')= c(e') + c_{f}(p)</math> in <math> G_{f}</math><br />
<br />
[[File:Phase 1.png|thumb|Figure 8: Flow graph and residue graph at the first phase]]<br />
An example of running the FFA is as below.<br />
The flow graph <math>G</math> and residue graph <math>G_{f}</math> at the initial phase are depicted in Figure 8, where the number on each edge in the flow graph is the flow on that edge, whereas in the residue graph it is the updated edge capacity.<br />
<br />
An <math>s-t</math> path can be found in the residue graph by tracing the edges <math>s\rightarrow A\rightarrow B\rightarrow t</math> with a flow of two units. After augmenting the path on both graphs, the flow graph and the residue graph look like Figure 9.<br />
<br />
[[File:Phase 2.png|thumb|Figure 9: Flow graph and residue graph after updating with the first s,t-path]]<br />
<br />
At this stage, there is still an <math>s,t</math>-path in the residue graph, <math>s\rightarrow B\rightarrow A\rightarrow t</math>, with a flow of one unit. After augmenting the path on both graphs, the flow graph and the residue graph look like Figure 10.<br />
<br />
[[File:Phase 3.png|thumb|Figure 10: Flow graph and residue graph after augmenting with the second s,t-path]]<br />
<br />
At this stage, there are no more <math>s,t</math>-paths in the residue graph, so FFA terminates and the maximum flow can be read from the flow graph as 3 units.<br />
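<br />
The steps of FFA can be sketched in code. The implementation below uses a breadth-first search to find each augmenting path (the Edmonds-Karp variant of FFA). The capacities are hypothetical, chosen only to be consistent with the three-unit maximum flow of the Figures 8-10 example, since the figures' exact values are not reproduced in the text.<br />

```python
from collections import deque, defaultdict

def ford_fulkerson(capacity, s, t):
    """Maximum flow via augmenting s-t paths on the residue graph (BFS variant)."""
    # Residue capacities: forward edges start at c_e, backward edges at 0.
    residual = defaultdict(lambda: defaultdict(int))
    for u in capacity:
        for v, cap in capacity[u].items():
            residual[u][v] += cap
            residual[v][u] += 0
    max_flow = 0
    while True:
        # Step 2: look for an s-t path with positive residue capacity.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:          # s and t are disconnected: terminate
            return max_flow
        # Step 2a: bottleneck c_f(p); step 2b: augment along the path.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        max_flow += bottleneck

# Hypothetical capacities consistent with the 3-unit flow of the example.
capacity = {'s': {'A': 2, 'B': 1}, 'A': {'B': 2, 't': 1}, 'B': {'t': 3}}
print(ford_fulkerson(capacity, 's', 't'))  # 3
```

The backward residue entries are what permit the backtracking step seen in the second augmenting path of the worked example.<br />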
<br />
== Numerical Example and Solution ==<br />
<br />
A Food Distributor Company is farming and collecting vegetables from farmers to later distribute to grocery stores. The distributor has specific agreements with different third-party companies to mediate the delivery to the grocery store. In a particular month, the company has 600 tons of vegetables to deliver to the grocery store. It has agreements with two third-party transport companies, A and B, which have different tariffs for delivering goods between themselves, the distributor, and the grocery store, as well as limits on the transport capacity of each path. These delivery points are numbered as shown below, with path 1 being the transport from the Food Distributor Company to transport company A. The limits and tariffs for each path can be found in Table 2 below, and the possible transportation connections between the distributor company, the third-party transporters, and the grocery store are shown in the figure below. The transport companies cannot hold any amount of food, so any incoming food must be delivered to an end point. The distributor company wants to minimize the overall transport cost of shipping 600 tons of vegetables to the grocery store by choosing the optimal paths provided by the transport companies. How should the distributor company map out its paths and the amount of vegetables carried on each path to minimize the overall cost?<br />
[[File:Wiki example.png|thumb|Figure. 11. Illustration of the network for the food distribution problem.]]<br />
{| class="wikitable"<br />
|+Table 2. Product Limits and Tariffs for each Path<br />
|<br />
|1<br />
|2<br />
|3<br />
|4<br />
|5<br />
|6<br />
|-<br />
|Product limit (ton)<br />
|250<br />
|450<br />
|350<br />
|200<br />
|300<br />
|500<br />
|-<br />
|Tariff ($/ton)<br />
|10<br />
|12.5<br />
|5<br />
|7.5<br />
|10<br />
|20<br />
|}<br />
<br />
<br />
This question is adapted from an exercise in Chapter 8 of the book ''Applied Mathematical Programming'' by Bradley, Hax and Magnanti <sup>[3]</sup>.<br />
<br />
=== Formulation of the Problem ===<br />
The problem can be formulated as below where variables <math>x_1, x_2, x_3,..., x_6</math> denote the tons of vegetables carried in paths 1 to 6. The objective function stated in the first line is to minimize the cost of the operation, which is the summation of the tons of vegetables carried on each path multiplied by the corresponding tariff: <math>\sum_{i=1}^6 x_i t_i</math>. <br />
<br />
<math>\begin{array}{lcl} \min z = 10x_1 + 12.5x_2 + 5x_3 + 7.5x_4 + 10x_5 + 20x_6 \\ s.t. \qquad x_5 = x_1 - x_3 + x_4 \\ \ \ \ \quad \qquad x_6 = x_2 + x_3 - x_4 \\ \ \ \ \quad \qquad x_5 + x_6 = 600 \\ \ \ \ \quad \qquad x_1 + x_2 = 600 \\ \ \ \ \quad \qquad x_1 \leq 250 \\ \ \ \ \quad \qquad x_2 \leq 450 \\ \ \ \ \quad \qquad x_3 \leq 350 \\ \ \ \ \quad \qquad x_4 \leq 200 \\ \ \ \ \quad \qquad x_5 \leq 300 \\ \ \ \ \quad \qquad x_6 \leq 500 \\ \ \ \ \quad \qquad x_1, x_2, x_3, x_4, x_5, x_6 \geq 0\\\end{array}</math> <br />
<br />
The second step is to write down the constraints. The first constraint ensures that the net amount of vegetables at Transport Company A (the deliveries received on paths 1 and 4, minus the amount transferred to Transport Company B on path 3) is delivered to the grocery store on path 5. The second constraint ensures the same for Transport Company B. The third and fourth constraints ensure that the total amount of vegetables shipped from the Food Distributor Company and the total amount delivered to the grocery store are both 600 tons. Constraints 5 to 10 impose the upper limits on the amount of vegetables that can be carried on paths 1 to 6. The final constraint requires all variables to be non-negative. <br />
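These constraints can be checked numerically. The short Python sketch below (an illustration, not part of the original formulation) verifies that the optimal flow reported at the end of this section satisfies every constraint and recomputes its cost:

```python
# Sanity check of the constraints for a candidate flow (tons on paths 1..6).
# The flow below is the optimum reported at the end of this section.
tariffs = [10, 12.5, 5, 7.5, 10, 20]   # $/ton on paths 1..6
limits = [250, 450, 350, 200, 300, 500]  # capacity (tons) on paths 1..6
x1, x2, x3, x4, x5, x6 = 250, 350, 0, 50, 300, 300

assert x5 == x1 - x3 + x4              # flow balance at Transport Company A
assert x6 == x2 + x3 - x4              # flow balance at Transport Company B
assert x5 + x6 == 600                  # all 600 tons reach the grocery store
assert x1 + x2 == 600                  # all 600 tons leave the distributor
flows = [x1, x2, x3, x4, x5, x6]
assert all(0 <= f <= u for f, u in zip(flows, limits))  # capacity limits

cost = sum(f * t for f, t in zip(flows, tariffs))
print(cost)  # 16250.0
```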
<br />
=== Solution of the Problem ===<br />
This problem can be solved using the Simplex algorithm<sup>[11]</sup> or with the CPLEX linear programming solver on the GAMS optimization platform. The steps of the solution using the GAMS platform are as follows:<br />
<br />
The first step is to list the variables, which are the tons of vegetables transported on routes 1 to 6, denoted <math>x_1, x_2, x_3,..., x_6</math>. The objective function is the overall cost, z.<br />
<br />
'''variables x1,x2,x3,x4,x5,x6,z;'''<br />
<br />
The second step is to list the equations which are the constraints and the objective function. The objective function is a summation of the amount of vegetables carried in path i, multiplied with the tariff of path i for all i: <math>\sum_{i=1}^6 x_i t_i</math>. The GAMS code for the objective function is written below:<br />
<br />
'''obj.. z=e= 10*x1+12.5*x2+5*x3+7.5*x4+10*x5+20*x6;'''<br />
<br />
Overall, there are 10 constraints in this problem. Constraints 1 and 2 are the equations for paths 5 and 6. The amount carried on path 5 equals the amount of vegetables incoming to Transport Company A on paths 1 and 4, minus the amount leaving Transport Company A on path 3. This follows from the restriction that bars the companies from keeping any vegetables and requires them to eventually deliver all incoming produce. Equation 1 ensures that this holds for path 5 and equation 2 ensures it for path 6. A sample of these constraints is written below for path 5:<br />
<br />
'''c1.. x5 =e=x1-x3+x4;'''<br />
<br />
Constraint 3 ensures that the vegetables carried on paths 1 and 2 add up to the total of 600 tons leaving the Food Distributor Company. Likewise, constraint 4 ensures that the amounts transported on paths 5 and 6 add up to the 600 tons that must be delivered to the grocery store. A sample of these constraints is written below for the total delivery to the grocery store:<br />
<br />
'''c3.. x5+x6=e=600;'''<br />
<br />
Constraints 5 to 10 ensure that the amount of food transported on each path does not exceed the maximum capacity given in Table 2. A sample of these constraints is written below for the capacity of path 1:<br />
<br />
'''c5.. x1=l=250;'''<br />
<br />
After listing the variables, objective function and constraints, the final step is to call the CPLEX solver and set the problem type to '''lp''' (linear programming). The problem is then solved with a linear programming algorithm to minimize the objective (cost) function.<br />
<br />
The GAMS code yields the results below:<br />
<br />
'''x1 = 250, x2 = 350, x3 = 0, x4 = 50, x5 = 300, x6 = 300, z =16250.'''<br />
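The same model can be reproduced outside GAMS as a cross-check. The sketch below (an illustration, not part of the original GAMS solution) solves the identical LP with SciPy's `linprog` using the HiGHS solver:

```python
from scipy.optimize import linprog

# Tariffs on paths 1..6 (objective coefficients).
c = [10, 12.5, 5, 7.5, 10, 20]
# Equality constraints, variables ordered x1..x6:
#   x1 - x3 + x4 - x5 = 0,   x2 + x3 - x4 - x6 = 0,
#   x5 + x6 = 600,           x1 + x2 = 600.
A_eq = [[1, 0, -1, 1, -1, 0],
        [0, 1, 1, -1, 0, -1],
        [0, 0, 0, 0, 1, 1],
        [1, 1, 0, 0, 0, 0]]
b_eq = [0, 0, 600, 600]
# Bounds encode the capacity limits on each path.
bounds = [(0, 250), (0, 450), (0, 350), (0, 200), (0, 300), (0, 500)]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(res.status, round(res.fun))  # 0 16250
```

This reproduces the minimum cost z = 16250 found by the GAMS/CPLEX run above.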
<br />
== Real Life Applications ==<br />
Network problems have many applications in all kinds of areas such as transportation, city design, resource management and financial planning.<sup>[6]</sup><br />
<br />
There are several special cases of network problems, such as the shortest path problem, minimum cost flow problem, assignment problem and transportation problem.<sup>[6]</sup> Three application cases will be introduced here.<br />
<br />
=== The Minimum Cost Flow Problem ===<br />
[[File:Pic8.jpg|thumb|Figure. 12. Illustration of the ship subnetwork.<sub>[14]</sub>]]<br />
[[File:Pic9.jpg|thumb|Figure. 13. Illustration of cargo subnetwork.<sub>[14]</sub>]]<br />
Minimum cost flow problems are pervasive in real life, for example in deciding how to allocate quay cranes in container terminals over time, or how to build optimal train schedules on a shared railroad line.<sup>[12]</sup><br />
<br />
R. Dewil and his group use a minimum cost network flow model (MCNFP) to assist traffic enforcement.<sup>[13]</sup> Police patrol “hot spots”, areas on highways where crashes occur frequently. Dewil studies a method intended to estimate the optimal route over these hot spots, describing the time it takes to move a detector to a certain position as the cost and the number of patrol cars moving from one node to the next as the flow, in order to minimize the total cost.<sup>[13]</sup><br />
<br />
=== The Assignment Problem ===<br />
Dung-Ying Lin studies an assignment problem that aims to assign freight to ships and arrange transportation paths along the Northern Sea Route in a manner that yields maximum profit.<sup>[14]</sup> Within this network, composed of a ship subnetwork and a cargo subnetwork (shown in Figures 12 and 13), each node corresponds to a port at a specific time and each arc represents the movement of a ship or a cargo. Other types of assignment problems include faculty scheduling, freight assignment, and so on.<br />
<br />
=== The Shortest Path Problem ===<br />
Shortest path problems are also present in many fields, such as transportation, 5G wireless communication, and implantation of the global dynamic routing scheme.<sup>[15][16][17]</sup><br />
<br />
Qiang Tu and his group study the constrained reliable shortest path (CRSP) problem for electric vehicles in urban transportation networks.<sup>[15]</sup> The objective is the reliable travel time of a path, made up of the planned travel time and a reliability term. The group studies the Chicago sketch network, consisting of 933 nodes and 2950 links, and the Sioux Falls network, consisting of 24 nodes and 76 links. The results show that travelers’ risk attitudes and the properties of electric vehicles in the transportation network can have a great influence on path choice.<sup>[15]</sup> The study can contribute to the design of city navigation systems.<br />
<br />
== Conclusion ==<br />
Since its inception, the network flow problem has provided humanity with a straightforward and scalable approach for several large-scale challenges and problems. The Simplex algorithm and other computational optimization platforms have made addressing these problems routine, and have greatly expedited efforts for groups concerned with supply-chain and other distribution processes. The formulation of this problem has had several derivations from its original format, but its overall methodology and approach have remained prevalent in several of society’s industrial and commercial processes, even over half a century later. Classical models such as the assignment, transportation, maximal flow, and shortest path problem configurations have found their way into diverse settings, ranging from streamlining oil distribution networks along the gulf coast to arranging optimal scheduling assignments for college students amidst a global pandemic. All in all, the network flow problem and its monumental impact, have made it a fundamental tool for any group that deals with combinatorial data sets. And with the surge in adoption of data-driven models and applications within virtually all industries, the use of the network flow problem approach will only continue to drive innovation and meet consumer demands for the foreseeable future.<br />
<br />
== References ==<br />
1. Karp, R. M. (2008). [https://www.sciencedirect.com/science/article/pii/S1572528607000370/ George Dantzig’s impact on the theory of computation]. Discrete Optimization, 5(2), 174-185.<br />
<br />
2. Goldberg, A. V., Tardos, É., & Tarjan, R. E. (1989). [http://www.cs.cornell.edu/~eva/Network.Flow.Algorithms.pdf/ Network Flow Algorithms]. Algorithms and Combinatorics, 9, 101-164.<br />
<br />
3. Bradley, S. P., Hax, A. C., & Magnanti, T. L. (1977). Network Models. [http://web.mit.edu/15.053/www/AMP.htm/ Applied mathematical programming] (p. 259). Reading, MA: Addison-Wesley.<br />
<br />
4. Chinneck, J. W. (2006). [https://www.optimization101.org/ Practical optimization: a gentle introduction. Systems and Computer Engineering]. Carleton University, Ottawa. 11.<br />
<br />
5. Roy, B. V. Mason, K.(2005). [https://web.stanford.edu/~ashishg/msande111/notes/chapter5.pdf/ Formulation and Analysis of Linear Programs, Chapter 5 Network Flows].<br />
<br />
6. Vanderbei, R. J. (2020). [https://www.springer.com/gp/book/9781461476306/ Linear programming: foundations and extensions (Vol. 285)]. Springer Nature.<br />
<br />
7. Sobel, J. (2014). [https://econweb.ucsd.edu/~jsobel/172aw02/notes8.pdf/ Linear Programming Notes VIII: The Transportation Problem].<br />
<br />
8. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001). "Section 26.2: The Ford–Fulkerson method". Introduction to Algorithms (Second ed.). MIT Press and McGraw–Hill.<br />
<br />
9. Jon Kleinberg; Éva Tardos (2006). "Chapter 7: Network Flow". Algorithm Design. Pearson Education.<br />
<br />
10. [https://en.wikipedia.org/wiki/Ford%E2%80%93Fulkerson_algorithm/ Ford–Fulkerson algorithm]. Retrieved December 05, 2020.<br />
<br />
11. Hu, G. (2020, November 19). [https://optimization.cbe.cornell.edu/index.php?title=Simplex_algorithm#cite_note-11/ Simplex algorithm]. Retrieved November 22, 2020.<br />
<br />
12. Altınel, İ. K., Aras, N., Şuvak, Z., & Taşkın, Z. C. (2019). [https://www.sciencedirect.com/science/article/pii/S0166218X18304815/ Minimum cost noncrossing flow problem on layered networks]. Discrete Applied Mathematics, 261, 2-21.<br />
<br />
13. Dewil, R., Vansteenwegen, P., Cattrysse, D., & Van Oudheusden, D. (2015). [https://core.ac.uk/download/pdf/34613916.pdf/ A minimum cost network flow model for the maximum covering and patrol routing problem]. European Journal of Operational Research, 247(1), 27-36.<br />
<br />
14. Lin, D. Y., & Chang, Y. T. (2018). [https://www.sciencedirect.com/science/article/pii/S1366554517308037/ Ship routing and freight assignment problem for liner shipping: Application to the Northern Sea Route planning problem]. Transportation Research Part E: Logistics and Transportation Review, 110, 47-70.<br />
<br />
15. Tu, Q., Cheng, L., Yuan, T., Cheng, Y., & Li, M. (2020). [https://www.sciencedirect.com/science/article/pii/S095965262031177X/ The Constrained Reliable Shortest Path Problem for Electric Vehicles in the Urban Transportation Network]. Journal of Cleaner Production, 121130.<br />
<br />
16. Guo, Y., Li, S., Jiang, W., Zhang, B., & Ma, Y. (2017). [https://dl.acm.org/doi/abs/10.1016/j.phycom.2017.06.010/ Learning automata-based algorithms for solving the stochastic shortest path routing problems in 5G wireless communication]. Physical Communication, 25, 376-385.<br />
<br />
17. Haddou, N. B., Ez-Zahraouy, H., & Rachadi, A. (2016). [https://www.infona.pl/resource/bwmeta1.element.elsevier-2eaa73bc-4e22-39aa-89b9-71ef2d7e2d63/ Implantation of the global dynamic routing scheme in scale-free networks under the shortest path strategy]. Physics Letters A, 380(33), 2513-2517.</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=2020_Cornell_Optimization_Open_Textbook_Feedback&diff=26952020 Cornell Optimization Open Textbook Feedback2020-12-21T10:15:07Z<p>Wc593: /* Interior-point method for LP */</p>
<hr />
<div>==[[Computational complexity]]==<br />
<br />
* Numerical Example<br />
*# Finding subsets of a set is NOT O(2<sup>n</sup>).<br />
* Application<br />
*# The applications mentioned need to be discussed further.<br />
<br />
==[[Network flow problem]]==<br />
<br />
* Real Life Applications<br />
*# There is NO need to include code. Simply mention how the problem was coded along with details on the LP solver used.<br />
*# The subsection title style should be consistent. Subsection titles in Real Life Applications section are not in title case like the ones in Theory section.<br />
<br />
==[[Interior-point method for LP]]==<br />
<br />
* Introduction<br />
*# Please type “minimize” and “subject to” in formal optimization problem form throughout the whole page.<br />
* A section to discuss and/or illustrate the applications<br />
*# Please type optimization problem in the formal form.<br />
<br />
==[[Optimization with absolute values]]==<br />
<br />
* An introduction of the topic<br />
*# Add a few sentences on how absolute values convert an optimization problem into a nonlinear optimization problem.<br />
* Applications<br />
*# Inline equations at the beginning of this section are not formatted properly. Please fix the notation for expected return throughout the section.<br />
<br />
==[[Matrix game (LP for game theory)]]==<br />
<br />
* Theory and Algorithmic Discussion<br />
*# aij are not defined in this section.<br />
<br />
==[[Quasi-Newton methods]]==<br />
<br />
* Theory and Algorithm<br />
*# Please ensure that a few spaces are kept between the equations and equation numbers.<br />
<br />
== [[Markov decision process]] ==<br />
<br />
* Introduction<br />
*# Please fix typos such as “discreet”.<br />
* Theory and Methodology<br />
*# If abbreviations are defined like MDP, use the abbreviations throughout the Wiki<br />
<br />
==[[Eight step procedures]]==<br />
<br />
* Numerical Example<br />
*# Data for the example Knapsack problem (b,w) are missing.<br />
*# How to arrive at optimal solutions is missing.<br />
<br />
==[[Facility location problem]]==<br />
<br />
* Numerical Example<br />
*# Mention how the formulated problem is coded and solved. No need to provide GAMS code.<br />
<br />
==[[Set covering problem]]==<br />
<br />
* Integer linear program formulation & Approximation via LP relaxation and rounding<br />
*# Use proper math notations for “greater than equal to”.<br />
* Numerical Example<br />
*# Please leave some space between equation and equation number.<br />
<br />
==[[Quadratic assignment problem]]==<br />
<br />
* Theory, methodology, and/or algorithmic discussions<br />
*# Discuss dynamic programming and cutting plane solution techniques briefly.<br />
<br />
==[[Newsvendor problem]]==<br />
<br />
* Formulation<br />
*# A math programming formulation of the optimization problem with objective function and constraints is expected for the formulation. Please add any variant of the newsvendor problem along with some operational constraints.<br />
*# A mathematical presentation of the solution technique is expected. Please consider any distribution for R and present a solution technique for that specific problem. <br />
<br />
==[[Mixed-integer cuts]]==<br />
<br />
* Applications<br />
*# MILP and their solution techniques involving cuts are extremely versatile. Yet, only two sentences are added to describe their applications. Please discuss their applications, preferably real-world applications, in brief. Example Wikis provided on the website could be used as a reference to do so.<br />
<br />
==[[Column generation algorithms]]==<br />
<br />
* Introduction<br />
*# References at the end of the sentence should be placed after the period.<br />
* Theory, methodology and algorithmic discussions<br />
*# Some minor typos/article agreement issues exist “is not partical in real-world”.<br />
<br />
==[[Heuristic algorithms]]==<br />
<br />
* Methodology<br />
*# Please use proper symbol for "greater than or equal to".<br />
*# Greedy method to solve minimum spanning tree seems to be missing.<br />
<br />
==[[Branch and cut]]==<br />
<br />
* Methodology & Algorithm<br />
*# Equation in most infeasible branching section is not properly formatted.<br />
*# Step 2 appears abruptly in the algorithm and does not explain much. Please add more information regarding the same.<br />
*# Step 5 contains latex code terms that are not properly formatted. Please fix the same.<br />
*# Fix typos: e.g., repeated “for the current”.<br />
<br />
== [[Mixed-integer linear fractional programming (MILFP)]] ==<br />
<br />
* Application and Modeling for Numerical Examples<br />
*# Please check the index notation in Mass Balance Constraint<br />
<br />
==[[Fuzzy programming]]==<br />
<br />
* Applications<br />
*# Applications of fuzzy programming are quite versatile. Please discuss few of the mentioned applications briefly. The provided example Wikis can be used as a reference to write this section.<br />
<br />
==[[Adaptive robust optimization]]==<br />
<br />
* Problem Formulation<br />
*# Please check typos such as "Let ''u'' bee a vector".<br />
*# The abbreviation KKT is not previously defined.<br />
<br />
== [[Stochastic gradient descent]] ==<br />
* Numerical Example<br />
*# Amount of whitespace can be reduced by changing orientation of example dataset by converting it into a table containing 3 rows and 6 columns.<br />
<br />
==[[RMSProp]]==<br />
<br />
* Introduction<br />
*# References at the end of the sentence should be placed after the period.<br />
* Theory and Methodology<br />
*# Please check grammar in this section.<br />
* Applications and Discussion<br />
*# The applications section does not contain any discussion on applications. Please mention a few applications of the widely used RMSprop and discuss them briefly.<br />
<br />
==[[Adam]]==<br />
<br />
* Background<br />
*# References at the end of the sentence should be placed after the period.</div>Wc593https://optimization.cbe.cornell.edu/index.php?title=Interior-point_method_for_LP&diff=2694Interior-point method for LP2020-12-21T10:14:40Z<p>Wc593: </p>
<hr />
<div>Authors: Tomas Lopez Lauterio, Rohit Thakur and Sunil Shenoy (SysEn 6800 Fall 2020) <br><br />
<br />
== Introduction ==<br />
Linear programming problems seek to optimize linear functions given linear constraints. There are many applications of linear programming, including inventory control, production scheduling, transportation optimization and efficient manufacturing processes. The simplex method has been a very popular method to solve these linear programming problems and has served these industries well for a long time. But over the past 40 years, there have been a significant number of advances in algorithms that can solve these types of problems more efficiently, especially when the problems become very large in terms of variables and constraints.<ref> "Practical Optimization - Algorithms and Engineering Applications" by Andreas Antoniou and Wu-Sheng Lu, ISBN-10: 0-387-71106-6 </ref> <ref> "Linear Programming - Foundations and Extensions - 3<sup>rd</sup> edition" by Robert J Vanderbei, ISBN-13: 978-0-387-74387-5. </ref> In the early 1980s, Karmarkar (1984)<ref> N Karmarkar, "A new Polynomial - Time algorithm for linear programming", Combinatorica, Vol. 4, No. 8, 1984, pp. 373-395.</ref> published a paper introducing interior point methods to solve linear programming problems. A simple way to see the difference between the simplex method and an interior point method is that the simplex method moves along the edges of a polytope towards a vertex having a lower value of the cost function, whereas an interior point method begins its iterations inside the polytope and moves towards the lowest-cost vertex without regard for edges. This approach reduces the number of iterations needed to reach that vertex, thereby reducing the computational time needed to solve the problem.<br><br><br />
<br />
=== Lagrange Function ===<br />
Before getting too deep into the description of interior point methods, there are a few concepts that are helpful to understand. The first key concept is the Lagrange function, which incorporates the constraints into a modified objective function in such a way that a constrained minimizer <math> (x^{*}) </math> is connected to an unconstrained minimizer <math> \left \{x^{*},\lambda ^{*} \right \} </math> of the augmented objective function <math> L\left ( x , \lambda \right ) </math>, where the augmentation is achieved with <math> p </math> Lagrange multipliers. <ref> "Computational Experience with Primal-Dual Interior Point Method for Linear Programming" by Irvin Lustig, Roy Marsten, David Shanno </ref><ref> "Practical Optimization - Algorithms and Engineering Applications" by Andreas Antoniou and Wu-Sheng Lu, ISBN-10: 0-387-71106-6 </ref> <br>
To illustrate this point, consider a simple optimization problem:<br>
minimize <math> f\left ( x \right ) </math><br><br />
subject to: <math> A \cdot x = b </math><br><br />
where <math> A \, \in \, R^{p\, \times \, n} </math> is assumed to have full row rank.<br />
The Lagrange function can be written as:<br>
<math>L(x, \lambda ) = f(x) + \sum_{i=1}^{p}\lambda _{i}\cdot a_{i}(x)</math> <br><br />
where <math> \lambda </math> is the vector of Lagrange multipliers and <math> a_{i}(x) </math> denotes the <math> i </math>-th equality constraint residual <math> (Ax-b)_{i} </math>. <br><br><br />
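As a concrete (hypothetical) instance, consider minimizing f(x) = x1^2 + x2^2 subject to x1 + x2 = 1. Setting the partial derivatives of L(x, λ) to zero yields a linear system whose solution is the constrained minimizer; a short Python sketch:

```python
import numpy as np

# Illustrative instance (not from the original text):
#   minimize f(x) = x1^2 + x2^2   subject to   x1 + x2 = 1.
# With L(x, lam) = x1^2 + x2^2 + lam*(x1 + x2 - 1), setting the
# partial derivatives of L to zero gives the linear system:
#   2*x1       + lam = 0
#        2*x2  + lam = 0
#   x1 + x2          = 1
K = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
rhs = np.array([0.0, 0.0, 1.0])
x1, x2, lam = np.linalg.solve(K, rhs)
print(x1, x2, lam)  # x1 = x2 = 0.5, lam = -1
```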
=== Newton's Method ===<br />
Another key concept to understand is solving linear and non-linear systems of equations using Newton's method. <br />
Assume an unconstrained minimization problem in the form: <br><br />
minimize <math> g\left ( x \right ) </math>, where <math> g\left ( x \right ) </math> is a real-valued function of <math> n </math> variables. <br>
A local minimum for this problem will satisfy the following system of equations:<br><br />
<math>\left [ \frac{\partial g(x)}{\partial x_{1}} ..... \frac{\partial g(x)}{\partial x_{n}}\right ]^{T} = \left [ 0 ... 0 \right ]</math> <br><br />
<br />
The Newton iteration is: <br>
<math>x^{k+1} = x^{k} - \left [ \nabla ^{2} g(x^{k}) \right ]^{-1}\cdot \nabla g(x^{k})</math> <br><br />
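A minimal Python sketch of this iteration (the objective g below is an arbitrary illustrative choice, not from the original text), with the gradient and Hessian coded by hand:

```python
import math

# Newton's method for the illustrative objective
#   g(x1, x2) = x1^2 + x2^2 + exp(x1).
# The Hessian is diagonal here, so applying its inverse
# reduces to a pair of divisions.
def grad(x1, x2):
    return 2 * x1 + math.exp(x1), 2 * x2

def hess_diag(x1, x2):
    return 2 + math.exp(x1), 2.0

x1, x2 = 1.0, 1.0                         # starting point
for _ in range(20):
    g1, g2 = grad(x1, x2)
    h1, h2 = hess_diag(x1, x2)
    x1, x2 = x1 - g1 / h1, x2 - g2 / h2   # x_{k+1} = x_k - H^{-1} grad

g1, g2 = grad(x1, x2)
print(abs(g1) < 1e-10 and abs(g2) < 1e-10)  # True: gradient is ~0
```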
<br><br />
<br />
== Theory and algorithm ==<br />
[[File:Visualization.png|685x685px|Visualization of Central Path method in Interior point|thumb]]<br />
<br />
Given a linear programming problem with inequality constraints, each inequality is typically converted to an equality using slack variables. The remaining nonnegativity conditions make the reformulated problem non-smooth at the boundary, so they are replaced with smoother logarithmic penalty terms added to the objective. This nonlinear objective function is called the "''Logarithmic Barrier Function''".<br />
The process starts by forming a primal-dual pair of linear programs, then applies the ''Lagrangian function'' to the ''barrier function'' to convert the constrained problems into unconstrained problems. These unconstrained problems are then solved using Newton's method as shown above.<br><br />
<br />
=== Problem Formulation ===<br />
<br />
<br />
Consider a combination of primal-dual problem below:<br><br />
('''Primal Problem formulation''') <br><br />
→ minimize <math> c^{T}x </math> <br><br />
Subject to: <math> Ax = b </math> and <math> x \geq 0 </math> <br><br />
('''Dual Problem formulation''') <br><br />
→ maximize <math> b^{T}y </math> <br><br />
Subject to: <math> A^{T}y + \lambda = c </math> and <math> \lambda \geq 0 </math> <br><br />
The vector <math> \lambda </math> represents the dual slack variables.<br>
<br />
The Lagrangian functional form is used to configure two equations using "''Logarithmic Barrier Function''" for both primal and dual forms mentioned above:<br><br />
Lagrangian equation for Primal using Logarithm Barrier Function : <math> L_{p}(x,y) = c^{T}\cdot x - \mu \cdot \sum_{j=1}^{n}log(x_{j}) - y^{T}\cdot (Ax - b) </math> <br><br />
Lagrangian equation for Dual using Logarithm Barrier Function : <math> L_{d}(x,y,\lambda ) = b^{T}\cdot y + \mu \cdot \sum_{j=1}^{n}log(\lambda _{j}) - x^{T}\cdot (A^{T}y +\lambda - c) </math> <br><br />
<br />
Taking the partial derivatives of L<sub>p</sub> and L<sub>d</sub> with respect to variables <math> 'x'\; '\lambda'\; 'y' </math>, and forcing these terms to zero, we get the following equations: <br><br />
<math> Ax = b </math> and <math> x \geq 0 </math> <br><br />
<math> A^{T}y + \lambda = c </math> and <math> \lambda \geq 0 </math> <br><br />
<math> x_{j}\cdot \lambda _{j} = \mu </math> for ''j''= 1,2,.......''n'' <br><br />
<br />
where <math> \mu </math> is a strictly positive scalar parameter. For each <math> \mu > 0 </math>, the vectors in the set <math> \left \{ x\left ( \mu \right ), y\left ( \mu \right ) , \lambda \left ( \mu \right )\right \} </math> satisfying the above equations can be viewed as sets of points in <math> R^{n} </math>, <math> R^{p} </math> and <math> R^{n} </math> respectively; as <math> \mu </math> varies, the corresponding points trace a set of trajectories called the ''"Central Path"''. The central path lies in the ''"interior"'' of the feasible region. A sample illustration of the central path method is shown in the figure to the right. Starting with a positive value of <math> \mu </math>, the optimal point is reached as <math> \mu </math> approaches 0. <br><br />
<br />
Let Diagonal[...] denote a diagonal matrix with the listed elements on its diagonal.<br />
Define the following:<br><br />
'''X''' = Diagonal [<math> x_{1}^{0}, .... x_{n}^{0} </math>]<br><br />
<math> \lambda </math> = Diagonal [<math> \lambda _{1}^{0}, .... \lambda _{n}^{0} </math>]<br>
'''e<sup>T</sup>''' = (1 .....1) as vector of all 1's.<br><br />
Using these newly defined terms, the equation above can be written as: <br><br />
<math> X\cdot \lambda \cdot e = \mu \cdot e </math> <br><br />
<br />
=== Iterations using Newton's Method ===<br />
Employing Newton's iterative method to solve the following equations: <br>
<math> Ax - b = 0 </math> <br><br />
<math> A^{T}y + \lambda - c = 0 </math> <br>
<math> X\cdot \lambda \cdot e - \mu \cdot e = 0</math> <br><br />
Define a starting point <math> \left ( x^{0},y^{0},\lambda ^{0} \right ) </math> that lies within the feasible region, with <math> x^{0}> 0 </math> and <math> \lambda ^{0}> 0 </math>.
Also defining 2 residual vectors for both the primal and dual equations: <br><br />
<math> \delta _{p} = b - A\cdot x^{0} </math> <br><br />
<math> \delta _{d} = c - A^{T}\cdot y^{0} - \lambda ^{0} </math> <br>
<br />
Applying Newton's Method to solve above equations: <br><br />
<math> \begin{bmatrix}
A & 0 & 0\\ 
0 & A^{T} & I\\ 
\lambda & 0 & X
\end{bmatrix} \cdot \begin{bmatrix}
\delta _{x}\\ 
\delta _{y}\\ 
\delta _{\lambda }
\end{bmatrix} = \begin{bmatrix}
\delta _{p}\\ 
\delta _{d}\\ 
\mu \cdot e - X\cdot \lambda \cdot e
\end{bmatrix}
</math><br>
So a single iteration of Newton's method involves the following equations. For each iteration, we solve for the next value of <math> x^{k+1},y^{k+1},\lambda ^{k+1} </math>: <br><br />
<math> (A\lambda ^{-1}XA^{T})\delta _{y} = b- \mu A\lambda^{-1}e + A\lambda ^{-1}X\delta _{d} </math> <br>
<math> \delta _{\lambda} = \delta _{d} - A^{T}\delta _{y} </math> <br>
<math> \delta _{x} = \lambda ^{-1}\left [ \mu \cdot e - X\lambda e - X\delta _{\lambda }\right ] </math> <br>
<math> \alpha _{p} = min\left \{ \frac{-x_{j}}{\delta _{x_{j}}} \right \} </math> for <math> \delta x_{j} < 0 </math> <br><br />
<math> \alpha _{d} = min\left \{ \frac{-\lambda_{j}}{\delta _{\lambda_{j}}} \right \} </math> for <math> \delta \lambda_{j} < 0 </math> <br><br><br />
<br />
The values of the following variables for the next iteration (k+1) are determined by: <br>
<math> x^{k+1} = x^{k} + \alpha _{p}\cdot \delta _{x} </math> <br><br />
<math> y^{k+1} = y^{k} + \alpha _{d}\cdot \delta _{y} </math> <br><br />
<math> \lambda^{k+1} = \lambda^{k} + \alpha _{d}\cdot \delta _{\lambda} </math> <br><br />
<br />
The quantities <math> \alpha _{p} </math> and <math> \alpha _{d} </math> are positive with <math> 0\leq \alpha _{p},\alpha _{d}\leq 1 </math>. <br><br />
After each iteration of Newton's method, we assess the duality gap that is given by the expression below and compare it against a small value <big>ε</big> <br><br />
<math> \frac{c^{T}x^{k}-b^{T}y^{k}}{1+\left | b^{T}y^{k} \right |} \leq \varepsilon </math> <br><br />
The value of <big>ε</big> can be chosen to be something small, e.g., 10<sup>-6</sup>, which is essentially the permissible duality gap for the problem. <br><br />
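The iteration above can be sketched end-to-end for a small standard-form LP (minimize <math>c^{T}x</math> subject to <math>Ax = b</math>, <math>x \geq 0</math>). The Python sketch below is an illustrative implementation, not production code: it applies the normal-equation solve and step-length rule above to the example of the next section rewritten in standard form (max 3x<sub>1</sub> + 3x<sub>2</sub> subject to x<sub>1</sub> + x<sub>2</sub> ≤ 4 becomes min −3x<sub>1</sub> − 3x<sub>2</sub> with a slack variable x<sub>3</sub>); the centering parameter σ = 0.1 and the 0.99 step-length factor are conventional choices not specified in the text:

```python
import numpy as np

# Primal-dual interior-point sketch for:  min c.x  s.t.  A x = b, x >= 0.
# Standard form of "max 3*x1 + 3*x2  s.t.  x1 + x2 <= 4" (x3 is the slack).
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([4.0])
c = np.array([-3.0, -3.0, 0.0])          # maximization -> minimize -c.x

n = c.size
x, lam, y = np.ones(n), np.ones(n), np.zeros(1)
sigma = 0.1                              # centering parameter (heuristic)

def step_length(v, dv):
    # Largest step in (0, 1] keeping v + alpha*dv strictly positive.
    neg = dv < 0
    return min(1.0, 0.99 * float(np.min(-v[neg] / dv[neg]))) if neg.any() else 1.0

for _ in range(100):
    mu = float(x @ lam) / n              # current duality measure
    if mu < 1e-10:
        break
    rp = b - A @ x                       # primal residual, delta_p
    rd = c - A.T @ y - lam               # dual residual, delta_d
    rc = sigma * mu * np.ones(n) - x * lam

    # Normal equations for delta_y, then back-substitution.
    M = A @ np.diag(x / lam) @ A.T
    dy = np.linalg.solve(M, rp - A @ ((rc - x * rd) / lam))
    dlam = rd - A.T @ dy
    dx = (rc - x * dlam) / lam

    x = x + step_length(x, dx) * dx
    ad = step_length(lam, dlam)
    y, lam = y + ad * dy, lam + ad * dlam

print(round(float(3 * (x[0] + x[1])), 3))  # objective value, close to 12.0
```

The iterates converge towards the analytic center of the optimal face, here x<sub>1</sub> = x<sub>2</sub> = 2, matching the result obtained in the next section.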
<br />
== Numerical Example ==<br />
<br />
Maximize<br><br />
<math> 3X_{1} + 3X_{2} </math><br><br />
<br />
such that<br><br />
<math> X_{1} + X_{2} \leq 4, </math><br> <br />
<math> X_{1} \geq 0, </math><br><br />
<math> X_{2} \geq 0, </math><br><br />
<br />
Barrier form of the above primal problem is as written below:<br />
<br />
<br />
<math> P(X,\mu) = 3X_{1} + 3X_{2} + \mu.log(4-X_{1} - X_{2}) + \mu.log(X_{1}) + \mu.log(X_{2})</math><br> <br />
<br />
<br />
The barrier function here is strictly concave, and since the problem is a maximization problem, there is one and only one solution. In order to find the maximum point of the concave function, we take the derivative and set it to zero. <br />
<br />
Taking partial derivative and setting to zero, we get the below equations<br />
<br />
<br />
<math> \frac{\partial P(X,\mu)}{\partial X_{1}} = 3 - \frac{\mu}{(4-X_{1}-X_{2})} + \frac{\mu}{X_{1}} = 0</math> <br><br />
<br />
<math> \frac{\partial P(X,\mu)}{\partial X_{2}} = 3 - \frac{\mu}{(4-X_{1}-X_{2})} + \frac{\mu}{X_{2}} = 0</math> <br><br />
<br />
Using the above equations, the following can be derived: <math> X_{1} = X_{2}</math> <br>
<br />
Hence the following can be concluded<br />
<br />
<math> 3 - \frac{\mu}{(4-2X_{1})} + \frac{\mu}{X_{1}} = 0 </math><br><br />
<br />
<br />
The above equation can be converted into a quadratic equation as below:<br />
<br />
<math> 6X_{1}^2 - 3X_{1}(4-\mu)-4\mu = 0</math><br><br />
<br />
<br />
The solution to the above quadratic equation can be written as below:<br />
<br />
<math> X_{1} = \frac{3(4-\mu)\pm(\sqrt{9(4-\mu)^2 + 96\mu} }{12} = X_{2}</math><br><br />
<br />
<br />
Taking only the positive root of the quadratic, since <math> X_{1} \geq 0 </math> and <math> X_{2} \geq 0</math>, we can solve for <math>X_{1}</math> and <math>X_{2}</math> for different values of <math>\mu</math>. The outcome of these iterations is listed in the table below. 
<br />
{| class="wikitable"<br />
|+ Objective & Barrier Function w.r.t <math>X_{1}</math>, <math>X_{2}</math> and <math>\mu</math><br />
|-<br />
! <math>\mu</math> !! <math>X_{1}</math> !! <math>X_{2}</math> !! <math>P(X, \mu)</math> !! <math>f(x)</math><br />
|-<br />
| 0 || 2 || 2 || 12 || 12<br />
|-<br />
| 0.01 || 1.998 || 1.998 || 11.947 || 11.990 <br />
|-<br />
| 0.1 || 1.984 || 1.984 || 11.697 || 11.902 <br />
|-<br />
| 1 || 1.859 || 1.859 || 11.128 || 11.152 <br />
|-<br />
| 10 || 1.486 || 1.486 || 17.114 || 8.916 <br />
|-<br />
| 100 || 1.351 || 1.351 || 94.357 || 8.105 <br />
|-<br />
| 1000 || 1.335 || 1.335 || 871.052 || 8.011 <br />
|}<br />
<br />
From the above table it can be seen that: <br />
# as <math>\mu</math> approaches zero, the barrier function becomes tight and approaches the original objective function; <br />
# at <math>\mu=0</math> the optimal solution is achieved.<br />
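The rows of the table can be reproduced with a short script. This is a sketch, not part of the original article; the function names are illustrative, and the formula is the positive root of the quadratic derived above.

```python
import math

def x_opt(mu):
    # Positive root of 6*X1^2 - 3*X1*(4 - mu) - 4*mu = 0
    return (3 * (4 - mu) + math.sqrt(9 * (4 - mu) ** 2 + 96 * mu)) / 12

def barrier(x, mu):
    # P(X, mu) evaluated at X1 = X2 = x; the log terms vanish when mu = 0
    f = 6 * x
    if mu == 0:
        return f
    return f + mu * (math.log(4 - 2 * x) + 2 * math.log(x))

for mu in [0, 0.01, 0.1, 1, 10, 100, 1000]:
    x = x_opt(mu)
    print(f"mu={mu:>7}: X1=X2={x:.3f}  P={barrier(x, mu):.3f}  f={6 * x:.3f}")
```

For example, the <math>\mu=1</math> row evaluates to <math>X_{1}=X_{2}\approx 1.859</math> and <math>f(x)\approx 11.152</math>, matching the table.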
<br />
<br />
Summary:<br />
Maximum Value of Objective function <math>=12</math> <br><br />
Optimal points <math>X_{1} = 2 </math> and <math>X_{2} = 2</math><br />
<br />
Newton's method can also be applied to solve linear programming problems, as indicated in the "Theory and Algorithm" section above. For linear programming problems such as the one in this "Numerical Example" section, the stationarity conditions reduce to a quadratic equation, as obtained above, and the method will converge in one iteration.<br />
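As an illustration of the idea, Newton's method can be run directly on the stationarity condition <math>\partial P/\partial X_{1} = 0</math> with <math>X_{1}=X_{2}=x</math>. The sketch below is not from the article; the starting point is an assumption, chosen inside the feasible interval <math>(0, 2)</math> near the solution, since plain Newton iteration can leave the barrier's domain from a poor start.

```python
import math

def newton_barrier(mu, x0=1.8, tol=1e-10, max_iter=50):
    """Newton's method on g(x) = dP/dX1 at X1 = X2 = x (illustrative sketch)."""
    x = x0
    for _ in range(max_iter):
        g = 3 - mu / (4 - 2 * x) + mu / x               # stationarity condition
        dg = -2 * mu / (4 - 2 * x) ** 2 - mu / x ** 2   # its derivative
        step = g / dg
        x -= step
        if abs(step) < tol:
            break
    return x

print(newton_barrier(1.0))   # ~1.859, matching the table row for mu = 1
print(newton_barrier(10.0))  # ~1.486, matching the table row for mu = 10
```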
<br />
== Applications ==<br />
Primal-dual interior-point (PDIP) methods are commonly used in optimal power flow (OPF), where the goal is to maximize user utility and minimize operational cost while satisfying operational and physical constraints. The solution to the OPF must be available to grid operators within minutes or seconds, due to changes and fluctuations in loads during power generation. A Newton-based primal-dual interior point method can achieve fast convergence in this OPF optimization problem. <ref> A. Minot, Y. M. Lu and N. Li, "A parallel primal-dual interior-point method for DC optimal power flow," 2016 Power Systems Computation Conference (PSCC), Genoa, 2016, pp. 1-7, doi: 10.1109/PSCC.2016.7540826. </ref><br />
<br />
Another application of the PDIP is for the minimization of losses and cost in the generation and transmission in hydroelectric power systems. <ref> L. M. Ramos Carvalho and A. R. Leite Oliveira, "Primal-Dual Interior Point Method Applied to the Short Term Hydroelectric Scheduling Including a Perturbing Parameter," in IEEE Latin America Transactions, vol. 7, no. 5, pp. 533-538, Sept. 2009, doi: 10.1109/TLA.2009.5361190. </ref> <br />
<br />
PDIP methods are also commonly used in image processing. One of these applications is image deblurring, in which the constrained deblurring problem is formulated in primal-dual form. The constrained primal-dual problem is then solved using a semi-smooth Newton's method. <ref> D. Krishnan, P. Lin and A. M. Yip, "A Primal-Dual Active-Set Method for Non-Negativity Constrained Total Variation Deblurring Problems," in IEEE Transactions on Image Processing, vol. 16, no. 11, pp. 2766-2777, Nov. 2007, doi: 10.1109/TIP.2007.908079. </ref><br />
<br />
PDIP methods can be utilized to obtain a general formula for the shape derivative of the potential energy describing the energy release rate for curvilinear cracks. Problems on cracks and their evolution have important applications in engineering and the mechanical sciences. <ref> V. A. Kovtunenko, Primal–dual methods of shape sensitivity analysis for curvilinear cracks with nonpenetration, IMA Journal of Applied Mathematics, Volume 71, Issue 5, October 2006, Pages 635–657 </ref><br />
<br />
== Conclusion ==<br />
<br />
The primal-dual interior point method is a good alternative to simplex methods for solving linear programming problems, and it shows superior performance and convergence on many large, complex problems. While simplex codes are faster on small to medium problems, primal-dual interior point methods are much faster on large problems.<br />
<br />
<br />
<br />
== References ==<br />
<references /></div>Wc593