{
    "batchcomplete": "",
    "continue": {
        "gapcontinue": "Signomial_problems",
        "continue": "gapcontinue||"
    },
    "warnings": {
        "main": {
            "*": "Subscribe to the mediawiki-api-announce mailing list at <https://lists.wikimedia.org/postorius/lists/mediawiki-api-announce.lists.wikimedia.org/> for notice of API deprecations and breaking changes."
        },
        "revisions": {
            "*": "Because \"rvslots\" was not specified, a legacy format has been used for the output. This format is deprecated, and in the future the new format will always be used."
        }
    },
    "query": {
        "pages": {
            "1284": {
                "pageid": 1284,
                "ns": 0,
                "title": "Sequential quadratic programming",
                "revisions": [
                    {
                        "contentformat": "text/x-wiki",
                        "contentmodel": "wikitext",
                        "*": "This web page is a duplicate of https://optimization.mccormick.northwestern.edu/index.php/Sequential_quadratic_programming\n\nAuthored by: Ben Goodman (ChE 345 Spring 2016) <br/>\nSteward: Dajun Yue and Fenqi You<br/>\n<br/>\n\n\n=Introduction=\nSequential quadratic programming (SQP) is a class of algorithms for solving non-linear optimization problems (NLP) in the real world. It is powerful enough for real problems because it can handle any degree of non-linearity including non-linearity in the constraints. The main disadvantage is that the method incorporates several derivatives, which likely need to be worked analytically in advance of iterating to a solution, so SQP becomes quite cumbersome for large problems with many variables or constraints. The method dates back to 1963 and was developed and refined in the 1970's .[1] SQP combines two fundamental algorithms for solving non-linear optimization problems: an active set method and Newton\u2019s method, both of which are explained briefly below. Previous exposure to the component methods as well as to Lagrangian multipliers and Karush-Kuhn-Tucker (KKT) conditions is helpful in understanding SQP. The abstracted, general problem below will be used for the remainder of this page to explain and discuss SQP:<br/>\n<math> \\text{min f(x)} </math> <br/>\n<math> \\text{s.t. h(x) = 0} </math> <br/>\n<math> \\text{and g(x)} \\le 0 </math> <br/>\n<br/>\nwith f(x), h(x), and g(x) each potentially non-linear. <math>x</math> is potentially a vector of many variables for the optimization, in which case h(x) and g(x) are systems.<br/>\n<br/>\n\n\n=Background: Prerequisite Methods=\n==Karush-Kuhn-Tucker (KKT) Conditions and the Lagrangian Function==\nThe Lagrangian function combines all the information about the problem into one function using Lagrangian multipliers <math>\\lambda</math> for equality constraints and <math>\\mu</math> for inequality constraints:\n<math>L(x,\\lambda,\\mu)</math><math>\\text{ = f(x) +}</math><math>\\sum_i</math><math>\\lambda_i h_i(x)+</math><math>\\sum_i</math><math>\\mu_i g_i(x)</math> <br/>\n<br/>\nA single function can be optimized by finding critical points where the gradient is zero. This procedure now includes <math>\\lambda</math> and <math>\\mu</math> as variables (which are vectors for multi-constraint NLP). The system formed from this gradient is given the label KKT conditions: <br/>\n<br/>\n<math>\\nabla L =</math><math>\\begin{bmatrix} \\frac{dL}{dx} \\\\ \\frac{dL}{d\\lambda} \\\\ \\frac{dL}{d\\mu} \\end{bmatrix} =</math>\n<math>\\begin{bmatrix} \\nabla f + \\lambda \\nabla h + \\mu \\nabla g^* \\\\ h \\\\ g^* \\end{bmatrix} </math><math>=0</math> <br/>\n<br/>\nThe second KKT condition is merely feasibility; h(x) were constrained to zero in the original NLP. The third KKT condition is a bit trickier in that only the set of active inequality constraints need satisfy this equality, the active set being denoted by <math>g^*</math>. Inequality constraints that are nowhere near the optimal solution are inconsequential, but constraints that actively participate in determining the optimal solution will be at their limit of zero, and thus the third KKT condition holds. Ultimately, the Lagrangian multipliers describe the change in the objective function with respect to a change in a constraint, so <math>\\mu</math> is zero for inactive constraints, so those inactive constraints can be considered removed from the Lagrangian function before the gradient is even taken. 
<br/>\n<br/>\n\n==The Active Set Method and its Limitations==\nThe active set method solves the KKT conditions using guess and check to find critical points. Guessing that every inequality constraint is inactive is conventionally the first step. After solving the remaining system for <math>x</math>, feasibility can be checked. If any constraints are violated, they should be considered active in the next iteration, and if any multipliers are found to be negative, their constraints should be considered inactive in the next iteration. Efficient convergence and potentially large systems of equations are of some concern, but the main limitation of the active set method is that many of the derivative expressions in the KKT conditions could still be highly non-linear and thus difficult to solve. Indeed, only quadratic problems seem reasonable to tackle with the active set method, because only then are the KKT conditions linear. Sequential quadratic programming addresses this key limitation by incorporating a means of handling highly non-linear functions: Newton's method.<br/>\n<br/>\n\n==Newton's Method==\nThe main idea behind Newton's method is to improve a guess in proportion to how quickly the function is changing at the guess and in inverse proportion to how the function is accelerating at the guess. Walking through a few extreme scenarios makes this approach more intuitive: a long, steep incline in a function will not be close to a critical point, so the improvement should be large, while a shallow incline that is rapidly flattening is likely to be near a critical point, so the improvement should be small. The iterations converge to critical points of any function <math>f</math> with improvement steps that follow the form below: <br/>\n<math>x_{k+1} = x_k - \\frac{\\nabla f}{\\nabla^2 f} </math> <br/>\n<br/>\nThe negative sign is important. Near minima, a positive gradient should decrease the guess and vice versa, and the second derivative is positive. Near maxima, a positive gradient should increase the guess and vice versa, and the second derivative is negative. This sign convention also prevents the algorithm from escaping a single convex or concave region; the improvement will reverse direction if it overshoots. This is an important consideration in non-convex problems with multiple local maxima and minima. Newton's method will find the critical point closest to the original guess. Incorporating Newton's method into the active set method will transform the iteration above into a matrix equation, as illustrated in the sketch below. <br/>\n<br/>
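\nBefore generalizing to the full constrained setting, the scalar iteration can be demonstrated directly (a minimal sketch, not from the original page; the test function is an illustrative choice):\n<pre>\nimport math\n\ndef newton_critical_point(fp, fpp, x0, tol=1e-10, max_iter=50):\n    # Step proportional to the slope fp and inversely proportional to the curvature fpp.\n    x = x0\n    for _ in range(max_iter):\n        step = fp(x) / fpp(x)\n        x -= step\n        if abs(step) < tol:\n            break\n    return x\n\n# f(x) = sin(x): starting near 1.5 converges to the maximum at pi/2\nprint(newton_critical_point(math.cos, lambda x: -math.sin(x), 1.5))\n</pre>\n<br/>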
\n\n=The SQP Algorithm=\nCritical points of the objective function will also be critical points of the Lagrangian function, and vice versa, because the Lagrangian function is equal to the objective function at a KKT point; all constraints are either equal to zero or inactive. The algorithm is thus simply iterating Newton's method to find critical points of the Lagrangian function. Since the Lagrangian multipliers are additional variables, the iteration forms a system:<br/>\n<math> \\begin{bmatrix} x_{k+1} \\\\ \\lambda_{k+1} \\\\ \\mu_{k+1} \\end{bmatrix} = \\begin{bmatrix} x_{k} \\\\ \\lambda_{k} \\\\ \\mu_{k} \\end{bmatrix} - (\\nabla^2 L_k)^{-1} \\nabla L_k</math> <br/>\n<br/>\n\nRecall: <math>\\nabla L = \\begin{bmatrix} \\frac{dL}{dx} \\\\ \\frac{dL}{d\\lambda} \\\\ \\frac{dL}{d\\mu} \\end{bmatrix} = \\begin{bmatrix} \\nabla f + \\lambda \\nabla h + \\mu \\nabla g^* \\\\ h \\\\ g^* \\end{bmatrix} </math> <br/>\n<br/>\nThen <math>\\nabla^2 L = \\begin{bmatrix} \\nabla_{xx}^2 L & \\nabla h & \\nabla g \\\\ \\nabla h & 0 & 0 \\\\ \\nabla g & 0 & 0 \\end{bmatrix} </math> <br/>\n<br/>\nUnlike the active set method, the need to ever solve a system of non-linear equations has been entirely eliminated in SQP, no matter how non-linear the objective and constraints. Theoretically, if the derivative expressions above can be formulated analytically and then coded, software could iterate very quickly because the system doesn't change. In practice, however, it is likely that this Hessian matrix will not be invertible, because variables are likely to be linearly bound from above and below. The improvement direction \"p\" for the Newton's method iterations is thus typically found in a more indirect fashion: with a quadratic minimization sub-problem that is solved using quadratic algorithms. The sub-problem is derived as follows:<br/>\n<br/>\n<math> p = \\frac{\\nabla L}{\\nabla^2 L} = \\frac{(\\nabla L)p}{(\\nabla^2 L)p}</math> <br/>\n<br/>\nSince p is an incremental change to the objective function, this equation resembles a two-term Taylor series for the derivative of the objective function, which shows that a Taylor expansion with the increment p as a variable is equivalent to a Newton iteration. Decomposing the different equations within this system and cutting the second-order term in half to match Taylor series conventions, a minimization sub-problem can be obtained. This sub-problem is quadratic and thus must be solved with non-linear methods, which once again introduces the need to solve a non-linear problem into the algorithm, but this predictable, structured sub-problem is much easier to tackle than the parent problem. <br/>\n<math>\\min_p \\text{   } f_k + \\nabla f_k^T p + \\frac{1}{2}p^T\\nabla_{xx}^2L_k p</math> <br/>\n<math>\\text{s.t.     }  \\nabla h_k p + h_k = 0</math> <br/>\n<math>\\text{ and     } \\nabla g_k p + g_k \\le 0</math> <br/>\n<br/>
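\nWith the active inequality set fixed, the equality-constrained subproblem reduces to a single linear KKT system. The sketch below (not from the original page; NumPy and the small numbers are assumptions, reusing the toy problem from the earlier sketch at the iterate <math>(1, 1)</math>) solves one such system:\n<pre>\nimport numpy as np\n\nH = np.array([[2.0, 0.0], [0.0, 2.0]])   # Hessian of the Lagrangian of x^2 + y^2\ng = np.array([2.0, 2.0])                  # gradient of f at the iterate (1, 1)\nA = np.array([[1.0, 1.0]])                # Jacobian of h(x) = x + y - 1\nh = np.array([1.0])                       # h evaluated at the iterate\nK = np.block([[H, A.T], [A, np.zeros((1, 1))]])\np_lam = np.linalg.solve(K, np.concatenate([-g, -h]))\nprint(p_lam[:2])   # SQP step p = (-0.5, -0.5): lands on the optimum (0.5, 0.5)\n</pre>\n<br/>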
\n\n==Example Problem==\n[[File:Wiki_Inspection.jpg|frame|Figure 1: Solution of Example Problem by Inspection.]]\n\n<math> \\text{ Max Z = sin(u)cos(v) + cos(u)sin(v)}</math> <br/>\n<math> \\text{ s.t.  } 0 \\le \\text{(u+v)} \\le \\pi</math> <br/>\n<math> \\text{       } u = v^3 </math> <br/>\n<br/>\n\nThis example problem was chosen for being highly non-linear but also easy to solve by inspection, for reference. The objective function Z is a trigonometric identity: <br/>\n<math> \\text{ sin(u)cos(v) + cos(u)sin(v) = sin(u+v)}</math> <br/>\n<br/>\nThe first constraint then just restricts the feasible zone to the first half of a period of the sine function, making the problem convex. The maximum of the sine function within this region occurs at <math>\\frac{\\pi}{2}</math>, as shown in Figure 1. The last constraint then makes the problem easy to solve algebraically: <br/>\n<math> u + v = \\frac{\\pi}{2}</math> <br/>\n<math> u = v^3 </math> <br/>\n<math>v^3 + v = \\frac{\\pi}{2}</math> <br/>\n<math>v = 0.883</math> and <math>u = v^3 = 0.688</math> <br/>\n<br/>\nNow, the problem will be solved using the sequential quadratic programming algorithm. The Lagrangian function with its gradient and Hessian is as follows: <br/>\n<math> L = sin(u)cos(v) + cos(u)sin(v) + \\lambda_1 (u - v^3) + \\mu_1 ( -u - v) + \\mu_2 (u + v - \\pi)</math> <br/>\n<br/>\n\n<math>\\nabla L = \\begin{bmatrix} \\frac{dL}{du} \\\\ \\frac{dL}{dv} \\\\ \\frac{dL}{d\\lambda_1} \\\\ \\frac{dL}{d\\mu_1} \\\\ \\frac{dL}{d\\mu_2} \\end{bmatrix} = \\begin{bmatrix} cos(v)cos(u) - sin(u)sin(v) + \\lambda_1 - \\mu_1 + \\mu_2 \\\\ cos(v)cos(u) - sin(u)sin(v) - 3\\lambda_1v^2 - \\mu_1 + \\mu_2 \\\\ u - v^3 \\\\ -u - v \\\\ u + v - \\pi \\end{bmatrix} </math> <br/>\n<br/>\n\n<math>\\nabla^2 L = \\begin{bmatrix} -Z & -Z & 1 & -1 & 1 \\\\ -Z & -(Z + 6\\lambda_1v) & -3v^2 & -1 & 1 \\\\ 1 & -3v^2 & 0 & 0 & 0 \\\\ -1 & -1 & 0 & 0 & 0 \\\\ 1 & 1 & 0 & 0 & 0 \\end{bmatrix} </math> \nwith <math>Z = sin(u)cos(v) + cos(u)sin(v)</math><br/>\n<br/>\n\n[[File:Code_for_Wiki2.JPG|frame|Figure 2: MATLAB program for performing sequential Newton steps on the quadratic subproblem.]]\n\nThe first important limitation in using SQP is now apparent: the Hessian matrix is not invertible because it is not full rank. We will switch to the quadratic minimization sub-problem above, but even with this alternate framework, the gradient of the constraints must be full rank. This can be handled for now by artificially constraining the problem a bit further so that the derivatives of the inequality constraints are not linearly dependent, through a small modification to the <math>\\mu_2</math> constraint: replacing <math> u + v \\le \\pi</math> with the slightly tighter constraint <math> u + v + \\exp(-7u) \\le \\pi</math>. The added term is close to zero over the range of possible values of <math>u</math>, so the feasible region barely changes. This complication is certainly annoying for a problem that's easily solved by inspection; it illustrates that SQP is truly best for problems with highly non-linear objectives and constraints. Now, the problem is ready to be solved. The MATLAB code in Figure 2 was implemented, using the function fmincon to solve the minimization subproblems; fmincon itself implements SQP. In each step, the incumbent guess is plugged into the gradient, Hessian, and constraint arrays, which then become parameters for the minimization problem. <br/>\n<br/>
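\nSince the MATLAB listing survives only as an image, a rough Python analogue is sketched below (an assumption, not the author's code; SciPy's SLSQP solver is itself a sequential quadratic programming implementation):\n<pre>\nimport numpy as np\nfrom scipy.optimize import minimize\n\n# Maximize Z = sin(u)cos(v) + cos(u)sin(v) by minimizing -Z.\nz = lambda w: -(np.sin(w[0]) * np.cos(w[1]) + np.cos(w[0]) * np.sin(w[1]))\ncons = (\n    {'type': 'eq',   'fun': lambda w: w[0] - w[1]**3},       # u = v^3\n    {'type': 'ineq', 'fun': lambda w: w[0] + w[1]},          # u + v >= 0\n    {'type': 'ineq', 'fun': lambda w: np.pi - w[0] - w[1]},  # u + v <= pi\n)\nres = minimize(z, [0.5, 0.5], method='SLSQP', constraints=cons)\nprint(res.x)   # approximately [0.688, 0.883], matching the solution by inspection\n</pre>\n<br/>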
\n\n=Conclusion=\nSQP is powerful enough to be used in commercial software but is also burdened by some intricacy. In addition to the complication of needing full-rank constraint gradients, the Hessian matrix can be very difficult or laborious to assemble analytically. Commercial SQP packages include checks for the feasibility of the sub-problem in order to account for rank deficiencies. In addition to fmincon, SNOPT and FILTERSQP are two other commercial SQP packages, and each uses a different non-linear method to solve the quadratic subproblem.[1] [https://optimization.mccormick.northwestern.edu/index.php/Line_search_methods Line search methods] and [https://optimization.mccormick.northwestern.edu/index.php/Trust-region_methods trust-region methods] are trusted options for this step, and sub-gradient methods have also been proposed. The other common modification to SQP (ubiquitous in commercial packages) is to invoke [https://optimization.mccormick.northwestern.edu/index.php/Quasi-Newton_methods quasi-Newton methods] in order to avoid computing the Hessian entirely. SQP is thus very much a family of algorithms rather than a stand-alone tool for optimization. At its core, it is a method for turning large, very non-linear problems into a sequence of small quadratic problems to reduce the computational expense of solving them.\n\n=Sources=\n[1] Nocedal, J. and Wright, S. Numerical Optimization, 2nd ed., Ch. 18. Springer, 2006. <br/>\n[2] You, Fengqi. Lecture Notes, Chemical Engineering 345 Optimization. Northwestern University, 2015. <br/>"
                    }
                ]
            },
            "796": {
                "pageid": 796,
                "ns": 0,
                "title": "Set covering problem",
                "revisions": [
                    {
                        "contentformat": "text/x-wiki",
                        "contentmodel": "wikitext",
                        "*": "Authors: Sherry Liang, Khalid Alanazi, Kumail Al Hamoud (ChemE 6800 Fall 2020)\n\n== Introduction ==\n\nThe set covering problem is a significant NP-hard problem in combinatorial optimization. Given a collection of elements, the set covering problem aims to find the minimum number of sets that incorporate (cover) all of these elements. <ref name=\"one\"> T. Grossman and A. Wool, [https://www.sciencedirect.com/science/article/abs/pii/S0377221796001610 \"Computational experience with approximation algorithms for the set covering problem],\" ''European Journal of Operational Research'', vol. 101, pp. 81-92, 1997. </ref>\n\nThe set covering problem importance has two main aspects: one is pedagogical, and the other is practical. \n\nFirst, because many greedy approximation methods have been proposed for this combinatorial problem, studying it gives insight into the use of approximation algorithms in solving NP-hard problems. Thus, it is a primal example in teaching computational algorithms. We present a preview of these methods in a later section, and we refer the interested reader to these references for a deeper discussion. <ref name=\"one\" /> <ref name=\"seven\"> P. Slav\u0131\u0301k, [https://www.sciencedirect.com/science/article/abs/pii/S0196677497908877 \"A Tight Analysis of the Greedy Algorithm for Set Cover],\" ''Journal of Algorithms,'', vol. 25, pp. 237-245, 1997. </ref> <ref name=\"nine\"> T. Grossman and A. Wool, [https://www.sciencedirect.com/science/article/abs/pii/S0377221796001610 \"What Is the Best Greedy-like Heuristic for the Weighted Set Covering Problem?],\" ''Operations Research Letters'', vol. 44, pp. 366-369, 2016. </ref>\n\nSecond, many problems in different industries can be formulated as set covering problems. For example, scheduling machines to perform certain jobs can be thought of as covering the jobs. Picking the optimal location for a cell tower so that it covers the maximum number of customers is another set covering application. Moreover, this problem has many applications in the airline industry, and it was explored on an industrial scale as early as the 1970s. <ref name=\"two\"> J. Rubin, [https://www.jstor.org/stable/25767684?seq=1 \"A Technique for the Solution of Massive Set Covering Problems, with Application to Airline Crew Scheduling],\" ''Transportation Science'', vol. 7, pp. 34-48, 1973. </ref>\n\n== Problem formulation ==\nIn the set covering problem, two sets are given: a set <math> U </math> of elements and a set <math> S </math> of subsets of the set <math> U </math>. Each subset in <math> S </math> is associated with a predetermined cost, and the union of all the subsets covers the set <math> U </math>. This combinatorial problem then concerns finding the optimal number of subsets whose union covers the universal set while minimizing the total cost.<ref name=\"one\"> T. Grossman and A. Wool, [https://www.sciencedirect.com/science/article/abs/pii/S0377221796001610 \"Computational experience with approximation algorithms for the set covering problem],\" ''European Journal of Operational Research'', vol. 101, pp. 81-92, 1997. </ref> <ref name=\"twelve\">  Williamson, David P., and David B. Shmoys. \u201cThe Design of Approximation Algorithms\u201d [https://www.designofapproxalgs.com/book.pdf]. \u201cCambridge University Press\u201d, 2011. </ref>\n\nThe mathematical formulation of the set covering problem is define as follows. 
We define <math> U </math> = { <math> u_1,..., u_m </math>} as the universe of elements and <math> S </math> = { <math> s_1,..., s_n </math>} as a collection of subsets such that <math> s_i \\subset U </math> and the union of the <math> s_i</math> covers all elements in <math> U </math> (i.e. <math>\\cup_i s_i = U </math>). Additionally, each set <math> s_i</math> must cover at least one element of <math> U </math> and has an associated cost <math> c_i</math> such that <math> c_i > 0</math>. The objective is to find the minimum-cost sub-collection of sets <math> X \\subset S </math> that covers all the elements in the universe <math> U </math>.\n\n== Integer linear program formulation ==\nAn integer linear program (ILP) model can be formulated for the minimum set covering problem as follows:<ref name=\"one\" />\n\n'''Decision variables'''\n\n<math> y_i = \\begin{cases} 1, & \\text{if subset }i\\text{ is selected} \\\\ 0, & \\text{otherwise } \\end{cases}</math>\n\n'''Objective function'''\n\nminimize <math>\\sum_{i=1}^n c_i y_i</math> \n\n'''Constraints '''\n\n<math> \\sum_{i : u_j \\in s_i} y_i \\geq 1, \\quad \\forall j = 1, \\dots, m</math>  \n\n<math> y_i \\in \\{0, 1\\}, \\quad \\forall i = 1, \\dots, n</math>  \n\nThe objective function <math>\\sum_{i=1}^n c_i y_i</math> minimizes the total cost of the selected subsets <math> s_i</math>. The first constraint implies that every element <math> u_j </math> in the universe <math> U </math> must be covered by at least one selected subset, and the second constraint <math> y_i \\in \\{0, 1\\} </math> indicates that the decision variables are binary, which means that every set is either in the set cover or not.\n\nSet covering problems are significant NP-hard optimization problems, which means that no polynomial-time algorithm is known for them and the computational time to solve them grows rapidly with problem size. Therefore, there exist approximation algorithms that can solve large-scale problems in polynomial time with near-optimal solutions. In subsequent sections, we will cover two of the most widely used approximation methods for solving the set cover problem in polynomial time: linear program relaxation methods and classical greedy algorithms. <ref name=\"seven\" />
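\n\nTo make the ILP concrete, the sketch below (not from the original article; the tiny instance and unit costs are illustrative) enumerates every binary vector <math>y</math> and keeps the cheapest feasible cover:\n<pre>\nfrom itertools import product\n\nU = {1, 2, 3, 4, 5}\nS = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]   # subsets s_i\nc = [1, 1, 1, 1]                           # costs c_i\n\nbest = None\nfor y in product([0, 1], repeat=len(S)):   # all 0/1 assignments of y_i\n    covered = set().union(*(S[i] for i in range(len(S)) if y[i]))\n    if covered == U:                       # coverage constraint for every element\n        cost = sum(c[i] * y[i] for i in range(len(S)))\n        if best is None or cost < best[0]:\n            best = (cost, y)\nprint(best)   # -> (2, (1, 0, 0, 1)): pick s_1 and s_4\n</pre>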
\n== Approximation via LP relaxation and rounding ==\nSet covering is a classical integer programming problem, and solving integer programs in general is NP-hard. Therefore, one approach to achieving an <math> O(\\log n)</math> approximation to the set covering problem in polynomial time is solving via linear programming (LP) relaxation algorithms. <ref name=\"one\" /> <ref name=\"twelve\" /> In LP relaxation, we relax the integrality requirement into linear constraints. For instance, if we replace the constraints <math> y_i \\in \\{0, 1\\}</math> with the constraints <math> 0 \\leq y_i \\leq 1 </math>, we obtain the following LP problem that can be solved in polynomial time:\n\nminimize  <math>\\sum_{i=1}^n c_i y_i</math> \n\nsubject to <math> \\sum_{i : u_j \\in s_i} y_i \\geq 1, \\quad \\forall j = 1, \\dots, m</math>  \n\n<math> 0 \\leq y_i\\leq 1, \\quad \\forall i = 1, \\dots, n</math>\n\nThe above LP formulation is a relaxation of the original ILP set cover problem. This means that every feasible solution of the integer program is also feasible for this LP program. Additionally, any feasible solution of the integer program has the same objective value in the LP, since the objective functions of the integer and linear programs are identical. Solving the LP therefore yields an optimal value that is a lower bound for the original integer program, since the LP minimizes over a larger feasible set. Moreover, we can use LP rounding algorithms to directly round the fractional LP solution to an integral combinatorial solution, as follows:\n<br>\n\n\n'''Deterministic rounding algorithm''' \n<br>\n\nSuppose we have an optimal solution <math> z^* </math> for the linear programming relaxation of the set cover problem. We round the fractional solution <math> z^* </math> to an integer solution <math> z </math> using an LP rounding algorithm. In general, there are two approaches to rounding algorithms, deterministic and randomized; in this section, we explain the deterministic one. In this approach, we include subset <math> s_i </math> in our solution if <math> z_i^* \\geq 1/d </math>, where <math> d </math> is the maximum number of sets in which any element appears. In practice, we set each <math> z_i </math> as follows:<ref name=\"twelve\" />\n\n<math> z_i = \\begin{cases} 1, & \\text{if } z_i^*\\geq 1/d \\\\ 0, & \\text{otherwise } \\end{cases}</math>\n\nThe rounding algorithm is an approximation algorithm for the set cover problem. It is clear that the algorithm converges in polynomial time and that <math> z </math> is a feasible solution to the integer program.
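\n\nA minimal sketch of the deterministic rounding rule (not from the original article; the instance and the fractional solution are illustrative):\n<pre>\nS = [{1, 2}, {2, 3}, {3, 4}, {1, 4}]            # subsets; universe U below\nU = {1, 2, 3, 4}\nd = max(sum(e in s for s in S) for e in U)      # every element lies in d = 2 sets\nz_star = [0.5, 0.5, 0.5, 0.5]                   # an optimal LP solution, value 2\nz = [1 if zi >= 1.0 / d else 0 for zi in z_star]\nprint(z)   # -> [1, 1, 1, 1]: a feasible cover of cost 4 <= d * (LP value)\n</pre>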
\n\n== Greedy approximation algorithm ==\nGreedy algorithms can be used to find optimal or near-optimal solutions for large-scale set covering instances in polynomial time. <ref name=\"seven\" /> <ref name=\"nine\" /> The greedy heuristic applies an iterative process that, at each stage, selects the set covering the largest number of still-uncovered elements of the universe and marks those elements as covered, until all elements are covered. <ref name=\"ten\"> V. Chvatal, [https://pubsonline.informs.org/doi/abs/10.1287/moor.4.3.233 \"Greedy Heuristic for the Set-Covering Problem],\" ''Mathematics of Operations Research'', vol. 4, pp. 233-235, 1979. </ref> Let <math> T </math> collect the selected sets, and let <math> U </math> contain the elements that are still uncovered. At the beginning, <math> T </math> is empty and every element is in <math> U </math>. We iteratively select the set in <math> S </math> that covers the largest number of elements in <math> U </math>, add it to <math> T </math>, and remove the elements it covers from <math> U </math>. An example of this algorithm is presented below. \n\n'''Greedy algorithm for minimum set cover example: '''\n\nStep 0: <math> \\quad  </math> <math> T \\leftarrow \\emptyset </math>         <math> \\quad \\quad \\quad \\quad \\quad  </math>    { <math> T </math> stores the selected sets }\n\nStep 1: <math> \\quad  </math> '''While''' <math> U \\neq \\emptyset </math> '''do:'''            <math> \\quad   </math>    { <math> U </math> stores the still-uncovered elements }\n\nStep 2:       <math> \\quad \\quad \\quad </math>  select <math> s_i \\in S </math> that covers the highest number of elements in <math> U </math>\n\nStep 3:      <math> \\quad \\quad \\quad </math>   add <math> s_i </math> to <math> T </math>\n\nStep 4:      <math> \\quad \\quad \\quad </math>   remove the elements of <math> s_i </math> from <math> U </math>\n\nStep 5: <math> \\quad  </math> '''End while'''         \n\nStep 6: <math> \\quad  </math> '''Return''' <math> T </math>\n\n==Numerical Example==\nLet\u2019s consider a simple example where we assign cameras at different locations. Each location covers some stadium areas, and our goal is to install the fewest cameras such that all stadium areas are covered. We have stadium areas numbered 1 to 15 and possible camera locations numbered 1 to 8.\n\nWe are given that camera location 1 covers stadium areas {1,3,4,6,7} and camera location 2 covers stadium areas {4,7,8,12}, while the remaining camera locations and the stadium areas they can cover are given in Table 1 below:\n{| class=\"wikitable\"\n|+Table 1 Camera Location vs Stadium Area\n|-\n!Camera location\n|1\n|2\n|3\n|4\n|5\n|6\n|7\n|8\n|-\n!Stadium areas\n|1,3,4,6,7\n|4,7,8,12\n|2,5,9,11,13\n|1,2,14,15\n|3,6,10,12,14\n|8,14,15\n|1,2,6,11\n|1,2,4,6,8,12\n|}\n\nWe can then represent the above information using binary values. If stadium area <math>i</math> can be covered by camera location <math>j</math>, then <math>y_{ij} = 1</math>; if not, <math>y_{ij} = 0</math>. For instance, stadium area 1 is covered by camera location 1, so <math>y_{11} = 1</math>, while stadium area 1 is not covered by camera location 2, so <math>y_{12} = 0</math>. The binary variable <math>y_{ij}</math> values are given in the table below: \n{| class=\"wikitable\"\n|+Table 2 Binary Table (All Camera Locations and Stadium Areas)\n!\n!Camera1\n!Camera2\n!Camera3\n!Camera4\n!Camera5\n!Camera6\n!Camera7\n!Camera8\n|-\n!Stadium1\n|1\n|\n|\n|1\n|\n|\n|1\n|1\n|-\n!Stadium2\n|\n|\n|1\n|1\n|\n|\n|1\n|1\n|-\n!Stadium3\n|1\n|\n|\n|\n|1\n|\n|\n|\n|-\n!Stadium4\n|1\n|1\n|\n|\n|\n|\n|\n|1\n|-\n!Stadium5\n|\n|\n|1\n|\n|\n|\n|\n|\n|-\n!Stadium6\n|1\n|\n|\n|\n|1\n|\n|1\n|1\n|-\n!Stadium7\n|1\n|1\n|\n|\n|\n|\n|\n|\n|-\n!Stadium8\n|\n|1\n|\n|\n|\n|1\n|\n|1\n|-\n!Stadium9\n|\n|\n|1\n|\n|\n|\n|\n|\n|-\n!Stadium10\n|\n|\n|\n|\n|1\n|\n|\n|\n|-\n!Stadium11\n|\n|\n|1\n|\n|\n|\n|1\n|\n|-\n!Stadium12\n|\n|1\n|\n|\n|1\n|\n|\n|1\n|-\n!Stadium13\n|\n|\n|1\n|\n|\n|\n|\n|\n|-\n!Stadium14\n|\n|\n|\n|1\n|1\n|1\n|\n|\n|-\n!Stadium15\n|\n|\n|\n|1\n|\n|1\n|\n|\n|}\n\n\n\nWe introduce another binary variable <math>z_j</math> to indicate whether a camera is installed at location <math>j</math>: <math>z_j = 1</math> if a camera is installed at location <math>j</math>, and <math>z_j = 0</math> if not. \n\nOur objective is to minimize <math>\\sum_{j=1}^8 z_j</math>. For each stadium area <math>i</math>, there is a constraint that it has to be covered by at least one camera location. 
For instance, for stadium area 1, we have <math>z_1 + z_4 + z_7 + z_8 \\geq 1</math>, while for stadium area 2, we have <math>z_3 + z_4 + z_7 + z_8 \\geq 1</math>. All 15 constraints, corresponding to the 15 stadium areas, are listed below:\n\n\n\nminimize <math>\\sum_{j=1}^8 z_j</math> \n\n''s.t. constraints (1) to (15) are satisfied:''\n\n<math> z_1 + z_4 + z_7 + z_8 \\geq 1 </math> (1)\n\n<math> z_3 + z_4 + z_7 + z_8 \\geq 1 </math> (2)\n\n<math> z_1 + z_5 \\geq 1 </math> (3)\n\n<math> z_1 + z_2 + z_8 \\geq 1 </math> (4)\n\n<math> z_3 \\geq 1 </math> (5)\n\n<math>z_1 + z_5 + z_7 + z_8 \\geq 1 </math> (6)\n\n<math>z_1 + z_2 \\geq 1 </math> (7)\n\n<math>z_2 + z_6 + z_8 \\geq 1 </math> (8)\n\n<math>z_3 \\geq 1 </math> (9)\n\n<math>z_5 \\geq 1 </math> (10)\n\n<math>z_3 + z_7 \\geq 1 </math> (11)\n\n<math>z_2 + z_5 + z_8 \\geq 1 </math> (12)\n\n<math>z_3 \\geq 1 </math> (13)\n\n<math>z_4 + z_5 + z_6 \\geq 1 </math> (14)\n\n<math>z_4 + z_6 \\geq 1 </math> (15)\n\n\nFrom constraints (5), (9), and (13), we obtain <math>z_3 = 1</math>. Thus we no longer need constraints (2) and (11), as they are satisfied when <math>z_3 = 1</math>. With <math>z_3 = 1</math> determined, the constraints left are:\n\n\nminimize <math>\\sum_{j=1}^8 z_j</math>, \n\ns.t.:\n\n<math>z_1 + z_4 + z_7 + z_8 \\geq 1 </math> (1)\n\n<math>z_1 + z_5 \\geq 1 </math> (3)\n\n<math>z_1 + z_2 + z_8 \\geq 1 </math> (4)\n\n<math>z_1 + z_5 + z_7 + z_8 \\geq 1 </math> (6)\n\n<math>z_1 + z_2 \\geq 1 </math> (7)\n\n<math>z_2 + z_6 + z_8 \\geq 1 </math> (8)\n\n<math>z_5 \\geq 1 </math> (10)\n\n<math>z_2 + z_5 + z_8 \\geq 1 </math> (12)\n\n<math>z_4 + z_5 + z_6 \\geq 1 </math> (14)\n\n<math>z_4 + z_6 \\geq 1 </math> (15)\n\n\nNow consider constraint (10), <math>z_5 \\geq 1</math>, which forces <math>z_5 = 1</math>. With <math>z_5 = 1</math>, constraints (3), (6), (12), and (14) are satisfied no matter what values the other <math>z</math> variables take. Comparing constraints (7) and (4), constraint (4) is satisfied whenever constraint (7) is, since the <math>z</math> values are nonnegative, so constraint (4) is no longer needed. The remaining constraints are:\n\n\nminimize <math>\\sum_{j=1}^8 z_j</math>\n\ns.t.:\n\n<math>z_1 + z_4 + z_7 + z_8 \\geq 1 </math> (1)\n\n<math>z_1 + z_2 \\geq 1 </math> (7)\n\n<math>z_2 + z_6 + z_8 \\geq 1 </math> (8)\n\n<math>z_4 + z_6 \\geq 1 </math> (15)\n\n\nThe next step is to focus on constraints (7) and (15). 
Constraint (7) forces <math>z_1 = 1</math> or <math>z_2 = 1</math>, and constraint (15) forces <math>z_4 = 1</math> or <math>z_6 = 1</math>, which gives four minimal combinations of <math>z_1, z_2, z_4, z_6</math> values.\n\n\n<math>A: z_1 = 1, z_2 = 0, z_4 = 1, z_6 = 0</math>\n\n<math>B: z_1 = 1, z_2 = 0, z_4 = 0, z_6 = 1</math>\n\n<math>C: z_1 = 0, z_2 = 1, z_4 = 1, z_6 = 0</math>\n\n<math>D: z_1 = 0, z_2 = 1, z_4 = 0, z_6 = 1</math>\n\n\nWe can then examine each combination and determine the <math>z_7, z_8</math> values needed for constraints (1) and (8) to be satisfied.\n\n\nCombination <math>A</math>: constraint (1) is already satisfied; we need <math>z_8 = 1</math> to satisfy constraint (8).\n\nCombination <math>B</math>: constraints (1) and (8) are already satisfied.\n\nCombination <math>C</math>: constraints (1) and (8) are already satisfied.\n\nCombination <math>D</math>: we need <math>z_7 = 1</math> or <math>z_8 = 1</math> to satisfy constraint (1); constraint (8) is already satisfied.\n\nOur final step is to compare the four combinations. Since our objective is to minimize <math>\\sum_{j=1}^8 z_j</math> and combinations <math>B</math> and <math>C</math> require the fewest <math>z_j</math> to equal 1, they are the optimal solutions.\n\nTo conclude, our two solutions are:\n\nSolution 1: <math>z_1 = 1, z_3 = 1, z_5 = 1, z_6 = 1</math>\n\nSolution 2: <math>z_2 = 1, z_3 = 1, z_4 = 1, z_5 = 1</math>\n\nThe minimum number of cameras that we need to install is 4.\n\n\n\n\n'''Let's now consider solving the problem using the greedy algorithm.''' \n\nWe have a set <math>U</math> (stadium areas) that needs to be covered with <math>C</math> (camera locations). \n\n\n<math>U = \\{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15\\}</math>\n\n<math>C = \\{C_1,C_2,C_3,C_4,C_5,C_6,C_7,C_8\\}</math>\n\n<math>C_1 = \\{1,3,4,6,7\\} </math>\n\n<math>C_2 = \\{4,7,8,12\\}</math>\n\n<math>C_3 = \\{2,5,9,11,13\\}</math>\n\n<math>C_4 = \\{1,2,14,15\\}</math>\n\n<math>C_5 = \\{3,6,10,12,14\\}</math>\n\n<math>C_6 = \\{8,14,15\\}</math>\n\n<math>C_7 = \\{1,2,6,11\\}</math>\n\n<math>C_8 = \\{1,2,4,6,8,12\\} </math>\n\n\nThe cost of each camera location is the same in this case, and we simply want to minimize the total number of cameras used, so we can take the cost of each <math>C_j</math> to be 1.\n\nLet <math>I</math> represent the set of elements covered so far. 
Initialize <math>I</math> to be empty.\n\nFirst iteration: \n\nThe per-new-element cost for <math>C_1 = 1/5</math>, for <math>C_2 = 1/4</math>, for <math>C_3 = 1/5</math>, for <math>C_4 = 1/4</math>, for <math>C_5 = 1/5</math>, for <math>C_6 = 1/3</math>, for <math>C_7 = 1/4</math>, for <math>C_8 = 1/6</math>.\n\nSince <math>C_8</math> has the minimum value, <math>C_8</math> is added, and <math>I</math> becomes <math>\\{1,2,4,6,8,12\\}</math>.\n\nSecond iteration: \n\n<math>I</math> = <math>\\{1,2,4,6,8,12\\}</math>\n\nThe per-new-element cost for <math>C_1 = 1/2</math>, for <math>C_2 = 1/1</math>, for <math>C_3 = 1/4</math>, for <math>C_4 = 1/2</math>, for <math>C_5 = 1/3</math>, for <math>C_6 = 1/2</math>, for <math>C_7 = 1/1</math>.\n\nSince <math>C_3</math> has the minimum value, <math>C_3</math> is added, and <math>I</math> becomes <math>\\{1,2,4,5,6,8,9,11,12,13\\}</math>.\n\nThird iteration:\n\n<math>I</math> = <math>\\{1,2,4,5,6,8,9,11,12,13\\}</math>\n\nThe per-new-element cost for <math>C_1 = 1/2</math>, for <math>C_2 = 1/1</math>, for <math>C_4 = 1/2</math>, for <math>C_5 = 1/3</math>, for <math>C_6 = 1/2</math>, for <math>C_7 = 1/1</math>.\n\nSince <math>C_5</math> has the minimum value, <math>C_5</math> is added, and <math>I</math> becomes <math>\\{1,2,3,4,5,6,8,9,10,11,12,13,14\\}</math>.\n\nFourth iteration:\n\n<math>I</math> = <math>\\{1,2,3,4,5,6,8,9,10,11,12,13,14\\}</math>\n\nThe uncovered elements are now <math>\\{7, 15\\}</math>. The per-new-element cost for <math>C_1 = 1/1</math>, for <math>C_2 = 1/1</math>, for <math>C_4 = 1/1</math>, for <math>C_6 = 1/1</math>, while <math>C_7</math> covers no new elements. <math>C_1</math> or <math>C_2</math> adds element <math>7</math> to <math>I</math>, and <math>C_4</math> or <math>C_6</math> adds element <math>15</math>, so two more sets are needed; we take <math>C_6</math> together with either <math>C_1</math> or <math>C_2</math> (choosing <math>C_4</math> instead of <math>C_6</math> would work equally well).\n\n<math>I</math> becomes <math>\\{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15\\}</math>.\n\nThe solution we obtained is: \n\nOption 1: <math>C_8</math> + <math>C_3</math> + <math>C_5</math> + <math>C_6</math> + <math>C_1</math>\n\nOption 2: <math>C_8</math> + <math>C_3</math> + <math>C_5</math> + <math>C_6</math> + <math>C_2</math>\n\nThe greedy algorithm does not provide the optimal solution in this case: the elimination approach above shows that the minimum number of cameras we need to install is 4, but the greedy algorithm installs 5.\n
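\nThe greedy walkthrough above can be reproduced directly (a sketch, not from the original article; tie-breaking order is an implementation choice, so the fifth pick may differ from the hand calculation):\n<pre>\nU = set(range(1, 16))                      # stadium areas\nC = {\n    1: {1, 3, 4, 6, 7},  2: {4, 7, 8, 12},      3: {2, 5, 9, 11, 13},\n    4: {1, 2, 14, 15},   5: {3, 6, 10, 12, 14}, 6: {8, 14, 15},\n    7: {1, 2, 6, 11},    8: {1, 2, 4, 6, 8, 12},\n}\nuncovered, chosen = set(U), []\nwhile uncovered:\n    # lowest per-new-element cost = most newly covered areas (unit costs)\n    best = max(C, key=lambda j: len(C[j] & uncovered))\n    chosen.append(best)\n    uncovered -= C[best]\nprint(chosen)   # -> [8, 3, 5, 1, 4]: five cameras, one more than optimal\n</pre>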
\n== Applications==\n\nThe set covering problem spans a wide range of applications, and its usefulness is especially evident in industrial and governmental planning. Variations of the set covering problem that are of practical significance include the following.\n;The optimal location problem\n\nThis set covering problem is concerned with maximizing the coverage of some public facilities placed at different locations. <ref name=\"three\"> R. Church and C. ReVelle, [https://link.springer.com/article/10.1007/BF01942293 \"The maximal covering location problem],\" ''Papers of the Regional Science Association'', vol. 32, pp. 101-118, 1974. </ref> Consider the problem of placing fire stations to serve the towns of some city. <ref name=\"four\"> E. Akta\u015f, \u00d6. \u00d6zayd\u0131n, B. Bozkaya, F. \u00dclengin, and \u015e. \u00d6nsel, [https://pubsonline.informs.org/doi/10.1287/inte.1120.0671 \"Optimizing Fire Station Locations for the Istanbul Metropolitan Municipality],\" ''Interfaces'', vol. 43, pp. 240-255, 2013. </ref> If each fire station can serve its town and all adjacent towns, we can formulate a set covering problem where each subset consists of a set of adjacent towns. The problem is then solved to minimize the required number of fire stations to serve the whole city. \n\nLet <math> y_i  </math> be the decision variable corresponding to choosing to build a fire station at town <math> i </math>. Let <math> S_i </math> be the subset of towns including town <math> i </math> and all its neighbors. The problem is then formulated as follows.\n\nminimize <math>\\sum_{i=1}^n y_i</math> \n\nsuch that <math> \\sum_{j\\in S_i} y_j \\geq 1, \\quad \\forall i</math>  \n\nA real-world case study involving optimizing fire station locations in Istanbul is analyzed in this reference. <ref name=\"four\" /> The Istanbul municipality serves 790 subdistricts, which should all be covered by a fire station. Each subdistrict is considered covered if it has a neighboring subdistrict (one at most 5 minutes away) that has a fire station. For detailed computational analysis, we refer the reader to the mentioned academic paper.\n; The optimal route selection problem\n\nConsider the problem of selecting the optimal bus routes on which to place pothole detectors. Due to the scarcity of the physical sensors, the problem does not allow for placing a detector on every road. The task of finding the maximum coverage using a limited number of detectors can be formulated as a set covering problem. <ref name=\"five\"> J. Ali and V. Dyo, [https://www.scitepress.org/Link.aspx?doi=10.5220/0006469800830088 \"Coverage and Mobile Sensor Placement for Vehicles on Predetermined Routes: A Greedy Heuristic Approach],\" ''Proceedings of the 14th International Joint Conference on E-Business and Telecommunications'', pp. 83-88, 2017. </ref> <ref name=\"eleven\"> P.H. Cruz Caminha, R. De Souza Couto, L.H. Maciel Kosmalski Costa, A. Fladenmuller, and M. Dias de Amorim, [https://www.mdpi.com/1424-8220/18/6/1976 \"On the Coverage of Bus-Based Mobile Sensing],\" ''Sensors'', 2018. </ref> Specifically, given a collection of bus routes '''''R''''', where each route is itself divided into segments: route <math> i </math> is denoted by <math> R_i </math>, and segment <math> j </math> is denoted by <math> S_j </math>. The segments of two different routes can overlap, and each segment is associated with a length <math> a_j </math>. The goal is then to select the routes that maximize the total covered distance.\n\nThis application is quite different from the others because it results in a maximization formulation rather than a minimization formulation. Suppose we want to use at most <math> k </math> different routes. We want to find <math> k </math> routes that maximize the length of covered segments. Let <math> x_i </math> be the binary decision variable corresponding to selecting route <math> R_i </math>, and let <math> y_j </math> be the decision variable associated with covering segment <math> S_j </math>. Let us also denote the set of routes that cover segment <math> j </math> by <math> C_j </math>. The problem is then formulated as follows.\n\n<math>\n\\begin{align}\n\\text{max} & ~~ \\sum_{j} a_jy_j\\\\\n\\text{s.t} & ~~ \\sum_{i\\in C_j} x_i \\geq y_j \\quad \\forall j \\\\\n& ~~ \\sum_{i} x_i = k \\\\ \n& ~~ x_i,y_{j} \\in \\{0,1\\} \\\\\n\\end{align}\n</math>\n
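\nFor small instances, the maximization variant can be checked by enumeration (a sketch; the route and length data are illustrative, not from the London study):\n<pre>\nfrom itertools import combinations\n\na = {1: 2.0, 2: 1.0, 3: 3.0, 4: 1.5}     # segment lengths a_j\nR = {1: {1, 2}, 2: {2, 3}, 3: {3, 4}}    # segments covered by each route R_i\nk = 2                                     # number of routes allowed\nbest = max(combinations(R, k),\n           key=lambda rs: sum(a[j] for j in set().union(*(R[i] for i in rs))))\nprint(best)   # -> (1, 3): covers segments {1, 2, 3, 4}, total length 7.5\n</pre>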
<ref name=\"five\" /> Using 14% of the routes only (100 routes), the greedy algorithm returns a solution that covers 25% of the segments in Greater London. For a details of the approximation algorithm and the world case study, we refer the reader to this reference. <ref name=\"five\" /> For a significantly larger case study involving 5747 buses covering 5060km, we refer the reader to this academic article. <ref name=\"eleven\" />\n;The airline crew scheduling problem\n\nAn important application of large-scale set covering is the airline crew scheduling problem, which pertains to assigning airline staff to work shifts. <ref name=\"two\" /> <ref name=\"six\"> E. Marchiori and A. Steenbeek, [https://link.springer.com/chapter/10.1007/3-540-45561-2_36 \"An Evolutionary Algorithm for Large Scale Set Covering Problems with Application to Airline Crew Scheduling],\" ''Real-World Applications of Evolutionary Computing. EvoWorkshops 2000. Lecture Notes in Computer Science'', 2000. </ref> Thinking of the collection of flights as a universal set to be covered, we can formulate a set covering problem to search for the optimal assignment of employees to flights. Due to the complexity of airline schedules, this problem is usually divided into two subproblems: crew pairing and crew assignment. We refer the interested reader to this survey, which contains several problem instances with the number of flights ranging from 1013 to 7765 flights, for a detailed analysis of the formulation and algorithms that pertain to this significant application. <ref name=\"two\" /> <ref name=\"eight\"> A. Kasirzadeh, M. Saddoune, and F. Soumis [https://www.sciencedirect.com/science/article/pii/S2192437620300820?via%3Dihub \"Airline crew scheduling: models, algorithms, and data sets],\" ''EURO Journal on Transportation and Logistics'', vol. 6, pp. 111-137, 2017. </ref>\n\n==Conclusion ==\n\nThe set covering problem, which aims to find the least number of subsets that cover some universal set, is a widely known NP-hard combinatorial problem. Due to its applicability to route planning and airline crew scheduling, several methods have been proposed to solve it. Its straightforward formulation allows for the use of off-the-shelf optimizers to solve it. Moreover, heuristic techniques and greedy algorithms can be used to solve large-scale set covering problems for industrial applications. \n\n== References ==\n<references />"
                    }
                ]
            }
        }
    }
}