Set covering problem
Authors: Sherry Liang, Khalid Alanazi, Kumail Al Hamoud (ChemE 6800 Fall 2020)
== Introduction ==
The set covering problem is a significant NP-hard problem in combinatorial optimization. Given a collection of elements, the set covering problem aims to find the minimum number of sets that incorporate (cover) all of these elements. <ref name="one"> T. Grossman and A. Wool, [https://www.sciencedirect.com/science/article/abs/pii/S0377221796001610 "Computational experience with approximation algorithms for the set covering problem]," ''European Journal of Operational Research'', vol. 101, pp. 81-92, 1997. </ref>
The importance of the set covering problem has two main aspects: one is pedagogical, and the other is practical.
First, because many greedy approximation methods have been proposed for this combinatorial problem, studying it gives insight into the use of approximation algorithms for NP-hard problems. Thus, it is a prime example in teaching computational algorithms. We present a preview of these methods in a later section, and we refer the interested reader to these references for a deeper discussion. <ref name="one" /> <ref name="seven"> P. Slavík, [https://www.sciencedirect.com/science/article/abs/pii/S0196677497908877 "A Tight Analysis of the Greedy Algorithm for Set Cover]," ''Journal of Algorithms'', vol. 25, pp. 237-245, 1997. </ref> <ref name="nine"> T. Grossman and A. Wool, [https://www.sciencedirect.com/science/article/abs/pii/S0377221796001610 "What Is the Best Greedy-like Heuristic for the Weighted Set Covering Problem?]," ''Operations Research Letters'', vol. 44, pp. 366-369, 2016. </ref>
Second, many problems in different industries can be formulated as set covering problems. For example, scheduling machines to perform certain jobs can be thought of as covering the jobs. Picking the optimal location for a cell tower so that it covers the maximum number of customers is another set covering application. Moreover, this problem has many applications in the airline industry, and it was explored on an industrial scale as early as the 1970s. <ref name="two"> J. Rubin, [https://www.jstor.org/stable/25767684?seq=1 "A Technique for the Solution of Massive Set Covering Problems, with Application to Airline Crew Scheduling]," ''Transportation Science'', vol. 7, pp. 34-48, 1973. </ref>
== Problem formulation ==
In the set covering problem, two sets are given: a set <math> U </math> of elements and a set <math> S </math> of subsets of <math> U </math>. Each subset in <math> S </math> is associated with a predetermined cost, and the union of all the subsets covers <math> U </math>. This combinatorial problem then concerns finding a collection of subsets whose union covers the universal set while minimizing the total cost. <ref name="one" /> <ref name="twelve"> Williamson, David P., and David B. Shmoys. ''The Design of Approximation Algorithms'' [https://www.designofapproxalgs.com/book.pdf]. Cambridge University Press, 2011. </ref>
The mathematical formulation of the set covering problem is defined as follows. We define <math> U = \{u_1,\dots, u_m\} </math> as the universe of elements and <math> S = \{s_1,\dots, s_n\} </math> as a collection of subsets such that <math> s_i \subseteq U </math> and the union of the <math> s_i </math> covers all elements in <math> U </math> (i.e. <math>\cup_i s_i = U </math>). Additionally, each set <math> s_i </math> must cover at least one element of <math> U </math> and has an associated cost <math> c_i > 0 </math>. The objective is to find the minimum-cost sub-collection of sets <math> X \subseteq S </math> that covers all the elements in the universe <math> U </math>.
== Integer linear program formulation ==
An integer linear program (ILP) model can be formulated for the minimum set covering problem as follows: <ref name="one" />
'''Decision variables'''
<math> y_i = \begin{cases} 1, & \text{if subset } s_i \text{ is selected} \\ 0, & \text{otherwise } \end{cases}</math>

'''Objective function'''

minimize <math>\sum_{i=1}^n c_i y_i</math>
'''Constraints'''
<math> \sum_{i :\, u_j \in s_i} y_i \geq 1, \quad \forall j = 1,\dots,m</math>
<math> y_i \in \{0, 1\}, \quad \forall i = 1,\dots,n</math>
The objective function <math>\sum_{i=1}^n c_i y_i</math> minimizes the total cost of the subsets <math> s_i </math> selected to cover all elements in the universe. The first constraint implies that every element <math> j </math> in the universe <math> U </math> must be covered, and the second constraint <math> y_i \in \{0, 1\} </math> indicates that the decision variables are binary, meaning that every set is either in the set cover or not.
The set covering problem is a significant NP-hard optimization problem, which means that no polynomial-time exact algorithm is known and the computational time required to solve it grows rapidly with the problem size. Therefore, approximation algorithms are used to solve large-scale instances in polynomial time with near-optimal solutions. In subsequent sections, we cover two of the most widely used approximation methods for solving the set covering problem in polynomial time: linear programming relaxation methods and the classical greedy algorithm. <ref name="seven" />
== Approximation via LP relaxation and rounding ==
Set covering is a classical integer programming problem, and solving integer programs in general is NP-hard. Therefore, one approach to achieving an <math>O(\log n)</math> approximation to the set covering problem in polynomial time is to solve it via linear programming (LP) relaxation. <ref name="one" /> <ref name="twelve" /> In LP relaxation, we relax the integrality requirement into linear constraints. For instance, if we replace the constraints <math> y_i \in \{0, 1\}</math> with the constraints <math> 0 \leq y_i \leq 1 </math>, we obtain the following LP problem that can be solved in polynomial time:
minimize <math>\sum_{i=1}^n c_i y_i</math>
subject to <math> \sum_{i :\, u_j \in s_i} y_i \geq 1, \quad \forall j = 1,\dots,m</math>
<math> 0 \leq y_i \leq 1, \quad \forall i = 1,\dots,n</math>
The above LP formulation is a relaxation of the original ILP set cover problem. This means that every feasible solution of the integer program is also feasible for the LP, and it has the same objective value in both programs, since the objective functions of the integer and linear programs are identical. Consequently, the optimal value of the LP is a lower bound on the optimal value of the original integer program. Moreover, we can use an LP rounding algorithm to round the fractional LP solution directly to an integral combinatorial solution, as follows:
<br>
'''Deterministic rounding algorithm'''
<br>
Suppose we have an optimal solution <math> z^* </math> for the linear programming relaxation of the set cover problem. We round the fractional solution <math> z^* </math> to an integer solution <math> z </math> using an LP rounding algorithm. In general, there are two approaches to rounding: deterministic and randomized. In this section, we explain the deterministic algorithm. In this approach, we include subset <math> s_i </math> in our solution if <math> z_i^* \geq 1/d </math>, where <math> d </math> is the maximum number of sets in which any element appears. In practice, we set <math> z </math> as follows: <ref name="twelve" />
<math> z_i = \begin{cases} 1, & \text{if } z_i^*\geq 1/d \\ 0, & \text{otherwise } \end{cases}</math>
The rounding algorithm is an approximation algorithm for the set cover problem. The algorithm runs in polynomial time, and <math> z </math> is a feasible solution to the integer program.
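To make the procedure concrete, the following is a minimal Python sketch of the LP relaxation followed by deterministic rounding, using SciPy's <code>linprog</code>. The small instance (the matrix <code>a</code> and the unit costs) is an illustrative assumption and not part of the formulation above.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import linprog

# Illustrative instance (assumed data): a[j, i] = 1 if set i contains element j.
a = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]])
c = np.ones(a.shape[1])                  # cost c_i of each set (unit costs here)

# LP relaxation: minimize c^T z  subject to  a z >= 1,  0 <= z <= 1.
res = linprog(c, A_ub=-a, b_ub=-np.ones(a.shape[0]), bounds=[(0, 1)] * a.shape[1])
z_star = res.x                           # fractional optimal solution z*

# Deterministic rounding: d = maximum number of sets any single element appears in.
d = int(a.sum(axis=1).max())
z = (z_star >= 1.0 / d).astype(int)      # include set i whenever z*_i >= 1/d
print(z_star, z)                         # z is a feasible cover; this rounding is a d-approximation
</syntaxhighlight>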
== Greedy approximation algorithm ==
Greedy algorithms can be used to find near-optimal solutions for large-scale set covering instances in polynomial time. <ref name="seven" /> <ref name="nine" /> The greedy heuristic applies an iterative process that, at each stage, selects the set covering the largest number of still-uncovered elements of the universe, marks those elements as covered, and repeats until all elements are covered. <ref name="ten"> V. Chvatal, [https://pubsonline.informs.org/doi/abs/10.1287/moor.4.3.233 "Greedy Heuristic for the Set-Covering Problem]," ''Mathematics of Operations Research'', vol. 4, pp. 233-235, 1979. </ref> Let <math> T </math> be the collection of sets chosen so far and <math> U </math> the set of elements that are still uncovered. At the beginning, <math> T </math> is empty and <math> U </math> contains all elements of the universe. We iteratively select the set in <math> S </math> that covers the largest number of elements in <math> U </math>, add it to <math> T </math>, and remove the newly covered elements from <math> U </math>. Pseudocode for this algorithm is presented below.
'''Greedy algorithm for minimum set cover:'''
Step 0: <math> \quad </math> <math> T \leftarrow \emptyset </math> <math> \quad \quad \quad \quad \quad </math> { <math> T </math> stores the chosen sets }
Step 1: <math> \quad </math> '''While''' <math> U \neq \emptyset </math> '''do:''' <math> \quad </math> { <math> U </math> stores the uncovered elements }
Step 2: <math> \quad \quad \quad </math> select <math> s_i \in S </math> that covers the highest number of elements in <math> U </math>
Step 3: <math> \quad \quad \quad </math> add <math> s_i </math> to <math> T </math>
Step 4: <math> \quad \quad \quad </math> remove the elements of <math> s_i </math> from <math> U </math>
Step 5: <math> \quad </math> '''End while'''
Step 6: <math> \quad </math> '''Return''' <math> T </math>
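A minimal Python sketch of this pseudocode is shown below; the function name <code>greedy_set_cover</code> and its dictionary-based interface are our own illustrative choices.

<syntaxhighlight lang="python">
def greedy_set_cover(universe, subsets):
    """Greedy heuristic: repeatedly pick the subset covering the most uncovered elements.

    universe: set of all elements to be covered.
    subsets:  dict mapping a subset's name to the set of elements it covers.
    Returns the names of the chosen subsets (the collection T in the pseudocode).
    """
    uncovered = set(universe)                # U: elements not yet covered
    chosen = []                              # T: subsets selected so far
    while uncovered:
        # Step 2: pick the subset covering the largest number of uncovered elements.
        best = max(subsets, key=lambda name: len(subsets[name] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("the given subsets do not cover the universe")
        chosen.append(best)                  # Step 3: add it to T
        uncovered -= subsets[best]           # Step 4: remove its elements from U
    return chosen                            # Step 6: return T
</syntaxhighlight>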
== Numerical Example ==
Let's consider a simple example where we assign cameras at different locations. Each location covers some stadium areas, and our goal is to install the fewest cameras such that all stadium areas are covered. We have stadium areas numbered 1 to 15 and possible camera locations numbered 1 to 8.
We are given that camera location 1 covers stadium areas {1,3,4,6,7} and camera location 2 covers stadium areas {4,7,8,12}, while the remaining camera locations and the stadium areas they can cover are given in Table 1 below:
{| class="wikitable"
|+Table 1 Camera Location vs Stadium Area
|-
!Camera Location
|1
|2
|3
|4
|5
|6
|7
|8
|-
!Stadium Areas Covered
|1,3,4,6,7
|4,7,8,12
|2,5,9,11,13
|1,2,14,15
|3,6,10,12,14
|8,14,15
|1,2,6,11
|1,2,4,6,8,12
|}
We can then represent the above information using binary values. If stadium area <math>i</math> can be covered by camera location <math>j</math>, then <math>y_{ij} = 1</math>; if not, <math>y_{ij} = 0</math>. For instance, stadium area 1 is covered by camera location 1, so <math>y_{11} = 1</math>, while stadium area 1 is not covered by camera location 2, so <math>y_{12} = 0</math>. The values of the binary variables <math>y_{ij}</math> are given in Table 2 below:
{| class="wikitable"
|+Table 2 Binary Table (All Camera Locations and Stadium Areas)
!
!Camera1
!Camera2
!Camera3
!Camera4
!Camera5
!Camera6
!Camera7
!Camera8
|-
!Stadium1
|1
|
|
|1
|
|
|1
|1
|-
!Stadium2
|
|
|1
|1
|
|
|1
|1
|-
!Stadium3
|1
|
|
|
|1
|
|
|
|-
!Stadium4
|1
|1
|
|
|
|
|
|1
|-
!Stadium5
|
|
|1
|
|
|
|
|
|-
!Stadium6
|1
|
|
|
|1
|
|1
|1
|-
!Stadium7
|1
|1
|
|
|
|
|
|
|-
!Stadium8
|
|1
|
|
|
|1
|
|1
|-
!Stadium9
|
|
|1
|
|
|
|
|
|-
!Stadium10
|
|
|
|
|1
|
|
|
|-
!Stadium11
|
|
|1
|
|
|
|1
|
|-
!Stadium12
|
|1
|
|
|1
|
|
|1
|-
!Stadium13
|
|
|1
|
|
|
|
|
|-
!Stadium14
|
|
|
|1
|1
|1
|
|
|-
!Stadium15
|
|
|
|1
|
|1
|
|
|}
We introduce another binary variable <math>z_j</math> to indicate whether a camera is installed at location <math>j</math>: <math>z_j = 1</math> if a camera is installed at location <math>j</math>, and <math>z_j = 0</math> if not.

Our objective is to minimize <math>\sum_{j=1}^8 z_j</math>. For each stadium area, there is a constraint that area <math>i</math> has to be covered by at least one camera location. For instance, for stadium area 1 we have <math>z_1 + z_4 + z_7 + z_8 \geq 1</math>, while for stadium area 2 we have <math>z_3 + z_4 + z_7 + z_8 \geq 1</math>. All 15 constraints corresponding to the 15 stadium areas are listed below:
minimize <math>\sum_{j=1}^8 z_j</math>
''s.t. constraints 1 to 15 are satisfied:''
<math> z_1 + z_4 + z_7 + z_8 \geq 1 \quad (1)</math>
<math> z_3 + z_4 + z_7 + z_8 \geq 1 \quad (2)</math>
<math> z_1 + z_5 \geq 1 \quad (3)</math>
<math> z_1 + z_2 + z_8 \geq 1 \quad (4)</math>
<math> z_3 \geq 1 \quad (5)</math>
<math> z_1 + z_5 + z_7 + z_8 \geq 1 \quad (6)</math>
<math> z_1 + z_2 \geq 1 \quad (7)</math>
<math> z_2 + z_6 + z_8 \geq 1 \quad (8)</math>
<math> z_3 \geq 1 \quad (9)</math>
<math> z_5 \geq 1 \quad (10)</math>
<math> z_3 + z_7 \geq 1 \quad (11)</math>
<math> z_2 + z_5 + z_8 \geq 1 \quad (12)</math>
<math> z_3 \geq 1 \quad (13)</math>
<math> z_4 + z_5 + z_6 \geq 1 \quad (14)</math>
<math> z_4 + z_6 \geq 1 \quad (15)</math>
From constraints {5, 9, 13}, we can obtain <math>z_3 = 1</math>. Thus we no longer need constraints 2 and 11, as they are satisfied when <math>z_3 = 1</math>. With <math>z_3 = 1</math> determined, the remaining problem is:
minimize <math>\sum_{j=1}^8 z_j</math>
s.t.:
<math>z_1 + z_4 + z_7 + z_8 \geq 1 \quad (1)</math>
<math>z_1 + z_5 \geq 1 \quad (3)</math>
<math>z_1 + z_2 + z_8 \geq 1 \quad (4)</math>
<math>z_1 + z_5 + z_7 + z_8 \geq 1 \quad (6)</math>
<math>z_1 + z_2 \geq 1 \quad (7)</math>
<math>z_2 + z_6 + z_8 \geq 1 \quad (8)</math>
<math>z_5 \geq 1 \quad (10)</math>
<math>z_2 + z_5 + z_8 \geq 1 \quad (12)</math>
<math>z_4 + z_5 + z_6 \geq 1 \quad (14)</math>
<math>z_4 + z_6 \geq 1 \quad (15)</math>
Next, constraint 10 requires <math>z_5 \geq 1</math>, so <math>z_5</math> must equal 1. With <math>z_5 = 1</math>, constraints {3, 6, 12, 14} are satisfied regardless of the other <math>z</math> values. Comparing constraints 4 and 7, constraint 4 is satisfied whenever constraint 7 is satisfied, since the <math>z</math> values are nonnegative, so constraint 4 is no longer needed. The remaining constraints are:
minimize <math>\sum_{j=1}^8 z_j</math>
s.t.:
<math>z_1 + z_4 + z_7 + z_8 \geq 1 \quad (1)</math>
<math>z_1 + z_2 \geq 1 \quad (7)</math>
<math>z_2 + z_6 + z_8 \geq 1 \quad (8)</math>
<math>z_4 + z_6 \geq 1 \quad (15)</math>
The next step is to focus on constraints 7 and 15. There are four minimal combinations of <math>z_1, z_2, z_4, z_6</math> values that satisfy them:
<math>A: z_1 = 1, z_2 = 0, z_4 = 1, z_6 = 0</math>
<math>B: z_1 = 1, z_2 = 0, z_4 = 0, z_6 = 1</math>
<math>C: z_1 = 0, z_2 = 1, z_4 = 1, z_6 = 0</math>
<math>D: z_1 = 0, z_2 = 1, z_4 = 0, z_6 = 1</math>
We can then examine each combination and determine the <math>z_7, z_8</math> values needed for constraints 1 and 8 to be satisfied.
Combination <math>A</math>: constraint 1 is already satisfied; we need <math>z_8 = 1</math> to satisfy constraint 8.
Combination <math>B</math>: constraints 1 and 8 are already satisfied.
Combination <math>C</math>: constraints 1 and 8 are already satisfied.
Combination <math>D</math>: we need <math>z_7 = 1</math> or <math>z_8 = 1</math> to satisfy constraint 1, while constraint 8 is already satisfied.
Our final step is to compare the four combinations. Since our objective is to minimize <math>\sum_{j=1}^8 z_j</math> and combinations <math>B</math> and <math>C</math> require the fewest <math>z_j</math> to equal 1, they yield the optimal solutions.
To conclude, our two solutions are:
Solution 1: <math>z_1 = 1, z_3 = 1, z_5 = 1, z_6 = 1</math>
Solution 2: <math>z_2 = 1, z_3 = 1, z_4 = 1, z_5 = 1</math>
The minimum number of cameras that we need to install is 4.
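The same instance can also be handed to an off-the-shelf solver. The sketch below is a minimal formulation using SciPy's <code>milp</code> (assuming SciPy 1.9 or later is available); the variable names are our own.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Coverage data from Table 1: camera location j covers these stadium areas.
coverage = {1: {1, 3, 4, 6, 7},    2: {4, 7, 8, 12},
            3: {2, 5, 9, 11, 13},  4: {1, 2, 14, 15},
            5: {3, 6, 10, 12, 14}, 6: {8, 14, 15},
            7: {1, 2, 6, 11},      8: {1, 2, 4, 6, 8, 12}}
n_areas, n_cams = 15, 8

# A[i, j] = 1 if camera j+1 covers stadium area i+1 (the binary values of Table 2).
A = np.zeros((n_areas, n_cams))
for j, areas in coverage.items():
    for i in areas:
        A[i - 1, j - 1] = 1

c = np.ones(n_cams)                              # minimize the number of cameras
cover = LinearConstraint(A, lb=1)                # each area covered at least once
res = milp(c, constraints=cover,
           integrality=np.ones(n_cams), bounds=Bounds(0, 1))

chosen = [j + 1 for j in range(n_cams) if res.x[j] > 0.5]
print(int(res.fun), chosen)                      # optimal value 4; e.g. cameras [2, 3, 4, 5]
</syntaxhighlight>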
'''Let's now consider solving the problem using the greedy algorithm.'''
We have a set <math>U</math> (stadium areas) that needs to be covered with <math>C</math> (camera locations).
<math>U = \{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15\}</math>
<math>C = \{C_1,C_2,C_3,C_4,C_5,C_6,C_7,C_8\}</math>
<math>C_1 = \{1,3,4,6,7\} </math>
<math>C_2 = \{4,7,8,12\}</math>
<math>C_3 = \{2,5,9,11,13\}</math>
<math>C_4 = \{1,2,14,15\}</math>
<math>C_5 = \{3,6,10,12,14\}</math>
<math>C_6 = \{8,14,15\}</math>
<math>C_7 = \{1,2,6,11\}</math>
<math>C_8 = \{1,2,4,6,8,12\} </math>
The cost of each camera location is the same in this case; since we simply want to minimize the total number of cameras used, we can assume the cost of each set in <math>C</math> to be 1.
Let <math>I</math> represent the set of elements covered so far. Initialize <math>I</math> to be empty.
First Iteration:
The per-new-element cost for <math>C_1 = 1/5</math>, for <math>C_2 = 1/4</math>, for <math>C_3 = 1/5</math>, for <math>C_4 = 1/4</math>, for <math>C_5 = 1/5</math>, for <math>C_6 = 1/3</math>, for <math>C_7 = 1/4</math>, for <math>C_8 = 1/6</math>.
Since <math>C_8</math> has the minimum value, <math>C_8</math> is added, and <math>I</math> becomes <math>\{1,2,4,6,8,12\}</math>.
Second Iteration:
<math>I = \{1,2,4,6,8,12\}</math>
The per-new-element cost for <math>C_1 = 1/2</math>, for <math>C_2 = 1/1</math>, for <math>C_3 = 1/4</math>, for <math>C_4 = 1/2</math>, for <math>C_5 = 1/3</math>, for <math>C_6 = 1/2</math>, for <math>C_7 = 1/1</math>.
Since <math>C_3</math> has the minimum value, <math>C_3</math> is added, and <math>I</math> becomes <math>\{1,2,4,5,6,8,9,11,12,13\}</math>.
Third Iteration:
<math>I = \{1,2,4,5,6,8,9,11,12,13\}</math>
The per-new-element cost for <math>C_1 = 1/2</math>, for <math>C_2 = 1/1</math>, for <math>C_4 = 1/2</math>, for <math>C_5 = 1/3</math>, for <math>C_6 = 1/2</math>; <math>C_7</math> covers no new elements.
Since <math>C_5</math> has the minimum value, <math>C_5</math> is added, and <math>I</math> becomes <math>\{1,2,3,4,5,6,8,9,10,11,12,13,14\}</math>.
Fourth Iteration:
<math>I = \{1,2,3,4,5,6,8,9,10,11,12,13,14\}</math>
The per-new-element cost for <math>C_1 = 1/1</math>, for <math>C_2 = 1/1</math>, for <math>C_4 = 1/1</math>, for <math>C_6 = 1/1</math>; <math>C_7</math> covers no new elements.
Only elements 7 and 15 remain uncovered. <math>C_1</math> and <math>C_2</math> each add element <math>7</math> to <math>I</math>, while <math>C_4</math> and <math>C_6</math> each add element <math>15</math>, so the algorithm needs two more sets: one of <math>C_1</math> or <math>C_2</math> and one of <math>C_4</math> or <math>C_6</math>.
<math>I</math> then becomes <math>\{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15\}</math>.
The solutions obtained by the greedy algorithm include, for example:
Option 1: <math>C_8</math> + <math>C_3</math> + <math>C_5</math> + <math>C_6</math> + <math>C_1</math>
Option 2: <math>C_8</math> + <math>C_3</math> + <math>C_5</math> + <math>C_6</math> + <math>C_2</math>
The greedy algorithm does not provide the optimal solution in this case.
The exact elimination approach above shows that the minimum number of cameras needed is 4, whereas the greedy algorithm selects 5 cameras.
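Running the <code>greedy_set_cover</code> sketch from the greedy-algorithm section on this instance reproduces the hand computation; note that ties in the <code>max</code> call may be broken differently, so the exact five cameras returned can vary.

<syntaxhighlight lang="python">
universe = set(range(1, 16))
cameras = {1: {1, 3, 4, 6, 7},    2: {4, 7, 8, 12},
           3: {2, 5, 9, 11, 13},  4: {1, 2, 14, 15},
           5: {3, 6, 10, 12, 14}, 6: {8, 14, 15},
           7: {1, 2, 6, 11},      8: {1, 2, 4, 6, 8, 12}}

picked = greedy_set_cover(universe, cameras)
print(len(picked), sorted(picked))   # 5 cameras (exact choice depends on tie-breaking), vs. the optimum of 4
</syntaxhighlight>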
== Applications ==
The set covering problem spans a wide range of applications, and its usefulness is especially evident in industrial and governmental planning. Variations of the set covering problem that are of practical significance include the following.
;The optimal location problem
This set covering problem is concerned with maximizing the coverage of some public facilities placed at different locations. <ref name="three"> R. Church and C. ReVelle, [https://link.springer.com/article/10.1007/BF01942293 "The maximal covering location problem]," ''Papers of the Regional Science Association'', vol. 32, pp. 101-118, 1974. </ref> Consider the problem of placing fire stations to serve the towns of some city. <ref name="four"> E. Aktaş, Ö. Özaydın, B. Bozkaya, F. Ülengin, and Ş. Önsel, [https://pubsonline.informs.org/doi/10.1287/inte.1120.0671 "Optimizing Fire Station Locations for the Istanbul Metropolitan Municipality]," ''Interfaces'', vol. 43, pp. 240-255, 2013. </ref> If each fire station can serve its town and all adjacent towns, we can formulate a set covering problem where each subset consists of a town and its adjacent towns. The problem is then solved to minimize the number of fire stations required to serve the whole city.
Let <math> y_i </math> be the decision variable corresponding to choosing to build a fire station at town <math> i </math>. Let <math> S_i </math> be the subset of towns including town <math> i </math> and all its neighbors. The problem is then formulated as follows.
minimize <math>\sum_{i=1}^n y_i</math>
such that <math> \sum_{j\in S_i} y_j \geq 1, \quad \forall i</math>
A real-world case study involving optimizing fire station locations in Istanbul is analyzed in this reference. <ref name="four" /> The Istanbul municipality serves 790 subdistricts, which should all be covered by a fire station. Each subdistrict is considered covered if it has a neighboring district (a district at most 5 minutes away) that has a fire station. For detailed computational analysis, we refer the reader to the mentioned academic paper.
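As a brief illustration of this formulation, the sketch below builds the neighborhood sets <math> S_i </math> from an assumed adjacency list of towns and solves the resulting covering model with SciPy's <code>milp</code>; the five-town adjacency data is purely hypothetical.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical adjacency list: town i and the towns adjacent to it.
neighbors = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
n = len(neighbors)

# A[i, j] = 1 if a station at town j serves town i, i.e. j is in S_i = {i} union neighbors(i).
A = np.zeros((n, n))
for i, adj in neighbors.items():
    for j in adj | {i}:
        A[i, j] = 1

res = milp(np.ones(n), constraints=LinearConstraint(A, lb=1),
           integrality=np.ones(n), bounds=Bounds(0, 1))
print(int(res.fun), np.flatnonzero(res.x > 0.5))   # e.g. 2 stations, at towns 1 and 3
</syntaxhighlight>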
; The optimal route selection problem
Consider the problem of selecting optimal bus routes on which to place pothole detectors. Because physical sensors are scarce, the problem does not allow placing a detector on every road. The task of finding the maximum coverage using a limited number of detectors can be formulated as a set covering problem. <ref name="five"> J. Ali and V. Dyo, [https://www.scitepress.org/Link.aspx?doi=10.5220/0006469800830088 "Coverage and Mobile Sensor Placement for Vehicles on Predetermined Routes: A Greedy Heuristic Approach]," ''Proceedings of the 14th International Joint Conference on E-Business and Telecommunications'', pp. 83-88, 2017. </ref> <ref name="eleven"> P.H. Cruz Caminha, R. De Souza Couto, L.H. Maciel Kosmalski Costa, A. Fladenmuller, and M. Dias de Amorim, [https://www.mdpi.com/1424-8220/18/6/1976 "On the Coverage of Bus-Based Mobile Sensing]," ''Sensors'', 2018. </ref> Specifically, we are given a collection of bus routes '''''R''''', where each route is divided into segments. Route <math> i </math> is denoted by <math> R_i </math>, and segment <math> j </math> is denoted by <math> S_j </math>. The segments of two different routes can overlap, and each segment is associated with a length <math> a_j </math>. The goal is then to select the routes that maximize the total covered distance.
This is quite different from the other applications because it results in a maximization formulation rather than a minimization formulation. Suppose we want to use at most <math> k </math> different routes. We want to find <math> k </math> routes that maximize the total length of covered segments. Let <math> x_i </math> be the binary decision variable corresponding to selecting route <math> R_i </math>, and let <math> y_j </math> be the decision variable associated with covering segment <math> S_j </math>. Let us also denote the set of routes that cover segment <math> j </math> by <math> C_j </math>. The problem is then formulated as follows.
<math>
\begin{align}
\text{max} & ~~ \sum_{j} a_j y_j\\
\text{s.t.} & ~~ \sum_{i\in C_j} x_i \geq y_j \quad \forall j \\
& ~~ \sum_{i} x_i = k \\
& ~~ x_i, y_j \in \{0,1\} \\
\end{align}
</math>
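Under similar assumptions to the earlier sketches, the code below encodes this maximization with SciPy's <code>milp</code>; the route and segment data are invented purely for illustration.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Assumed toy data: 3 routes, 4 segments with lengths a_j, and C[j] = routes covering segment j.
a = np.array([2.0, 1.0, 3.0, 1.5])
C = [{0}, {0, 1}, {1, 2}, {2}]
n_routes, n_segs, k = 3, len(a), 2

# Variable vector: [x_0, ..., x_{n-1}, y_0, ..., y_{m-1}]; milp minimizes, so negate a.
c = np.concatenate([np.zeros(n_routes), -a])

# Coverage constraints: y_j <= sum_{i in C_j} x_i   <=>   y_j - sum x_i <= 0.
A_cov = np.zeros((n_segs, n_routes + n_segs))
for j, routes in enumerate(C):
    for i in routes:
        A_cov[j, i] = -1
    A_cov[j, n_routes + j] = 1
cov = LinearConstraint(A_cov, ub=0)

# Budget constraint: use exactly k routes.
A_k = np.concatenate([np.ones(n_routes), np.zeros(n_segs)]).reshape(1, -1)
budget = LinearConstraint(A_k, lb=k, ub=k)

res = milp(c, constraints=[cov, budget],
           integrality=np.ones(n_routes + n_segs), bounds=Bounds(0, 1))
print(-res.fun, res.x[:n_routes])    # total covered length and the chosen routes
</syntaxhighlight>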
The work by Ali and Dyo explores a greedy approximation algorithm to solve an optimal selection problem involving 713 bus routes in Greater London. <ref name="five" /> Using only 14% of the routes (100 routes), the greedy algorithm returns a solution that covers 25% of the segments in Greater London. For details of the approximation algorithm and the real-world case study, we refer the reader to this reference. <ref name="five" /> For a significantly larger case study involving 5747 buses covering 5060 km, we refer the reader to this academic article. <ref name="eleven" />
;The airline crew scheduling problem
An important application of large-scale set covering is the airline crew scheduling problem, which pertains to assigning airline staff to work shifts. <ref name="two" /> <ref name="six"> E. Marchiori and A. Steenbeek, [https://link.springer.com/chapter/10.1007/3-540-45561-2_36 "An Evolutionary Algorithm for Large Scale Set Covering Problems with Application to Airline Crew Scheduling]," ''Real-World Applications of Evolutionary Computing. EvoWorkshops 2000. Lecture Notes in Computer Science'', 2000. </ref> Thinking of the collection of flights as a universal set to be covered, we can formulate a set covering problem to search for the optimal assignment of employees to flights. Due to the complexity of airline schedules, this problem is usually divided into two subproblems: crew pairing and crew assignment. We refer the interested reader to this survey, which contains several problem instances with the number of flights ranging from 1013 to 7765, for a detailed analysis of the formulations and algorithms that pertain to this significant application. <ref name="two" /> <ref name="eight"> A. Kasirzadeh, M. Saddoune, and F. Soumis, [https://www.sciencedirect.com/science/article/pii/S2192437620300820?via%3Dihub "Airline crew scheduling: models, algorithms, and data sets]," ''EURO Journal on Transportation and Logistics'', vol. 6, pp. 111-137, 2017. </ref>
== Conclusion ==
The set covering problem, which aims to find the least number of subsets that cover some universal set, is a widely known NP-hard combinatorial problem. Due to its applicability to route planning and airline crew scheduling, several methods have been proposed to solve it. Its straightforward formulation allows for the use of off-the-shelf optimizers to solve it. Moreover, heuristic techniques and greedy algorithms can be used to solve large-scale set covering problems for industrial applications.
== References ==
<references />