Heuristic algorithms
Author: Zemin Mi (zm287), Boyu Yang (by274) (ChemE 6800 Fall 2024)
Stewards: Nathan Preuss, Wei-Han Chen, Tianqi Xiao, Guoqing Hu
Introduction
Heuristic algorithms are strategies designed to efficiently tackle complex optimization problems by providing approximate solutions when exact methods are impractical. This approach is particularly beneficial in scenarios where traditional algorithms may be computationally prohibitive.[1] Heuristics are widely used because they excel in handling uncertainty, incomplete information, and large-scale optimization tasks. Their adaptability, scalability, and integration with other techniques make them valuable across fields like artificial intelligence, logistics, and operations research. Balancing speed and solution quality makes heuristics indispensable for tackling real-world challenges where optimal solutions are often infeasible.[2]
A prominent category within heuristic methods is metaheuristics, which are higher-level strategies that effectively guide the search process to explore the solution space. These include genetic algorithms, simulated annealing, and particle swarm optimization. Metaheuristics are designed to balance exploration and exploitation, thereby enhancing the likelihood of identifying near-optimal solutions across diverse problem domains.[3]
Methodology & Classic Example
Optimization heuristics can be categorized into two broad classes based on their approach, focus, and application: heuristics and metaheuristics.
Heuristics
Heuristics are problem-solving methods that employ practical techniques to produce satisfactory solutions within a reasonable timeframe, especially when exact methods are impractical due to computational constraints. These algorithms utilize rules of thumb, educated guesses, and intuitive judgments to navigate complex search spaces efficiently. By focusing on the most promising areas of the search space, heuristics can quickly find good enough solutions without guaranteeing optimality.[4]
Metaheuristics
Metaheuristics are high-level optimization strategies designed to efficiently explore large and complex search spaces to find near-optimal solutions for challenging problems. They guide subordinate heuristics using concepts derived from artificial intelligence, biology, mathematics, and physical sciences to enhance performance.[5] Unlike problem-specific algorithms, metaheuristics are flexible and can be applied across various domains, making them valuable tools for solving real-world optimization challenges. By balancing the global search space exploration with the exploitation of promising local regions, metaheuristics effectively navigate complex landscapes to identify high-quality solutions.
Classic Example: Traveling Salesman Problem (TSP)
In the traveling salesman problem, given a set of n cities and the distances between each pair of cities, the objective is to find the shortest possible tour that starts and ends at the same city and visits each city exactly once.[6]
Heuristics solve the TSP by providing efficient and practical approaches to finding approximate solutions, especially for large instances where exact algorithms are computationally infeasible. Instead of exhaustively exploring all possible tours, heuristics focus on simplifying the problem using strategies like incremental solution building, iterative improvement, or probabilistic exploration. For example, constructive heuristics can create a feasible tour by starting at a city and iteratively adding the nearest unvisited city until all cities are covered. To reduce the total travel distance, local search heuristics refine an initial solution by making minor adjustments, such as swapping the order of cities.
Metaheuristics, such as simulated annealing or genetic algorithms, tackle the TSP by employing high-level strategies to explore the solution space more broadly and escape local optima. These methods balance global search space exploration with local exploitation of promising regions, allowing for a more thorough search for better solutions. They iteratively improve the solution while occasionally considering less favorable configurations to avoid being confined to suboptimal areas. By guiding the search process intelligently, metaheuristics adapt dynamically and effectively solve the TSP, often producing near-optimal solutions even for large-scale or complex problem instances.
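The two heuristic ideas described above can be sketched as follows, assuming a symmetric distance matrix: a nearest-neighbor constructive heuristic builds an initial tour, and a 2-opt local search refines it by reversing segments whenever doing so shortens the tour.

```python
import itertools

def nearest_neighbor_tour(dist, start=0):
    """Constructive heuristic: repeatedly visit the nearest unvisited city."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: dist[last][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(dist, tour):
    """Total length of the closed tour (returns to the start city)."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_opt(dist, tour):
    """Local search: reverse a segment of the tour while that improves it."""
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(1, len(tour)), 2):
            candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
            if tour_length(dist, candidate) < tour_length(dist, tour):
                tour, improved = candidate, True
    return tour
```

Neither step guarantees optimality: nearest-neighbor can start well and end with long "return" edges, and 2-opt stops at a local optimum, which is exactly the limitation metaheuristics are designed to escape.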
Popular Heuristic Algorithms
Genetic Algorithm
The term Genetic Algorithm was first used by John Holland.[7] Genetic algorithms are designed to mimic the Darwinian theory of evolution, in which populations of species evolve over generations to produce organisms that are more complex and fitter for survival. They operate on string structures, analogous to biological chromosomes, which evolve over time according to the rule of survival of the fittest through a randomized yet structured information exchange. Thus, in every generation, a new set of strings is created using parts of the fittest members of the old set.[8] The algorithm terminates when a satisfactory fitness level has been reached for the population or the maximum number of generations has been reached. The typical steps are[9]:
1. Choose an initial population of candidate solutions
2. Calculate the fitness (i.e., how good the solution is) of each individual
3. Perform crossover on the population: randomly choose pairs of individuals as parents and exchange some parts between the parents to generate new individuals
4. Perform mutation: randomly change some individuals to create other new individuals
5. Evaluate the fitness of the offspring
6. Select the surviving individuals
7. Return to step 3 if the termination criteria have not been reached
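The steps above can be sketched on a toy problem (one-max: maximize the number of 1 bits in a string). The population size, crossover and mutation rates, and the use of tournament selection are illustrative assumptions, not part of the canonical algorithm:

```python
import random

def genetic_algorithm(n_bits=20, pop_size=30, generations=50,
                      crossover_rate=0.9, mutation_rate=0.02, seed=0):
    rng = random.Random(seed)

    def fitness(ind):
        return sum(ind)  # one-max: count the 1 bits

    # Step 1: initial population of random bit strings
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Step 2: evaluate fitness; terminate early if a perfect string appears
        if any(fitness(ind) == n_bits for ind in pop):
            break
        next_pop = []
        while len(next_pop) < pop_size:
            # Tournament selection of two parents
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            # Step 3: single-point crossover
            if rng.random() < crossover_rate:
                cut = rng.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # Step 4: bit-flip mutation
            child = [b ^ 1 if rng.random() < mutation_rate else b
                     for b in child]
            next_pop.append(child)
        # Steps 5-6: offspring replace the old population
        pop = next_pop
    return max(pop, key=fitness)
```

For a real optimization problem, only the encoding of a candidate solution and the fitness function need to change; the selection, crossover, and mutation loop stays the same.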
Tabu Search Algorithm
Tabu search (TS) is a heuristic algorithm created by Fred Glover[10] that uses a gradient-descent search with memory techniques to avoid cycling while determining an optimal solution. It does so by forbidding or penalizing moves that would take the solution, in the next iteration, to points in the solution space previously visited. The algorithm spends some memory to keep a tabu list of forbidden moves: the moves of recent iterations, or moves that are otherwise considered undesirable. A general algorithm is as follows[11]:
1. Select an initial solution s0 ∈ S. Initialize the tabu list L0 = ∅ and select a tabu list size. Set k = 0.
2. Determine the neighborhood N(sk), excluding inferior members of the tabu list Lk.
3. Select the next move sk+1 from N(sk) (or from Lk, if a tabu move yields a better solution than any found so far) and update the tabu list to Lk+1.
4. If a termination condition is reached, stop; otherwise, set k = k + 1 and return to step 2.
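This loop can be sketched for a bit-string search space, assuming single-bit-flip moves, a fixed tabu tenure, and an aspiration criterion that overrides tabu status when a move would beat the best solution found (all of these are illustrative modeling choices):

```python
from collections import deque

def tabu_search(objective, x0, tenure=5, iterations=100):
    """Minimize `objective` over bit strings using single-bit-flip moves."""
    current = list(x0)
    best = current[:]
    tabu = deque(maxlen=tenure)  # tabu list: recently flipped bit positions
    for _ in range(iterations):
        best_move, best_val = None, None
        for i in range(len(current)):
            neighbor = current[:]
            neighbor[i] ^= 1
            val = objective(neighbor)
            # Skip tabu moves unless they beat the best solution (aspiration)
            if i in tabu and val >= objective(best):
                continue
            if best_val is None or val < best_val:
                best_move, best_val = i, val
        if best_move is None:
            break  # every move is tabu and none aspires
        current[best_move] ^= 1
        tabu.append(best_move)  # oldest entry expires automatically
        if objective(current) < objective(best):
            best = current[:]
    return best
```

Note that the search accepts the best available neighbor even when it is worse than the current point; the tabu list is what prevents it from immediately undoing that move and cycling.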
Example: The Classical Vehicle Routing Problem
Vehicle Routing Problems have very important applications in distribution management and have become some of the most studied problems in the combinatorial optimization literature, with several Tabu Search implementations currently ranking among the most effective solution methods. The Classical Vehicle Routing Problem (CVRP) is the basic variant in this class of problems. It can formally be defined as follows. Let G = (V, A) be a graph, where V is the vertex set and A is the arc set. One of the vertices represents the depot, at which a fleet of identical vehicles of capacity Q is based; the other vertices represent customers that need to be serviced. With each customer vertex vi are associated a demand qi and a service time ti. With each arc (vi, vj) of A are associated a cost cij and a travel time tij.[12] The CVRP consists of finding a set of routes such that:
1. Each route begins and ends at the depot
2. Each customer is visited exactly once by exactly one route
3. The total demand of the customers assigned to each route does not exceed Q
4. The total duration of each route (including travel and service times) does not exceed a specified value L
5. The total cost of the routes is minimized
A feasible solution for the problem thus consists of a partition of the customers into m groups, each of total demand no larger than Q, that are sequenced to yield routes (starting and ending at the depot) of duration no larger than L.
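The feasibility conditions above can be checked with a short routine. This is a sketch under the assumptions that routes are given as lists of customer indices, vertex 0 is the depot, and the instance data are stored in plain Python structures:

```python
def is_feasible(routes, demand, service, travel, Q, L, n_customers):
    """Check CVRP feasibility. `travel[i][j]` is the travel time between
    vertices i and j, where vertex 0 is the depot."""
    visited = [c for route in routes for c in route]
    # Condition 2: each customer is visited exactly once
    if sorted(visited) != list(range(1, n_customers + 1)):
        return False
    for route in routes:
        # Condition 3: total demand on the route does not exceed Q
        if sum(demand[c] for c in route) > Q:
            return False
        # Condition 1: the route starts and ends at the depot
        stops = [0] + route + [0]
        # Condition 4: travel plus service time does not exceed L
        duration = sum(travel[a][b] for a, b in zip(stops, stops[1:]))
        duration += sum(service[c] for c in route)
        if duration > L:
            return False
    return True
```

A Tabu Search for the CVRP would explore moves such as relocating a customer between routes, using a checker like this (or penalized constraint violations) to steer the search back toward feasibility.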
Simulated Annealing Algorithm
The Simulated Annealing Algorithm was developed by Kirkpatrick et al. in 1983[13] and is based on the analogy of the formation of ideal crystals in thermodynamics. The annealing process in metallurgy lets particles arrange themselves into the configuration of minimum potential energy as the temperature is slowly decreased. The Simulated Annealing algorithm mimics this mechanism, using the objective function of an optimization problem in place of the energy of a material, to arrive at a solution. A general algorithm is as follows[14]:
1. Fix initial temperature (T0)
2. Generate starting point x0 (this is the best point X* at present)
3. Randomly generate a neighboring point XS
4. Accept XS as X* (currently best solution) if an acceptance criterion is met. The criterion must allow a nonzero probability of accepting a worse point, particularly at higher temperatures
5. If an equilibrium condition is satisfied, go to (6), otherwise jump back to (3).
6. If the termination conditions are not met, decrease the temperature according to the chosen cooling scheme and jump back to (3). If the termination conditions are satisfied, stop calculations, accepting the current best value X* as the final (‘optimal’) solution.
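A minimal sketch of this loop for a one-dimensional minimization problem, assuming a Gaussian neighborhood move, the Metropolis acceptance criterion, and a geometric cooling scheme (all standard but illustrative choices; step sizes and temperatures are problem-dependent):

```python
import math
import random

def simulated_annealing(f, x0, T0=10.0, cooling=0.9, t_min=1e-3,
                        steps_per_temp=50, step_size=1.0, seed=0):
    """Minimize f starting from x0, tracking the best point X* seen."""
    rng = random.Random(seed)
    x_best = x = x0
    T = T0  # step (1): initial temperature
    while T > t_min:  # step (6): stop once the system is "frozen"
        for _ in range(steps_per_temp):  # step (5): equilibrium at this T
            x_new = x + rng.gauss(0.0, step_size)  # step (3): neighbor
            delta = f(x_new) - f(x)
            # Step (4), Metropolis criterion: always accept improvements,
            # accept worse points with probability exp(-delta / T)
            if delta <= 0 or rng.random() < math.exp(-delta / T):
                x = x_new
                if f(x) < f(x_best):
                    x_best = x
        T *= cooling  # geometric cooling scheme
    return x_best
```

At high temperatures almost any move is accepted, giving broad exploration; as T falls, the acceptance rule approaches pure descent, so the search settles into a (hopefully near-global) minimum.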
Numerical Example: Knapsack Problem
One of the most common applications of heuristic algorithms is the Knapsack Problem, in which a given set of items (each with a mass and a value) must be selected to maximize total value while staying under a mass limit. A simple heuristic is the greedy approximation algorithm, which sorts the items by value per unit mass and then includes the items with the highest value per unit mass while space remains.
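This greedy rule can be sketched as follows, assuming an unbounded number of units of each item is available (as in the example that follows). The greedy result is not guaranteed to be optimal in general:

```python
def greedy_knapsack(weights, values, capacity):
    """Greedy heuristic: take as many whole units as fit of each item,
    in decreasing order of value per unit weight."""
    order = sorted(range(len(weights)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    counts = [0] * len(weights)
    total = 0.0
    for i in order:
        units = capacity // weights[i]  # whole units that still fit
        counts[i] = units
        capacity -= units * weights[i]
        total += units * values[i]
    return total, counts
```

On the instance in the example below (weights 7, 5, 4, 3, 1; values 9, 4, 3, 2, 0.5; capacity 13), this greedy choice happens to attain the same value, 13.5, as the exact dynamic-programming solution, though for other instances it can fall short.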
Example
The following table specifies the weights and values per unit of five different products held in storage. The quantity of each product is unlimited. A plane with a weight capacity of 13 is to be used, for one trip only, to transport the products. We would like to know how many units of each product should be loaded onto the plane to maximize the value of goods shipped.
Product (i) | Weight per unit (wi) | Value per unit (vi) |
---|---|---|
1 | 7 | 9 |
2 | 5 | 4 |
3 | 4 | 3 |
4 | 3 | 2 |
5 | 1 | 0.5 |
Solution:
(a) Stages:
We view each type of product as a stage, so there are 5 stages. We also add a sixth stage representing the endpoint after all decisions have been made
(b) States:
We can view the remaining capacity as the state, so there are 14 states in each stage: 0, 1, 2, 3, …, 13
(c) Possible decisions at each stage:
Suppose we are in state s in stage n (n < 6), so there are s units of capacity remaining. Then the possible number of units we can pack is:
j = 0, 1, …, ⌊s/wn⌋
For each such decision j, we draw an arc from state s in stage n to state s − j·wn in stage n + 1, with a corresponding benefit j·vn. We are trying to find a maximum-benefit path from state 13 in stage 1 to stage 6.
(d) Optimization function:
Let fn(s) be the value of the maximum benefit possible with items of type n or greater using total capacity at most s
(e) Boundary conditions:
The sixth stage should have all zeros, that is, f6(s) = 0 for each s = 0, 1, …, 13
(f) Recurrence relation:
fn(s) = max {j·vn + fn+1(s − j·wn) : j = 0, 1, …, ⌊s/wn⌋}
(g) Compute:
The solution will not show all the computation steps. Instead, a few cases are given below to illustrate the idea.
- For stage 5, f5(s) = max {0.5j + 0 : j = 0, 1, …, ⌊s/1⌋} = 0.5s, because given the all-zero states in stage 6, the maximum possible value is obtained by using up all of the remaining capacity s.
- For stage 4, state 7,
f4(7) = max {j·v4 + f5(7 − j·w4) : j = 0, 1, …, ⌊7/w4⌋}
= max {0 + 3.5; 2 + 2; 4 + 0.5}
= 4.5
Using the recurrence relation above, we get the following table:
Unused capacity s | f1(s) | Type 1 opt | f2(s) | Type 2 opt | f3(s) | Type 3 opt | f4(s) | Type 4 opt | f5(s) | Type 5 opt | f6(s) |
---|---|---|---|---|---|---|---|---|---|---|---|
13 | 13.5 | 1 | 10 | 2 | 9.5 | 3 | 8.5 | 4 | 6.5 | 13 | 0 |
12 | 13 | 1 | 9 | 2 | 9 | 3 | 8 | 4 | 6 | 12 | 0 |
11 | 12 | 1 | 8.5 | 2 | 8 | 2 | 7 | 3 | 5.5 | 11 | 0 |
10 | 11 | 1 | 8 | 2 | 7 | 2 | 6.5 | 3 | 5 | 10 | 0 |
9 | 10 | 1 | 7 | 1 | 6.5 | 2 | 6 | 3 | 4.5 | 9 | 0 |
8 | 9.5 | 1 | 6 | 1 | 6 | 2 | 5 | 2 | 4 | 8 | 0 |
7 | 9 | 1 | 5 | 1 | 5 | 1 | 4.5 | 2 | 3.5 | 7 | 0 |
6 | 4.5 | 0 | 4.5 | 1 | 4 | 1 | 4 | 2 | 3 | 6 | 0 |
5 | 4 | 0 | 4 | 1 | 3.5 | 1 | 3 | 1 | 2.5 | 5 | 0 |
4 | 3 | 0 | 3 | 0 | 3 | 1 | 2.5 | 1 | 2 | 4 | 0 |
3 | 2 | 0 | 2 | 0 | 2 | 0 | 2 | 1 | 1.5 | 3 | 0 |
2 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 2 | 0 |
1 | 0.5 | 0 | 0.5 | 0 | 0.5 | 0 | 0.5 | 0 | 0.5 | 1 | 0 |
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Optimal solution: The maximum benefit possible is 13.5. Tracing forward to recover the optimal solution: the optimal decision corresponding to the entry 13.5 for f1(13) is 1, so we should pack 1 unit of type 1. That leaves 6 units of capacity, so we look at f2(6) = 4.5, whose optimal decision is to pack 1 unit of type 2. After this we have 6 − 5 = 1 unit of capacity remaining, and the optimal decisions for f3(1) and f4(1) are both 0, which means we pack no units of type 3 or type 4. Hence we go to stage 5 and find that the optimal decision for f5(1) is 1, so we should pack 1 unit of type 5. This gives the entire optimal solution, as shown in the table below:
Optimal solution
Product (i) | Number of units |
---|---|
1 | 1 |
2 | 1 |
5 | 1 |
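The stagewise recurrence worked through above can be reproduced with a short dynamic-programming routine; this sketch mirrors the table construction, with a backward pass over the stages followed by a forward trace to recover the unit counts:

```python
def knapsack_dp(weights, values, capacity):
    """Backward DP over stages: f[n][s] is the maximum benefit using item
    types n..N with capacity s, via f_n(s) = max_j {j*v_n + f_{n+1}(s - j*w_n)}."""
    N = len(weights)
    # Boundary condition: f[N][s] = 0 for all s (the sixth stage in the example)
    f = [[0.0] * (capacity + 1) for _ in range(N + 1)]
    opt = [[0] * (capacity + 1) for _ in range(N)]
    for n in range(N - 1, -1, -1):
        for s in range(capacity + 1):
            best, best_j = f[n + 1][s], 0  # j = 0: pack nothing of type n
            for j in range(1, s // weights[n] + 1):
                val = j * values[n] + f[n + 1][s - j * weights[n]]
                if val > best:
                    best, best_j = val, j
            f[n][s] = best
            opt[n][s] = best_j
    # Trace forward from full capacity to recover the optimal unit counts
    units, s = [0] * N, capacity
    for n in range(N):
        units[n] = opt[n][s]
        s -= units[n] * weights[n]
    return f[0][capacity], units
```

Running this on the data above reproduces the hand computation: a maximum benefit of 13.5 from one unit each of products 1, 2, and 5.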
Applications
Heuristic algorithms have become an important technique for solving current real-world problems. Their applications range from optimizing the power flow in modern power systems[15] to groundwater pumping simulation models[16]. Heuristic optimization techniques are increasingly applied in environmental engineering as well, such as in the design of a multilayer sorptive barrier system for landfill liners.[17] Heuristic algorithms have also been applied in the fields of bioinformatics, computational biology, and systems biology.[18]
Conclusion
Heuristic algorithms are not a panacea, but they are handy tools when exact methods cannot be applied. Heuristics provide flexible techniques for solving hard problems, with the advantages of simple implementation and low computational cost. Over the years, we have seen a progression in heuristics with the development of hybrid systems that combine selected features from various types of heuristic algorithms, such as tabu search, simulated annealing, and genetic or evolutionary computing. Future research will continue to expand the capabilities of existing heuristics to solve complex real-world problems.
References
- ↑ Kokash, N. (2008). An introduction to heuristic algorithms. Department of Informatics and Telecommunications (2005): pages 1-8. https://www.researchgate.net/publication/228573156.
- ↑ Ezugwu, A.E., Shukla, A.K., Nath, R. et al. (2021). Metaheuristics: a comprehensive overview and classification along with bibliometric analysis. Artif Intell Rev 54, 4237–4316. https://doi.org/10.1007/s10462-020-09952-0
- ↑ Salhi, S., Thompson, J. (2022). An Overview of Heuristics and Metaheuristics. In: Salhi, S., Boylan, J. (eds) The Palgrave Handbook of Operations Research. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-030-96935-6_11
- ↑ Silver, E. (2004). An overview of heuristic solution methods. J Oper Res Soc 55, 936–956. https://doi.org/10.1057/palgrave.jors.2601758
- ↑ Osman, I.H., Kelly, J.P. (1996). Meta-Heuristics: An Overview. In: Osman, I.H., Kelly, J.P. (eds) Meta-Heuristics. Springer, Boston, MA. https://doi.org/10.1007/978-1-4613-1361-8_1
- ↑ Xiao, N. (2009). Evolutionary Algorithms, International Encyclopedia of Human Geography, Elsevier, Pages 660-665. https://doi.org/10.1016/B978-008044910-4.00525-3.
- ↑ J.H. Holland (1975) Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, Michigan; re-issued by MIT Press (1992).
- ↑ Optimal design of heat exchanger networks, Editor(s): Wilfried Roetzel, Xing Luo, Dezhen Chen, Design and Operation of Heat Exchangers and their Networks, Academic Press, 2020, Pages 231-317, ISBN 9780128178942, https://doi.org/10.1016/B978-0-12-817894-2.00006-6.
- ↑ Wang FS., Chen LH. (2013) Genetic Algorithms. In: Dubitzky W., Wolkenhauer O., Cho KH., Yokota H. (eds) Encyclopedia of Systems Biology. Springer, New York, NY. https://doi.org/10.1007/978-1-4419-9863-7_412
- ↑ Fred Glover (1986). "Future Paths for Integer Programming and Links to Artificial Intelligence". Computers and Operations Research. 13 (5): 533–549,https://doi.org/10.1016/0305-0548(86)90048-1
- ↑ Optimization of Preventive Maintenance Program for Imaging Equipment in Hospitals, Editor(s): Zdravko Kravanja, Miloš Bogataj, Computer-Aided Chemical Engineering, Elsevier, Volume 38, 2016, Pages 1833-1838, ISSN 1570-7946, ISBN 9780444634283, https://doi.org/10.1016/B978-0-444-63428-3.50310-6.
- ↑ Glover, Fred, and Gary A Kochenberger. Handbook Of Metaheuristics. Kluwer Academic Publishers, 2003.
- ↑ Kirkpatrick, S., Gelatt, C., & Vecchi, M. (1983). Optimization by Simulated Annealing. Science, 220(4598), 671-680. Retrieved November 25, 2020, from http://www.jstor.org/stable/1690046
- ↑ Brief review of static optimization methods, Editor(s): Stanisław Sieniutycz, Jacek Jeżowski, Energy Optimization in Process Systems and Fuel Cells (Third Edition), Elsevier, 2018, Pages 1-41, ISBN 9780081025574, https://doi.org/10.1016/B978-0-08-102557-4.00001-3.
- ↑ NIU, M., WAN, C. & Xu, Z. A review on applications of heuristic optimization algorithms for optimal power flow in modern power systems. J. Mod. Power Syst. Clean Energy 2, 289–297 (2014), https://doi.org/10.1007/s40565-014-0089-4
- ↑ J. L. Wang, Y. H. Lin and M. D. Lin, "Application of heuristic algorithms on groundwater pumping source identification problems," 2015 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), Singapore, 2015, pp. 858-862, https://doi.org/10.1109/IEEM.2015.7385770.
- ↑ Matott, L. Shawn, et al. “Application of Heuristic Optimization Techniques and Algorithm Tuning to Multilayered Sorptive Barrier Design.” Environmental Science & Technology, vol. 40, no. 20, 2006, pp. 6354–6360., https://doi.org/10.1021/es052560+.
- ↑ Larranaga P, Calvo B, Santana R, Bielza C, Galdiano J, Inza I, Lozano JA, Armananzas R, Santafe G, Perez A, Robles V (2006) Machine learning in bioinformatics. Brief Bioinform 7(1):86–112