Particle swarm optimization: Difference between revisions
Revision as of 22:45, 10 December 2024
Author: David Schluneker (dms565), Thomas Ploetz (tep52), Michael Sorensen (mds385), Amrith Kumaar (ak836), Andrew Duffy (ajd296) (ChemE 6800 Fall 2024)
Stewards: Nathan Preuss, Wei-Han Chen, Tianqi Xiao, Guoqing Hu
Introduction
Particle Swarm Optimization (PSO) is inspired by nature, specifically by groups or swarms of natural creatures. It uses multiple “particles” distributed across a solution space that gradually converge toward a global optimum. The algorithm does not require calculation of the function’s gradient or slope; instead, it uses simple weights and the contributions of the rest of the swarm to provide a “fitness” value [2], yielding a technique that depends only on simple calculations and a set of starting points.
The PSO algorithm was formulated and first presented by Kennedy and Eberhart specifically to find solutions for discontinuous, non-differentiable functions. The concept was introduced at the IEEE International Conference on Neural Networks in 1995. The pair later published the book Swarm Intelligence in 2001, which further expanded on the concept [14], [23]. Others have since summarized the algorithm [1], [7], [15]. The algorithm does not rely on formal methodology or mathematical rigor to identify an optimal point; instead, it uses a heuristic approach that explores a wide range of starting locations and conditions. These features allow the PSO algorithm to optimize complex functions efficiently.
The biological analogy for a PSO algorithm is a flock of birds, a school of fish, or a colony of ants searching for food. Each animal, or particle, is an independent entity conducting its own search for an optimal result. However, each particle is also influenced and guided by the motion of the larger swarm. The results a particle has already seen, together with the influence of the larger swarm, help all the particles converge to an optimal solution.
In more technical terms, a PSO algorithm must first be configured with the number of particles and iterations the user wants to use. More particles improve the likelihood of convergence and help ensure the global optimum is found through a more complete search, at the cost of additional computing resources. The PSO algorithm lets the user set different “weights,” step sizes, and speeds for the particles, controlling how strongly each particle is influenced by its own past best points (the cognitive term) and by the other particles in the swarm (the social term), as well as how large a step is taken each iteration. These weights allow the algorithm to be tuned to balance performance against computational requirements.
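The particle, weight, and cognitive/social interactions described above can be sketched in a short program. The following is a minimal illustration, not a definitive implementation: the parameter values (inertia w, cognitive weight c1, social weight c2) and the function and bounds in the example are illustrative assumptions, not values prescribed by Kennedy and Eberhart.

```python
import random

def pso(f, bounds, n_particles=30, n_iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over box bounds [(lo, hi), ...] with a basic PSO sketch."""
    dim = len(bounds)
    # Initialize particle positions uniformly in the box, velocities at zero.
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # each particle's own best (cognitive memory)
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best (social memory)

    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:                # improve personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:               # improve global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Illustrative usage: minimize the 2-D sphere function, whose optimum is the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), [(-5, 5)] * 2)
```

Note that no gradient of f is ever computed; each particle only evaluates the fitness f at its current position, and movement is driven entirely by the weighted cognitive and social terms.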
One goal of studying this topic is to learn more about an algorithm capable of optimizing non-differentiable functions, which arise in many modern problems. Many optimization algorithms rely heavily on gradient information, so it is interesting to see an algorithm that requires no gradient computations. The algorithm also remains in active use across many fields [15], [24]: PSO is regularly applied to optimization problems in the oil and gas industry, antenna design [25], and electric power distribution [10]. Even as real-world functions become very complex, the PSO algorithm can still perform an effective search. This relevance to modern optimization problems makes PSO an interesting research area.