<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://optimization.cbe.cornell.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Fall2024+Wiki+Team26</id>
	<title>Cornell University Computational Optimization Open Textbook - Optimization Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://optimization.cbe.cornell.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Fall2024+Wiki+Team26"/>
	<link rel="alternate" type="text/html" href="https://optimization.cbe.cornell.edu/index.php?title=Special:Contributions/Fall2024_Wiki_Team26"/>
	<updated>2026-05-04T12:21:59Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.0</generator>
	<entry>
		<id>https://optimization.cbe.cornell.edu/index.php?title=Differential_evolution&amp;diff=7718</id>
		<title>Differential evolution</title>
		<link rel="alternate" type="text/html" href="https://optimization.cbe.cornell.edu/index.php?title=Differential_evolution&amp;diff=7718"/>
		<updated>2024-12-16T03:15:59Z</updated>

		<summary type="html">&lt;p&gt;Fall2024 Wiki Team26: /* Visualization */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Author: Rohit Kumar (rk787) (ChemE 6800 Fall 2024)&lt;br /&gt;
&lt;br /&gt;
Stewards: Nathan Preuss, Wei-Han Chen, Tianqi Xiao, Guoqing Hu&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Differential Evolution (DE) is a robust and efficient optimization algorithm widely used for solving non-linear, non-differentiable, and multimodal optimization problems. It was first introduced by R. Storn and K. Price in their seminal work on global optimization over continuous spaces &amp;lt;ref&amp;gt;R. Storn, K. Price, &amp;quot;Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces,&amp;quot; &#039;&#039;Journal of Global Optimization&#039;&#039;, vol. 11, pp. 341-359, 1997.&amp;lt;/ref&amp;gt;, where they described DE as:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous-space functions.&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A heuristic is an approximation method that produces satisfactory solutions in a reasonable amount of time when classical methods cannot find an exact answer or are too slow. Unlike gradient descent, DE does not require the objective function to be differentiable.&lt;br /&gt;
&lt;br /&gt;
The key idea is to evolve a population of candidate solutions over successive generations, using scaled differences between population vectors to perturb existing solutions. This approach allows DE to adapt its search to the shape of the objective function, making it particularly effective for non-linear, non-differentiable, and multimodal optimization problems.&lt;br /&gt;
&lt;br /&gt;
The algorithm is renowned for its simplicity and efficiency. Its primary advantages include:&lt;br /&gt;
&lt;br /&gt;
* Minimal parameter tuning.&lt;br /&gt;
* High robustness.&lt;br /&gt;
* The ability to escape local optima and converge toward global solutions.&lt;br /&gt;
&lt;br /&gt;
In DE, each candidate solution in the evolving population is represented as a vector of real-valued numbers. The algorithm refines these solutions through a series of iterations built upon three main steps:&lt;br /&gt;
&lt;br /&gt;
* Mutation&lt;br /&gt;
* Crossover&lt;br /&gt;
* Selection&lt;br /&gt;
&lt;br /&gt;
DE has become highly popular for applications in such fields as engineering design, machine learning, signal processing, and financial modeling. Despite its simplicity, Differential Evolution is competitive against many other optimization techniques, often performing better on challenging problems.&lt;br /&gt;
&lt;br /&gt;
== Algorithm Discussion ==&lt;br /&gt;
Differential Evolution operates through four main steps: initialization, mutation, crossover, and selection, as outlined in optimization literature &amp;lt;ref&amp;gt;J. Nocedal, S.J. Wright, &#039;&#039;Numerical Optimization&#039;&#039;. Springer, 1999.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;A. Ben-Tal, L.E. Ghaoui, A. Nemirovski, &#039;&#039;Robust Optimization&#039;&#039;. Princeton University Press, 2009.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Initialization ===&lt;br /&gt;
The initial population is generated randomly within the bounds of the search space: &amp;lt;math&amp;gt;x_{i,j} \sim U(x_j^{\text{min}}, x_j^{\text{max}})&amp;lt;/math&amp;gt; where:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;x_{i,j}&amp;lt;/math&amp;gt; represents the &amp;lt;math&amp;gt;j^{\text{th}}&amp;lt;/math&amp;gt; parameter of the &amp;lt;math&amp;gt;i^{\text{th}}&amp;lt;/math&amp;gt; individual.&lt;br /&gt;
* &amp;lt;math&amp;gt;U(a, b)&amp;lt;/math&amp;gt; denotes a sample drawn uniformly at random from the interval &amp;lt;math&amp;gt;[a, b]&amp;lt;/math&amp;gt;.&lt;br /&gt;
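For a concrete sketch, this initialization can be written with NumPy; the population size, dimensionality, and bounds below are illustrative assumptions, not values prescribed by the article.&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Illustrative assumptions: NP = 5 individuals, D = 2 parameters,
# with per-parameter bounds x_min and x_max.
NP, D = 5, 2
x_min = np.array([-10.0, -10.0])
x_max = np.array([10.0, 10.0])

# x[i, j] ~ U(x_min[j], x_max[j])
population = x_min + rng.random((NP, D)) * (x_max - x_min)
```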
&lt;br /&gt;
=== Mutation ===&lt;br /&gt;
A mutant vector &amp;lt;math&amp;gt;v_i&amp;lt;/math&amp;gt; is generated by combining three randomly chosen vectors: &amp;lt;math&amp;gt;v_i = x_{r1} + F \cdot (x_{r2} - x_{r3})&amp;lt;/math&amp;gt; where:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;F&amp;lt;/math&amp;gt; is the mutation (scaling) factor, typically chosen in &amp;lt;math&amp;gt;[0, 2]&amp;lt;/math&amp;gt;.&lt;br /&gt;
* &amp;lt;math&amp;gt;x_{r1}, x_{r2}, x_{r3}&amp;lt;/math&amp;gt; are distinct vectors chosen at random from the population, all different from &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt;.&lt;br /&gt;
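As a sketch, with F = 0.8 and donor values borrowed from the numerical example later in this article (3, -2, and 8), the mutation step reduces to a one-line computation:&lt;br /&gt;

```python
import numpy as np

F = 0.8  # mutation factor (illustrative value)

# Three distinct population members (values borrowed from the numerical example)
x_r1 = np.array([3.0])
x_r2 = np.array([-2.0])
x_r3 = np.array([8.0])

# v_i = x_r1 + F * (x_r2 - x_r3)
v_i = x_r1 + F * (x_r2 - x_r3)
```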
&lt;br /&gt;
=== Crossover ===&lt;br /&gt;
A trial vector &amp;lt;math&amp;gt;u_i&amp;lt;/math&amp;gt; is created by mixing the mutant vector &amp;lt;math&amp;gt;v_i&amp;lt;/math&amp;gt; and the target vector &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;&lt;br /&gt;
u_{i,j} =&lt;br /&gt;
\begin{cases} &lt;br /&gt;
v_{i,j}, &amp;amp; \text{if } \text{rand}(0, 1) \leq CR \text{ or } j = j_{\text{rand}}, \\&lt;br /&gt;
x_{i,j}, &amp;amp; \text{otherwise}.&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt; where:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;CR&amp;lt;/math&amp;gt; is the crossover probability.&lt;br /&gt;
* &amp;lt;math&amp;gt;j_{\text{rand}}&amp;lt;/math&amp;gt; is a randomly chosen index.&lt;br /&gt;
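A common way to realize this rule is binomial crossover over NumPy arrays; the target vector, mutant vector, and CR = 0.9 below are illustrative assumptions.&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(seed=1)

CR = 0.9                           # crossover probability (illustrative value)
x_i = np.array([1.0, 2.0, 3.0])    # target vector (hypothetical)
v_i = np.array([4.0, 5.0, 6.0])    # mutant vector (hypothetical)

j_rand = rng.integers(len(x_i))    # this index always takes the mutant component
mask = rng.random(len(x_i)) <= CR  # rand(0, 1) <= CR, componentwise
mask[j_rand] = True

# u_{i,j} = v_{i,j} where the mask holds, x_{i,j} otherwise
u_i = np.where(mask, v_i, x_i)
```

Forcing index j_rand to come from the mutant guarantees that the trial vector differs from the target in at least one component.&lt;br /&gt;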
&lt;br /&gt;
=== Selection ===&lt;br /&gt;
The trial vector &amp;lt;math&amp;gt;u_i&amp;lt;/math&amp;gt; replaces the target vector &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; in the population if it yields a better objective function value: &amp;lt;math&amp;gt;&lt;br /&gt;
x_i =&lt;br /&gt;
\begin{cases}&lt;br /&gt;
u_i, &amp;amp; \text{if } f(u_i) \leq f(x_i), \\&lt;br /&gt;
x_i, &amp;amp; \text{otherwise}.&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Pseudocode ===&lt;br /&gt;
The pseudocode of Differential Evolution is shown below:&amp;lt;pre&amp;gt;&lt;br /&gt;
Initialize population P randomly&lt;br /&gt;
While termination criterion not met:&lt;br /&gt;
    For each individual x_i in P:&lt;br /&gt;
        Generate mutant vector v_i&lt;br /&gt;
        Perform crossover to create trial vector u_i&lt;br /&gt;
        If f(u_i) &amp;lt;= f(x_i): Replace x_i with u_i&lt;br /&gt;
Return the best solution&lt;br /&gt;
&amp;lt;/pre&amp;gt;The pseudocode presented in this article is adapted from S. Das and P.N. Suganthan&#039;s survey on the state-of-the-art in DE &amp;lt;ref&amp;gt;S. Das, P.N. Suganthan, &amp;quot;Differential Evolution: A Survey of the State-of-the-Art,&amp;quot; &#039;&#039;IEEE Transactions on Evolutionary Computation&#039;&#039;, vol. 15, no. 1, pp. 4-31, 2011.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
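The pseudocode above can be translated into a compact, runnable Python sketch; the population size, F, CR, generation count, and sphere test function below are illustrative choices, not values prescribed by the sources.&lt;br /&gt;

```python
import numpy as np

def differential_evolution(f, bounds, NP=20, F=0.8, CR=0.9, generations=100, seed=0):
    """Minimal DE/rand/1/bin sketch. bounds is a list of (min, max) pairs."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    D = len(bounds)
    pop = lo + rng.random((NP, D)) * (hi - lo)            # initialization
    fitness = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(NP):
            # Mutation: three distinct donors, all different from i
            r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
            v = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            # Crossover: binomial, with one guaranteed mutant component
            j_rand = rng.integers(D)
            mask = rng.random(D) <= CR
            mask[j_rand] = True
            u = np.where(mask, v, pop[i])
            # Selection: keep the trial vector if it is at least as good
            fu = f(u)
            if fu <= fitness[i]:
                pop[i], fitness[i] = u, fu
    best = int(np.argmin(fitness))
    return pop[best], fitness[best]

# Example: minimize the sphere function f(x) = sum_j x_j^2 over [-10, 10]^2
best_x, best_f = differential_evolution(lambda x: float(np.sum(x * x)),
                                        bounds=[(-10, 10), (-10, 10)])
```

With these settings the best fitness on the two-dimensional sphere function typically falls far below 0.01.&lt;br /&gt;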
&lt;br /&gt;
== Numerical Example ==&lt;br /&gt;
We aim to minimize the function &amp;lt;math&amp;gt;f(x) = x^2&amp;lt;/math&amp;gt;, a common benchmark for optimization algorithms &amp;lt;ref&amp;gt;J. Birge, F. Louveaux, &#039;&#039;Introduction to Stochastic Programming&#039;&#039;. Springer, 2011.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;A. Ben-Tal, L.E. Ghaoui, A. Nemirovski, &#039;&#039;Robust Optimization&#039;&#039;. Princeton University Press, 2009.&amp;lt;/ref&amp;gt;. The example uses key principles of DE, including mutation and crossover, as described in foundational texts like R. Storn and K. Price’s work &amp;lt;ref&amp;gt;R. Storn, K. Price, &amp;quot;Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces,&amp;quot; &#039;&#039;Journal of Global Optimization&#039;&#039;, vol. 11, pp. 341-359, 1997.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Initial Setup ===&lt;br /&gt;
The initial population and fitness values are: &amp;lt;math&amp;gt;P = \{-7, 3, -2, 6, 8\}, \quad f(P) = \{49, 9, 4, 36, 64\}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Iteration 1 ===&lt;br /&gt;
&#039;&#039;&#039;Individual 1&#039;&#039;&#039; (&amp;lt;math&amp;gt;x = -7&amp;lt;/math&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
* Mutant vector: &amp;lt;math&amp;gt;v_1 = 3 + 0.8 \cdot (-2 - 8) = -5&amp;lt;/math&amp;gt;&lt;br /&gt;
* Trial vector: &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;u_1 = -5&amp;lt;/math&amp;gt;&lt;br /&gt;
* Fitness: &amp;lt;math&amp;gt;f(u_1) = 25&amp;lt;/math&amp;gt;&lt;br /&gt;
* Since &amp;lt;math&amp;gt;f(u_1) &amp;lt; f(x_1)&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;25 &amp;lt; 49&amp;lt;/math&amp;gt;), replace &amp;lt;math&amp;gt;x_1&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;u_1&amp;lt;/math&amp;gt;&lt;br /&gt;
* Updated value: &amp;lt;math&amp;gt;x_1 = -5&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Individual 2&#039;&#039;&#039; (&amp;lt;math&amp;gt;x = 3&amp;lt;/math&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
* Mutant vector: &amp;lt;math&amp;gt;v_2 = -2 + 0.8 \cdot (6 - 8) = -3.6&amp;lt;/math&amp;gt;&lt;br /&gt;
* Trial vector: &amp;lt;math&amp;gt;u_2 = 3&amp;lt;/math&amp;gt; (crossover did not occur, so the trial vector equals the target)&lt;br /&gt;
* Fitness: &amp;lt;math&amp;gt;f(u_2) = 9&amp;lt;/math&amp;gt;&lt;br /&gt;
* No replacement needed, since &amp;lt;math&amp;gt;u_2 = x_2&amp;lt;/math&amp;gt; and therefore &amp;lt;math&amp;gt;f(u_2) = f(x_2)&amp;lt;/math&amp;gt;&lt;br /&gt;
* Updated value: &amp;lt;math&amp;gt;x_2 = 3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Individual 3&#039;&#039;&#039; (&amp;lt;math&amp;gt;x = -2&amp;lt;/math&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
* Mutant vector: &amp;lt;math&amp;gt;v_3 = 6 + 0.8 \cdot (8 - (-7)) = 18&amp;lt;/math&amp;gt;&lt;br /&gt;
* Trial vector: &amp;lt;math&amp;gt;u_3 = -2&amp;lt;/math&amp;gt; (crossover did not occur, so the trial vector equals the target)&lt;br /&gt;
* Fitness: &amp;lt;math&amp;gt;f(u_3) = 4&amp;lt;/math&amp;gt;&lt;br /&gt;
* No replacement needed, since &amp;lt;math&amp;gt;u_3 = x_3&amp;lt;/math&amp;gt; and therefore &amp;lt;math&amp;gt;f(u_3) = f(x_3)&amp;lt;/math&amp;gt;&lt;br /&gt;
* Updated value: &amp;lt;math&amp;gt;x_3 = -2&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Individual 4&#039;&#039;&#039; (&amp;lt;math&amp;gt;x = 6&amp;lt;/math&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
* Mutant vector: &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;v_4 = -7 + 0.8 \cdot (3 - (-2)) = -3&amp;lt;/math&amp;gt;&lt;br /&gt;
* Trial vector: &amp;lt;math&amp;gt;u_4 = -3&amp;lt;/math&amp;gt;&lt;br /&gt;
* Fitness: &amp;lt;math&amp;gt;f(u_4) = 9&amp;lt;/math&amp;gt;&lt;br /&gt;
* Since &amp;lt;math&amp;gt;f(u_4) &amp;lt; f(x_4)&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;9 &amp;lt; 36&amp;lt;/math&amp;gt;), replace &amp;lt;math&amp;gt;x_4&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;u_4&amp;lt;/math&amp;gt;&lt;br /&gt;
* Updated value: &amp;lt;math&amp;gt;x_4 = -3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Individual 5&#039;&#039;&#039; (&amp;lt;math&amp;gt;x = 8&amp;lt;/math&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
* Mutant vector: &amp;lt;math&amp;gt;v_5 = 3 + 0.8 \cdot (-7 - (-2)) = -1&amp;lt;/math&amp;gt;&lt;br /&gt;
* Trial vector: &amp;lt;math&amp;gt;u_5 = -1&amp;lt;/math&amp;gt;&lt;br /&gt;
* Fitness: &amp;lt;math&amp;gt;f(u_5) = 1&amp;lt;/math&amp;gt;&lt;br /&gt;
* Since &amp;lt;math&amp;gt;f(u_5) &amp;lt; f(x_5)&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;1 &amp;lt; 64&amp;lt;/math&amp;gt;), replace &amp;lt;math&amp;gt;x_5&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;u_5&amp;lt;/math&amp;gt;&lt;br /&gt;
* Updated value: &amp;lt;math&amp;gt;x_5 = -1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After the first iteration, the updated population is: &amp;lt;math&amp;gt;P = \{-5, 3, -2, -3, -1\}, \quad f(P) = \{25, 9, 4, 9, 1\}&amp;lt;/math&amp;gt;&lt;br /&gt;
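These hand calculations can be replayed in a few lines of Python. The donor indices and crossover outcomes are read off from the worked steps above; they are fixed here rather than drawn at random:&lt;br /&gt;

```python
F = 0.8
f = lambda x: x ** 2

P_old = [-7, 3, -2, 6, 8]  # population at the start of iteration 1

# (r1, r2, r3, crossover occurred) for each individual, read off from the
# worked steps above; donors always refer to the start-of-iteration population.
steps = [
    (1, 2, 4, True),   # v_1 = 3 + 0.8 * (-2 - 8)    = -5
    (2, 3, 4, False),  # v_2 = -2 + 0.8 * (6 - 8)    = -3.6 (crossover fails)
    (3, 4, 0, False),  # v_3 = 6 + 0.8 * (8 - (-7))  = 18   (crossover fails)
    (0, 1, 2, True),   # v_4 = -7 + 0.8 * (3 - (-2)) = -3
    (1, 0, 2, True),   # v_5 = 3 + 0.8 * (-7 - (-2)) = -1
]

P_new = []
for i, (r1, r2, r3, crossed) in enumerate(steps):
    v = P_old[r1] + F * (P_old[r2] - P_old[r3])           # mutation
    u = v if crossed else P_old[i]                        # crossover outcome
    P_new.append(u if f(u) <= f(P_old[i]) else P_old[i])  # selection
```

Running this reproduces the updated population {-5, 3, -2, -3, -1} and fitness values {25, 9, 4, 9, 1}.&lt;br /&gt;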
&lt;br /&gt;
=== Iterations 2–5 ===&lt;br /&gt;
The process is repeated for four more iterations. For each individual, the mutant vector, trial vector, and fitness are calculated. Individuals in the population are replaced whenever &amp;lt;math&amp;gt;f(u_i) &amp;lt; f(x_i)&amp;lt;/math&amp;gt;. Over successive iterations, the population evolves toward the minimum.&lt;br /&gt;
&lt;br /&gt;
=== Final Population ===&lt;br /&gt;
After five iterations, the population converges to: &amp;lt;math&amp;gt;P = \{0, 0, 0, 0, 0\}, \quad f(P) = \{0, 0, 0, 0, 0\}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Visualization ==&lt;br /&gt;
The progression of the population during the Differential Evolution process is illustrated in the figures below. Each figure shows the distribution of the population at a specific iteration, demonstrating how the algorithm converges toward the global minimum of the function &amp;lt;math&amp;gt;f(x) = x^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Figure 1&#039;&#039;&#039;: Initial population distribution before any iterations. [https://drive.google.com/file/d/12bfDfZCRMy0a85zDrEXwsifuWDf0Wcf2/view?usp=share_link View Figure 1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Figure 2&#039;&#039;&#039;: Population after the first iteration, showing improvement toward the global minimum. [https://drive.google.com/file/d/1A-miB_wZ9I13dNli09vaL81s_DkkYFAE/view?usp=share_link View Figure 2]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Figure 3&#039;&#039;&#039;: Population after the second iteration, with further refinement of solutions. [https://drive.google.com/file/d/1hMAQ_Cxbe3JFImiCzgnkKh2lnk2D1R7k/view?usp=share_link View Figure 3]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Figure 4&#039;&#039;&#039;: Population after the fourth iteration, nearing convergence. [https://drive.google.com/file/d/132i2nEUx1PP6gjCv17TPMMjtCvCxi1VT/view?usp=sharing View Figure 4]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Figure 5&#039;&#039;&#039;: Final population converging to the global minimum at &amp;lt;math&amp;gt;x = 0&amp;lt;/math&amp;gt;. [https://drive.google.com/file/d/1itJ6gvHnp_WdOeHAWXBHM_F2IEYm9WkZ/view?usp=sharing View Figure 5]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Applications ==&lt;br /&gt;
Differential Evolution has been applied across various domains, including:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Engineering Design&#039;&#039;&#039;: Applications include minimizing the weight of tension/compression springs, following the approaches outlined by R. Storn and K. Price &amp;lt;ref&amp;gt;R. Storn, K. Price, &amp;quot;Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces,&amp;quot; &#039;&#039;Journal of Global Optimization&#039;&#039;, vol. 11, pp. 341-359, 1997.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;S. Das, P.N. Suganthan, &amp;quot;Differential Evolution: A Survey of the State-of-the-Art,&amp;quot; &#039;&#039;IEEE Transactions on Evolutionary Computation&#039;&#039;, vol. 15, no. 1, pp. 4-31, 2011.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
* &#039;&#039;&#039;Water Resource Management&#039;&#039;&#039;: Optimization strategies for reservoirs, as seen in recent research &amp;lt;ref&amp;gt;J. Nocedal, S.J. Wright, &#039;&#039;Numerical Optimization&#039;&#039;. Springer, 1999.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;A. Ben-Tal, L.E. Ghaoui, A. Nemirovski, &#039;&#039;Robust Optimization&#039;&#039;. Princeton University Press, 2009.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
* &#039;&#039;&#039;Machine Learning&#039;&#039;&#039;: Parameter optimization for neural networks &amp;lt;ref&amp;gt;T. Hastie, R. Tibshirani, J. Friedman, &#039;&#039;The Elements of Statistical Learning&#039;&#039;. Springer, 2009.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
* &#039;&#039;&#039;Medical Applications&#039;&#039;&#039;: DE’s effectiveness in radiotherapy optimization is discussed in Abbass (2002) &amp;lt;ref&amp;gt;A. Abbass, &amp;quot;An Evolutionary Artificial Neural Networks Approach for Breast Cancer Diagnosis,&amp;quot; &#039;&#039;Artificial Intelligence in Medicine&#039;&#039;, vol. 25, no. 3, pp. 265-281, 2002.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Software Tools ===&lt;br /&gt;
Several software libraries and frameworks have integrated Differential Evolution into their core optimization processes:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;ALGLIB&#039;&#039;&#039;: A powerful library in C++, C#, and Java, capable of solving complex constrained and unconstrained optimization problems.&lt;br /&gt;
* &#039;&#039;&#039;DEAP (Distributed Evolutionary Algorithms in Python)&#039;&#039;&#039;: A Python framework for rapid prototyping and testing of evolutionary algorithms, including DE.&lt;br /&gt;
* &#039;&#039;&#039;ECJ&#039;&#039;&#039;: An open-source evolutionary computation system implemented in Java, supporting a variety of algorithms, including DE.&lt;br /&gt;
* &#039;&#039;&#039;MOEA Framework&#039;&#039;&#039;: A Java library dedicated to multi-objective optimization, offering robust DE implementations.&lt;br /&gt;
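Beyond the libraries listed above, SciPy also provides a ready-made implementation, scipy.optimize.differential_evolution; a minimal call (objective and bounds chosen purely for illustration) looks like:&lt;br /&gt;

```python
from scipy.optimize import differential_evolution

# Minimize the sphere function over [-10, 10]^2 (illustrative problem)
result = differential_evolution(lambda x: float(sum(v * v for v in x)),
                                bounds=[(-10, 10), (-10, 10)],
                                seed=42)

best_x, best_f = result.x, result.fun
```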
&lt;br /&gt;
Differential Evolution’s adaptability and effectiveness have made it an indispensable tool for tackling challenging optimization problems across these diverse domains.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
Differential Evolution’s ability to handle a wide range of optimization problems has made it a preferred tool in various domains &amp;lt;ref&amp;gt;J. Nocedal, S.J. Wright, &#039;&#039;Numerical Optimization&#039;&#039;. Springer, 1999.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;A. Ben-Tal, L.E. Ghaoui, A. Nemirovski, &#039;&#039;Robust Optimization&#039;&#039;. Princeton University Press, 2009.&amp;lt;/ref&amp;gt;. Surveys of the field identify faster convergence and hybrid models as ongoing research directions &amp;lt;ref&amp;gt;S. Das, P.N. Suganthan, &amp;quot;Differential Evolution: A Survey of the State-of-the-Art,&amp;quot; &#039;&#039;IEEE Transactions on Evolutionary Computation&#039;&#039;, vol. 15, no. 1, pp. 4-31, 2011.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Throughout its application in engineering design, water resource management, machine learning, and medical fields, DE has demonstrated its effectiveness in solving complex and constrained problems. Its capacity for global convergence and minimal parameter tuning requirements further enhance its appeal for real-world problems.&lt;br /&gt;
&lt;br /&gt;
Despite its strengths, there are areas for future research and improvement:&lt;br /&gt;
&lt;br /&gt;
* Enhancing the convergence speed of the algorithm.&lt;br /&gt;
* Developing hybrid approaches by combining DE with other optimization techniques.&lt;br /&gt;
* Extending its application to emerging domains such as quantum computing and autonomous systems.&lt;br /&gt;
* Improving parameter tuning methods and software implementations to make DE more accessible and efficient for practitioners.&lt;br /&gt;
&lt;br /&gt;
In summary, Differential Evolution continues to be a reliable and powerful tool for solving challenging optimization problems and holds promise for further advancements in both methodology and applications.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
Below is a list of references used to support the content in this article:&lt;br /&gt;
&lt;br /&gt;
# R.J. Vanderbei, &#039;&#039;Linear Programming: Foundations and Extensions&#039;&#039;. Springer, 2008. Available at: [http://link.springer.com/book/10.1007/978-0-387-74388-2 SpringerLink].&lt;br /&gt;
# S. Boyd, L. Vandenberghe, &#039;&#039;Convex Optimization&#039;&#039;. Cambridge University Press, 2009. Available at: [http://www.stanford.edu/~boyd/cvxbook/ Stanford University].&lt;br /&gt;
# J. Nocedal, S.J. Wright, &#039;&#039;Numerical Optimization&#039;&#039;. Springer, 1999. Available at: [http://link.springer.com/book/10.1007/b98874 SpringerLink].&lt;br /&gt;
# J. Birge, F. Louveaux, &#039;&#039;Introduction to Stochastic Programming&#039;&#039;. Springer, 2011. Available at: [http://link.springer.com/book/10.1007/978-1-4614-0237-4 SpringerLink].&lt;br /&gt;
# A. Ben-Tal, L.E. Ghaoui, A. Nemirovski, &#039;&#039;Robust Optimization&#039;&#039;. Princeton University Press, 2009. Available at: [http://www2.isye.gatech.edu/~nemirovs/FullBookDec11.pdf Georgia Institute of Technology].&lt;br /&gt;
# L.A. Wolsey, &#039;&#039;Integer Programming&#039;&#039;. Wiley, 1998.&lt;br /&gt;
# L.T. Biegler, I.E. Grossmann, A.W. Westerberg, &#039;&#039;Systematic Methods of Chemical Process Design&#039;&#039;. Prentice Hall Press, 1997.&lt;br /&gt;
# L.T. Biegler, &#039;&#039;Nonlinear Programming: Concepts, Algorithms and Applications to Chemical Engineering&#039;&#039;. SIAM, 2010.&lt;br /&gt;
# T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, &#039;&#039;Optimization of Chemical Processes&#039;&#039;. McGraw-Hill, 2001.&lt;br /&gt;
# T. Hastie, R. Tibshirani, J. Friedman, &#039;&#039;The Elements of Statistical Learning&#039;&#039;. Springer, 2009. Available at: [http://statweb.stanford.edu/~tibs/ElemStatLearn/ Stanford University].&lt;br /&gt;
# K. Murphy, &#039;&#039;Machine Learning: A Probabilistic Perspective&#039;&#039;. MIT Press, 2012.&lt;br /&gt;
&lt;br /&gt;
Additional references for application examples:&lt;br /&gt;
&lt;br /&gt;
* S. Das, P.N. Suganthan, &amp;quot;Differential Evolution: A Survey of the State-of-the-Art,&amp;quot; &#039;&#039;IEEE Transactions on Evolutionary Computation&#039;&#039;, vol. 15, no. 1, pp. 4-31, 2011.&lt;br /&gt;
* R. Storn, K. Price, &amp;quot;Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces,&amp;quot; &#039;&#039;Journal of Global Optimization&#039;&#039;, vol. 11, pp. 341-359, 1997.&lt;br /&gt;
* A. Abbass, &amp;quot;An Evolutionary Artificial Neural Networks Approach for Breast Cancer Diagnosis,&amp;quot; &#039;&#039;Artificial Intelligence in Medicine&#039;&#039;, vol. 25, no. 3, pp. 265-281, 2002.&lt;/div&gt;</summary>
		<author><name>Fall2024 Wiki Team26</name></author>
	</entry>
&lt;br /&gt;
=== Iterations 2–5 ===&lt;br /&gt;
The process is repeated for four more iterations. For each individual, the mutant vector, trial vector, and fitness are calculated. Individuals in the population are replaced whenever &amp;lt;math&amp;gt;f(u_i) &amp;lt; f(x_i)&amp;lt;/math&amp;gt;. Over successive iterations, the population evolves toward the minimum.&lt;br /&gt;
&lt;br /&gt;
=== Final Population ===&lt;br /&gt;
After five iterations, the population converges to: &amp;lt;math&amp;gt;P = \{0, 0, 0, 0, 0\}, \quad f(P) = \{0, 0, 0, 0, 0\}&amp;lt;/math&amp;gt;&lt;br /&gt;
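The five-iteration walk-through above can be reproduced in outline with a short script. The following is a minimal sketch (not part of the original article): it applies the same greedy DE update for &amp;lt;math&amp;gt;f(x) = x^2&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;F = 0.8&amp;lt;/math&amp;gt;, but selects mutation partners at random, so intermediate values will differ from the worked numbers above.&lt;br /&gt;

```python
import random

def de_loop_1d(f, pop, F=0.8, CR=0.9, iters=5, seed=0):
    """Greedy 1-D differential evolution loop (illustrative sketch)."""
    rng = random.Random(seed)
    pop = list(pop)
    for _ in range(iters):
        for i, x in enumerate(pop):
            # Pick three distinct partners, none equal to the target index i
            r1, r2, r3 = rng.sample([j for j in range(len(pop)) if j != i], 3)
            v = pop[r1] + F * (pop[r2] - pop[r3])   # mutation
            u = v if rng.random() <= CR else x      # crossover (may "fail")
            if f(u) < f(x):                         # greedy selection
                pop[i] = u
    return pop

final = de_loop_1d(lambda x: x * x, [-7, 3, -2, 6, 8])
```

Because selection only ever replaces an individual with a strictly better one, the best fitness in the population can never rise above its initial value of 4.&lt;br /&gt;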
&lt;br /&gt;
&lt;br /&gt;
== Visualization ==&lt;br /&gt;
The progression of the population during the Differential Evolution process is illustrated in the figures below. Each figure shows the distribution of the population at a specific iteration, demonstrating how the algorithm converges toward the global minimum of the function &amp;lt;math&amp;gt;f(x) = x^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Figure 1&#039;&#039;&#039;: Initial population distribution before any iterations.&lt;br /&gt;
[[index.php?title=File:Ezgif-frame-001.png|link=File:Ezgif-frame-001.png|center|thumb|600x600px|Initial population distribution before any iterations.]]&lt;br /&gt;
&#039;&#039;&#039;Figure 2&#039;&#039;&#039;: Population after the first iteration, showing improvement toward the global minimum.&lt;br /&gt;
[[index.php?title=File:Ezgif-frame-006.png|link=File:Ezgif-frame-006.png|center|thumb|600x600px|Population after the first iteration, showing improvement toward the global minimum.]]&lt;br /&gt;
&#039;&#039;&#039;Figure 3&#039;&#039;&#039;: Population after the second iteration, with further refinement of solutions.&lt;br /&gt;
[[index.php?title=File:Ezgif-frame-011.png|link=File:Ezgif-frame-011.png|center|thumb|600x600px|Population after the second iteration, with further refinement of solutions.]]&lt;br /&gt;
&#039;&#039;&#039;Figure 4&#039;&#039;&#039;: Population after the fourth iteration, nearing convergence.&lt;br /&gt;
[[index.php?title=File:Ezgif-frame-018.png|link=File:Ezgif-frame-018.png|center|thumb|600x600px|Population after the fourth iteration, nearing convergence.]]&lt;br /&gt;
&#039;&#039;&#039;Figure 5&#039;&#039;&#039;: Final population converging to the global minimum at &amp;lt;math&amp;gt;x = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
[[index.php?title=File:Ezgif-frame-025.png|link=File:Ezgif-frame-025.png|center|thumb|600x600px|Final population converging to the global minimum at &amp;lt;math&amp;gt;x = 0&amp;lt;/math&amp;gt;.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Applications ==&lt;br /&gt;
Differential Evolution has been applied across various domains, including:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Engineering Design&#039;&#039;&#039;: Applications include minimizing the weight of tension/compression springs, following the approaches outlined by R. Storn and K. Price &amp;lt;ref&amp;gt;R. Storn, K. Price, &amp;quot;Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces,&amp;quot; &#039;&#039;Journal of Global Optimization&#039;&#039;, vol. 11, pp. 341-359, 1997.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;S. Das, P.N. Suganthan, &amp;quot;Differential Evolution: A Survey of the State-of-the-Art,&amp;quot; &#039;&#039;IEEE Transactions on Evolutionary Computation&#039;&#039;, vol. 15, no. 1, pp. 4-31, 2011.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
* &#039;&#039;&#039;Water Resource Management&#039;&#039;&#039;: Optimization strategies for reservoirs, as seen in recent research &amp;lt;ref&amp;gt;J. Nocedal, S.J. Wright, &#039;&#039;Numerical Optimization&#039;&#039;. Springer, 1999.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;A. Ben-Tal, L.E. Ghaoui, A. Nemirovski, &#039;&#039;Robust Optimization&#039;&#039;. Princeton University Press, 2009.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
* &#039;&#039;&#039;Machine Learning&#039;&#039;&#039;: Parameter optimization for neural networks &amp;lt;ref&amp;gt;T. Hastie, R. Tibshirani, J. Friedman, &#039;&#039;The Elements of Statistical Learning&#039;&#039;. Springer, 2009.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
* &#039;&#039;&#039;Medical Applications&#039;&#039;&#039;: DE’s effectiveness in radiotherapy optimization is discussed in Abbass (2002) &amp;lt;ref&amp;gt;A. Abbass, &amp;quot;An Evolutionary Artificial Neural Networks Approach for Breast Cancer Diagnosis,&amp;quot; &#039;&#039;Artificial Intelligence in Medicine&#039;&#039;, vol. 25, no. 3, pp. 265-281, 2002.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Software Tools ===&lt;br /&gt;
Several software libraries and frameworks have integrated Differential Evolution into their core optimization processes:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;ALGLIB&#039;&#039;&#039;: A powerful library in C++, C#, and Java, capable of solving complex constrained and unconstrained optimization problems.&lt;br /&gt;
* &#039;&#039;&#039;DEAP (Distributed Evolutionary Algorithms in Python)&#039;&#039;&#039;: A Python framework for rapid prototyping and testing of evolutionary algorithms, including DE.&lt;br /&gt;
* &#039;&#039;&#039;ECJ&#039;&#039;&#039;: An open-source evolutionary computation system implemented in Java, supporting a variety of algorithms, including DE.&lt;br /&gt;
* &#039;&#039;&#039;MOEA Framework&#039;&#039;&#039;: A Java library dedicated to multi-objective optimization, offering robust DE implementations.&lt;br /&gt;
&lt;br /&gt;
Differential Evolution’s adaptability and effectiveness have made it an indispensable tool for tackling challenging optimization problems across these diverse domains.&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
Differential Evolution’s ability to handle a wide range of optimization problems has made it a preferred tool in various domains &amp;lt;ref&amp;gt;J. Nocedal, S.J. Wright, &#039;&#039;Numerical Optimization&#039;&#039;. Springer, 1999.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;A. Ben-Tal, L.E. Ghaoui, A. Nemirovski, &#039;&#039;Robust Optimization&#039;&#039;. Princeton University Press, 2009.&amp;lt;/ref&amp;gt;. Opportunities for improvement remain, including faster convergence and hybrid models &amp;lt;ref&amp;gt;S. Das, P.N. Suganthan, &amp;quot;Differential Evolution: A Survey of the State-of-the-Art,&amp;quot; &#039;&#039;IEEE Transactions on Evolutionary Computation&#039;&#039;, vol. 15, no. 1, pp. 4-31, 2011.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Throughout its application in engineering design, water resource management, machine learning, and medical fields, DE has demonstrated its effectiveness in solving complex and constrained problems. Its capacity for global convergence and minimal parameter tuning requirements further enhance its appeal for real-world problems.&lt;br /&gt;
&lt;br /&gt;
Despite its strengths, there are areas for future research and improvement:&lt;br /&gt;
&lt;br /&gt;
* Enhancing the convergence speed of the algorithm.&lt;br /&gt;
* Developing hybrid approaches by combining DE with other optimization techniques.&lt;br /&gt;
* Extending its application to emerging domains such as quantum computing and autonomous systems.&lt;br /&gt;
* Improving parameter tuning methods and software implementations to make DE more accessible and efficient for practitioners.&lt;br /&gt;
&lt;br /&gt;
In summary, Differential Evolution continues to be a reliable and powerful tool for solving challenging optimization problems and holds promise for further advancements in both methodology and applications.&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
Below is a list of references used to support the content in this article:&lt;br /&gt;
&lt;br /&gt;
# R.J. Vanderbei, &#039;&#039;Linear Programming: Foundations and Extensions&#039;&#039;. Springer, 2008. Available at: [http://link.springer.com/book/10.1007/978-0-387-74388-2 SpringerLink].&lt;br /&gt;
# S. Boyd, L. Vandenberghe, &#039;&#039;Convex Optimization&#039;&#039;. Cambridge University Press, 2009. Available at: [http://www.stanford.edu/~boyd/cvxbook/ Stanford University].&lt;br /&gt;
# J. Nocedal, S.J. Wright, &#039;&#039;Numerical Optimization&#039;&#039;. Springer, 1999. Available at: [http://link.springer.com/book/10.1007/b98874 SpringerLink].&lt;br /&gt;
# J. Birge, F. Louveaux, &#039;&#039;Introduction to Stochastic Programming&#039;&#039;. Springer, 2011. Available at: [http://link.springer.com/book/10.1007/978-1-4614-0237-4 SpringerLink].&lt;br /&gt;
# A. Ben-Tal, L.E. Ghaoui, A. Nemirovski, &#039;&#039;Robust Optimization&#039;&#039;. Princeton University Press, 2009. Available at: [http://www2.isye.gatech.edu/~nemirovs/FullBookDec11.pdf Georgia Institute of Technology].&lt;br /&gt;
# L.A. Wolsey, &#039;&#039;Integer Programming&#039;&#039;. Wiley, 1998.&lt;br /&gt;
# L.T. Biegler, I.E. Grossmann, A.W. Westerberg, &#039;&#039;Systematic Methods of Chemical Process Design&#039;&#039;. Prentice Hall Press, 1997.&lt;br /&gt;
# L.T. Biegler, &#039;&#039;Nonlinear Programming: Concepts, Algorithms and Applications to Chemical Engineering&#039;&#039;. SIAM, 2010.&lt;br /&gt;
# T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, &#039;&#039;Optimization of Chemical Processes&#039;&#039;. McGraw-Hill, 2001.&lt;br /&gt;
# T. Hastie, R. Tibshirani, J. Friedman, &#039;&#039;The Elements of Statistical Learning&#039;&#039;. Springer, 2009. Available at: [http://statweb.stanford.edu/~tibs/ElemStatLearn/ Stanford University].&lt;br /&gt;
# K. Murphy, &#039;&#039;Machine Learning: A Probabilistic Perspective&#039;&#039;. MIT Press, 2012.&lt;br /&gt;
&lt;br /&gt;
Additional references for application examples:&lt;br /&gt;
&lt;br /&gt;
* S. Das, P.N. Suganthan, &amp;quot;Differential Evolution: A Survey of the State-of-the-Art,&amp;quot; &#039;&#039;IEEE Transactions on Evolutionary Computation&#039;&#039;, vol. 15, no. 1, pp. 4-31, 2011.&lt;br /&gt;
* R. Storn, K. Price, &amp;quot;Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces,&amp;quot; &#039;&#039;Journal of Global Optimization&#039;&#039;, vol. 11, pp. 341-359, 1997.&lt;br /&gt;
* A. Abbass, &amp;quot;An Evolutionary Artificial Neural Networks Approach for Breast Cancer Diagnosis,&amp;quot; &#039;&#039;Artificial Intelligence in Medicine&#039;&#039;, vol. 25, no. 3, pp. 265-281, 2002.&lt;/div&gt;</summary>
		<author><name>Fall2024 Wiki Team26</name></author>
	</entry>
	<entry>
		<id>https://optimization.cbe.cornell.edu/index.php?title=Differential_evolution&amp;diff=7709</id>
		<title>Differential evolution</title>
		<link rel="alternate" type="text/html" href="https://optimization.cbe.cornell.edu/index.php?title=Differential_evolution&amp;diff=7709"/>
		<updated>2024-12-16T03:06:03Z</updated>

		<summary type="html">&lt;p&gt;Fall2024 Wiki Team26: /* Visualization */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Author: Rohit Kumar (rk787) (ChemE 6800 Fall 2024)&lt;br /&gt;
&lt;br /&gt;
Stewards: Nathan Preuss, Wei-Han Chen, Tianqi Xiao, Guoqing Hu&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Differential Evolution (DE) is a robust and efficient optimization algorithm widely used for solving non-linear, non-differentiable, and multimodal optimization problems. It was first introduced by R. Storn and K. Price in their seminal work on global optimization over continuous spaces &amp;lt;ref&amp;gt;R. Storn, K. Price, &amp;quot;Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces,&amp;quot; &#039;&#039;Journal of Global Optimization&#039;&#039;, vol. 11, pp. 341-359, 1997.&amp;lt;/ref&amp;gt;, where they described DE as:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous-space functions.&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A heuristic is an approximate solution method that can produce satisfactory solutions within a reasonable amount of time when classical methods either cannot find an exact answer or are too slow. Unlike gradient-based methods such as Gradient Descent, DE does not require the objective function to be differentiable.&lt;br /&gt;
&lt;br /&gt;
The key idea is to evolve a population of candidate solutions over successive generations, using scaled differences between randomly chosen population vectors to generate new candidates. Because the step sizes are derived from the population itself, the search adapts to the landscape of the objective function, making DE particularly effective for non-linear, non-differentiable, and multimodal optimization problems.&lt;br /&gt;
&lt;br /&gt;
The algorithm is renowned for its simplicity and efficiency. Its primary advantages include:&lt;br /&gt;
&lt;br /&gt;
* Requires minimal parameter tuning.&lt;br /&gt;
* High robustness.&lt;br /&gt;
* Ability to avoid local optima and converge toward global solutions.&lt;br /&gt;
&lt;br /&gt;
In DE, each candidate solution in the evolving population is represented as a vector of real numbers. The algorithm attempts to enhance these solutions through a series of iterations built upon three main steps:&lt;br /&gt;
&lt;br /&gt;
* Mutation&lt;br /&gt;
* Crossover&lt;br /&gt;
* Selection&lt;br /&gt;
&lt;br /&gt;
DE has become popular in fields such as engineering design, machine learning, signal processing, and financial modeling. Despite its simplicity, DE is competitive with many other optimization techniques and often outperforms them on challenging problems.&lt;br /&gt;
&lt;br /&gt;
== Algorithm Discussion ==&lt;br /&gt;
Differential Evolution operates through four main steps: initialization, mutation, crossover, and selection, as outlined in optimization literature &amp;lt;ref&amp;gt;J. Nocedal, S.J. Wright, &#039;&#039;Numerical Optimization&#039;&#039;. Springer, 1999.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;A. Ben-Tal, L.E. Ghaoui, A. Nemirovski, &#039;&#039;Robust Optimization&#039;&#039;. Princeton University Press, 2009.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Initialization ===&lt;br /&gt;
The initial population is generated randomly within the bounds of the search space: &amp;lt;math&amp;gt;x_{i,j} \sim U(x_j^{\text{min}}, x_j^{\text{max}})&amp;lt;/math&amp;gt; where:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;x_{i,j}&amp;lt;/math&amp;gt; represents the &amp;lt;math&amp;gt;j^{\text{th}}&amp;lt;/math&amp;gt; parameter of the &amp;lt;math&amp;gt;i^{\text{th}}&amp;lt;/math&amp;gt; individual.&lt;br /&gt;
* &amp;lt;math&amp;gt;U(a, b)&amp;lt;/math&amp;gt; is a uniform random distribution.&lt;br /&gt;
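As a concrete illustration of this initialization step (a sketch, not part of the original article), each individual is drawn coordinate-wise from the uniform distribution over the search bounds:&lt;br /&gt;

```python
import random

def init_population(pop_size, bounds, seed=None):
    """Draw x_{i,j} ~ U(x_j_min, x_j_max) for every individual and coordinate."""
    rng = random.Random(seed)
    return [[rng.uniform(lo, hi) for lo, hi in bounds]
            for _ in range(pop_size)]

# Example: 5 individuals in the 2-D box [-10, 10] x [0, 1]
pop = init_population(5, [(-10.0, 10.0), (0.0, 1.0)], seed=42)
```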
&lt;br /&gt;
=== Mutation ===&lt;br /&gt;
A mutant vector &amp;lt;math&amp;gt;v_i&amp;lt;/math&amp;gt; is generated by combining three randomly chosen vectors: &amp;lt;math&amp;gt;v_i = x_{r1} + F \cdot (x_{r2} - x_{r3})&amp;lt;/math&amp;gt; where:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;F&amp;lt;/math&amp;gt; is the mutation factor.&lt;br /&gt;
* &amp;lt;math&amp;gt;x_{r1}, x_{r2}, x_{r3}&amp;lt;/math&amp;gt; are distinct vectors from the population.&lt;br /&gt;
&lt;br /&gt;
=== Crossover ===&lt;br /&gt;
A trial vector &amp;lt;math&amp;gt;u_i&amp;lt;/math&amp;gt; is created by mixing the mutant vector &amp;lt;math&amp;gt;v_i&amp;lt;/math&amp;gt; and the target vector &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;&lt;br /&gt;
u_{i,j} =&lt;br /&gt;
\begin{cases} &lt;br /&gt;
v_{i,j}, &amp;amp; \text{if } \text{rand}(0, 1) \leq CR \text{ or } j = j_{\text{rand}}, \\&lt;br /&gt;
x_{i,j}, &amp;amp; \text{otherwise}.&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt; where:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;CR&amp;lt;/math&amp;gt; is the crossover probability.&lt;br /&gt;
* &amp;lt;math&amp;gt;j_{\text{rand}}&amp;lt;/math&amp;gt; is a randomly chosen index.&lt;br /&gt;
&lt;br /&gt;
=== Selection ===&lt;br /&gt;
The trial vector &amp;lt;math&amp;gt;u_i&amp;lt;/math&amp;gt; replaces the target vector &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; in the population if it yields a better objective function value: &amp;lt;math&amp;gt;&lt;br /&gt;
x_i =&lt;br /&gt;
\begin{cases}&lt;br /&gt;
u_i, &amp;amp; \text{if } f(u_i) \leq f(x_i), \\&lt;br /&gt;
x_i, &amp;amp; \text{otherwise}.&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Pseudocode ===&lt;br /&gt;
The pseudocode of Differential Evolution is shown below:&amp;lt;pre&amp;gt;&lt;br /&gt;
Initialize population P randomly&lt;br /&gt;
While termination criterion not met:&lt;br /&gt;
    For each individual x_i in P:&lt;br /&gt;
        Generate mutant vector v_i&lt;br /&gt;
        Perform crossover to create trial vector u_i&lt;br /&gt;
        If f(u_i) &amp;lt; f(x_i): Replace x_i with u_i&lt;br /&gt;
Return the best solution&lt;br /&gt;
&amp;lt;/pre&amp;gt;The pseudocode presented in this article is adapted from S. Das and P.N. Suganthan&#039;s survey on the state-of-the-art in DE &amp;lt;ref&amp;gt;S. Das, P.N. Suganthan, &amp;quot;Differential Evolution: A Survey of the State-of-the-Art,&amp;quot; &#039;&#039;IEEE Transactions on Evolutionary Computation&#039;&#039;, vol. 15, no. 1, pp. 4-31, 2011.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
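The pseudocode above can be turned into a compact, runnable sketch. The following implements the DE/rand/1/bin variant described in this article (mutation, binomial crossover, greedy selection); the defaults &amp;lt;math&amp;gt;F = 0.8&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;CR = 0.9&amp;lt;/math&amp;gt; are illustrative choices, not values prescribed here.&lt;br /&gt;

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           max_gens=100, seed=None):
    """Minimal DE/rand/1/bin following the pseudocode above."""
    rng = random.Random(seed)
    dim = len(bounds)
    # Initialize population P randomly within the bounds
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(max_gens):
        for i in range(pop_size):
            # Generate mutant vector v_i from three distinct partners
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            v = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j]) for j in range(dim)]
            # Binomial crossover to create trial vector u_i
            j_rand = rng.randrange(dim)
            u = [v[j] if (rng.random() <= CR or j == j_rand) else pop[i][j]
                 for j in range(dim)]
            # Greedy selection: keep the trial only if it improves fitness
            fu = f(u)
            if fu < fit[i]:
                pop[i], fit[i] = u, fu
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# Minimize the 2-D sphere function sum(x_j^2) over [-5, 5]^2
x_best, f_best = differential_evolution(lambda x: sum(t * t for t in x),
                                        [(-5.0, 5.0)] * 2, seed=1)
```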
&lt;br /&gt;
== Numerical Example ==&lt;br /&gt;
We aim to minimize the function &amp;lt;math&amp;gt;f(x) = x^2&amp;lt;/math&amp;gt;, a common benchmark for optimization algorithms &amp;lt;ref&amp;gt;J. Birge, F. Louveaux, &#039;&#039;Introduction to Stochastic Programming&#039;&#039;. Springer, 2011.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;A. Ben-Tal, L.E. Ghaoui, A. Nemirovski, &#039;&#039;Robust Optimization&#039;&#039;. Princeton University Press, 2009.&amp;lt;/ref&amp;gt;. The example uses key principles of DE, including mutation and crossover, as described in foundational texts like R. Storn and K. Price’s work &amp;lt;ref&amp;gt;R. Storn, K. Price, &amp;quot;Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces,&amp;quot; &#039;&#039;Journal of Global Optimization&#039;&#039;, vol. 11, pp. 341-359, 1997.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Initial Setup ===&lt;br /&gt;
The initial population and fitness values are: &amp;lt;math&amp;gt;P = \{-7, 3, -2, 6, 8\}, \quad f(P) = \{49, 9, 4, 36, 64\}&amp;lt;/math&amp;gt;. A mutation factor of &amp;lt;math&amp;gt;F = 0.8&amp;lt;/math&amp;gt; is used throughout.&lt;br /&gt;
&lt;br /&gt;
=== Iteration 1 ===&lt;br /&gt;
&#039;&#039;&#039;Individual 1&#039;&#039;&#039; (&amp;lt;math&amp;gt;x = -7&amp;lt;/math&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
* Mutant vector: &amp;lt;math&amp;gt;v_1 = 3 + 0.8 \cdot (-2 - 8) = -5&amp;lt;/math&amp;gt;&lt;br /&gt;
* Trial vector: &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;u_1 = -5&amp;lt;/math&amp;gt;&lt;br /&gt;
* Fitness: &amp;lt;math&amp;gt;f(u_1) = 25&amp;lt;/math&amp;gt;&lt;br /&gt;
* Since &amp;lt;math&amp;gt;f(u_1) &amp;lt; f(x_1)&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;25 &amp;lt; 49&amp;lt;/math&amp;gt;), replace &amp;lt;math&amp;gt;x_1&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;u_1&amp;lt;/math&amp;gt;&lt;br /&gt;
* Updated value: &amp;lt;math&amp;gt;x_1 = -5&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Individual 2&#039;&#039;&#039; (&amp;lt;math&amp;gt;x = 3&amp;lt;/math&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
* Mutant vector: &amp;lt;math&amp;gt;v_2 = -2 + 0.8 \cdot (6 - 8) = -3.6&amp;lt;/math&amp;gt;&lt;br /&gt;
* Trial vector: &amp;lt;math&amp;gt;u_2 = 3&amp;lt;/math&amp;gt; (crossover fails)&lt;br /&gt;
* Fitness: &amp;lt;math&amp;gt;f(u_2) = 9&amp;lt;/math&amp;gt;&lt;br /&gt;
* No replacement as &amp;lt;math&amp;gt;f(u_2) = f(x_2)&amp;lt;/math&amp;gt;&lt;br /&gt;
* Updated value: &amp;lt;math&amp;gt;x_2 = 3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Individual 3&#039;&#039;&#039; (&amp;lt;math&amp;gt;x = -2&amp;lt;/math&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
* Mutant vector: &amp;lt;math&amp;gt;v_3 = 6 + 0.8 \cdot (8 - (-7)) = 18&amp;lt;/math&amp;gt;&lt;br /&gt;
* Trial vector: &amp;lt;math&amp;gt;u_3 = -2&amp;lt;/math&amp;gt; (crossover fails)&lt;br /&gt;
* Fitness: &amp;lt;math&amp;gt;f(u_3) = 4&amp;lt;/math&amp;gt;&lt;br /&gt;
* No replacement as &amp;lt;math&amp;gt;f(u_3) = f(x_3)&amp;lt;/math&amp;gt;&lt;br /&gt;
* Updated value: &amp;lt;math&amp;gt;x_3 = -2&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Individual 4&#039;&#039;&#039; (&amp;lt;math&amp;gt;x = 6&amp;lt;/math&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
* Mutant vector: &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;v_4 = -7 + 0.8 \cdot (3 - (-2)) = -3&amp;lt;/math&amp;gt;&lt;br /&gt;
* Trial vector: &amp;lt;math&amp;gt;u_4 = -3&amp;lt;/math&amp;gt;&lt;br /&gt;
* Fitness: &amp;lt;math&amp;gt;f(u_4) = 9&amp;lt;/math&amp;gt;&lt;br /&gt;
* Since &amp;lt;math&amp;gt;f(u_4) &amp;lt; f(x_4)&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;9 &amp;lt; 36&amp;lt;/math&amp;gt;), replace &amp;lt;math&amp;gt;x_4&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;u_4&amp;lt;/math&amp;gt;&lt;br /&gt;
* Updated value: &amp;lt;math&amp;gt;x_4 = -3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Individual 5&#039;&#039;&#039; (&amp;lt;math&amp;gt;x = 8&amp;lt;/math&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
* Mutant vector: &amp;lt;math&amp;gt;v_5 = 3 + 0.8 \cdot (-7 - (-2)) = -1&amp;lt;/math&amp;gt;&lt;br /&gt;
* Trial vector: &amp;lt;math&amp;gt;u_5 = -1&amp;lt;/math&amp;gt;&lt;br /&gt;
* Fitness: &amp;lt;math&amp;gt;f(u_5) = 1&amp;lt;/math&amp;gt;&lt;br /&gt;
* Since &amp;lt;math&amp;gt;f(u_5) &amp;lt; f(x_5)&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;1 &amp;lt; 64&amp;lt;/math&amp;gt;), replace &amp;lt;math&amp;gt;x_5&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;u_5&amp;lt;/math&amp;gt;&lt;br /&gt;
* Updated value: &amp;lt;math&amp;gt;x_5 = -1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After the first iteration, the updated population is: &amp;lt;math&amp;gt;P = \{-5, 3, -2, -3, -1\}, \quad f(P) = \{25, 9, 4, 9, 1\}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Iterations 2–5 ===&lt;br /&gt;
The process is repeated for four more iterations. For each individual, the mutant vector, trial vector, and fitness are calculated. Individuals in the population are replaced whenever &amp;lt;math&amp;gt;f(u_i) &amp;lt; f(x_i)&amp;lt;/math&amp;gt;. Over successive iterations, the population evolves toward the minimum.&lt;br /&gt;
&lt;br /&gt;
=== Final Population ===&lt;br /&gt;
After five iterations, the population converges to: &amp;lt;math&amp;gt;P = \{0, 0, 0, 0, 0\}, \quad f(P) = \{0, 0, 0, 0, 0\}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Visualization ==&lt;br /&gt;
The progression of the population during the Differential Evolution process is illustrated in the figures below. Each figure shows the distribution of the population at a specific iteration, demonstrating how the algorithm converges toward the global minimum of the function &amp;lt;math&amp;gt;f(x) = x^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
[[index.php?title=File:Ezgif-frame-001.png|link=File:Ezgif-frame-001.png|center|thumb|600x600px|Figure 1: Initial population distribution before any iterations.]]&lt;br /&gt;
[[index.php?title=File:Ezgif-frame-006.png|link=File:Ezgif-frame-006.png|center|thumb|600x600px|Figure 2: Population after the first iteration, showing improvement toward the global minimum.]]&lt;br /&gt;
[[index.php?title=File:Ezgif-frame-011.png|link=File:Ezgif-frame-011.png|center|thumb|600x600px|Figure 3: Population after the second iteration, with further refinement of solutions.]]&lt;br /&gt;
[[index.php?title=File:Ezgif-frame-018.png|link=File:Ezgif-frame-018.png|center|thumb|600x600px|Figure 4: Population after the fourth iteration, nearing convergence.]]&lt;br /&gt;
[[index.php?title=File:Ezgif-frame-025.png|link=File:Ezgif-frame-025.png|center|thumb|600x600px|Figure 5: Final population converging to the global minimum at &amp;lt;math&amp;gt;x = 0&amp;lt;/math&amp;gt;.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Applications ==&lt;br /&gt;
Differential Evolution has been applied across various domains, including:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Engineering Design&#039;&#039;&#039;: Applications include minimizing the weight of tension/compression springs, following the approaches outlined by R. Storn and K. Price &amp;lt;ref&amp;gt;R. Storn, K. Price, &amp;quot;Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces,&amp;quot; &#039;&#039;Journal of Global Optimization&#039;&#039;, vol. 11, pp. 341-359, 1997.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;S. Das, P.N. Suganthan, &amp;quot;Differential Evolution: A Survey of the State-of-the-Art,&amp;quot; &#039;&#039;IEEE Transactions on Evolutionary Computation&#039;&#039;, vol. 15, no. 1, pp. 4-31, 2011.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
* &#039;&#039;&#039;Water Resource Management&#039;&#039;&#039;: Optimization strategies for reservoirs, as seen in recent research &amp;lt;ref&amp;gt;J. Nocedal, S.J. Wright, &#039;&#039;Numerical Optimization&#039;&#039;. Springer, 1999.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;A. Ben-Tal, L.E. Ghaoui, A. Nemirovski, &#039;&#039;Robust Optimization&#039;&#039;. Princeton University Press, 2009.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
* &#039;&#039;&#039;Machine Learning&#039;&#039;&#039;: Parameter optimization for neural networks &amp;lt;ref&amp;gt;T. Hastie, R. Tibshirani, J. Friedman, &#039;&#039;The Elements of Statistical Learning&#039;&#039;. Springer, 2009.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
* &#039;&#039;&#039;Medical Applications&#039;&#039;&#039;: DE’s effectiveness in radiotherapy optimization is discussed in Abbass (2002) &amp;lt;ref&amp;gt;A. Abbass, &amp;quot;An Evolutionary Artificial Neural Networks Approach for Breast Cancer Diagnosis,&amp;quot; &#039;&#039;Artificial Intelligence in Medicine&#039;&#039;, vol. 25, no. 3, pp. 265-281, 2002.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Software Tools ===&lt;br /&gt;
Several software libraries and frameworks have integrated Differential Evolution into their core optimization processes:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;ALGLIB&#039;&#039;&#039;: A powerful library in C++, C#, and Java, capable of solving complex constrained and unconstrained optimization problems.&lt;br /&gt;
* &#039;&#039;&#039;DEAP (Distributed Evolutionary Algorithms in Python)&#039;&#039;&#039;: A Python framework for rapid prototyping and testing of evolutionary algorithms, including DE.&lt;br /&gt;
* &#039;&#039;&#039;ECJ&#039;&#039;&#039;: An open-source evolutionary computation system implemented in Java, supporting a variety of algorithms, including DE.&lt;br /&gt;
* &#039;&#039;&#039;MOEA Framework&#039;&#039;&#039;: A Java library dedicated to multi-objective optimization, offering robust DE implementations.&lt;br /&gt;
&lt;br /&gt;
Differential Evolution’s adaptability and effectiveness have made it an indispensable tool for tackling challenging optimization problems across these diverse domains.&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
Differential Evolution’s ability to handle a wide range of optimization problems has made it a preferred tool in various domains &amp;lt;ref&amp;gt;J. Nocedal, S.J. Wright, &#039;&#039;Numerical Optimization&#039;&#039;. Springer, 1999.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;A. Ben-Tal, L.E. Ghaoui, A. Nemirovski, &#039;&#039;Robust Optimization&#039;&#039;. Princeton University Press, 2009.&amp;lt;/ref&amp;gt;. Opportunities for improvement remain, including faster convergence and hybrid models &amp;lt;ref&amp;gt;S. Das, P.N. Suganthan, &amp;quot;Differential Evolution: A Survey of the State-of-the-Art,&amp;quot; &#039;&#039;IEEE Transactions on Evolutionary Computation&#039;&#039;, vol. 15, no. 1, pp. 4-31, 2011.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Throughout its application in engineering design, water resource management, machine learning, and medical fields, DE has demonstrated its effectiveness in solving complex and constrained problems. Its capacity for global convergence and minimal parameter tuning requirements further enhance its appeal for real-world problems.&lt;br /&gt;
&lt;br /&gt;
Despite its strengths, there are areas for future research and improvement:&lt;br /&gt;
&lt;br /&gt;
* Enhancing the convergence speed of the algorithm.&lt;br /&gt;
* Developing hybrid approaches by combining DE with other optimization techniques.&lt;br /&gt;
* Extending its application to emerging domains such as quantum computing and autonomous systems.&lt;br /&gt;
* Improving parameter tuning methods and software implementations to make DE more accessible and efficient for practitioners.&lt;br /&gt;
&lt;br /&gt;
In summary, Differential Evolution continues to be a reliable and powerful tool for solving challenging optimization problems and holds promise for further advancements in both methodology and applications.&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
Below is a list of references used to support the content in this article:&lt;br /&gt;
&lt;br /&gt;
# R.J. Vanderbei, &#039;&#039;Linear Programming: Foundations and Extensions&#039;&#039;. Springer, 2008. Available at: [http://link.springer.com/book/10.1007/978-0-387-74388-2 SpringerLink].&lt;br /&gt;
# S. Boyd, L. Vandenberghe, &#039;&#039;Convex Optimization&#039;&#039;. Cambridge University Press, 2009. Available at: [http://www.stanford.edu/~boyd/cvxbook/ Stanford University].&lt;br /&gt;
# J. Nocedal, S.J. Wright, &#039;&#039;Numerical Optimization&#039;&#039;. Springer, 1999. Available at: [http://link.springer.com/book/10.1007/b98874 SpringerLink].&lt;br /&gt;
# J. Birge, F. Louveaux, &#039;&#039;Introduction to Stochastic Programming&#039;&#039;. Springer, 2011. Available at: [http://link.springer.com/book/10.1007/978-1-4614-0237-4 SpringerLink].&lt;br /&gt;
# A. Ben-Tal, L.E. Ghaoui, A. Nemirovski, &#039;&#039;Robust Optimization&#039;&#039;. Princeton University Press, 2009. Available at: [http://www2.isye.gatech.edu/~nemirovs/FullBookDec11.pdf Georgia Institute of Technology].&lt;br /&gt;
# L.A. Wolsey, &#039;&#039;Integer Programming&#039;&#039;. Wiley, 1998.&lt;br /&gt;
# L.T. Biegler, I.E. Grossmann, A.W. Westerberg, &#039;&#039;Systematic Methods of Chemical Process Design&#039;&#039;. Prentice Hall Press, 1997.&lt;br /&gt;
# L.T. Biegler, &#039;&#039;Nonlinear Programming: Concepts, Algorithms and Applications to Chemical Engineering&#039;&#039;. SIAM, 2010.&lt;br /&gt;
# T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, &#039;&#039;Optimization of Chemical Processes&#039;&#039;. McGraw-Hill, 2001.&lt;br /&gt;
# T. Hastie, R. Tibshirani, J. Friedman, &#039;&#039;The Elements of Statistical Learning&#039;&#039;. Springer, 2009. Available at: [http://statweb.stanford.edu/~tibs/ElemStatLearn/ Stanford University].&lt;br /&gt;
# K. Murphy, &#039;&#039;Machine Learning: A Probabilistic Perspective&#039;&#039;. MIT Press, 2012.&lt;br /&gt;
&lt;br /&gt;
Additional references for application examples:&lt;br /&gt;
&lt;br /&gt;
* S. Das, P.N. Suganthan, &amp;quot;Differential Evolution: A Survey of the State-of-the-Art,&amp;quot; &#039;&#039;IEEE Transactions on Evolutionary Computation&#039;&#039;, vol. 15, no. 1, pp. 4-31, 2011.&lt;br /&gt;
* R. Storn, K. Price, &amp;quot;Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces,&amp;quot; &#039;&#039;Journal of Global Optimization&#039;&#039;, vol. 11, pp. 341-359, 1997.&lt;br /&gt;
* A. Abbass, &amp;quot;An Evolutionary Artificial Neural Networks Approach for Breast Cancer Diagnosis,&amp;quot; &#039;&#039;Artificial Intelligence in Medicine&#039;&#039;, vol. 25, no. 3, pp. 265-281, 2002.&lt;/div&gt;</summary>
		<author><name>Fall2024 Wiki Team26</name></author>
	</entry>
	<entry>
		<id>https://optimization.cbe.cornell.edu/index.php?title=Differential_evolution&amp;diff=7708</id>
		<title>Differential evolution</title>
		<link rel="alternate" type="text/html" href="https://optimization.cbe.cornell.edu/index.php?title=Differential_evolution&amp;diff=7708"/>
		<updated>2024-12-16T03:03:08Z</updated>

		<summary type="html">&lt;p&gt;Fall2024 Wiki Team26: Solved Image error&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Author: Rohit Kumar (rk787) (ChemE 6800 Fall 2024)&lt;br /&gt;
&lt;br /&gt;
Stewards: Nathan Preuss, Wei-Han Chen, Tianqi Xiao, Guoqing Hu&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Differential Evolution (DE) is a robust and efficient optimization algorithm widely used for solving non-linear, non-differentiable, and multimodal optimization problems. It was first introduced by R. Storn and K. Price in their seminal work on global optimization over continuous spaces &amp;lt;ref&amp;gt;R. Storn, K. Price, &amp;quot;Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces,&amp;quot; &#039;&#039;Journal of Global Optimization&#039;&#039;, vol. 11, pp. 341-359, 1997.&amp;lt;/ref&amp;gt;, where they described DE as:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous-space functions.&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A heuristic is an approximation method that produces satisfactory solutions within a tolerable amount of time when classical methods cannot find an exact answer or are too slow. Unlike gradient-based methods such as gradient descent, DE does not require the objective function to be differentiable.&lt;br /&gt;
&lt;br /&gt;
The key idea is to evolve a population of candidate solutions over successive generations, perturbing existing solutions with scaled vector differences. This approach allows DE to adapt its search to the landscape of the objective function, making it particularly effective for non-linear, non-differentiable, and multimodal optimization problems.&lt;br /&gt;
&lt;br /&gt;
The algorithm is renowned for its simplicity and efficiency. Its primary advantages include:&lt;br /&gt;
&lt;br /&gt;
* Requires minimal parameter tuning.&lt;br /&gt;
* High robustness.&lt;br /&gt;
* Ability to avoid local optima and converge toward global solutions.&lt;br /&gt;
&lt;br /&gt;
In DE, each candidate solution in the evolving population is represented as a vector of real numbers. The algorithm refines these solutions over a series of iterations built upon three main steps:&lt;br /&gt;
&lt;br /&gt;
* Mutation&lt;br /&gt;
* Crossover&lt;br /&gt;
* Selection&lt;br /&gt;
&lt;br /&gt;
DE has become popular in fields such as engineering design, machine learning, signal processing, and financial modeling. Despite its simplicity, it is competitive with many other optimization techniques and often outperforms them on challenging problems.&lt;br /&gt;
&lt;br /&gt;
== Algorithm Discussion ==&lt;br /&gt;
Differential Evolution operates through four main steps: initialization, mutation, crossover, and selection, as outlined in the optimization literature &amp;lt;ref&amp;gt;J. Nocedal, S.J. Wright, &#039;&#039;Numerical Optimization&#039;&#039;. Springer, 1999.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;A. Ben-Tal, L.E. Ghaoui, A. Nemirovski, &#039;&#039;Robust Optimization&#039;&#039;. Princeton University Press, 2009.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Initialization ===&lt;br /&gt;
The initial population is generated randomly within the bounds of the search space: &amp;lt;math&amp;gt;x_{i,j} \sim U(x_j^{\text{min}}, x_j^{\text{max}})&amp;lt;/math&amp;gt; where:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;x_{i,j}&amp;lt;/math&amp;gt; represents the &amp;lt;math&amp;gt;j^{\text{th}}&amp;lt;/math&amp;gt; parameter of the &amp;lt;math&amp;gt;i^{\text{th}}&amp;lt;/math&amp;gt; individual.&lt;br /&gt;
* &amp;lt;math&amp;gt;U(a, b)&amp;lt;/math&amp;gt; is a uniform random distribution.&lt;br /&gt;
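A minimal Python sketch of this initialization step (the population size and bounds below are illustrative values, not taken from the article):

```python
import random

def initialize_population(pop_size, bounds):
    """Draw each parameter of each individual uniformly within its bounds.

    bounds is a list of (min, max) pairs, one per dimension, so
    individual i gets x[i][j] ~ U(bounds[j][0], bounds[j][1]).
    """
    return [[random.uniform(lo, hi) for lo, hi in bounds]
            for _ in range(pop_size)]

population = initialize_population(5, [(-10.0, 10.0)])
```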
&lt;br /&gt;
=== Mutation ===&lt;br /&gt;
A mutant vector &amp;lt;math&amp;gt;v_i&amp;lt;/math&amp;gt; is generated by combining three randomly chosen vectors: &amp;lt;math&amp;gt;v_i = x_{r1} + F \cdot (x_{r2} - x_{r3})&amp;lt;/math&amp;gt; where:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;F&amp;lt;/math&amp;gt; is the mutation factor.&lt;br /&gt;
* &amp;lt;math&amp;gt;x_{r1}, x_{r2}, x_{r3}&amp;lt;/math&amp;gt; are distinct vectors from the population.&lt;br /&gt;
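The mutation step can be sketched as the following hypothetical helper (the default F = 0.8 matches the value used in the numerical example of this article):

```python
import random

def mutate(population, i, F=0.8):
    """Build the mutant vector v_i = x_r1 + F * (x_r2 - x_r3) from three
    distinct individuals, none of which is the target index i."""
    candidates = [idx for idx in range(len(population)) if idx != i]
    r1, r2, r3 = random.sample(candidates, 3)
    return [a + F * (b - c)
            for a, b, c in zip(population[r1], population[r2], population[r3])]
```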
&lt;br /&gt;
=== Crossover ===&lt;br /&gt;
A trial vector &amp;lt;math&amp;gt;u_i&amp;lt;/math&amp;gt; is created by mixing the mutant vector &amp;lt;math&amp;gt;v_i&amp;lt;/math&amp;gt; and the target vector &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;&lt;br /&gt;
u_{i,j} =&lt;br /&gt;
\begin{cases} &lt;br /&gt;
v_{i,j}, &amp;amp; \text{if } \text{rand}(0, 1) \leq CR \text{ or } j = j_{\text{rand}}, \\&lt;br /&gt;
x_{i,j}, &amp;amp; \text{otherwise}.&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt; where:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;CR&amp;lt;/math&amp;gt; is the crossover probability.&lt;br /&gt;
* &amp;lt;math&amp;gt;j_{\text{rand}}&amp;lt;/math&amp;gt; is a randomly chosen index.&lt;br /&gt;
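Binomial crossover as defined above can be written as follows (a sketch; CR = 0.9 is a common default, not a value prescribed by the article):

```python
import random

def crossover(target, mutant, CR=0.9):
    """Binomial crossover: take each parameter from the mutant with
    probability CR; index j_rand is always taken from the mutant so the
    trial vector differs from the target in at least one position."""
    j_rand = random.randrange(len(target))
    return [mutant[j] if (random.random() <= CR or j == j_rand) else target[j]
            for j in range(len(target))]
```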
&lt;br /&gt;
=== Selection ===&lt;br /&gt;
The trial vector &amp;lt;math&amp;gt;u_i&amp;lt;/math&amp;gt; replaces the target vector &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; in the population if it yields a better objective function value: &amp;lt;math&amp;gt;&lt;br /&gt;
x_i =&lt;br /&gt;
\begin{cases}&lt;br /&gt;
u_i, &amp;amp; \text{if } f(u_i) \leq f(x_i), \\&lt;br /&gt;
x_i, &amp;amp; \text{otherwise}.&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
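The greedy selection rule above reduces to a one-line comparison in Python:

```python
def select(target, trial, f):
    """Keep the trial vector if it is no worse than the target."""
    return trial if f(trial) <= f(target) else target
```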
&lt;br /&gt;
=== Pseudocode ===&lt;br /&gt;
The pseudocode of Differential Evolution is shown below:&amp;lt;pre&amp;gt;&lt;br /&gt;
Initialize population P randomly&lt;br /&gt;
While termination criterion not met:&lt;br /&gt;
    For each individual x_i in P:&lt;br /&gt;
        Generate mutant vector v_i&lt;br /&gt;
        Perform crossover to create trial vector u_i&lt;br /&gt;
        If f(u_i) &amp;lt;= f(x_i): Replace x_i with u_i&lt;br /&gt;
Return the best solution&lt;br /&gt;
&amp;lt;/pre&amp;gt;The pseudocode presented in this article is adapted from S. Das and P.N. Suganthan&#039;s survey on the state-of-the-art in DE &amp;lt;ref&amp;gt;S. Das, P.N. Suganthan, &amp;quot;Differential Evolution: A Survey of the State-of-the-Art,&amp;quot; &#039;&#039;IEEE Transactions on Evolutionary Computation&#039;&#039;, vol. 15, no. 1, pp. 4-31, 2011.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
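The pseudocode above can be assembled into a compact, runnable Python sketch of the classic DE/rand/1/bin scheme (the population size, F, CR, and generation count are illustrative defaults, not values prescribed by the cited survey):

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, generations=100):
    """Minimize f over a box-constrained search space."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: v = x_r1 + F * (x_r2 - x_r3), indices distinct from i
            r1, r2, r3 = random.sample([j for j in range(pop_size) if j != i], 3)
            v = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j]) for j in range(dim)]
            # Clip the mutant back into the search box
            v = [min(max(v[j], bounds[j][0]), bounds[j][1]) for j in range(dim)]
            # Binomial crossover with a forced index j_rand
            j_rand = random.randrange(dim)
            u = [v[j] if (random.random() <= CR or j == j_rand) else pop[i][j]
                 for j in range(dim)]
            # Greedy selection
            if f(u) <= f(pop[i]):
                pop[i] = u
    return min(pop, key=f)
```

For example, `differential_evolution(lambda v: sum(x * x for x in v), [(-10, 10)] * 2)` drives the population toward the sphere function's minimum at the origin.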
&lt;br /&gt;
== Numerical Example ==&lt;br /&gt;
We aim to minimize the function &amp;lt;math&amp;gt;f(x) = x^2&amp;lt;/math&amp;gt;, a common benchmark for optimization algorithms &amp;lt;ref&amp;gt;J. Birge, F. Louveaux, &#039;&#039;Introduction to Stochastic Programming&#039;&#039;. Springer, 2011.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;A. Ben-Tal, L.E. Ghaoui, A. Nemirovski, &#039;&#039;Robust Optimization&#039;&#039;. Princeton University Press, 2009.&amp;lt;/ref&amp;gt;. The example uses key principles of DE, including mutation and crossover, as described in foundational texts like R. Storn and K. Price’s work &amp;lt;ref&amp;gt;R. Storn, K. Price, &amp;quot;Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces,&amp;quot; &#039;&#039;Journal of Global Optimization&#039;&#039;, vol. 11, pp. 341-359, 1997.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Initial Setup ===&lt;br /&gt;
The initial population and fitness values are: &amp;lt;math&amp;gt;P = \{-7, 3, -2, 6, 8\}, \quad f(P) = \{49, 9, 4, 36, 64\}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Iteration 1 ===&lt;br /&gt;
&#039;&#039;&#039;Individual 1&#039;&#039;&#039; (&amp;lt;math&amp;gt;x = -7&amp;lt;/math&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
* Mutant vector: &amp;lt;math&amp;gt;v_1 = 3 + 0.8 \cdot (-2 - 8) = -5&amp;lt;/math&amp;gt;&lt;br /&gt;
* Trial vector: &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;u_1 = -5&amp;lt;/math&amp;gt;&lt;br /&gt;
* Fitness: &amp;lt;math&amp;gt;f(u_1) = 25&amp;lt;/math&amp;gt;&lt;br /&gt;
* Since &amp;lt;math&amp;gt;f(u_1) &amp;lt; f(x_1)&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;25 &amp;lt; 49&amp;lt;/math&amp;gt;), replace &amp;lt;math&amp;gt;x_1&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;u_1&amp;lt;/math&amp;gt;&lt;br /&gt;
* Updated value: &amp;lt;math&amp;gt;x_1 = -5&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Individual 2&#039;&#039;&#039; (&amp;lt;math&amp;gt;x = 3&amp;lt;/math&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
* Mutant vector: &amp;lt;math&amp;gt;v_2 = -2 + 0.8 \cdot (6 - 8) = -3.6&amp;lt;/math&amp;gt;&lt;br /&gt;
* Trial vector: &amp;lt;math&amp;gt;u_2 = 3&amp;lt;/math&amp;gt; (crossover fails)&lt;br /&gt;
* Fitness: &amp;lt;math&amp;gt;f(u_2) = 9&amp;lt;/math&amp;gt;&lt;br /&gt;
* No replacement as &amp;lt;math&amp;gt;f(u_2) = f(x_2)&amp;lt;/math&amp;gt;&lt;br /&gt;
* Updated value: &amp;lt;math&amp;gt;x_2 = 3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Individual 3&#039;&#039;&#039; (&amp;lt;math&amp;gt;x = -2&amp;lt;/math&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
* Mutant vector: &amp;lt;math&amp;gt;v_3 = 6 + 0.8 \cdot (8 - (-7)) = 18&amp;lt;/math&amp;gt;&lt;br /&gt;
* Trial vector: &amp;lt;math&amp;gt;u_3 = -2&amp;lt;/math&amp;gt; (crossover fails)&lt;br /&gt;
* Fitness: &amp;lt;math&amp;gt;f(u_3) = 4&amp;lt;/math&amp;gt;&lt;br /&gt;
* No replacement as &amp;lt;math&amp;gt;f(u_3) = f(x_3)&amp;lt;/math&amp;gt;&lt;br /&gt;
* Updated value: &amp;lt;math&amp;gt;x_3 = -2&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Individual 4&#039;&#039;&#039; (&amp;lt;math&amp;gt;x = 6&amp;lt;/math&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
* Mutant vector: &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;v_4 = -7 + 0.8 \cdot (3 - (-2)) = -3&amp;lt;/math&amp;gt;&lt;br /&gt;
* Trial vector: &amp;lt;math&amp;gt;u_4 = -3&amp;lt;/math&amp;gt;&lt;br /&gt;
* Fitness: &amp;lt;math&amp;gt;f(u_4) = 9&amp;lt;/math&amp;gt;&lt;br /&gt;
* Since &amp;lt;math&amp;gt;f(u_4) &amp;lt; f(x_4)&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;9 &amp;lt; 36&amp;lt;/math&amp;gt;), replace &amp;lt;math&amp;gt;x_4&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;u_4&amp;lt;/math&amp;gt;&lt;br /&gt;
* Updated value: &amp;lt;math&amp;gt;x_4 = -3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Individual 5&#039;&#039;&#039; (&amp;lt;math&amp;gt;x = 8&amp;lt;/math&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
* Mutant vector: &amp;lt;math&amp;gt;v_5 = 3 + 0.8 \cdot (-7 - (-2)) = -1&amp;lt;/math&amp;gt;&lt;br /&gt;
* Trial vector: &amp;lt;math&amp;gt;u_5 = -1&amp;lt;/math&amp;gt;&lt;br /&gt;
* Fitness: &amp;lt;math&amp;gt;f(u_5) = 1&amp;lt;/math&amp;gt;&lt;br /&gt;
* Since &amp;lt;math&amp;gt;f(u_5) &amp;lt; f(x_5)&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;1 &amp;lt; 64&amp;lt;/math&amp;gt;), replace &amp;lt;math&amp;gt;x_5&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;u_5&amp;lt;/math&amp;gt;&lt;br /&gt;
* Updated value: &amp;lt;math&amp;gt;x_5 = -1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After the first iteration, the updated population is: &amp;lt;math&amp;gt;P = \{-5, 3, -2, -3, -1\}, \quad f(P) = \{25, 9, 4, 9, 1\}&amp;lt;/math&amp;gt;&lt;br /&gt;
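The arithmetic of Iteration 1 can be checked directly, using the donor values from the worked example:

```python
F = 0.8
# Mutant vectors v = x_r1 + F * (x_r2 - x_r3), as in the worked example
v1 = 3 + F * (-2 - 8)       # individual 1
v2 = -2 + F * (6 - 8)       # individual 2
v3 = 6 + F * (8 - (-7))     # individual 3
v4 = -7 + F * (3 - (-2))    # individual 4
v5 = 3 + F * (-7 - (-2))    # individual 5
updated = [-5, 3, -2, -3, -1]        # population after greedy selection
fitness = [x * x for x in updated]
```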
&lt;br /&gt;
=== Iterations 2–5 ===&lt;br /&gt;
The process is repeated for four more iterations. For each individual, the mutant vector, trial vector, and fitness are calculated. Individuals in the population are replaced whenever &amp;lt;math&amp;gt;f(u_i) &amp;lt; f(x_i)&amp;lt;/math&amp;gt;. Over successive iterations, the population evolves toward the minimum.&lt;br /&gt;
&lt;br /&gt;
=== Final Population ===&lt;br /&gt;
After five iterations, the population converges to: &amp;lt;math&amp;gt;P = \{0, 0, 0, 0, 0\}, \quad f(P) = \{0, 0, 0, 0, 0\}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Visualization ==&lt;br /&gt;
The progression of the population during the Differential Evolution process is illustrated in the figures below. Each figure shows the distribution of the population at a specific iteration, demonstrating how the algorithm converges toward the global minimum of the function &amp;lt;math&amp;gt;f(x) = x^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Figure 1&#039;&#039;&#039;: Initial population distribution.&lt;br /&gt;
[[index.php?title=File:Ezgif-frame-001.png|link=File:Ezgif-frame-001.png|center|600x600px|Initial population distribution before any iterations.]]&lt;br /&gt;
&#039;&#039;&#039;Figure 2&#039;&#039;&#039;: Population after Iteration 1.&lt;br /&gt;
[[index.php?title=File:Ezgif-frame-006.png|link=File:Ezgif-frame-006.png|center|600x600px|Population after the first iteration, showing improvement toward the global minimum.]]&lt;br /&gt;
&#039;&#039;&#039;Figure 3&#039;&#039;&#039;: Population after Iteration 2.&lt;br /&gt;
[[index.php?title=File:Ezgif-frame-011.png|link=File:Ezgif-frame-011.png|center|600x600px|Population after the second iteration, with further refinement of solutions.]]&lt;br /&gt;
&#039;&#039;&#039;Figure 4&#039;&#039;&#039;: Population after Iteration 4.&lt;br /&gt;
[[index.php?title=File:Ezgif-frame-018.png|link=File:Ezgif-frame-018.png|center|600x600px|Population after the fourth iteration, nearing convergence.]]&lt;br /&gt;
&#039;&#039;&#039;Figure 5&#039;&#039;&#039;: Final population after Iteration 5.&lt;br /&gt;
[[index.php?title=File:Ezgif-frame-025.png|link=File:Ezgif-frame-025.png|alt=Final population converging to the global minimum at x = 0.|center|600x600px|Final population converging to the global minimum at &amp;lt;math&amp;gt;x = 0&amp;lt;/math&amp;gt;.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Applications ==&lt;br /&gt;
Differential Evolution has been applied across various domains, including:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Engineering Design&#039;&#039;&#039;: Applications include minimizing the weight of tension/compression springs, following the approaches outlined by R. Storn and K. Price &amp;lt;ref&amp;gt;R. Storn, K. Price, &amp;quot;Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces,&amp;quot; &#039;&#039;Journal of Global Optimization&#039;&#039;, vol. 11, pp. 341-359, 1997.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;S. Das, P.N. Suganthan, &amp;quot;Differential Evolution: A Survey of the State-of-the-Art,&amp;quot; &#039;&#039;IEEE Transactions on Evolutionary Computation&#039;&#039;, vol. 15, no. 1, pp. 4-31, 2011.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
* &#039;&#039;&#039;Water Resource Management&#039;&#039;&#039;: Optimization strategies for reservoirs, as seen in recent research &amp;lt;ref&amp;gt;J. Nocedal, S.J. Wright, &#039;&#039;Numerical Optimization&#039;&#039;. Springer, 1999.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;A. Ben-Tal, L.E. Ghaoui, A. Nemirovski, &#039;&#039;Robust Optimization&#039;&#039;. Princeton University Press, 2009.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
* &#039;&#039;&#039;Machine Learning&#039;&#039;&#039;: Parameter optimization for neural networks &amp;lt;ref&amp;gt;T. Hastie, R. Tibshirani, J. Friedman, &#039;&#039;The Elements of Statistical Learning&#039;&#039;. Springer, 2009.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
* &#039;&#039;&#039;Medical Applications&#039;&#039;&#039;: Evolutionary training of artificial neural networks for breast cancer diagnosis is presented in Abbass (2002) &amp;lt;ref&amp;gt;A. Abbass, &amp;quot;An Evolutionary Artificial Neural Networks Approach for Breast Cancer Diagnosis,&amp;quot; &#039;&#039;Artificial Intelligence in Medicine&#039;&#039;, vol. 25, no. 3, pp. 265-281, 2002.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Software Tools ===&lt;br /&gt;
Several software libraries and frameworks have integrated Differential Evolution into their core optimization processes:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;ALGLIB&#039;&#039;&#039;: A powerful library in C++, C#, and Java, capable of solving complex constrained and unconstrained optimization problems.&lt;br /&gt;
* &#039;&#039;&#039;DEAP (Distributed Evolutionary Algorithms in Python)&#039;&#039;&#039;: A Python framework for rapid prototyping and testing of evolutionary algorithms, including DE.&lt;br /&gt;
* &#039;&#039;&#039;ECJ&#039;&#039;&#039;: An open-source evolutionary computation system implemented in Java, supporting a variety of algorithms, including DE.&lt;br /&gt;
* &#039;&#039;&#039;MOEA Framework&#039;&#039;&#039;: A Java library dedicated to multi-objective optimization, offering robust DE implementations.&lt;br /&gt;
&lt;br /&gt;
Differential Evolution’s adaptability and effectiveness have made it an indispensable tool for tackling challenging optimization problems across these diverse domains.&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
Differential Evolution’s ability to handle a wide range of optimization problems has made it a preferred tool in various domains &amp;lt;ref&amp;gt;J. Nocedal, S.J. Wright, &#039;&#039;Numerical Optimization&#039;&#039;. Springer, 1999.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;A. Ben-Tal, L.E. Ghaoui, A. Nemirovski, &#039;&#039;Robust Optimization&#039;&#039;. Princeton University Press, 2009.&amp;lt;/ref&amp;gt;. Despite its strengths, opportunities for improvement exist, including faster convergence and hybrid models &amp;lt;ref&amp;gt;S. Das, P.N. Suganthan, &amp;quot;Differential Evolution: A Survey of the State-of-the-Art,&amp;quot; &#039;&#039;IEEE Transactions on Evolutionary Computation&#039;&#039;, vol. 15, no. 1, pp. 4-31, 2011.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Across applications in engineering design, water resource management, machine learning, and medicine, DE has demonstrated its effectiveness on complex and constrained problems. Its capacity for global convergence and its minimal parameter-tuning requirements further enhance its appeal for real-world problems.&lt;br /&gt;
&lt;br /&gt;
Despite its strengths, there are areas for future research and improvement:&lt;br /&gt;
&lt;br /&gt;
* Enhancing the convergence speed of the algorithm.&lt;br /&gt;
* Developing hybrid approaches by combining DE with other optimization techniques.&lt;br /&gt;
* Extending its application to emerging domains such as quantum computing and autonomous systems.&lt;br /&gt;
* Improving parameter tuning methods and software implementations to make DE more accessible and efficient for practitioners.&lt;br /&gt;
&lt;br /&gt;
In summary, Differential Evolution continues to be a reliable and powerful tool for solving challenging optimization problems and holds promise for further advancements in both methodology and applications.&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
Below is a list of references used to support the content in this article:&lt;br /&gt;
&lt;br /&gt;
# R.J. Vanderbei, &#039;&#039;Linear Programming: Foundations and Extensions&#039;&#039;. Springer, 2008. Available at: [http://link.springer.com/book/10.1007/978-0-387-74388-2 SpringerLink].&lt;br /&gt;
# S. Boyd, L. Vandenberghe, &#039;&#039;Convex Optimization&#039;&#039;. Cambridge University Press, 2009. Available at: [http://www.stanford.edu/~boyd/cvxbook/ Stanford University].&lt;br /&gt;
# J. Nocedal, S.J. Wright, &#039;&#039;Numerical Optimization&#039;&#039;. Springer, 1999. Available at: [http://link.springer.com/book/10.1007/b98874 SpringerLink].&lt;br /&gt;
# J. Birge, F. Louveaux, &#039;&#039;Introduction to Stochastic Programming&#039;&#039;. Springer, 2011. Available at: [http://link.springer.com/book/10.1007/978-1-4614-0237-4 SpringerLink].&lt;br /&gt;
# A. Ben-Tal, L.E. Ghaoui, A. Nemirovski, &#039;&#039;Robust Optimization&#039;&#039;. Princeton University Press, 2009. Available at: [http://www2.isye.gatech.edu/~nemirovs/FullBookDec11.pdf Georgia Institute of Technology].&lt;br /&gt;
# L.A. Wolsey, &#039;&#039;Integer Programming&#039;&#039;. Wiley, 1998.&lt;br /&gt;
# L.T. Biegler, I.E. Grossmann, A.W. Westerberg, &#039;&#039;Systematic Methods of Chemical Process Design&#039;&#039;. Prentice Hall Press, 1997.&lt;br /&gt;
# L.T. Biegler, &#039;&#039;Nonlinear Programming: Concepts, Algorithms and Applications to Chemical Engineering&#039;&#039;. SIAM, 2010.&lt;br /&gt;
# T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, &#039;&#039;Optimization of Chemical Processes&#039;&#039;. McGraw-Hill, 2001.&lt;br /&gt;
# T. Hastie, R. Tibshirani, J. Friedman, &#039;&#039;The Elements of Statistical Learning&#039;&#039;. Springer, 2009. Available at: [http://statweb.stanford.edu/~tibs/ElemStatLearn/ Stanford University].&lt;br /&gt;
# K. Murphy, &#039;&#039;Machine Learning: A Probabilistic Perspective&#039;&#039;. MIT Press, 2012.&lt;br /&gt;
&lt;br /&gt;
Additional references for application examples:&lt;br /&gt;
&lt;br /&gt;
* S. Das, P.N. Suganthan, &amp;quot;Differential Evolution: A Survey of the State-of-the-Art,&amp;quot; &#039;&#039;IEEE Transactions on Evolutionary Computation&#039;&#039;, vol. 15, no. 1, pp. 4-31, 2011.&lt;br /&gt;
* R. Storn, K. Price, &amp;quot;Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces,&amp;quot; &#039;&#039;Journal of Global Optimization&#039;&#039;, vol. 11, pp. 341-359, 1997.&lt;br /&gt;
* A. Abbass, &amp;quot;An Evolutionary Artificial Neural Networks Approach for Breast Cancer Diagnosis,&amp;quot; &#039;&#039;Artificial Intelligence in Medicine&#039;&#039;, vol. 25, no. 3, pp. 265-281, 2002.&lt;/div&gt;</summary>
		<author><name>Fall2024 Wiki Team26</name></author>
	</entry>
	<entry>
		<id>https://optimization.cbe.cornell.edu/index.php?title=Differential_evolution&amp;diff=7707</id>
		<title>Differential evolution</title>
		<link rel="alternate" type="text/html" href="https://optimization.cbe.cornell.edu/index.php?title=Differential_evolution&amp;diff=7707"/>
		<updated>2024-12-16T02:59:24Z</updated>

		<summary type="html">&lt;p&gt;Fall2024 Wiki Team26: Updated Differential Evolution&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Author: Rohit Kumar (rk787) (ChemE 6800 Fall 2024)&lt;br /&gt;
&lt;br /&gt;
Stewards: Nathan Preuss, Wei-Han Chen, Tianqi Xiao, Guoqing Hu&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Differential Evolution (DE) is a robust and efficient optimization algorithm widely used for solving non-linear, non-differentiable, and multimodal optimization problems. It was first introduced by R. Storn and K. Price in their seminal work on global optimization over continuous spaces &amp;lt;ref&amp;gt;R. Storn, K. Price, &amp;quot;Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces,&amp;quot; &#039;&#039;Journal of Global Optimization&#039;&#039;, vol. 11, pp. 341-359, 1997.&amp;lt;/ref&amp;gt;, where they described DE as:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous-space functions.&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A heuristic is an approximation method that produces satisfactory solutions within a tolerable amount of time when classical methods cannot find an exact answer or are too slow. Unlike gradient-based methods such as gradient descent, DE does not require the objective function to be differentiable.&lt;br /&gt;
&lt;br /&gt;
The key idea is to evolve a population of candidate solutions over successive generations, perturbing existing solutions with scaled vector differences. This approach allows DE to adapt its search to the landscape of the objective function, making it particularly effective for non-linear, non-differentiable, and multimodal optimization problems.&lt;br /&gt;
&lt;br /&gt;
The algorithm is renowned for its simplicity and efficiency. Its primary advantages include:&lt;br /&gt;
&lt;br /&gt;
* Requires minimal parameter tuning.&lt;br /&gt;
* High robustness.&lt;br /&gt;
* Ability to avoid local optima and converge toward global solutions.&lt;br /&gt;
&lt;br /&gt;
In DE, each candidate solution in the evolving population is represented as a vector of real numbers. The algorithm refines these solutions over a series of iterations built upon three main steps:&lt;br /&gt;
&lt;br /&gt;
* Mutation&lt;br /&gt;
* Crossover&lt;br /&gt;
* Selection&lt;br /&gt;
&lt;br /&gt;
DE has become popular in fields such as engineering design, machine learning, signal processing, and financial modeling. Despite its simplicity, it is competitive with many other optimization techniques and often outperforms them on challenging problems.&lt;br /&gt;
&lt;br /&gt;
== Algorithm Discussion ==&lt;br /&gt;
Differential Evolution operates through four main steps: initialization, mutation, crossover, and selection, as outlined in the optimization literature &amp;lt;ref&amp;gt;J. Nocedal, S.J. Wright, &#039;&#039;Numerical Optimization&#039;&#039;. Springer, 1999.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;A. Ben-Tal, L.E. Ghaoui, A. Nemirovski, &#039;&#039;Robust Optimization&#039;&#039;. Princeton University Press, 2009.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Initialization ===&lt;br /&gt;
The initial population is generated randomly within the bounds of the search space: &amp;lt;math&amp;gt;x_{i,j} \sim U(x_j^{\text{min}}, x_j^{\text{max}})&amp;lt;/math&amp;gt; where:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;x_{i,j}&amp;lt;/math&amp;gt; represents the &amp;lt;math&amp;gt;j^{\text{th}}&amp;lt;/math&amp;gt; parameter of the &amp;lt;math&amp;gt;i^{\text{th}}&amp;lt;/math&amp;gt; individual.&lt;br /&gt;
* &amp;lt;math&amp;gt;U(a, b)&amp;lt;/math&amp;gt; is a uniform random distribution.&lt;br /&gt;
&lt;br /&gt;
=== Mutation ===&lt;br /&gt;
A mutant vector &amp;lt;math&amp;gt;v_i&amp;lt;/math&amp;gt; is generated by combining three randomly chosen vectors: &amp;lt;math&amp;gt;v_i = x_{r1} + F \cdot (x_{r2} - x_{r3})&amp;lt;/math&amp;gt; where:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;F&amp;lt;/math&amp;gt; is the mutation factor.&lt;br /&gt;
* &amp;lt;math&amp;gt;x_{r1}, x_{r2}, x_{r3}&amp;lt;/math&amp;gt; are distinct vectors from the population.&lt;br /&gt;
&lt;br /&gt;
=== Crossover ===&lt;br /&gt;
A trial vector &amp;lt;math&amp;gt;u_i&amp;lt;/math&amp;gt; is created by mixing the mutant vector &amp;lt;math&amp;gt;v_i&amp;lt;/math&amp;gt; and the target vector &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;&lt;br /&gt;
u_{i,j} =&lt;br /&gt;
\begin{cases} &lt;br /&gt;
v_{i,j}, &amp;amp; \text{if } \text{rand}(0, 1) \leq CR \text{ or } j = j_{\text{rand}}, \\&lt;br /&gt;
x_{i,j}, &amp;amp; \text{otherwise}.&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt; where:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;CR&amp;lt;/math&amp;gt; is the crossover probability.&lt;br /&gt;
* &amp;lt;math&amp;gt;j_{\text{rand}}&amp;lt;/math&amp;gt; is a randomly chosen index.&lt;br /&gt;
&lt;br /&gt;
=== Selection ===&lt;br /&gt;
The trial vector &amp;lt;math&amp;gt;u_i&amp;lt;/math&amp;gt; replaces the target vector &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; in the population if it yields a better objective function value: &amp;lt;math&amp;gt;&lt;br /&gt;
x_i =&lt;br /&gt;
\begin{cases}&lt;br /&gt;
u_i, &amp;amp; \text{if } f(u_i) \leq f(x_i), \\&lt;br /&gt;
x_i, &amp;amp; \text{otherwise}.&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Pseudocode ===&lt;br /&gt;
The pseudocode of Differential Evolution is shown below:&amp;lt;pre&amp;gt;&lt;br /&gt;
Initialize population P randomly&lt;br /&gt;
While termination criterion not met:&lt;br /&gt;
    For each individual x_i in P:&lt;br /&gt;
        Generate mutant vector v_i&lt;br /&gt;
        Perform crossover to create trial vector u_i&lt;br /&gt;
        If f(u_i) &amp;lt;= f(x_i): Replace x_i with u_i&lt;br /&gt;
Return the best solution&lt;br /&gt;
&amp;lt;/pre&amp;gt;The pseudocode presented in this article is adapted from S. Das and P.N. Suganthan&#039;s survey on the state-of-the-art in DE &amp;lt;ref&amp;gt;S. Das, P.N. Suganthan, &amp;quot;Differential Evolution: A Survey of the State-of-the-Art,&amp;quot; &#039;&#039;IEEE Transactions on Evolutionary Computation&#039;&#039;, vol. 15, no. 1, pp. 4-31, 2011.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Numerical Example ==&lt;br /&gt;
We aim to minimize the function &amp;lt;math&amp;gt;f(x) = x^2&amp;lt;/math&amp;gt;, a common benchmark for optimization algorithms &amp;lt;ref&amp;gt;J. Birge, F. Louveaux, &#039;&#039;Introduction to Stochastic Programming&#039;&#039;. Springer, 2011.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;A. Ben-Tal, L.E. Ghaoui, A. Nemirovski, &#039;&#039;Robust Optimization&#039;&#039;. Princeton University Press, 2009.&amp;lt;/ref&amp;gt;. The example uses key principles of DE, including mutation and crossover, as described in foundational texts like R. Storn and K. Price’s work &amp;lt;ref&amp;gt;R. Storn, K. Price, &amp;quot;Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces,&amp;quot; &#039;&#039;Journal of Global Optimization&#039;&#039;, vol. 11, pp. 341-359, 1997.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Initial Setup ===&lt;br /&gt;
The initial population and fitness values are: &amp;lt;math&amp;gt;P = \{-7, 3, -2, 6, 8\}, \quad f(P) = \{49, 9, 4, 36, 64\}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Iteration 1 ===&lt;br /&gt;
&#039;&#039;&#039;Individual 1&#039;&#039;&#039; (&amp;lt;math&amp;gt;x = -7&amp;lt;/math&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
* Mutant vector: &amp;lt;math&amp;gt;v_1 = 3 + 0.8 \cdot (-2 - 8) = -5&amp;lt;/math&amp;gt;&lt;br /&gt;
* Trial vector: &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;u_1 = -5&amp;lt;/math&amp;gt;&lt;br /&gt;
* Fitness: &amp;lt;math&amp;gt;f(u_1) = 25&amp;lt;/math&amp;gt;&lt;br /&gt;
* Since &amp;lt;math&amp;gt;f(u_1) &amp;lt; f(x_1)&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;25 &amp;lt; 49&amp;lt;/math&amp;gt;), replace &amp;lt;math&amp;gt;x_1&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;u_1&amp;lt;/math&amp;gt;&lt;br /&gt;
* Updated value: &amp;lt;math&amp;gt;x_1 = -5&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Individual 2&#039;&#039;&#039; (&amp;lt;math&amp;gt;x = 3&amp;lt;/math&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
* Mutant vector: &amp;lt;math&amp;gt;v_2 = -2 + 0.8 \cdot (6 - 8) = -3.6&amp;lt;/math&amp;gt;&lt;br /&gt;
* Trial vector: &amp;lt;math&amp;gt;u_2 = 3&amp;lt;/math&amp;gt; (the crossover test fails, so the trial vector retains the target value)&lt;br /&gt;
* Fitness: &amp;lt;math&amp;gt;f(u_2) = 9&amp;lt;/math&amp;gt;&lt;br /&gt;
* No replacement, since &amp;lt;math&amp;gt;f(u_2)&amp;lt;/math&amp;gt; is not less than &amp;lt;math&amp;gt;f(x_2)&amp;lt;/math&amp;gt;&lt;br /&gt;
* Updated value: &amp;lt;math&amp;gt;x_2 = 3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Individual 3&#039;&#039;&#039; (&amp;lt;math&amp;gt;x = -2&amp;lt;/math&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
* Mutant vector: &amp;lt;math&amp;gt;v_3 = 6 + 0.8 \cdot (8 - (-7)) = 18&amp;lt;/math&amp;gt;&lt;br /&gt;
* Trial vector: &amp;lt;math&amp;gt;u_3 = -2&amp;lt;/math&amp;gt; (the crossover test fails, so the trial vector retains the target value)&lt;br /&gt;
* Fitness: &amp;lt;math&amp;gt;f(u_3) = 4&amp;lt;/math&amp;gt;&lt;br /&gt;
* No replacement, since &amp;lt;math&amp;gt;f(u_3)&amp;lt;/math&amp;gt; is not less than &amp;lt;math&amp;gt;f(x_3)&amp;lt;/math&amp;gt;&lt;br /&gt;
* Updated value: &amp;lt;math&amp;gt;x_3 = -2&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Individual 4&#039;&#039;&#039; (&amp;lt;math&amp;gt;x = 6&amp;lt;/math&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
* Mutant vector: &amp;lt;math&amp;gt;v_4 = -7 + 0.8 \cdot (3 - (-2)) = -3&amp;lt;/math&amp;gt;&lt;br /&gt;
* Trial vector: &amp;lt;math&amp;gt;u_4 = -3&amp;lt;/math&amp;gt;&lt;br /&gt;
* Fitness: &amp;lt;math&amp;gt;f(u_4) = 9&amp;lt;/math&amp;gt;&lt;br /&gt;
* Since &amp;lt;math&amp;gt;f(u_4) &amp;lt; f(x_4)&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;9 &amp;lt; 36&amp;lt;/math&amp;gt;), replace &amp;lt;math&amp;gt;x_4&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;u_4&amp;lt;/math&amp;gt;&lt;br /&gt;
* Updated value: &amp;lt;math&amp;gt;x_4 = -3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Individual 5&#039;&#039;&#039; (&amp;lt;math&amp;gt;x = 8&amp;lt;/math&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
* Mutant vector: &amp;lt;math&amp;gt;v_5 = 3 + 0.8 \cdot (-7 - (-2)) = -1&amp;lt;/math&amp;gt;&lt;br /&gt;
* Trial vector: &amp;lt;math&amp;gt;u_5 = -1&amp;lt;/math&amp;gt;&lt;br /&gt;
* Fitness: &amp;lt;math&amp;gt;f(u_5) = 1&amp;lt;/math&amp;gt;&lt;br /&gt;
* Since &amp;lt;math&amp;gt;f(u_5) &amp;lt; f(x_5)&amp;lt;/math&amp;gt; (&amp;lt;math&amp;gt;1 &amp;lt; 64&amp;lt;/math&amp;gt;), replace &amp;lt;math&amp;gt;x_5&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;u_5&amp;lt;/math&amp;gt;&lt;br /&gt;
* Updated value: &amp;lt;math&amp;gt;x_5 = -1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After the first iteration, the updated population is: &amp;lt;math&amp;gt;P = \{-5, 3, -2, -3, -1\}, \quad f(P) = \{25, 9, 4, 9, 1\}&amp;lt;/math&amp;gt;&lt;br /&gt;
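The mutant-vector arithmetic in Iteration 1 above can be checked with a short script (a verification sketch; the tuples below simply restate the hand calculations, with &lt;math&amp;gt;v = a + F \cdot (b - c)&lt;/math&amp;gt; and &lt;math&amp;gt;F = 0.8&lt;/math&amp;gt;):

```python
F = 0.8

# (a, b, c, expected mutant) for each individual in Iteration 1
cases = [
    (3, -2, 8, -5.0),   # individual 1: v = 3 + 0.8 * (-2 - 8)
    (-2, 6, 8, -3.6),   # individual 2: v = -2 + 0.8 * (6 - 8)
    (6, 8, -7, 18.0),   # individual 3: v = 6 + 0.8 * (8 - (-7))
    (-7, 3, -2, -3.0),  # individual 4: v = -7 + 0.8 * (3 - (-2))
    (3, -7, -2, -1.0),  # individual 5: v = 3 + 0.8 * (-7 - (-2))
]
for a, b, c, expected in cases:
    v = a + F * (b - c)
    assert abs(v - expected) < 1e-9  # matches the hand calculation
```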
&lt;br /&gt;
=== Iterations 2–5 ===&lt;br /&gt;
The process is repeated for four more iterations. For each individual, the mutant vector, trial vector, and fitness are calculated. Individuals in the population are replaced whenever &amp;lt;math&amp;gt;f(u_i) &amp;lt; f(x_i)&amp;lt;/math&amp;gt;. Over successive iterations, the population evolves toward the minimum.&lt;br /&gt;
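The repeated mutate–crossover–select cycle can be sketched as a minimal Python loop. This is a simplified DE/rand/1 scheme for the one-dimensional problem: &lt;math&amp;gt;F = 0.8&lt;/math&amp;gt; matches the example above, while the crossover rate &lt;math&amp;gt;CR = 0.9&lt;/math&amp;gt;, the random seed, and the random choice of donor individuals are illustrative assumptions.

```python
import random

def differential_evolution(f, pop, F=0.8, CR=0.9, generations=5, seed=42):
    """Minimal DE/rand/1 loop for a one-dimensional objective."""
    rng = random.Random(seed)
    pop = list(pop)
    for _ in range(generations):
        for i, x in enumerate(pop):
            # Mutation: combine three individuals other than x_i
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            v = a + F * (b - c)
            # Crossover (trivial in 1-D): keep the mutant with probability CR
            u = v if rng.random() < CR else x
            # Selection: replace x_i only on strict improvement
            if f(u) < f(x):
                pop[i] = u
    return pop

initial = [-7, 3, -2, 6, 8]
final = differential_evolution(lambda x: x * x, initial)
```

Because selection only ever replaces an individual with a strictly better trial vector, each slot's fitness is non-increasing across generations, which is why the population in the example steadily approaches the minimum.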
&lt;br /&gt;
=== Final Population ===&lt;br /&gt;
After five iterations, the population converges to: &amp;lt;math&amp;gt;P = \{0, 0, 0, 0, 0\}, \quad f(P) = \{0, 0, 0, 0, 0\}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Visualization ===&lt;br /&gt;
The progression of the population during the Differential Evolution process is illustrated in the figures below. Each figure shows the distribution of the population at a specific iteration, demonstrating how the algorithm converges toward the global minimum of the function &amp;lt;math&amp;gt;f(x) = x^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Figure 1&#039;&#039;&#039;: Initial population distribution.&lt;br /&gt;
[[index.php?title=File:Ezgif-frame-001.png|link=File:Ezgif-frame-001.png|center|600x600px|Initial population distribution before any iterations.]]&lt;br /&gt;
&#039;&#039;&#039;Figure 2&#039;&#039;&#039;: Population after Iteration 1.&lt;br /&gt;
[[index.php?title=File:Ezgif-frame-006.png|link=File:Ezgif-frame-006.png|center|600x600px|Population after the first iteration, showing improvement toward the global minimum.]]&lt;br /&gt;
&#039;&#039;&#039;Figure 3&#039;&#039;&#039;: Population after Iteration 2.&lt;br /&gt;
[[index.php?title=File:Ezgif-frame-011.png|link=File:Ezgif-frame-011.png|center|600x600px|Population after the second iteration, with further refinement of solutions.]]&lt;br /&gt;
&#039;&#039;&#039;Figure 4&#039;&#039;&#039;: Population after Iteration 4.&lt;br /&gt;
[[index.php?title=File:Ezgif-frame-018.png|link=File:Ezgif-frame-018.png|center|600x600px|Population after the fourth iteration, nearing convergence.]]&lt;br /&gt;
&#039;&#039;&#039;Figure 5&#039;&#039;&#039;: Final population after Iteration 5.&lt;br /&gt;
[[index.php?title=File:Ezgif-frame-025.png|link=File:Ezgif-frame-025.png|center|600x600px|Final population converging to the global minimum at &amp;lt;math&amp;gt;x = 0&amp;lt;/math&amp;gt;.]]&lt;br /&gt;
&lt;br /&gt;
== Applications ==&lt;br /&gt;
Differential Evolution has been applied across various domains, including:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Engineering Design&#039;&#039;&#039;: Applications include minimizing the weight of tension/compression springs, following the approaches outlined by R. Storn and K. Price &amp;lt;ref&amp;gt;R. Storn, K. Price, &amp;quot;Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces,&amp;quot; &#039;&#039;Journal of Global Optimization&#039;&#039;, vol. 11, pp. 341-359, 1997.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;S. Das, P.N. Suganthan, &amp;quot;Differential Evolution: A Survey of the State-of-the-Art,&amp;quot; &#039;&#039;IEEE Transactions on Evolutionary Computation&#039;&#039;, vol. 15, no. 1, pp. 4-31, 2011.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
* &#039;&#039;&#039;Water Resource Management&#039;&#039;&#039;: Optimization strategies for reservoirs, as seen in recent research &amp;lt;ref&amp;gt;J. Nocedal, S.J. Wright, &#039;&#039;Numerical Optimization&#039;&#039;. Springer, 1999.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;A. Ben-Tal, L.E. Ghaoui, A. Nemirovski, &#039;&#039;Robust Optimization&#039;&#039;. Princeton University Press, 2009.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
* &#039;&#039;&#039;Machine Learning&#039;&#039;&#039;: Parameter optimization for neural networks &amp;lt;ref&amp;gt;T. Hastie, R. Tibshirani, J. Friedman, &#039;&#039;The Elements of Statistical Learning&#039;&#039;. Springer, 2009.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
* &#039;&#039;&#039;Medical Applications&#039;&#039;&#039;: DE’s effectiveness in radiotherapy optimization is discussed in Abbass (2002) &amp;lt;ref&amp;gt;A. Abbass, &amp;quot;An Evolutionary Artificial Neural Networks Approach for Breast Cancer Diagnosis,&amp;quot; &#039;&#039;Artificial Intelligence in Medicine&#039;&#039;, vol. 25, no. 3, pp. 265-281, 2002.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Software Tools ===&lt;br /&gt;
Several software libraries and frameworks have integrated Differential Evolution into their core optimization processes:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;ALGLIB&#039;&#039;&#039;: A powerful library in C++, C#, and Java, capable of solving complex constrained and unconstrained optimization problems.&lt;br /&gt;
* &#039;&#039;&#039;DEAP (Distributed Evolutionary Algorithms in Python)&#039;&#039;&#039;: A Python framework for rapid prototyping and testing of evolutionary algorithms, including DE.&lt;br /&gt;
* &#039;&#039;&#039;ECJ&#039;&#039;&#039;: An open-source evolutionary computation system implemented in Java, supporting a variety of algorithms, including DE.&lt;br /&gt;
* &#039;&#039;&#039;MOEA Framework&#039;&#039;&#039;: A Java library dedicated to multi-objective optimization, offering robust DE implementations.&lt;br /&gt;
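Beyond the libraries listed above, SciPy also ships an off-the-shelf implementation, &lt;code&amp;gt;scipy.optimize.differential_evolution&lt;/code&amp;gt; (not covered in the list above; shown here only as an illustrative sketch applied to the running example):

```python
from scipy.optimize import differential_evolution

# Minimize f(x) = x^2 over the interval [-10, 10].
# SciPy's default strategy, "best1bin", is a standard DE variant.
result = differential_evolution(lambda x: x[0] ** 2, bounds=[(-10, 10)], seed=1)
print(result.x[0], result.fun)  # best point and its objective value, near 0
```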
&lt;br /&gt;
Differential Evolution’s adaptability and effectiveness have made it an indispensable tool for tackling challenging optimization problems across these diverse domains.&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
Differential Evolution’s ability to handle a wide range of optimization problems has made it a preferred tool in various domains &amp;lt;ref&amp;gt;J. Nocedal, S.J. Wright, &#039;&#039;Numerical Optimization&#039;&#039;. Springer, 1999.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;A. Ben-Tal, L.E. Ghaoui, A. Nemirovski, &#039;&#039;Robust Optimization&#039;&#039;. Princeton University Press, 2009.&amp;lt;/ref&amp;gt;. Despite its strengths, opportunities for improvement exist, including faster convergence and hybrid models &amp;lt;ref&amp;gt;S. Das, P.N. Suganthan, &amp;quot;Differential Evolution: A Survey of the State-of-the-Art,&amp;quot; &#039;&#039;IEEE Transactions on Evolutionary Computation&#039;&#039;, vol. 15, no. 1, pp. 4-31, 2011.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Throughout its application in engineering design, water resource management, machine learning, and medical fields, DE has demonstrated its effectiveness in solving complex and constrained problems. Its capacity for global convergence and minimal parameter tuning requirements further enhance its appeal for real-world problems.&lt;br /&gt;
&lt;br /&gt;
Despite its strengths, there are areas for future research and improvement:&lt;br /&gt;
&lt;br /&gt;
* Enhancing the convergence speed of the algorithm.&lt;br /&gt;
* Developing hybrid approaches by combining DE with other optimization techniques.&lt;br /&gt;
* Extending its application to emerging domains such as quantum computing and autonomous systems.&lt;br /&gt;
* Improving parameter tuning methods and software implementations to make DE more accessible and efficient for practitioners.&lt;br /&gt;
&lt;br /&gt;
In summary, Differential Evolution continues to be a reliable and powerful tool for solving challenging optimization problems and holds promise for further advancements in both methodology and applications.&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
Below is a list of references used to support the content in this article:&lt;br /&gt;
&lt;br /&gt;
# R.J. Vanderbei, &#039;&#039;Linear Programming: Foundations and Extensions&#039;&#039;. Springer, 2008. Available at: [http://link.springer.com/book/10.1007/978-0-387-74388-2 SpringerLink].&lt;br /&gt;
# S. Boyd, L. Vandenberghe, &#039;&#039;Convex Optimization&#039;&#039;. Cambridge University Press, 2009. Available at: [http://www.stanford.edu/~boyd/cvxbook/ Stanford University].&lt;br /&gt;
# J. Nocedal, S.J. Wright, &#039;&#039;Numerical Optimization&#039;&#039;. Springer, 1999. Available at: [http://link.springer.com/book/10.1007/b98874 SpringerLink].&lt;br /&gt;
# J. Birge, F. Louveaux, &#039;&#039;Introduction to Stochastic Programming&#039;&#039;. Springer, 2011. Available at: [http://link.springer.com/book/10.1007/978-1-4614-0237-4 SpringerLink].&lt;br /&gt;
# A. Ben-Tal, L.E. Ghaoui, A. Nemirovski, &#039;&#039;Robust Optimization&#039;&#039;. Princeton University Press, 2009. Available at: [http://www2.isye.gatech.edu/~nemirovs/FullBookDec11.pdf Georgia Institute of Technology].&lt;br /&gt;
# L.A. Wolsey, &#039;&#039;Integer Programming&#039;&#039;. Wiley, 1998.&lt;br /&gt;
# L.T. Biegler, I.E. Grossmann, A.W. Westerberg, &#039;&#039;Systematic Methods of Chemical Process Design&#039;&#039;. Prentice Hall Press, 1997.&lt;br /&gt;
# L.T. Biegler, &#039;&#039;Nonlinear Programming: Concepts, Algorithms and Applications to Chemical Engineering&#039;&#039;. SIAM, 2010.&lt;br /&gt;
# T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, &#039;&#039;Optimization of Chemical Processes&#039;&#039;. McGraw-Hill, 2001.&lt;br /&gt;
# T. Hastie, R. Tibshirani, J. Friedman, &#039;&#039;The Elements of Statistical Learning&#039;&#039;. Springer, 2009. Available at: [http://statweb.stanford.edu/~tibs/ElemStatLearn/ Stanford University].&lt;br /&gt;
# K. Murphy, &#039;&#039;Machine Learning: A Probabilistic Perspective&#039;&#039;. MIT Press, 2012.&lt;br /&gt;
&lt;br /&gt;
Additional references for application examples:&lt;br /&gt;
&lt;br /&gt;
* S. Das, P.N. Suganthan, &amp;quot;Differential Evolution: A Survey of the State-of-the-Art,&amp;quot; &#039;&#039;IEEE Transactions on Evolutionary Computation&#039;&#039;, vol. 15, no. 1, pp. 4-31, 2011.&lt;br /&gt;
* R. Storn, K. Price, &amp;quot;Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces,&amp;quot; &#039;&#039;Journal of Global Optimization&#039;&#039;, vol. 11, pp. 341-359, 1997.&lt;br /&gt;
* A. Abbass, &amp;quot;An Evolutionary Artificial Neural Networks Approach for Breast Cancer Diagnosis,&amp;quot; &#039;&#039;Artificial Intelligence in Medicine&#039;&#039;, vol. 25, no. 3, pp. 265-281, 2002.&lt;/div&gt;</summary>
		<author><name>Fall2024 Wiki Team26</name></author>
	</entry>
	<entry>
		<id>https://optimization.cbe.cornell.edu/index.php?title=File:Ezgif-frame-025.png&amp;diff=7701</id>
		<title>File:Ezgif-frame-025.png</title>
		<link rel="alternate" type="text/html" href="https://optimization.cbe.cornell.edu/index.php?title=File:Ezgif-frame-025.png&amp;diff=7701"/>
		<updated>2024-12-16T02:32:07Z</updated>

		<summary type="html">&lt;p&gt;Fall2024 Wiki Team26: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Figure 5: Iteration 5: Population converged at the global minimum.&lt;/div&gt;</summary>
		<author><name>Fall2024 Wiki Team26</name></author>
	</entry>
	<entry>
		<id>https://optimization.cbe.cornell.edu/index.php?title=File:Ezgif-frame-018.png&amp;diff=7700</id>
		<title>File:Ezgif-frame-018.png</title>
		<link rel="alternate" type="text/html" href="https://optimization.cbe.cornell.edu/index.php?title=File:Ezgif-frame-018.png&amp;diff=7700"/>
		<updated>2024-12-16T02:31:43Z</updated>

		<summary type="html">&lt;p&gt;Fall2024 Wiki Team26: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Figure 4: Iteration 4: Approaching the global minimum.&lt;/div&gt;</summary>
		<author><name>Fall2024 Wiki Team26</name></author>
	</entry>
	<entry>
		<id>https://optimization.cbe.cornell.edu/index.php?title=File:Ezgif-frame-011.png&amp;diff=7699</id>
		<title>File:Ezgif-frame-011.png</title>
		<link rel="alternate" type="text/html" href="https://optimization.cbe.cornell.edu/index.php?title=File:Ezgif-frame-011.png&amp;diff=7699"/>
		<updated>2024-12-16T02:31:15Z</updated>

		<summary type="html">&lt;p&gt;Fall2024 Wiki Team26: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Figure 3: Iteration 3: Further refinement of population.&lt;/div&gt;</summary>
		<author><name>Fall2024 Wiki Team26</name></author>
	</entry>
	<entry>
		<id>https://optimization.cbe.cornell.edu/index.php?title=File:Ezgif-frame-006.png&amp;diff=7698</id>
		<title>File:Ezgif-frame-006.png</title>
		<link rel="alternate" type="text/html" href="https://optimization.cbe.cornell.edu/index.php?title=File:Ezgif-frame-006.png&amp;diff=7698"/>
		<updated>2024-12-16T02:30:35Z</updated>

		<summary type="html">&lt;p&gt;Fall2024 Wiki Team26: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Figure 2: Iteration 2: Population evolving toward the minimum.&lt;/div&gt;</summary>
		<author><name>Fall2024 Wiki Team26</name></author>
	</entry>
	<entry>
		<id>https://optimization.cbe.cornell.edu/index.php?title=File:Ezgif-frame-001.png&amp;diff=7689</id>
		<title>File:Ezgif-frame-001.png</title>
		<link rel="alternate" type="text/html" href="https://optimization.cbe.cornell.edu/index.php?title=File:Ezgif-frame-001.png&amp;diff=7689"/>
		<updated>2024-12-16T02:16:25Z</updated>

		<summary type="html">&lt;p&gt;Fall2024 Wiki Team26: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Iteration 1&lt;/div&gt;</summary>
		<author><name>Fall2024 Wiki Team26</name></author>
	</entry>
</feed>