Finally, some good news in the field of convergence analysis of evolutionary algorithms. Until now, no concrete mathematical convergence proof of any evolutionary algorithm, be it Genetic Algorithms, Particle Swarm Optimization, or Ant Colony Optimization, was available in the literature. Although these search and optimization algorithms have proven their effectiveness on real-world, ill-conditioned, discontinuous, and multi-modal problems where traditional optimization techniques (read: gradient-based search) have failed, critics have always faulted evolutionary algorithms for the lack of convergence proofs and for parameter-tuning issues. The Schema Theorem by John Holland, which is the basic building block of Genetic Algorithms, and some work in the field of global convergence of swarm-based optimization techniques attempt to present evolutionary algorithms in a good light, but the debate seems never-ending.

A recent paper giving the first mathematical convergence proof for Ant Colony Optimization has further cemented the power and effectiveness of evolutionary algorithms. The paper “Ants find the shortest path: a mathematical proof” by Jayadeva et al. proposes a new Ant Colony Optimization algorithm, named “Eigen-Ant”, for finding the shortest path between a source and a destination, based on selective pheromone removal that occurs only on the path actually chosen for each trip. It proves that the shortest path is the only stable equilibrium for Eigen-Ant, which means it is maintained for arbitrary initial pheromone concentrations on the paths, and even when path lengths change with time. The Eigen-Ant algorithm uses only two parameters and does not require them to be finely tuned (from the abstract).
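To make the selective-removal idea concrete, here is a toy two-path sketch. The update rule below is my own illustrative guess, not the paper’s exact Eigen-Ant rule: the parameters `alpha` (deposit strength) and `beta` (removal rate), and the specific form of the update, are assumptions. The only feature carried over from the abstract is that deposit and removal happen *only* on the path chosen for each trip.

```python
import random

def eigen_ant_sketch(lengths, alpha=1.0, beta=0.1, trips=5000, seed=0):
    """Toy two-path pheromone dynamics (illustrative, not the paper's rule).

    Each trip, a path is picked with probability proportional to its
    pheromone level; pheromone is then deposited (inversely to path
    length) and removed ONLY on the chosen path -- selective removal.
    """
    rng = random.Random(seed)
    c = [1.0] * len(lengths)              # arbitrary initial pheromone levels
    for _ in range(trips):
        # roulette-wheel choice proportional to pheromone concentration
        r = rng.random() * sum(c)
        k, acc = len(c) - 1, 0.0
        for i, ci in enumerate(c):
            acc += ci
            if r <= acc:
                k = i
                break
        # selective update: only the traversed path k changes
        c[k] = (1.0 - beta) * c[k] + alpha / lengths[k]
    total = sum(c)
    return [ci / total for ci in c]       # long-run choice probabilities

probs = eigen_ant_sketch([1.0, 2.0])      # path 0 is the shorter one
print(probs)
```

In this toy rule, the chosen path’s pheromone is attracted toward the fixed point `alpha / (beta * length)`, so the choice probabilities settle roughly in proportion to inverse path length and the shorter path dominates; the actual Eigen-Ant update, and the proof that the shortest path is the only stable equilibrium, are in the paper.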

Finally, some good news to cherish.


Hmmm. One question that needs to be answered is what information about the problem is this (or any other) technique able to access that others can’t. Another is what is the practical trade-off represented by a particular choice of algorithm – for example, is it speed against the reliability of finding the truly optimal solution (GAs work well initially but slow down dramatically in the later stages of certain classes of optimisation problems – specifically, those for which the schema theorem doesn’t apply). Techniques designed to exploit properties of the problem at hand have a better chance of out-performing general purpose methods. So if the goal is to avoid having to understand the problem, evolutionary algorithms might be a good choice, but could in principle be out-performed by a method based on deeper understanding of the problem.

Lastly, a (conservative) convergence proof exists for simulated annealing, which is both easy to programme and effective, and can be modified to incorporate problem specific information where it is available in relatively straightforward ways. We are way beyond regarding the best available traditional optimization to be gradient based search, and what I want is to be able to use knowledge of the problem either directly in the formulation of the solution method or indirectly in choosing an appropriate pre-existing technique; at the very least I would need a good reason not to use SA!
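The comment above notes that SA is easy to programme, and it is: the whole method is a random-walk proposal plus Metropolis acceptance under a cooling temperature. Below is a minimal sketch; the test function, geometric cooling schedule, and all parameter values are arbitrary illustrative choices, not taken from any particular paper.

```python
import math
import random

def simulated_annealing(f, x0, t0=1.0, cooling=0.995, steps=2000,
                        step=2.0, seed=1):
    """Minimal SA minimizer: random proposals + Metropolis acceptance."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(steps):
        y = x + rng.uniform(-step, step)    # propose a nearby candidate
        fy = f(y)
        # always accept improvements; accept worse moves with prob e^(-delta/T)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling                        # geometric cooling schedule
    return best_x, best_f

# A 1-D multimodal test function: many local minima, global minimum at x = 0.
f = lambda x: x * x + 3.0 * (1.0 - math.cos(2.0 * math.pi * x))
x, fx = simulated_annealing(f, x0=4.3)
print(x, fx)
```

Problem-specific knowledge plugs in exactly where the comment suggests: the proposal move (`y = ...`) and the cooling schedule are the natural places to encode structure of the problem at hand.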

Hi Phil,

First of all, thanks for the elaborate comment on the post. Yes, in a way you are right, but you are viewing it from a different perspective. As you may be aware, there is a “No Free Lunch” theorem, which states that, averaged over a large variety of cost functions, the performance of every algorithm is more or less the same. For example, Simulated Annealing may produce better results when good (not necessarily best) solutions are required in comparatively little time, but on problems where many local optima exist in a large solution space, and the solution space has plateau-like geometry, other algorithms score. The speed of an algorithm also depends on whether it can be implemented directly in hardware, or on a parallel architecture. So it all boils down to the specific problem you are dealing with. One good comparison example can be found here:

http://www.staff.ncl.ac.uk/damian.giaouris/pdf/Papers/PID886208.pdf

The basic intent of this post is not to recommend any specific algorithm, but to share the news that the first concrete mathematical convergence proof for one class of evolutionary algorithm (ant colony) is now available, which will further cement the effectiveness of heuristic search in general.

There have been proofs of convergence for stochastic algorithms for years, my friend.

For SA, there is a proof (as stated above) by Hajek from the mid-80s.

For genetic algorithms, a specific proof was given by Raphaël Cerf in his PhD thesis, published in 1994. The proof is quite complex and uses elements of the Freidlin-Wentzell perturbation theory. The thesis is in French with parts in English. The paper “Asymptotic convergence of genetic algorithms” appeared in Adv. in Appl. Probab., Volume 30, Number 2 (1998), 521-550. Cerf received the Rollo Davidson Prize in 1999 and the EMS Prize in 2000.

Proofs of convergence for stochastic algorithms are (almost always) asymptotic and of (almost always) very little practical interest.

Discussing which stochastic algorithm is best is a common troll topic, even if there are some pointers in the results of contests regularly held at conferences (PPSN, maybe also ECAI or GECCO, I don’t remember exactly).

Currently, as far as I can remember from the last results I reviewed, CMA-ES (mainly designed by Nikolaus Hansen) is one of the best, if not the best, for quite a large class of problems.

Hi James. First of all, I am not saying that no work has been done in the past toward convergence proofs of evolutionary/stochastic search and optimization methods, nor am I trying to prove that one algorithm is superior to another. It is by now well understood that there is no universal search and optimization algorithm that can outperform all others on every type of problem. Regarding your point that a convergence proof exists for SA: to my knowledge there are two approaches, homogeneous and non-homogeneous. The non-homogeneous approach, considered more powerful than the homogeneous one, is based on Markov chain theory and provides necessary and sufficient conditions for convergence. But this theory rests on certain assumptions, and the most important one, which is very difficult to meet in real-world problems, is that the stationary probability distribution should tend to zero at every local (non-global) optimum; without this, the SA algorithm may or may not reach the global optimum with probability one.

However, the above theorems are excellent milestones in evolutionary computation and provide good insight into the fundamental nature of such algorithms. Furthermore, once we add the term “asymptotic”, we somewhat lose the feasibility of practical implementation. The same applies to genetic algorithms and ACO. In fact, when we look at the literature on evolutionary algorithms on theoretical grounds, we see that, due to the probabilistic and stochastic nature of these algorithms, it is nearly impossible to devise a universal algorithm that works well on all types of problems (the No Free Lunch theorem).

As the field of evolutionary algorithms is continuously “evolving”, and a plethora of new algorithms and strategies is hitting the literature, competing in benchmarking and performance tests, I think this is the ripe time to understand the fundamental nature of evolutionary algorithms.