Benchmark Tests for Evolutionary Algorithms

If you are a researcher, professional developer, or student working on optimization or search algorithms, the first questions that cross your mind are probably: How well is my algorithm going to work? How does it compare against standard search and optimization algorithms? Are there standard tests where I can measure its speed and performance? And last but certainly not least, if you are implementing a standard algorithm or one of its variants, such as a Genetic Algorithm, Particle Swarm Optimization, or even a simple brute-force search, how do you know whether you have coded it correctly and whether it will converge? This post is for you.

If you are looking for an answer to any of these questions, the information I am going to share should be useful for you.

COCO (Comparing Continuous Optimizers) is a standard platform for comparing global optimizers. It is used to benchmark optimizer performance in the BBOB (Black Box Optimization Benchmarking) workshops. So if you want to test your algorithm in a scientific and rigorous way, this benchmarking tool can help you a lot.

COCO provides:

    1. An interface in MATLAB/Octave, C, Java, and Python, which allows you to run and log experiments on multiple test functions
    2. Standard noisy and noiseless test functions, each with a complete description
    3. A Python tool for generating figures and tables

So if you program in one of the above languages, this tool is ready to use. And if you work on another language or platform, such as C# or VB, you can at least implement the well-documented standard noisy and noiseless functions in your own program.
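To give a feel for what implementing these functions yourself looks like, here is a minimal Python sketch of two of the classic test functions mentioned below, the sphere and Rastrigin functions, in their standard textbook form. Note that the actual BBOB versions apply additional shifts and transformations to the search space, so treat this as an illustration rather than the official benchmark code.

```python
import math

def sphere(x):
    """Sphere function: f(x) = sum(x_i^2). Unimodal; global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def rastrigin(x):
    """Rastrigin function: highly multimodal; global minimum 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

# Both functions reach their minimum of 0 at x = (0, ..., 0):
print(sphere([0.0, 0.0]), rastrigin([0.0, 0.0]))
```

The many local minima of the Rastrigin function are exactly what makes it a good stress test for population-based optimizers such as Genetic Algorithms and Particle Swarm Optimization.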

There are numerous noisy and noiseless functions to test against, such as the sphere function, the ellipsoidal function, the Rastrigin function, and many more. Each function has its global optimum within [-5, 5] (between -5 and 5) in each dimension. Normally you test your search or optimization algorithm on these functions across several search-space dimensions [2, 3, 5, 10, 20, 40]. In each dimension, several test runs are conducted, and both convergence effectiveness and the time taken to reach the optimum are evaluated.
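The testing protocol above can be sketched as a small Python harness. This is a toy version of the idea, not COCO's actual experiment code: it runs a simple random-search baseline (a stand-in for your own optimizer) on the sphere function over the usual dimensions, sampling within [-5, 5] in each coordinate, and records the best value found per run.

```python
import random

def sphere(x):
    """Standard sphere test function: global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def random_search(f, dim, budget, lo=-5.0, hi=5.0, seed=0):
    """Baseline optimizer: sample `budget` uniform points in [lo, hi]^dim, keep the best."""
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(budget):
        x = [rng.uniform(lo, hi) for _ in range(dim)]
        best = min(best, f(x))
    return best

# Evaluate across the usual BBOB search-space dimensions.
for dim in [2, 3, 5, 10, 20, 40]:
    best = random_search(sphere, dim, budget=1000)
    print(f"dim={dim:2d}  best f-value={best:.4f}")
```

Replacing `random_search` with your own algorithm and repeating each run with different seeds gives you the kind of per-dimension convergence data the benchmark is built around.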

Once you have conducted your test runs, you can compare your results with those of similar or other optimization algorithms published in the Black Box Optimization Benchmarking (BBOB) workshops from 2009 to 2013.

Some important links:

COCO Site: http://coco.gforge.inria.fr/doku.php

Standard Noiseless Function Documentation: http://coco.lri.fr/downloads/download13.09/bbobdocfunctions.pdf

Standard Noisy Function Documentation: http://coco.lri.fr/downloads/download13.09/bbobdocnoisyfunctions.pdf

Documentation for Experimental Test Setup: http://coco.lri.fr/downloads/download13.09/bbobdocexperiment.pdf

Result page for BBOB-2012:  http://coco.gforge.inria.fr/doku.php?id=bbob-2012-results
