Evolution Strategies (ES)

class pypop7.optimizers.es.es.ES(problem, options)[source]

Evolution Strategies (ES).

This is the abstract class for all ES classes. Please use any of its concrete subclasses to optimize the black-box problem at hand.

Note

ES are a well-established family of randomized population-based search algorithms, proposed by the German computer scientists Ingo Rechenberg and Hans-Paul Schwefel (both recipients of the IEEE Evolutionary Computation Pioneer Award 2002). One key property of ES is the adaptability of strategy parameters, which can significantly accelerate the (local) convergence rate in many cases; a minimal sketch of such adaptation follows this note. Recently, the theoretical foundation of its most representative (modern) version, CMA-ES, has been built in part on the Information-Geometric Optimization (IGO) framework via invariance principles (inspired by NES).

“Their two most prominent design principles are unbiasedness and adaptive control of parameters of the sample distribution. Parameter control is not always directly inspired by biological evolution, but is an indispensable and central feature of ES. The result of selection and recombination is often deterministic. For recombination, using a single parental centroid has become the most popular approach, because such algorithms are simpler to formalize, easier to analyze, and even perform better in various circumstances as they allow for maximum genetic repair.”—[Hansen et al., 2015]
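To make the adaptive control of strategy parameters concrete, below is a minimal NumPy sketch (not part of PyPop7's API) of the classical (1+1)-ES with a per-evaluation 1/5th success rule for step-size control; the factor a and the loop structure are illustrative choices:

    import numpy as np

    def one_plus_one_es(f, x0, sigma=1.0, a=1.5, max_evals=10000, seed=2022):
        # (1+1)-ES with the 1/5th success rule for minimization: on average,
        # sigma stays constant when about 1/5 of the mutations succeed.
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        for _ in range(max_evals - 1):
            y = x + sigma * rng.standard_normal(x.size)  # Gaussian mutation
            fy = f(y)
            if fy < fx:                # success: accept the offspring,
                x, fx = y, fy
                sigma *= a             # and enlarge the step-size
            else:
                sigma *= a ** -0.25    # failure: shrink the step-size
        return x, fx

Since log(sigma) changes on average by p*log(a) - (1 - p)*log(a)/4 per evaluation (p being the success probability), it is stationary exactly at p = 1/5.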

In 2017, OpenAI designed a scalable ES variant without covariance matrix adaptation (now called OpenAI-ES) that nevertheless obtains competitive performance for high-dimensional (deep neural network-based) policy search in reinforcement learning.
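At its core, OpenAI-ES updates the distribution mean with a Monte-Carlo estimate of the search gradient built from fitness-weighted Gaussian perturbations. Below is a minimal NumPy sketch of one such update for minimization, assuming antithetic (mirrored) sampling; the function name is illustrative, and the published algorithm additionally uses rank-based fitness shaping and massively parallel evaluation:

    import numpy as np

    def openai_es_step(theta, f, sigma=0.1, lr=0.01, n=50, rng=None):
        # One illustrative OpenAI-ES-style update step for minimizing f.
        rng = np.random.default_rng() if rng is None else rng
        eps = rng.standard_normal((n, theta.size))   # Gaussian perturbations
        f_pos = np.array([f(theta + sigma * e) for e in eps])  # mirrored +
        f_neg = np.array([f(theta - sigma * e) for e in eps])  # mirrored -
        # Monte-Carlo estimate of the gradient of E[f(theta + sigma * eps)]
        grad = ((f_pos - f_neg)[:, None] * eps).sum(axis=0) / (2.0 * n * sigma)
        return theta - lr * grad                     # gradient descent step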

For some interesting applications of ES, please refer to, e.g., [Yang et al., 2024, CVPR], among others.

Parameters:
  • problem (dict) –

    problem arguments with the following common settings (keys):
    • ’fitness_function’ - objective function to be minimized (func),

    • ’ndim_problem’ - dimensionality of the problem (int),

    • ’upper_boundary’ - upper boundary of search range (array_like),

    • ’lower_boundary’ - lower boundary of search range (array_like).

  • options (dict) –

    optimizer options with the following common settings (keys):
    • ’max_function_evaluations’ - maximum number of function evaluations (int, default: np.Inf),

    • ’max_runtime’ - maximal runtime allowed (float, default: np.Inf),

    • ’seed_rng’ - seed for random number generation, which needs to be explicitly set (int);

    and with the following particular settings (keys), illustrated in the usage sketch after this list:
    • ’n_individuals’ - number of offspring/descendants, aka offspring population size (int),

    • ’n_parents’ - number of parents/ancestors, aka parental population size (int),

    • ’mean’ - initial (starting) point (array_like),

      • if not given, it will be sampled uniformly at random from the search range bounded by problem[‘lower_boundary’] and problem[‘upper_boundary’].

    • ’sigma’ - initial global step-size, aka mutation strength (float).
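For concreteness, here is a minimal usage sketch with one concrete ES subclass (CMA-ES) on a simple sphere function; the specific subclass and hyperparameter values are illustrative choices:

    import numpy as np
    from pypop7.optimizers.es.cmaes import CMAES  # a concrete ES subclass

    def sphere(x):  # simple quadratic test function to be minimized
        return np.sum(np.power(x, 2))

    ndim = 10
    problem = {'fitness_function': sphere,
               'ndim_problem': ndim,
               'lower_boundary': -5.0 * np.ones((ndim,)),
               'upper_boundary': 5.0 * np.ones((ndim,))}
    options = {'max_function_evaluations': 5000,  # common setting
               'seed_rng': 2022,                  # common setting
               'mean': 4.0 * np.ones((ndim,)),    # particular setting
               'sigma': 0.3}                      # particular setting
    results = CMAES(problem, options).optimize()
    print(results['best_so_far_y'])  # best fitness found so far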

mean

initial (starting) point, aka mean of the Gaussian search/sampling/mutation distribution.

Type: array_like

n_individuals

number of offspring/descendants, aka offspring population size.

Type: int

n_parents

number of parents/ancestors, aka parental population size.

Type: int

sigma

global step-size, aka mutation strength (i.e., the overall standard deviation of the Gaussian search distribution).

Type: float

References

https://homepages.fhv.at/hgb/downloads/ES-Is-Not-Gradient-Follower.pdf

Ollivier, Y., Arnold, L., Auger, A. and Hansen, N., 2017. Information-geometric optimization algorithms: A unifying picture via invariance principles. Journal of Machine Learning Research, 18(18), pp.1-65. https://www.jmlr.org/papers/v18/14-467.html

https://blog.otoro.net/2017/10/29/visual-evolution-strategies/

Hansen, N., Arnold, D.V. and Auger, A., 2015. Evolution strategies. In Springer Handbook of Computational Intelligence (pp. 871-898). Springer, Berlin, Heidelberg.

Bäck, T., Foussette, C. and Krause, P., 2013. Contemporary evolution strategies. Springer, Berlin, Heidelberg. https://link.springer.com/book/10.1007/978-3-642-40137-4

http://www.scholarpedia.org/article/Evolution_strategies

Beyer, H.G. and Schwefel, H.P., 2002. Evolution strategies–A comprehensive introduction. Natural Computing, 1(1), pp.3-52. https://link.springer.com/article/10.1023/A:1015059928466

Rechenberg, I., 2000. Case studies in evolutionary experimentation and computation. Computer Methods in Applied Mechanics and Engineering, 186(2-4), pp.125-140. https://www.sciencedirect.com/science/article/abs/pii/S0045782599003813

Rechenberg, I., 1989. Evolution strategy: Nature’s way of optimization. In Optimization: Methods and Applications, Possibilities and Limitations (pp. 106-126). Springer, Berlin, Heidelberg. https://link.springer.com/chapter/10.1007/978-3-642-83814-9_6

Schwefel, H.P., 1988. Collective intelligence in evolving systems. In Ecodynamics (pp. 95-100). Springer, Berlin, Heidelberg. https://link.springer.com/chapter/10.1007/978-3-642-73953-8_8

Schwefel, H.P., 1984. Evolution strategies: A family of non-linear optimization techniques based on imitating some principles of organic evolution. Annals of Operations Research, 1(2), pp.165-167. https://link.springer.com/article/10.1007/BF01876146

Rechenberg, I., 1984. The evolution strategy. A mathematical model of Darwinian evolution. In Synergetics—from Microscopic to Macroscopic Order (pp. 122-132). Springer, Berlin, Heidelberg. https://link.springer.com/chapter/10.1007/978-3-642-69540-7_13