# optimization-evolution-strategy

## Description

Evolutionary Optimization Strategy Design
## Research Question
Design a novel combination of selection, crossover, and mutation operators (and/or a novel evolutionary loop) for continuous black-box optimization that outperforms standard approaches across multiple benchmark functions.
## Background
Evolutionary algorithms (EAs) are population-based metaheuristics for black-box optimization. The three core operators — selection, crossover, and mutation — together with the overall evolutionary loop design, determine an EA's performance. Standard approaches include:
- Genetic Algorithms (GA): Tournament selection + Simulated Binary Crossover (SBX) + Polynomial Mutation
- CMA-ES: Adapts the covariance matrix of a multivariate Gaussian to guide search
- Differential Evolution (DE): Uses vector differences between population members for mutation
Each has strengths on particular function landscapes (multimodal, ill-conditioned, high-dimensional), but no single strategy dominates across all of them.
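To make the DE idea above concrete, here is a minimal sketch of the classic DE/rand/1 mutation, which builds a mutant from the vector difference of two randomly chosen population members added to a third (the function name and `F` default are illustrative, not from the harness):

```python
import random

def de_rand_1_mutation(population, F=0.5):
    """Classic DE/rand/1: mutant = a + F * (b - c), where a, b, c are
    randomly chosen members of the population and F scales the
    difference vector that steers the search."""
    a, b, c = random.sample(population, 3)
    return [ai + F * (bi - ci) for ai, bi, ci in zip(a, b, c)]
```

In full DE this mutant is then crossed with a target vector and kept only if it improves fitness; the difference-vector step is what lets DE adapt its step sizes to the population's spread.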
## Task

Modify the editable section of `custom_evolution.py` (lines 87-225) to implement a novel or improved evolutionary strategy. You may modify:
- `custom_select(population, k, toolbox)` — selection operator
- `custom_crossover(ind1, ind2)` — crossover/recombination operator
- `custom_mutate(individual, lo, hi)` — mutation operator
- `run_evolution(...)` — the full evolutionary loop (you can restructure the algorithm entirely)
The DEAP library (deap.base, deap.creator, deap.tools) is available. You may also use numpy, scipy, math, and random.
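As a starting point, here is a hedged sketch of two of the editable operators using the signatures listed above: a bounded Gaussian mutation and a BLX-alpha blend crossover. The `sigma`, `indpb`, and `alpha` parameters are illustrative defaults, not part of the harness; DEAP's fitness bookkeeping (invalidating `.fitness.values` after variation) is omitted so the sketch stays self-contained:

```python
import random

def custom_mutate(individual, lo, hi, sigma=0.1, indpb=0.1):
    """Gaussian mutation: perturb each gene with probability indpb,
    then clip back into the [lo, hi] domain.  Returns a one-tuple,
    following the DEAP operator convention."""
    for i in range(len(individual)):
        if random.random() < indpb:
            individual[i] += random.gauss(0.0, sigma * (hi - lo))
            individual[i] = min(max(individual[i], lo), hi)
    return individual,

def custom_crossover(ind1, ind2, alpha=0.5):
    """BLX-alpha blend crossover: each child gene is drawn uniformly
    from the parents' interval extended by alpha on both sides."""
    for i, (x, y) in enumerate(zip(ind1, ind2)):
        low, high = min(x, y), max(x, y)
        span = high - low
        ind1[i] = random.uniform(low - alpha * span, high + alpha * span)
        ind2[i] = random.uniform(low - alpha * span, high + alpha * span)
    return ind1, ind2
```

BLX-alpha is one reasonable replacement for SBX on continuous domains because its exploration width scales with how far apart the parents are; any other recombination respecting the same signatures would slot in the same way.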
## Interface
- Individuals: lists of floats, each with a `.fitness.values` attribute (a tuple of one float, for minimization).
- `run_evolution` must return `(best_individual, fitness_history)`, where `fitness_history` is a list of the best fitness per generation.
- TRAIN_METRICS: print `TRAIN_METRICS gen=G best_fitness=F avg_fitness=A` periodically (every 50 generations).
- Respect the function signatures and return types — the evaluation harness below the editable section is fixed.
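The contract above can be sketched as a minimal elitist loop. This standalone version takes assumed parameters (`evaluate`, `dim`, `lo`, `hi`) in place of the harness's fixed `run_evolution(...)` signature and DEAP toolbox, but it shows the two required behaviors: the `TRAIN_METRICS` line every 50 generations and the `(best_individual, fitness_history)` return value:

```python
import random

def run_evolution(evaluate, dim, lo, hi, pop_size=50, generations=200, seed=0):
    """Minimal elitist loop: random init, size-2 tournament selection,
    Gaussian mutation with clipping.  Prints TRAIN_METRICS every 50
    generations and returns (best_individual, fitness_history)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fits = [evaluate(ind) for ind in pop]
    history = []
    for gen in range(generations):
        # Elitism: carry the current best over unchanged, so the
        # best-so-far fitness never regresses.
        elite = min(range(pop_size), key=lambda k: fits[k])
        offspring = [list(pop[elite])]
        while len(offspring) < pop_size:
            # Size-2 tournament selection, then Gaussian mutation + clip.
            i, j = rng.randrange(pop_size), rng.randrange(pop_size)
            parent = pop[i] if fits[i] <= fits[j] else pop[j]
            offspring.append([min(max(g + rng.gauss(0.0, 0.1 * (hi - lo)), lo), hi)
                              for g in parent])
        pop = offspring
        fits = [evaluate(ind) for ind in pop]
        best = min(fits)
        history.append(best)
        if gen % 50 == 0:
            avg = sum(fits) / len(fits)
            print(f"TRAIN_METRICS gen={gen} best_fitness={best:.4f} avg_fitness={avg:.4f}")
    best_idx = min(range(pop_size), key=lambda k: fits[k])
    return pop[best_idx], history
```

A real submission would replace the fixed mutation scale with something adaptive (step-size control, covariance adaptation, or difference vectors), but the loop structure and reporting obligations stay the same.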
## Evaluation
Strategies are evaluated on 4 benchmarks (all minimization, lower is better):
| Benchmark | Function | Dimensions | Domain | Global Minimum |
|---|---|---|---|---|
| rastrigin-30d | Rastrigin | 30 | [-5.12, 5.12] | 0 |
| rosenbrock-30d | Rosenbrock | 30 | [-5, 10] | 0 |
| ackley-30d | Ackley | 30 | [-32.768, 32.768] | 0 |
| rastrigin-100d | Rastrigin | 100 | [-5.12, 5.12] | 0 |
Metrics: `best_fitness` (final best value, lower is better) and `convergence_gen` (the generation at which near-final fitness is reached).
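For reference, the three benchmark functions in the table have standard closed forms; a sketch of plain-Python implementations (the harness's own definitions may differ in vectorization but not in value):

```python
import math

def rastrigin(x):
    """Rastrigin: highly multimodal; global minimum 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def rosenbrock(x):
    """Rosenbrock: narrow curved valley; global minimum 0 at (1, ..., 1)."""
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))

def ackley(x):
    """Ackley: nearly flat outer region with a central funnel;
    global minimum 0 at the origin."""
    n = len(x)
    s1 = sum(xi * xi for xi in x) / n
    s2 = sum(math.cos(2 * math.pi * xi) for xi in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e
```

These landscapes stress different operator properties: Rastrigin punishes weak global exploration, Rosenbrock punishes isotropic steps in its ill-conditioned valley, and Ackley punishes premature convergence on its flat outer plateau.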
## Code

```python
#!/usr/bin/env python3
"""Evolutionary Optimization Strategy Benchmark.

This script benchmarks an evolutionary optimization strategy on standard
continuous optimization test functions (Rastrigin, Rosenbrock, Ackley).
The goal is to minimize each function by designing effective selection,
crossover, and mutation operators.

Usage:
    python deap/custom_evolution.py --function rastrigin --dim 30 --seed 42
"""

import argparse
import math
import random
```
## Results
| Model | Type | best fitness rastrigin-30d ↓ | best fitness rosenbrock-30d ↓ | best fitness ackley-30d ↓ | best fitness rastrigin-100d ↓ |
|---|---|---|---|---|---|
| cmaes | baseline | 3.648 | 10.637 | 0.000 | 5.970 |
| de | baseline | 256.813 | 48302.012 | 10.008 | 925.771 |
| ga_sbx | baseline | 8.052 | 138.347 | 1.116 | 113.784 |
| lshade | baseline | 8.155 | 12.143 | 0.000 | 135.454 |
| anthropic/claude-opus-4.6 | vanilla | 58.431 | 18.124 | 0.000 | 327.437 |
| deepseek-reasoner | vanilla | 35.826 | 5.291 | 0.010 | 65.735 |
| google/gemini-3.1-pro-preview | vanilla | 9.736 | 12.236 | 0.000 | 145.245 |
| openai/gpt-5.4-pro | vanilla | - | - | - | - |
| qwen3.6-plus:free | vanilla | 27.849 | 26.127 | 0.023 | 330.053 |
| anthropic/claude-opus-4.6 | agent | 44.806 | 16.108 | 0.000 | 503.621 |
| deepseek-reasoner | agent | 0.007 | 14.843 | 0.016 | 6.414 |
| google/gemini-3.1-pro-preview | agent | - | - | - | - |
| openai/gpt-5.4-pro | agent | 0.000 | 0.000 | 0.000 | 0.000 |
| qwen3.6-plus:free | agent | 27.849 | 26.127 | 0.023 | 330.053 |