optimization-evolution-strategy

Tags: Optimization · deap · rigorous codebase

Description

Evolutionary Optimization Strategy Design

Research Question

Design a novel combination of selection, crossover, and mutation operators (and/or a novel evolutionary loop) for continuous black-box optimization that outperforms standard approaches across multiple benchmark functions.

Background

Evolutionary algorithms (EAs) are population-based metaheuristics for black-box optimization. The three core operators — selection, crossover, and mutation — together with the overall evolutionary loop design, determine an EA's performance. Standard approaches include:

  • Genetic Algorithms (GA): Tournament selection + Simulated Binary Crossover (SBX) + Polynomial Mutation
  • CMA-ES: Adapts the covariance matrix of a multivariate Gaussian to guide search
  • Differential Evolution (DE): Uses vector differences between population members for mutation

Each has strengths on particular function landscapes (multimodal, ill-conditioned, high-dimensional), but no single strategy dominates across all of them.
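The DE mutation mentioned above is simple enough to sketch directly. This is a minimal illustration of the common DE/rand/1 variant; the scale factor `F = 0.5` is a conventional default, not a value taken from the harness:

```python
import random

def de_rand_1(population, F=0.5, rng=random):
    """DE/rand/1 mutation: pick three distinct members a, b, c and
    build the mutant vector a + F * (b - c)."""
    a, b, c = rng.sample(population, 3)
    return [ai + F * (bi - ci) for ai, bi, ci in zip(a, b, c)]
```

In full DE the mutant is recombined with a target vector via binomial crossover before selection; DE's robustness on ill-conditioned landscapes comes from the difference vectors automatically scaling and orienting with the population's spread.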

Task

Modify the editable section of custom_evolution.py (lines 87-225) to implement a novel or improved evolutionary strategy. You may modify:

  • custom_select(population, k, toolbox) — selection operator
  • custom_crossover(ind1, ind2) — crossover/recombination operator
  • custom_mutate(individual, lo, hi) — mutation operator
  • run_evolution(...) — the full evolutionary loop (you can restructure the algorithm entirely)

The DEAP library (deap.base, deap.creator, deap.tools) is available. You may also use numpy, scipy, math, and random.
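As a hypothetical starting point for the two variation operators, here is a plain-Python sketch that follows DEAP's conventions (operators return tuples and modify individuals in place). The BLX-alpha blend and the `sigma_frac`/`indpb` parameters are illustrative choices, not part of the harness:

```python
import random

def custom_crossover(ind1, ind2, alpha=0.5, rng=random):
    """BLX-alpha blend crossover: each child gene is drawn uniformly from
    the parents' gene interval, extended by alpha on each side."""
    for i, (x, y) in enumerate(zip(ind1, ind2)):
        lo_g, hi_g = min(x, y), max(x, y)
        span = hi_g - lo_g
        ind1[i] = rng.uniform(lo_g - alpha * span, hi_g + alpha * span)
        ind2[i] = rng.uniform(lo_g - alpha * span, hi_g + alpha * span)
    return ind1, ind2

def custom_mutate(individual, lo, hi, sigma_frac=0.05, indpb=0.2, rng=random):
    """Gaussian mutation: perturb each gene with probability indpb and
    clip the result back into the [lo, hi] search domain."""
    sigma = sigma_frac * (hi - lo)
    for i in range(len(individual)):
        if rng.random() < indpb:
            individual[i] = min(hi, max(lo, individual[i] + rng.gauss(0.0, sigma)))
    return individual,
```

Tying the mutation step to the domain width (`sigma_frac * (hi - lo)`) keeps one set of defaults usable across benchmarks whose domains differ by an order of magnitude.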

Interface

  • Individuals: Lists of floats, each with a .fitness.values attribute (tuple of one float for minimization).
  • run_evolution must return (best_individual, fitness_history) where fitness_history is a list of best fitness per generation.
  • TRAIN_METRICS: Print TRAIN_METRICS gen=G best_fitness=F avg_fitness=A every 50 generations.
  • Respect the function signature and return types — the evaluation harness below the editable section is fixed.
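A minimal loop satisfying this interface might look like the following: an elitist (mu + lambda) sketch with binary tournament selection and clipped Gaussian mutation, using only the standard library. Since the actual signature of run_evolution is elided above, the parameter list here is an assumption, and DEAP's .fitness.values bookkeeping is omitted for brevity:

```python
import random

def run_evolution(evaluate, dim, lo, hi, pop_size=20, gens=200, seed=0):
    """Elitist (mu + lambda) loop: returns (best_individual, fitness_history)
    and prints TRAIN_METRICS every 50 generations, as the interface requires."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fits = [evaluate(ind) for ind in pop]
    history = []
    for gen in range(gens):
        offspring = []
        for _ in range(pop_size):
            # Binary tournament selection over the current population.
            i, j = rng.randrange(pop_size), rng.randrange(pop_size)
            parent = pop[i] if fits[i] <= fits[j] else pop[j]
            # Gaussian mutation, clipped to the search domain.
            child = [min(hi, max(lo, x + rng.gauss(0.0, 0.1 * (hi - lo))))
                     for x in parent]
            offspring.append(child)
        off_fits = [evaluate(ind) for ind in offspring]
        # (mu + lambda) survivor selection: keep the best pop_size overall.
        ranked = sorted(zip(fits + off_fits, pop + offspring), key=lambda t: t[0])
        fits = [f for f, _ in ranked[:pop_size]]
        pop = [ind for _, ind in ranked[:pop_size]]
        history.append(fits[0])
        if gen % 50 == 0:
            avg = sum(fits) / len(fits)
            print(f"TRAIN_METRICS gen={gen} best_fitness={fits[0]} avg_fitness={avg}")
    return pop[0], history
```

Because survivor selection is elitist, the fitness history is monotone non-increasing, which makes the convergence_gen metric well defined.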

Evaluation

Strategies are evaluated on 4 benchmarks (all minimization, lower is better):

| Benchmark | Function | Dimensions | Domain | Global Minimum |
|---|---|---|---|---|
| rastrigin-30d | Rastrigin | 30 | [-5.12, 5.12] | 0 |
| rosenbrock-30d | Rosenbrock | 30 | [-5, 10] | 0 |
| ackley-30d | Ackley | 30 | [-32.768, 32.768] | 0 |
| rastrigin-100d | Rastrigin | 100 | [-5.12, 5.12] | 0 |
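For reference, the three benchmark families have standard closed forms. The following plain-Python versions follow the usual textbook definitions; the harness's own implementations may differ in constants or vectorization:

```python
import math

def rastrigin(x):
    """Highly multimodal; global minimum f(0, ..., 0) = 0."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def rosenbrock(x):
    """Narrow curved valley; global minimum f(1, ..., 1) = 0."""
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))

def ackley(x):
    """Nearly flat outer region with a deep central funnel; f(0, ..., 0) = 0."""
    n = len(x)
    sq = sum(xi * xi for xi in x) / n
    cs = sum(math.cos(2 * math.pi * xi) for xi in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(sq)) - math.exp(cs) + 20 + math.e
```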

Metrics: best_fitness (final best value, lower is better) and convergence_gen (the first generation at which a run reaches its near-final fitness; lower means faster convergence).
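The exact definition of convergence_gen is not spelled out above. One plausible reading, useful for sanity-checking a run locally, is the first generation whose best fitness falls within a small tolerance of the final value; the 1% relative tolerance here is an assumption:

```python
def convergence_gen(fitness_history, rtol=0.01, atol=1e-12):
    """First generation whose best fitness is within tolerance of the final one."""
    final = fitness_history[-1]
    tol = abs(final) * rtol + atol
    for gen, f in enumerate(fitness_history):
        if f - final <= tol:
            return gen
    return len(fitness_history) - 1
```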

Code

custom_evolution.py
```python
#!/usr/bin/env python3
"""Evolutionary Optimization Strategy Benchmark.

This script benchmarks an evolutionary optimization strategy on standard
continuous optimization test functions (Rastrigin, Rosenbrock, Ackley).
The goal is to minimize each function by designing effective selection,
crossover, and mutation operators.

Usage:
    python deap/custom_evolution.py --function rastrigin --dim 30 --seed 42
"""

import argparse
import math
import random
```

Results

Best fitness on each benchmark (lower is better; "-" = no result reported):

| Model | Type | rastrigin-30d | rosenbrock-30d | ackley-30d | rastrigin-100d |
|---|---|---|---|---|---|
| cmaes | baseline | 3.648 | 10.637 | 0.000 | 5.970 |
| de | baseline | 256.813 | 48302.012 | 10.008 | 925.771 |
| ga_sbx | baseline | 8.052 | 138.347 | 1.116 | 113.784 |
| lshade | baseline | 8.155 | 12.143 | 0.000 | 135.454 |
| anthropic/claude-opus-4.6 | vanilla | 58.431 | 18.124 | 0.000 | 327.437 |
| deepseek-reasoner | vanilla | 35.826 | 5.291 | 0.010 | 65.735 |
| google/gemini-3.1-pro-preview | vanilla | 9.736 | 12.236 | 0.000 | 145.245 |
| openai/gpt-5.4-pro | vanilla | - | - | - | - |
| qwen3.6-plus:free | vanilla | 27.849 | 26.127 | 0.023 | 330.053 |
| anthropic/claude-opus-4.6 | agent | 44.806 | 16.108 | 0.000 | 503.621 |
| deepseek-reasoner | agent | 0.007 | 14.843 | 0.016 | 6.414 |
| google/gemini-3.1-pro-preview | agent | - | - | - | - |
| openai/gpt-5.4-pro | agent | 0.000 | 0.000 | 0.000 | 0.000 |
| qwen3.6-plus:free | agent | 27.849 | 26.127 | 0.023 | 330.053 |

Agent Conversations