Agent Conversation: anthropic/claude-opus-4.6

Model: anthropic/claude-opus-4.6
Total steps: 6
Messages: 13
User · Step 0

Task: optimization-evolution-strategy

Evolutionary Optimization Strategy Design

Research Question

Design a novel combination of selection, crossover, and mutation operators (and/or a novel evolutionary loop) for continuous black-box optimization that outperforms standard approaches across multiple benchmark functions.

Background

Evolutionary algorithms (EAs) are population-based metaheuristics for black-box optimization. The three core operators — selection, crossover, and mutation — together with the overall evolutionary loop design, determine an EA's performance. Standard approaches include:

  • Genetic Algorithms (GA): Tournament selection + Simulated Binary Crossover (SBX) + Polynomial Mutation
  • CMA-ES: Adapts the covariance matrix of a multivariate Gaussian to guide search
  • Differential Evolution (DE): Uses vector differences between population members for mutation

Each has strengths on different function landscapes (multimodal, ill-conditioned, high-dimensional), but no single strategy dominates all.
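The DE idea in particular can be made concrete: a mutant vector is built from the scaled difference of two population members added to a third. A minimal numpy sketch of DE/rand/1 mutation follows; the function and parameter names here are illustrative, not part of the task's harness:

```python
import numpy as np

def de_rand_1(pop, i, F=0.5, rng=None):
    """DE/rand/1 donor vector: x_r1 + F * (x_r2 - x_r3),
    with r1, r2, r3 distinct indices different from i."""
    if rng is None:
        rng = np.random.default_rng(0)
    candidates = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

# A small random population: 10 individuals in 3 dimensions.
pop = np.random.default_rng(1).uniform(-5, 5, size=(10, 3))
donor = de_rand_1(pop, i=0)
```

The scale factor F controls how far the difference vector carries the donor; adapting it online is exactly the kind of parameter control the hints below suggest.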

Task

Modify the editable section of custom_evolution.py (lines 87-225) to implement a novel or improved evolutionary strategy. You may modify:

  • custom_select(population, k, toolbox) — selection operator
  • custom_crossover(ind1, ind2) — crossover/recombination operator
  • custom_mutate(individual, lo, hi) — mutation operator
  • run_evolution(...) — the full evolutionary loop (you can restructure the algorithm entirely)

The DEAP library (deap.base, deap.creator, deap.tools) is available. You may also use numpy, scipy, math, and random.

Interface

  • Individuals: Lists of floats, each with a .fitness.values attribute (tuple of one float for minimization).
  • run_evolution must return (best_individual, fitness_history) where fitness_history is a list of best fitness per generation.
  • Progress logging: print TRAIN_METRICS gen=G best_fitness=F avg_fitness=A periodically (every 50 generations).
  • Respect the function signature and return types — the evaluation harness below the editable section is fixed.
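A minimal loop satisfying this interface might look like the sketch below. It deliberately avoids DEAP (a stand-in Individual class keeps it self-contained) and uses a trivial greedy Gaussian-perturbation step in place of real operators; it is a shape-of-the-contract illustration only, under the interface assumptions listed above:

```python
import random

class _Fitness:
    def __init__(self):
        self.values = ()

class Individual(list):
    """Stand-in for a DEAP individual: a list of floats with .fitness.values."""
    def __init__(self, genes):
        super().__init__(genes)
        self.fitness = _Fitness()

def run_evolution(evaluate_func, dim, lo, hi, pop_size, n_generations,
                  cx_prob, mut_prob, seed):
    random.seed(seed)
    pop = [Individual([random.uniform(lo, hi) for _ in range(dim)])
           for _ in range(pop_size)]
    for ind in pop:
        ind.fitness.values = evaluate_func(ind)
    fitness_history = []
    for gen in range(n_generations):
        # Placeholder inner step: Gaussian perturbation with greedy acceptance.
        for i, ind in enumerate(pop):
            child = Individual([min(hi, max(lo, x + random.gauss(0, 0.1)))
                                for x in ind])
            child.fitness.values = evaluate_func(child)
            if child.fitness.values[0] < ind.fitness.values[0]:
                pop[i] = child
        best = min(pop, key=lambda x: x.fitness.values[0])
        fitness_history.append(best.fitness.values[0])
        if (gen + 1) % 50 == 0:
            avg = sum(x.fitness.values[0] for x in pop) / len(pop)
            print(f"TRAIN_METRICS gen={gen+1} "
                  f"best_fitness={best.fitness.values[0]:.6e} "
                  f"avg_fitness={avg:.6e}")
    return min(pop, key=lambda x: x.fitness.values[0]), fitness_history

# Smoke test on a sphere function (minimization, tuple-valued fitness).
best, hist = run_evolution(lambda ind: (sum(x * x for x in ind),),
                           dim=3, lo=-5, hi=5, pop_size=8,
                           n_generations=20, cx_prob=0.9, mut_prob=0.1, seed=42)
```

Note that fitness_history has exactly one entry per generation, and the greedy acceptance makes the recorded best monotonically nonincreasing.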

Evaluation

Strategies are evaluated on 4 benchmarks (all minimization, lower is better):

Benchmark        Function     Dimensions  Domain              Global Minimum
rastrigin-30d    Rastrigin    30          [-5.12, 5.12]       0
rosenbrock-30d   Rosenbrock   30          [-5, 10]            0
ackley-30d       Ackley       30          [-32.768, 32.768]   0
rastrigin-100d   Rastrigin    100         [-5.12, 5.12]       0
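These are standard test functions with well-known closed forms. For reference, a numpy sketch of their textbook definitions (the harness's own implementations are not shown):

```python
import numpy as np

def rastrigin(x):
    # Highly multimodal; global minimum 0 at the origin.
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def rosenbrock(x):
    # Narrow curved valley; global minimum 0 at the all-ones vector.
    x = np.asarray(x)
    return np.sum(100 * (x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2)

def ackley(x):
    # Nearly flat outer region with a deep central funnel; minimum 0 at origin.
    x = np.asarray(x)
    n = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)
```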

Metrics: best_fitness (final best value, lower is better) and convergence_gen (generation reaching near-final fitness).
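The harness's exact definition of convergence_gen is not shown; one plausible reading ("first generation whose best fitness is within a small tolerance of the final value") can be sketched as:

```python
def convergence_gen(fitness_history, rel_tol=0.01):
    """First generation whose best fitness is within rel_tol of the final one.

    Illustrative only: the evaluation harness may use a different tolerance
    or definition of "near-final fitness".
    """
    final = fitness_history[-1]
    threshold = final + rel_tol * abs(final) + 1e-12
    for gen, f in enumerate(fitness_history):
        if f <= threshold:
            return gen
    return len(fitness_history) - 1

history = [100.0, 40.0, 10.0, 5.05, 5.0, 5.0]
gen = convergence_gen(history)
```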

Hints

  • Consider adaptive operator selection or parameter control (e.g., adapting crossover/mutation rates during evolution).
  • Hybrid approaches combining ideas from GA, DE, and ES can be effective.
  • Elitism (preserving the best individuals across generations) often helps.
  • For multimodal functions like R
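The parameter-control hint can be made concrete with the classic 1/5 success rule from evolution strategies, which grows the mutation step size when more than a fifth of recent mutations improve fitness and shrinks it otherwise. An illustrative sketch, separate from the task's code:

```python
import random

def one_fifth_rule(sigma, successes, trials, factor=1.5):
    """Rechenberg's 1/5 success rule for step-size adaptation."""
    rate = successes / trials
    return sigma * factor if rate > 0.2 else sigma / factor

# (1+1)-ES on a 1-D sphere with step-size adaptation.
random.seed(0)
x, fx, sigma = 3.0, 9.0, 1.0
for _ in range(20):
    successes, trials = 0, 10
    for _ in range(trials):
        y = x + random.gauss(0, sigma)
        if y * y < fx:          # greedy acceptance on f(x) = x^2
            x, fx = y, y * y
            successes += 1
    sigma = one_fifth_rule(sigma, successes, trials)
```

The same idea generalizes to adapting crossover and mutation rates from observed success statistics, as L-SHADE-style methods do with their F/CR memories.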
Assistant · Step 1 (edit)
Tool Input
1{
2  "op": "replace",
3  "filename": "deap/custom_evolution.py",
4  "start_line": 87,
5  "end_line": 225,
6  "content": "\ndef custom_select(population, k, toolbox=None):\n    return population[:k]\n\ndef custom_crossover(ind1, ind2):\n    return ind1, ind2\n\ndef custom_mutate(individual, lo, hi):\n    return (individual,)\n\n\ndef run_evolution(evaluate_func, dim, lo, hi, pop_size, n_generations,\n                  cx_prob, mut_prob, seed):\n    \"\"\"ERC-LSHADE: L-SHADE with Eigenvector-Rotated Crossover.\n\n    Novel contributions:\n    1. Binomial crossover in principal component space of elite covariance,\n       adapting search directions to landscape geometry.\n    2. jSO-style generation-dependent CR lower bounds.\n    3. Stagnation-triggered random diversity injection.\n    4. Bounce-back boundary handling instead of clipping.\n    5. L-SHADE adaptive F/CR with weighted Lehmer mean + archive.\n    6. Linear population size reduction.\n    \"\"\"\n    random.seed(seed)\n    np.random.seed(seed)\n    H, N_init, N_min = 5, pop_size, 4\n    toolbox = base.Toolbox()\n    toolbox.register(\"individual\", make_individual, toolbox, dim, lo, hi)\n    toolbox.register(\"population\", tools.initRepeat, list, toolbox.individual)\n    toolbox.register(\"evaluate\", evaluate_func)\n    pop = toolbox.population(n=N_init)\n    for ind in pop:\n        ind.fitness.values = toolbox.evaluate(ind)\n    M_F, M_CR, k_idx = [0.5] * H, [0.8] * H, 0\n    archive, fitness_history = [], []\n    best_ind = min(pop, key=lambda x: x.fitness.values[0])\n    best_fit = float(best_ind.fitness.values[0])\n    best_ever = creator.Individual(best_ind[:])\n    best_ever.fitness.values = (best_fit,)\n    stag, prev_best, B = 0, best_fit, np.eye(dim)\n    for gen in range(n_generations):\n        Nc = len(pop)\n        SF, SCR, dfs = [], [], []\n        sp = sorted(pop, key=lambda x: x.fitness.values[0])\n        gr = (gen + 1) / n_generations\n        # Update eigenvector rotation from elite covariance every 25 gens\n        if gen % 25 == 0 and Nc > dim:\n            ne = min(Nc - 1, max(dim, Nc 
// 3))\n            Ea = np.array([list(x) for x in sp[:ne]])\n            try:\n                _, B = np.linalg.eigh(np.cov(Ea.T) + 1e-10 * np.eye(dim))\n            except Exception:\n                B = np.eye(dim)\n        npop = []\n        for i in range(Nc):\n            rh = random.randint(0, H - 1)\n            Fi = -1\n            for _ in range(50):\n                Fi = M_F[rh] + 0.1 * np.random.standard_cauchy()\n                if Fi > 0:\n                    break\n            Fi = float(np.clip(Fi if Fi > 0 else 0.5, 0.1, 1.0))\n            CRi = float(np.clip(np.random.normal(M_CR[rh], 0.1), 0, 1))\n            if gr < 0.25:\n                CRi = max(CRi, 0.7)\n            elif gr < 0.5:\n                CRi = max(CRi, 0.6)\n            pi = random.uniform(max(2.0 / Nc, 0.05), 0.25)\n            pb = random.choice(sp[:max(1, int(round(pi * Nc)))])\n            ids = list(range(Nc)); ids.remove(i)\n            r1 = random.choice(ids)\n            ps = Nc + len(archive); r2 = i\n            while r2 == i or r2 == r1:\n                r2 = random.randint(0, ps - 1)\n            xr2 = pop[r2] if r2 < Nc else archive[r2 - Nc]\n            xi = np.array(pop[i])\n            v = xi + Fi * (np.array(pb) - xi) + Fi * (np.array(pop[r1]) - np.array(xr2))\n            # Novel: Eigenvector-rotated binomial crossover\n            # Crossover selects/rejects principal components instead of coordinates\n            xi_r, v_r = B.T @ xi, B.T @ v\n            mask = np.array([random.random() < CRi for _ in range(dim)])\n            mask[random.randint(0, dim - 1)] = True\n            ta = B @ np.where(mask, v_r, xi_r)\n            # Bounce-back boundary handling\n            for j in range(dim):\n                if ta[j] < lo:\n                    ta[j] = (lo + pop[i][j]) / 2.0\n                elif ta[j] > hi:\n                    ta[j] = (hi + pop[i][j]) / 2.0\n            trial = creator.Individual(ta.tolist())\n            trial.fitness.values = 
toolbox.evaluate(trial)\n            if trial.fitness.values[0] <= pop[i].fitness.values[0]:\n                if trial.fitness.values[0] < pop[i].fitness.values[0]:\n                    SF.append(Fi); SCR.append(CRi)\n                    dfs.append(pop[i].fitness.values[0] - trial.fitness.values[0])\n                archive.append(creator.Individual(pop[i][:]))\n                npop.append(trial)\n                if trial.fitness.values[0] < best_fit:\n                    best_ever = creator.Individual(trial[:])\n                    best_ever.fitness.values = trial.fitness.values\n                    best_fit = trial.fitness.values[0]\n            else:\n                npop.append(pop[i])\n        pop = npop\n        while len(archive) > N_init:\n            archive.pop(random.randint(0, len(archive) - 1))\n        # Weighted Lehmer mean parameter update\n        if SF:\n            w = np.array(dfs); w = w / (w.sum() + 1e-30)\n            sfa, sca = np.array(SF), np.array(SCR)\n            M_F[k_idx] = float(np.sum(w * sfa**2) / max(np.sum(w * sfa), 1e-30))\n            M_CR[k_idx] = max(float(np.sum(w * sca)), 0.0)\n            k_idx = (k_idx + 1) % H\n        # Stagnation-triggered diversity injection\n        if abs(best_fit - prev_best) < 1e-15:\n            stag += 1\n        else:\n            stag = 0\n        prev_best = best_fit\n        if stag >= 40 and len(pop) > N_min * 2:\n            nr = max(1, len(pop) // 5)\n            pop = sorted(pop, key=lambda x: x.fitness.values[0])\n            for idx in range(len(pop) - nr, len(pop)):\n                ni = creator.Individual([random.uniform(lo, hi) for _ in range(dim)])\n                ni.fitness.values = toolbox.evaluate(ni)\n                pop[idx] = ni\n            stag = 0\n        # Linear population size reduction\n        Nn = max(N_min, int(round(N_init + (N_min - N_init) * (gen + 1) / n_generations)))\n        if Nn < len(pop):\n            pop = sorted(pop, key=lambda x: 
x.fitness.values[0])[:Nn]\n        fitness_history.append(best_fit)\n        if (gen + 1) % 50 == 0 or gen == 0:\n            af = sum(x.fitness.values[0] for x in pop) / len(pop)\n            print(f\"TRAIN_METRICS gen={gen+1} best_fitness={best_fit:.6e} \"\n                  f\"avg_fitness={af:.6e}\", flush=True)\n    return best_ever, fitness_history\n"
7}
Tool Result · Step 1
OK: Replaced lines 87..225 with 138 line(s) in deap/custom_evolution.py. Editable range: 87–224.

[Current file: deap/custom_evolution.py | editable: 87–224 | total: 302 lines]
    84| # EDITABLE SECTION — Design your evolutionary strategy below
    85| # (lines 87 to 225)
    86| # ================================================================
    87  
    88  def custom_select(population, k, toolbox=None):
    89      return population[:k]
    90  
    91  def custom_crossover(ind1, ind2):
    92      return ind1, ind2
    93  
    94  def custom_mutate(individual, lo, hi):
    95      return (individual,)
    96  
    97  
    98  def run_evolution(evaluate_func, dim, lo, hi, pop_size, n_generations,
    99                    cx_prob, mut_prob, seed):
   100      """ERC-LSHADE: L-SHADE with Eigenvector-Rotated Crossover.
   101  
   102      Novel contributions:
   103      1. Binomial crossover in principal component space of elite covariance,
   104         adapting search direc
Assistant · Step 2 (test)
Tool Result · Step 2
[Test #1] (2 tests remaining; call submit(n=1) to submit this result)

### rastrigin-30d (scripts/rastrigin_30d.sh)
Training progress (rastrigin-30d):
TRAIN_METRICS gen=300 best_fitness=1.085690e+02 avg_fitness=1.781344e+02
TRAIN_METRICS gen=350 best_fitness=1.029260e+02 avg_fitness=1.462232e+02
TRAIN_METRICS gen=400 best_fitness=8.561242e+01 avg_fitness=1.206751e+02
TRAIN_METRICS gen=450 best_fitness=7.348370e+01 avg_fitness=9.599133e+01
TRAIN_METRICS gen=500 best_fitness=5.722461e+01 avg_fitness=6.299026e+01
best_fitness_rastrigin-30d: 5.722461e+01
convergence_gen_rastrigin-30d: 487

### rosenbrock-30d (scripts/rosenbrock_30d.sh)
Training progress (rosenbrock-30d):
TRAIN_METRICS gen=300 best_fitness=1.977594e+01 avg_fitness=1.990394e+01
TRAIN_METRICS gen=350 best_fitness=1.859254e+01 avg_fitness=1.874500e+01
TRAIN_METRICS gen=400 best_fitness=1.752426e+01 avg_fitness=1.760659e+01
TRAIN_METRICS gen=450 best_fitness=1.716199e+01 avg_fitness=1.717068e+01
TRAIN_METRICS gen=500 best_fitne
Assistant · Step 3 (edit)
Tool Input
1{
2  "op": "replace",
3  "filename": "deap/custom_evolution.py",
4  "start_line": 87,
5  "end_line": 224,
6  "content": "\ndef custom_select(population, k, toolbox=None):\n    return population[:k]\n\ndef custom_crossover(ind1, ind2):\n    return ind1, ind2\n\ndef custom_mutate(individual, lo, hi):\n    return (individual,)\n\n\ndef run_evolution(evaluate_func, dim, lo, hi, pop_size, n_generations,\n                  cx_prob, mut_prob, seed):\n    \"\"\"ERC-LSHADE: L-SHADE with Adaptive Eigenvector-Rotated Crossover.\n\n    Novel contributions:\n    1. Condition-number-adaptive rotation: binomial crossover operates in\n       principal component space only when landscape curvature warrants it\n       (high condition number = non-separable), avoiding harm on separable fns.\n    2. Bounce-back boundary handling.\n    3. L-SHADE adaptive F/CR with weighted Lehmer mean + archive.\n    4. Stagnation-triggered diversity injection.\n    5. Linear population size reduction.\n    \"\"\"\n    random.seed(seed)\n    np.random.seed(seed)\n    H, N_init, N_min = 6, pop_size, 4\n    toolbox = base.Toolbox()\n    toolbox.register(\"individual\", make_individual, toolbox, dim, lo, hi)\n    toolbox.register(\"population\", tools.initRepeat, list, toolbox.individual)\n    toolbox.register(\"evaluate\", evaluate_func)\n    pop = toolbox.population(n=N_init)\n    for ind in pop:\n        ind.fitness.values = toolbox.evaluate(ind)\n    M_F, M_CR, k_idx = [0.5] * H, [0.5] * H, 0\n    archive, fitness_history = [], []\n    best_ind = min(pop, key=lambda x: x.fitness.values[0])\n    best_fit = float(best_ind.fitness.values[0])\n    best_ever = creator.Individual(best_ind[:])\n    best_ever.fitness.values = (best_fit,)\n    stag, prev_best = 0, best_fit\n    B, p_rot = np.eye(dim), 0.0\n    for gen in range(n_generations):\n        Nc = len(pop)\n        SF, SCR, dfs = [], [], []\n        sp = sorted(pop, key=lambda x: x.fitness.values[0])\n        gr = (gen + 1) / n_generations\n        # Adaptive rotation: compute eigenvectors + condition-based probability\n        if gen % 25 == 0 and Nc > 
dim:\n            ne = min(Nc - 1, max(dim, Nc // 3))\n            Ea = np.array([list(x) for x in sp[:ne]])\n            try:\n                evals, B = np.linalg.eigh(np.cov(Ea.T) + 1e-10 * np.eye(dim))\n                evals = np.maximum(evals, 1e-30)\n                cond = evals[-1] / evals[0]\n                # High condition number → non-separable → use rotation\n                p_rot = float(np.clip(np.log10(max(cond, 1.0)) / 3.0, 0.0, 0.8))\n            except Exception:\n                B, p_rot = np.eye(dim), 0.0\n        npop = []\n        for i in range(Nc):\n            rh = random.randint(0, H - 1)\n            Fi = -1\n            for _ in range(50):\n                Fi = M_F[rh] + 0.1 * np.random.standard_cauchy()\n                if Fi > 0:\n                    break\n            Fi = float(np.clip(Fi if Fi > 0 else 0.5, 0.1, 1.0))\n            CRi = float(np.clip(np.random.normal(M_CR[rh], 0.1), 0, 1))\n            # Select pbest, r1, r2 for current-to-pbest/1\n            pi = random.uniform(max(2.0 / Nc, 0.05), 0.25)\n            pb = random.choice(sp[:max(1, int(round(pi * Nc)))])\n            ids = list(range(Nc)); ids.remove(i)\n            r1 = random.choice(ids)\n            ps = Nc + len(archive); r2 = i\n            while r2 == i or r2 == r1:\n                r2 = random.randint(0, ps - 1)\n            xr2 = pop[r2] if r2 < Nc else archive[r2 - Nc]\n            xi = np.array(pop[i])\n            v = xi + Fi * (np.array(pb) - xi) + Fi * (np.array(pop[r1]) - np.array(xr2))\n            # Crossover: adaptively choose rotated or standard\n            mask = np.random.random(dim) < CRi\n            mask[random.randint(0, dim - 1)] = True\n            if random.random() < p_rot:\n                # Rotated binomial crossover in eigenvector space\n                ta = B @ np.where(mask, B.T @ v, B.T @ xi)\n            else:\n                # Standard binomial crossover\n                ta = np.where(mask, v, xi)\n            # Bounce-back 
boundary handling\n            for j in range(dim):\n                if ta[j] < lo:\n                    ta[j] = (lo + pop[i][j]) / 2.0\n                elif ta[j] > hi:\n                    ta[j] = (hi + pop[i][j]) / 2.0\n            trial = creator.Individual(ta.tolist())\n            trial.fitness.values = toolbox.evaluate(trial)\n            if trial.fitness.values[0] <= pop[i].fitness.values[0]:\n                if trial.fitness.values[0] < pop[i].fitness.values[0]:\n                    SF.append(Fi); SCR.append(CRi)\n                    dfs.append(pop[i].fitness.values[0] - trial.fitness.values[0])\n                archive.append(creator.Individual(pop[i][:]))\n                npop.append(trial)\n                if trial.fitness.values[0] < best_fit:\n                    best_ever = creator.Individual(trial[:])\n                    best_ever.fitness.values = trial.fitness.values\n                    best_fit = trial.fitness.values[0]\n            else:\n                npop.append(pop[i])\n        pop = npop\n        while len(archive) > N_init:\n            archive.pop(random.randint(0, len(archive) - 1))\n        # Weighted Lehmer mean parameter update\n        if SF:\n            w = np.array(dfs); w = w / (w.sum() + 1e-30)\n            sfa, sca = np.array(SF), np.array(SCR)\n            M_F[k_idx] = float(np.sum(w * sfa**2) / max(np.sum(w * sfa), 1e-30))\n            M_CR[k_idx] = max(float(np.sum(w * sca)), 0.0)\n            k_idx = (k_idx + 1) % H\n        # Stagnation-triggered diversity injection\n        if abs(best_fit - prev_best) < 1e-15:\n            stag += 1\n        else:\n            stag = 0\n        prev_best = best_fit\n        if stag >= 40 and len(pop) > N_min * 2:\n            nr = max(1, len(pop) // 5)\n            pop = sorted(pop, key=lambda x: x.fitness.values[0])\n            for idx in range(len(pop) - nr, len(pop)):\n                ni = creator.Individual([random.uniform(lo, hi) for _ in range(dim)])\n                
ni.fitness.values = toolbox.evaluate(ni)\n                pop[idx] = ni\n            stag = 0\n        # Linear population size reduction\n        Nn = max(N_min, int(round(N_init + (N_min - N_init) * (gen + 1) / n_generations)))\n        if Nn < len(pop):\n            pop = sorted(pop, key=lambda x: x.fitness.values[0])[:Nn]\n        fitness_history.append(best_fit)\n        if (gen + 1) % 50 == 0 or gen == 0:\n            af = sum(x.fitness.values[0] for x in pop) / len(pop)\n            print(f\"TRAIN_METRICS gen={gen+1} best_fitness={best_fit:.6e} \"\n                  f\"avg_fitness={af:.6e}\", flush=True)\n    return best_ever, fitness_history\n"
7}
Tool Result · Step 3
OK: Replaced lines 87..224 with 143 line(s) in deap/custom_evolution.py. Editable range: 87–229.

[Current file: deap/custom_evolution.py | editable: 87–229 | total: 307 lines]
    84| # EDITABLE SECTION — Design your evolutionary strategy below
    85| # (lines 87 to 225)
    86| # ================================================================
    87  
    88  def custom_select(population, k, toolbox=None):
    89      return population[:k]
    90  
    91  def custom_crossover(ind1, ind2):
    92      return ind1, ind2
    93  
    94  def custom_mutate(individual, lo, hi):
    95      return (individual,)
    96  
    97  
    98  def run_evolution(evaluate_func, dim, lo, hi, pop_size, n_generations,
    99                    cx_prob, mut_prob, seed):
   100      """ERC-LSHADE: L-SHADE with Adaptive Eigenvector-Rotated Crossover.
   101  
   102      Novel contributions:
   103      1. Condition-number-adaptive rotation: binomial crossover operates in
   104         principal comp
Assistant · Step 4 (test)
Tool Result · Step 4
[Test #2] (1 test remaining; call submit(n=2) to submit this result)

### rastrigin-30d (scripts/rastrigin_30d.sh)
Training progress (rastrigin-30d):
TRAIN_METRICS gen=300 best_fitness=9.538431e+01 avg_fitness=1.256134e+02
TRAIN_METRICS gen=350 best_fitness=7.658390e+01 avg_fitness=1.102033e+02
TRAIN_METRICS gen=400 best_fitness=7.658390e+01 avg_fitness=9.716800e+01
TRAIN_METRICS gen=450 best_fitness=6.787799e+01 avg_fitness=8.209904e+01
TRAIN_METRICS gen=500 best_fitness=5.843059e+01 avg_fitness=6.355083e+01
best_fitness_rastrigin-30d: 5.843059e+01
convergence_gen_rastrigin-30d: 491

### rosenbrock-30d (scripts/rosenbrock_30d.sh)
Training progress (rosenbrock-30d):
TRAIN_METRICS gen=300 best_fitness=2.153270e+01 avg_fitness=2.164106e+01
TRAIN_METRICS gen=350 best_fitness=2.011825e+01 avg_fitness=2.033770e+01
TRAIN_METRICS gen=400 best_fitness=1.890906e+01 avg_fitness=1.902692e+01
TRAIN_METRICS gen=450 best_fitness=1.850606e+01 avg_fitness=1.854498e+01
TRAIN_METRICS gen=500 best_fitnes
Assistant · Step 5 (edit)
Tool Input
{
  "op": "replace",
  "filename": "deap/custom_evolution.py",
  "start_line": 121,
  "end_line": 121,
  "content": "    M_F, M_CR, k_idx = [0.5] * H, [0.8] * H, 0"
}
Tool Result · Step 5
OK: Replaced lines 121..121 with 1 line(s) in deap/custom_evolution.py. Editable range: 87–229.

[Current file: deap/custom_evolution.py | editable: 87–229 | total: 307 lines]
    84| # EDITABLE SECTION — Design your evolutionary strategy below
    85| # (lines 87 to 225)
    86| # ================================================================
    87  
    88  def custom_select(population, k, toolbox=None):
    89      return population[:k]
    90  
    91  def custom_crossover(ind1, ind2):
    92      return ind1, ind2
    93  
    94  def custom_mutate(individual, lo, hi):
    95      return (individual,)
    96  
    97  
    98  def run_evolution(evaluate_func, dim, lo, hi, pop_size, n_generations,
    99                    cx_prob, mut_prob, seed):
   100      """ERC-LSHADE: L-SHADE with Adaptive Eigenvector-Rotated Crossover.
   101  
   102      Novel contributions:
   103      1. Condition-number-adaptive rotation: binomial crossover operates in
   104         principal compo
Assistant · Step 6 (test)
Tool Result · Step 6
[Test #3] (0 tests remaining)

### rastrigin-30d (scripts/rastrigin_30d.sh)
Training progress (rastrigin-30d):
TRAIN_METRICS gen=300 best_fitness=7.362434e+01 avg_fitness=1.720799e+02
TRAIN_METRICS gen=350 best_fitness=7.315411e+01 avg_fitness=1.547701e+02
TRAIN_METRICS gen=400 best_fitness=6.463486e+01 avg_fitness=8.937165e+01
TRAIN_METRICS gen=450 best_fitness=5.810935e+01 avg_fitness=7.618863e+01
TRAIN_METRICS gen=500 best_fitness=4.480560e+01 avg_fitness=5.683133e+01
best_fitness_rastrigin-30d: 4.480560e+01
convergence_gen_rastrigin-30d: 491

### rosenbrock-30d (scripts/rosenbrock_30d.sh)
Training progress (rosenbrock-30d):
TRAIN_METRICS gen=300 best_fitness=1.949610e+01 avg_fitness=1.961315e+01
TRAIN_METRICS gen=350 best_fitness=1.800993e+01 avg_fitness=1.811651e+01
TRAIN_METRICS gen=400 best_fitness=1.690900e+01 avg_fitness=1.699516e+01
TRAIN_METRICS gen=450 best_fitness=1.638302e+01 avg_fitness=1.645996e+01
TRAIN_METRICS gen=500 best_fitness=1.610766e+01 avg_fitness=1.612323e+01