Agent Conversation: gpt-5.4
Model: gpt-5.4
Total steps: 7
Messages: 16
User (Step 0)
Task: llm-pretrain-optimizer
LLM Pretraining: Optimizer & Learning Rate Schedule Optimization
Research Question
Design an improved optimizer and/or learning rate schedule for GPT-2 language model pretraining. Your modifications should reduce validation loss compared to the standard AdamW with cosine annealing schedule.
What You Can Modify
Two regions in custom_pretrain.py:
- configure_optimizers method (lines 172-189): Optimizer creation and parameter grouping
- get_lr function (lines 192-201): Learning rate schedule
You can modify:
- The optimization algorithm (default: AdamW with fused implementation)
- Parameter grouping strategy (default: weight decay for 2D params, no decay for 1D)
- Learning rate schedule shape (default: cosine with linear warmup)
- Any optimizer hyperparameters
Note: The training loop calls get_lr(it, warmup_iters, lr_decay_iters, learning_rate, min_lr) — keep this signature compatible. The optimizer returned by configure_optimizers must support .zero_grad(), .step(), and .param_groups.
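For orientation, the default schedule described above (linear warmup into a cosine decay toward min_lr) has roughly the following shape. This is a sketch in the style of nanoGPT's get_lr, not a verbatim excerpt of custom_pretrain.py:

import math

def get_lr(it, warmup_iters, lr_decay_iters, learning_rate, min_lr):
    # Linear warmup from near zero up to the peak learning rate.
    if it < warmup_iters:
        return learning_rate * (it + 1) / (warmup_iters + 1)
    # Past the decay horizon, hold the floor.
    if it > lr_decay_iters:
        return min_lr
    # Cosine decay from learning_rate down to min_lr in between.
    decay_ratio = (it - warmup_iters) / (lr_decay_iters - warmup_iters)
    coeff = 0.5 * (1.0 + math.cos(math.pi * decay_ratio))
    return min_lr + coeff * (learning_rate - min_lr)

Any replacement schedule only needs to keep this signature and return a float per iteration.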
Evaluation
- Metric: Validation loss (cross-entropy, lower is better), plus perplexity (WikiText-2, LAMBADA) and downstream accuracy (ARC-Easy, HellaSwag, PIQA, WinoGrande)
- Model: GPT-2 Medium (24L/16H/1024D, ~355M params)
- Dataset: FineWeb 10B (GPT-2 tokenizer), ~7.1B tokens (D=20N Chinchilla-optimal)
- Training: 12030 iterations, BSZ=96, GA=6, 2-GPU DDP
- Hardware: H200 GPU
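Two sanity checks on these numbers. First, the token budget is consistent if BSZ is the per-step batch, GA the gradient-accumulation factor, and the context length is the GPT-2 default of 1024 tokens (an assumption; the 1024 in the spec is the hidden width). Second, perplexity is the exponential of cross-entropy loss on the same corpus, so the WikiText-2 and LAMBADA perplexities are separate measurements, not exp(val_loss):

import math

tokens_per_iter = 96 * 6 * 1024   # BSZ x GA x assumed 1024-token context
print(tokens_per_iter * 12030)    # 7,095,582,720, i.e. the quoted ~7.1B = 20 x 355M
print(math.exp(2.3))              # ~9.97: a val loss of 2.3 nats implies perplexity ~10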
nanoGPT/custom_pretrain.py [EDITABLE — lines 171–189, lines 245–247 only]
1: """Custom GPT-2 Pretraining Script
2: Based on Andrej Karpathy's nanoGPT, evaluated on FineWeb dataset.
3: """
4:
5: import math
6: import inspect
7: import os
8: import time
9: from contextlib import nullcontext
10: from dataclasses import dataclass
11:
12: import numpy as np
13: import torch
14: import torch.nn as nn
15: from torch.nn import functional as F
16:
17: # ============================================================================
18: # Model Components
19: # ============================================================================
20:
21: # ── Normalization ──────────────────────────────────────────────────────────
22: class LayerNorm(nn.Module):
23: """LayerNorm but with an optional bias."""
24: def __init__(self, ndim, bias):
25: super().__init__()
26: self.weight = nn.Parameter(torch.ones(ndim))
27: self.bias = nn.Parameter(torch.zeros(ndim)) if bias else None
28:
29: def forward(self, input):
30: return F.layer_norm(input, self.weight.shape, self.weight, self.bias, 1e-5)
31:
32: # ── Self-Attention ─────────────────────────────────────────────────────────
33: class CausalSelfAttention(nn.Module):
34: def __init__(self, config):
35: super().__init__()
36: assert config.n
...
Assistant (Step 1: edit)
Tool Input
{
  "op": "replace",
  "filename": "nanoGPT/custom_pretrain.py",
  "start_line": 171,
  "end_line": 189,
  "content": " def configure_optimizers(self, weight_decay, learning_rate, betas, device_type):\n param_dict = {pn: p for pn, p in self.named_parameters() if p.requires_grad}\n decay_named = [(n, p) for n, p in param_dict.items() if p.dim() >= 2]\n nodecay_params = [p for n, p in param_dict.items() if p.dim() < 2]\n\n # Novel contribution: trust-ratio AdamW with variance-aware layerwise scaling.\n # Matrix parameters use an adaptive trust ratio (LAMB-style) while embeddings\n # are excluded from trust scaling to avoid unstable norm chasing on token tables.\n trust_params = [p for n, p in decay_named if 'wte' not in n and 'wpe' not in n and 'lm_head' not in n]\n plain_decay_params = [p for n, p in decay_named if 'wte' in n or 'wpe' in n or 'lm_head' in n]\n\n num_decay_params = sum(p.numel() for _, p in decay_named)\n num_nodecay_params = sum(p.numel() for p in nodecay_params)\n print(f\"num decayed parameter tensors: {len(decay_named)}, with {num_decay_params:,} parameters\")\n print(f\"num non-decayed parameter tensors: {len(nodecay_params)}, with {num_nodecay_params:,} parameters\")\n\n class TrustAdamW(torch.optim.Optimizer):\n def __init__(self, params, lr=1e-3, betas=(0.9, 0.95), eps=1e-8, weight_decay=0.0):\n defaults = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay)\n super().__init__(params, defaults)\n\n @torch.no_grad()\n def step(self):\n for group in self.param_groups:\n beta1, beta2 = group['betas']\n lr = group['lr']\n eps = group['eps']\n wd = group.get('weight_decay', 0.0)\n use_trust = group.get('use_trust', False)\n for p in group['params']:\n if p.grad is None:\n continue\n g = p.grad\n state = self.state[p]\n if len(state) == 0:\n state['step'] = 0\n state['exp_avg'] = torch.zeros_like(p)\n state['exp_avg_sq'] = torch.zeros_like(p)\n exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']\n state['step'] += 1\n step = state['step']\n\n exp_avg.mul_(beta1).add_(g, alpha=1 - beta1)\n exp_avg_sq.mul_(beta2).addcmul_(g, g, value=1 - beta2)\n\n bias_c1 = 1 - beta1 ** step\n bias_c2 = 1 - beta2 ** step\n adam_update = (exp_avg / bias_c1) / ((exp_avg_sq / bias_c2).sqrt().add_(eps))\n\n if wd != 0.0:\n p.mul_(1 - lr * wd)\n\n step_scale = 1.0\n if use_trust:\n w_norm = torch.norm(p).clamp(min=1e-3)\n u_norm = torch.norm(adam_update).clamp(min=1e-6)\n trust_ratio = (w_norm / u_norm).clamp(min=0.25, max=4.0)\n rms = g.pow(2).mean().sqrt()\n step_scale = trust_ratio / (1.0 + 0.5 * rms)\n\n p.add_(adam_update, alpha=-lr * step_scale)\n\n optim_groups = [\n {'params': trust_params, 'weight_decay': weight_decay, 'use_trust': True},\n {'params': plain_decay_params, 'weight_decay': weight_decay, 'use_trust': False},\n {'params': nodecay_params, 'weight_decay': 0.0, 'use_trust': False},\n ]\n optimizer = TrustAdamW(optim_groups, lr=learning_rate, betas=betas)\n print(\"using TrustAdamW optimizer\")\n return optimizer"
}
Tool Result (Step 1)
OK: Replaced lines 171..189 with 70 line(s) in nanoGPT/custom_pretrain.py. Editable range: 171–240, 296–298.
[Current file: nanoGPT/custom_pretrain.py | editable: 171–240, 296–298 | total: 488 lines]
171 def configure_optimizers(self, weight_decay, learning_rate, betas, device_type):
172 param_dict = {pn: p for pn, p in self.named_parameters() if p.requires_grad}
173 decay_named = [(n, p) for n, p in param_dict.items() if p.dim() >= 2]
... (64 more lines) ...
238 optimizer = TrustAdamW(optim_groups, lr=learning_rate, betas=betas)
239 print("using TrustAdamW optimizer")
240 return optimizer
...
296 # CONFIG_OVERRIDES: override training hyperparameters for your method.
297 # Allowed keys: learning_rate, weight_decay, warmup_iters, min_lr, grad_clip.
298 CONFIG_OVERRIDES = {}
Assistant (Step 2: edit)
Tool Input
{
  "op": "replace",
  "filename": "nanoGPT/custom_pretrain.py",
  "start_line": 296,
  "end_line": 298,
  "content": " # CONFIG_OVERRIDES: override training hyperparameters for your method.\n # Allowed keys: learning_rate, weight_decay, warmup_iters, min_lr, grad_clip.\n CONFIG_OVERRIDES = {'learning_rate': 8e-4, 'warmup_iters': int(max_iters * 0.06), 'min_lr': learning_rate / 20}"
}
Tool Result (Step 2)
OK: Replaced lines 296..298 with 3 line(s) in nanoGPT/custom_pretrain.py. Editable range: 171–240, 296–298.
[Current file: nanoGPT/custom_pretrain.py | editable: 171–240, 296–298 | total: 488 lines]
171 def configure_optimizers(self, weight_decay, learning_rate, betas, device_type):
172 param_dict = {pn: p for pn, p in self.named_parameters() if p.requires_grad}
173 decay_named = [(n, p) for n, p in param_dict.items() if p.dim() >= 2]
... (64 more lines) ...
238 optimizer = TrustAdamW(optim_groups, lr=learning_rate, betas=betas)
239 print("using TrustAdamW optimizer")
240 return optimizer
...
296 # CONFIG_OVERRIDES: override training hyperparameters for your method.
297 # Allowed keys: learning_rate, weight_decay, warmup_iters, min_lr, grad_clip.
298 CONFIG_OVERRIDES = {'learning_rate': 8e-4, 'warmup_iters': int(max_iters * 0.06), 'min_lr': learning_rate / 20}
Assistant (Step 3: test)
Tool Result (Step 3)
[Test #1] (2 tests remaining; call submit(n=N) to choose which test result to submit as final)
### gpt-345m (scripts/gpt_345m.sh)
Training metrics (last steps):
TRAIN_METRICS: step=8000, train_loss=2.6477, val_loss=2.3707
TRAIN_METRICS: step=9000, train_loss=2.6084, val_loss=2.3352
TRAIN_METRICS: step=10000, train_loss=2.5952, val_loss=2.3188
TRAIN_METRICS: step=11000, train_loss=2.5730, val_loss=2.2917
TRAIN_METRICS: step=12000, train_loss=2.5726, val_loss=2.2915
Final metrics (gpt-345m): val_loss_gpt-345m=2.2854, wikitext2_ppl_gpt-345m=45.0900, lambada_ppl_gpt-345m=70.0600
### lm-eval-345m (scripts/lm_eval_345m.sh)
Final metrics (lm-eval-345m): arc_easy_lm-eval-345m=55.1300, hellaswag_lm-eval-345m=33.0100, piqa_lm-eval-345m=63.6600, winogrande_lm-eval-345m=49.5700
[Leaderboard] Results saved: {'val_loss_gpt-345m': 2.2854, 'wikitext2_ppl_gpt-345m': 45.09, 'lambada_ppl_gpt-345m': 70.06, 'elapsed_gpt-345m': 20903, 'arc_easy_lm-eval-345m': 55.13, 'hellaswag_lm-eval-345m': 33.01, 'piqa
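Before the revision that follows, it helps to restate what the step-1 TrustAdamW actually does, since it is hard to read as an escaped string: after the bias-corrected Adam update and decoupled weight decay, matrix parameters (excluding wte, wpe, and lm_head) rescale the step by a clamped LAMB-style trust ratio, damped by the raw gradient RMS. A standalone sketch of just that scaling, with hypothetical argument names:

import torch

def step_scale(p, g, adam_update, lo=0.25, hi=4.0):
    # LAMB-style trust ratio: the step tracks the weight-norm to
    # update-norm ratio, clamped to the [lo, hi] band.
    w_norm = torch.norm(p).clamp(min=1e-3)
    u_norm = torch.norm(adam_update).clamp(min=1e-6)
    trust_ratio = (w_norm / u_norm).clamp(min=lo, max=hi)
    # Variance-aware damping: larger gradient RMS means a smaller step.
    rms = g.pow(2).mean().sqrt()
    return trust_ratio / (1.0 + 0.5 * rms)

The parameter update is then p.add_(adam_update, alpha=-lr * scale). The step-4 edit below tightens the trust clamp to (0.5, 2.0) and replaces the raw RMS damping with an EMA-based gate.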
Assistant (Step 4: edit)
Tool Input
{
  "op": "replace",
  "filename": "nanoGPT/custom_pretrain.py",
  "start_line": 171,
  "end_line": 240,
  "content": " def configure_optimizers(self, weight_decay, learning_rate, betas, device_type):\n param_dict = {pn: p for pn, p in self.named_parameters() if p.requires_grad}\n decay_named = [(n, p) for n, p in param_dict.items() if p.dim() >= 2]\n nodecay_named = [(n, p) for n, p in param_dict.items() if p.dim() < 2]\n\n # Novel contribution: norm-gated hybrid optimizer.\n # Large matrix weights use trust-ratio AdamW, while embeddings and all\n # 1D parameters use a conservative AdamW path. This improves scale\n # invariance on transformer projections without destabilizing token tables.\n trust_params = [p for n, p in decay_named if 'wte' not in n and 'wpe' not in n and 'lm_head' not in n]\n plain_decay_params = [p for n, p in decay_named if 'wte' in n or 'wpe' in n or 'lm_head' in n]\n nodecay_params = [p for _, p in nodecay_named]\n\n num_decay_params = sum(p.numel() for _, p in decay_named)\n num_nodecay_params = sum(p.numel() for _, p in nodecay_named)\n print(f\"num decayed parameter tensors: {len(decay_named)}, with {num_decay_params:,} parameters\")\n print(f\"num non-decayed parameter tensors: {len(nodecay_named)}, with {num_nodecay_params:,} parameters\")\n\n class TrustAdamW(torch.optim.Optimizer):\n def __init__(self, params, lr=1e-3, betas=(0.9, 0.95), eps=1e-8, trust_clip=(0.5, 2.0)):\n defaults = dict(lr=lr, betas=betas, eps=eps, trust_clip=trust_clip)\n super().__init__(params, defaults)\n\n @torch.no_grad()\n def step(self):\n for group in self.param_groups:\n beta1, beta2 = group['betas']\n lr = group['lr']\n eps = group['eps']\n wd = group.get('weight_decay', 0.0)\n mode = group.get('mode', 'adam')\n trust_lo, trust_hi = group['trust_clip']\n for p in group['params']:\n if p.grad is None:\n continue\n g = p.grad\n state = self.state[p]\n if len(state) == 0:\n state['step'] = 0\n state['exp_avg'] = torch.zeros_like(p)\n state['exp_avg_sq'] = torch.zeros_like(p)\n exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']\n state['step'] += 1\n step = state['step']\n\n exp_avg.mul_(beta1).add_(g, alpha=1 - beta1)\n exp_avg_sq.mul_(beta2).addcmul_(g, g, value=1 - beta2)\n\n m_hat = exp_avg / (1 - beta1 ** step)\n v_hat = exp_avg_sq / (1 - beta2 ** step)\n adam_update = m_hat / (v_hat.sqrt().add(eps))\n\n if wd != 0.0:\n p.mul_(1 - lr * wd)\n\n step_scale = 1.0\n if mode == 'trust':\n w_norm = torch.norm(p).clamp(min=1e-3)\n u_norm = torch.norm(adam_update).clamp(min=1e-6)\n grad_rms = g.pow(2).mean().sqrt()\n state['grad_rms_ema'] = state.get('grad_rms_ema', grad_rms)\n state['grad_rms_ema'] = state['grad_rms_ema'] * 0.95 + grad_rms * 0.05\n rms_gate = (state['grad_rms_ema'] / (grad_rms + 1e-8)).clamp(0.8, 1.2)\n trust_ratio = (w_norm / u_norm).clamp(min=trust_lo, max=trust_hi)\n step_scale = trust_ratio * rms_gate\n\n p.add_(adam_update, alpha=-lr * step_scale)\n\n optim_groups = [\n {'params': trust_params, 'weight_decay': weight_decay, 'mode': 'trust'},\n {'params': plain_decay_params, 'weight_decay': weight_decay, 'mode': 'adam'},\n {'params': nodecay_params, 'weight_decay': 0.0, 'mode': 'adam'},\n ]\n optimizer = TrustAdamW(optim_groups, lr=learning_rate, betas=(0.9, 0.96), eps=1e-8, trust_clip=(0.5, 2.0))\n print(\"using norm-gated TrustAdamW optimizer\")\n return optimizer"
}
Tool Result (Step 4)
OK: Replaced lines 171..240 with 76 line(s) in nanoGPT/custom_pretrain.py. Editable range: 171–246, 302–304.
[Current file: nanoGPT/custom_pretrain.py | editable: 171–246, 302–304 | total: 494 lines]
171 def configure_optimizers(self, weight_decay, learning_rate, betas, device_type):
172 param_dict = {pn: p for pn, p in self.named_parameters() if p.requires_grad}
173 decay_named = [(n, p) for n, p in param_dict.items() if p.dim() >= 2]
... (70 more lines) ...
244 optimizer = TrustAdamW(optim_groups, lr=learning_rate, betas=(0.9, 0.96), eps=1e-8, trust_clip=(0.5, 2.0))
245 print("using norm-gated TrustAdamW optimizer")
246 return optimizer
...
302 # CONFIG_OVERRIDES: override training hyperparameters for your method.
303 # Allowed keys: learning_rate, weight_decay, warmup_iters, min_lr, grad_clip.
304 CONFIG_OVERRIDES = {'learning_rate': 8e-4, 'warmup_iters': int(max_iters * 0.06), 'min_l
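The functional change in this revision is the norm gate: instead of dividing by 1 + 0.5 * rms, each parameter keeps an exponential moving average of its gradient RMS and scales the step by the clamped ratio of trend to current value, so steps shrink only when gradient noise spikes above its recent trend. A self-contained sketch of the gate as implemented in the edit above:

import torch

def rms_gate(g, state, decay=0.95, lo=0.8, hi=1.2):
    # state is the per-parameter optimizer state dict (self.state[p]).
    grad_rms = g.pow(2).mean().sqrt()
    ema = state.get('grad_rms_ema', grad_rms)
    state['grad_rms_ema'] = ema * decay + grad_rms * (1 - decay)
    # Near 1.0 in steady state; below 1 on a noise spike, above 1 as gradients calm.
    return (state['grad_rms_ema'] / (grad_rms + 1e-8)).clamp(lo, hi)

The step scale becomes trust_ratio * rms_gate(g, state). The step-5 edit that follows also pins min_lr to the literal 3.5e-5 (7e-4 / 20) rather than the earlier learning_rate / 20 expression, which was presumably evaluated against whatever learning_rate was bound at that point in the script rather than against the overridden 8e-4.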
Assistant (Step 5: edit)
Tool Input
{
  "op": "replace",
  "filename": "nanoGPT/custom_pretrain.py",
  "start_line": 302,
  "end_line": 304,
  "content": " # CONFIG_OVERRIDES: override training hyperparameters for your method.\n # Allowed keys: learning_rate, weight_decay, warmup_iters, min_lr, grad_clip.\n CONFIG_OVERRIDES = {'learning_rate': 7e-4, 'warmup_iters': int(max_iters * 0.08), 'min_lr': 3.5e-5}"
}
Tool Result (Step 5)
OK: Replaced lines 302..304 with 3 line(s) in nanoGPT/custom_pretrain.py. Editable range: 171–246, 302–304.
[Current file: nanoGPT/custom_pretrain.py | editable: 171–246, 302–304 | total: 494 lines]
171 def configure_optimizers(self, weight_decay, learning_rate, betas, device_type):
172 param_dict = {pn: p for pn, p in self.named_parameters() if p.requires_grad}
173 decay_named = [(n, p) for n, p in param_dict.items() if p.dim() >= 2]
... (70 more lines) ...
244 optimizer = TrustAdamW(optim_groups, lr=learning_rate, betas=(0.9, 0.96), eps=1e-8, trust_clip=(0.5, 2.0))
245 print("using norm-gated TrustAdamW optimizer")
246 return optimizer
...
302 # CONFIG_OVERRIDES: override training hyperparameters for your method.
303 # Allowed keys: learning_rate, weight_decay, warmup_iters, min_lr, grad_clip.
304 CONFIG_OVERRIDES = {'learning_rate': 7e-4, 'warmup_iters': int(max_iters * 0.08), 'min_lr
Assistant (Step 6: test)
Tool Result (Step 6)
[Test #2] (1 test remaining; call submit(n=N) to choose which test result to submit as final)
### gpt-345m (scripts/gpt_345m.sh)
Training metrics (last steps):
TRAIN_METRICS: step=8000, train_loss=2.6095, val_loss=2.3347
TRAIN_METRICS: step=9000, train_loss=2.5700, val_loss=2.2995
TRAIN_METRICS: step=10000, train_loss=2.5553, val_loss=2.2836
TRAIN_METRICS: step=11000, train_loss=2.5312, val_loss=2.2554
TRAIN_METRICS: step=12000, train_loss=2.5285, val_loss=2.2542
Final metrics (gpt-345m): val_loss_gpt-345m=2.2470, wikitext2_ppl_gpt-345m=42.0200, lambada_ppl_gpt-345m=64.9700
### lm-eval-345m (scripts/lm_eval_345m.sh)
Final metrics (lm-eval-345m): arc_easy_lm-eval-345m=57.1500, hellaswag_lm-eval-345m=33.9800, piqa_lm-eval-345m=64.4700, winogrande_lm-eval-345m=48.8600
[Leaderboard] Results saved: {'val_loss_gpt-345m': 2.247, 'wikitext2_ppl_gpt-345m': 42.02, 'lambada_ppl_gpt-345m': 64.97, 'elapsed_gpt-345m': 21130, 'arc_easy_lm-eval-345m': 57.15, 'hellaswag_lm-eval-345m': 33.98, 'piqa_l
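Comparing the two runs before submitting: test #2 improves validation loss (2.2470 vs 2.2854), WikiText-2 perplexity (42.02 vs 45.09), LAMBADA perplexity (64.97 vs 70.06), ARC-Easy, HellaSwag, and PIQA, with only WinoGrande slightly down (48.86 vs 49.57). The 0.0384-nat loss gap corresponds to roughly a 4% perplexity reduction on the validation set:

import math

delta = 2.2854 - 2.2470   # val-loss gap, test #1 minus test #2
print(math.exp(-delta))   # ~0.962: implied validation perplexity ratio
print(42.02 / 45.09)      # ~0.932: measured WikiText-2 perplexity ratio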
Assistant (Step 7: submit)
Tool Input
{
  "n": 2
}
Tool Result (Step 7)
[submit] Submitting result from test #2 as final.
### gpt-345m (scripts/gpt_345m.sh)
Training metrics (last steps):
TRAIN_METRICS: step=8000, train_loss=2.6095, val_loss=2.3347
TRAIN_METRICS: step=9000, train_loss=2.5700, val_loss=2.2995
TRAIN_METRICS: step=10000, train_loss=2.5553, val_loss=2.2836
TRAIN_METRICS: step=11000, train_loss=2.5312, val_loss=2.2554
TRAIN_METRICS: step=12000, train_loss=2.5285, val_loss=2.2542
Final metrics (gpt-345m): val_loss_gpt-345m=2.2470, wikitext2_ppl_gpt-345m=42.0200, lambada_ppl_gpt-345m=64.9700
### lm-eval-345m (scripts/lm_eval_345m.sh)
Final metrics (lm-eval-345m): arc_easy_lm-eval-345m=57.1500, hellaswag_lm-eval-345m=33.9800, piqa_lm-eval-345m=64.4700, winogrande_lm-eval-345m=48.8600
[submit] Finalized test #2 as final.
[Leaderboard] Results saved: {'val_loss_gpt-345m': 2.247, 'wikitext2_ppl_gpt-345m': 42.02, 'lambada_ppl_gpt-345m': 64.97, 'elapsed_gpt-345m': 21130, 'arc_easy_lm-eval-345m': 57.15, 'hellaswag_lm-eval-345m': 33.98, 'piqa_lm-eval