Agent Conversation: gemini-3.1-pro-preview
Task: llm-pretrain-bitlinear
LLM Pretraining: Native Low-Bit Linear (BitLinear)
Research Question
Design a low-bit linear layer for GPT-2 pretraining that uses native low-precision weights (binary/ternary) during both training and inference, instead of standard float weights. The goal is to minimize validation loss while constraining weights to a small discrete set.
Background
Standard neural networks store and compute with full-precision (FP32/BF16) weights. Post-training quantization (PTQ) and quantization-aware training (QAT) attempt to compress these weights after or during training, but the model fundamentally trains with float weights. Native low-bit training takes a different approach: weights are inherently discrete (e.g., {-1, +1} or {-1, 0, +1}) during every forward pass, with float latent weights maintained only for gradient accumulation.
This paradigm was introduced by BitNet (Wang et al., 2023), which binarized weights to {-1, +1} using the sign function, and extended by BitNet b1.58 (Ma et al., 2024), which used ternary {-1, 0, +1} weights via absmean quantization. The key insight is that these models can match or approach full-precision performance at a fraction of the effective parameter cost.
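As a concrete illustration of the absmean scheme mentioned above, here is a minimal sketch of ternary weight quantization in the style of BitNet b1.58 (illustrative only, not the task code; the function name `absmean_ternary` is made up for this example):

```python
import torch

def absmean_ternary(w: torch.Tensor, eps: float = 1e-5):
    """Ternarize w to {-1, 0, +1} using a mean-absolute-value (absmean) scale.

    Returns (w_q, scale) such that w_q * scale approximates w.
    """
    scale = w.abs().mean().clamp(min=eps)   # gamma: mean |w| over the tensor
    w_q = (w / scale).round().clamp(-1, 1)  # nearest ternary level
    return w_q, scale
```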
Key differences from QAT (the llm-pretrain-quantization task):
- QAT: float weights -> fake quantize during training -> real quantize at deployment (weights are float during training)
- BitLinear: float latent weights -> discrete quantize in every forward pass (weights are always discrete during computation)
Key differences from mixed-precision (the llm-pretrain-precision task):
- Mixed precision: changes the float format (FP32 -> BF16/FP8) but values are still continuous
- BitLinear: weights are restricted to a small discrete set (1-2 bits)
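To make the "float latent weights with a straight-through estimator" idea concrete, here is a hedged sketch of a single training step (illustrative only; the shapes, dummy loss, and SGD optimizer are arbitrary choices for this example). The matmul only ever sees the discrete ternary tensor, while gradients and optimizer updates land on the full-precision latent weight:

```python
import torch
import torch.nn.functional as F

# Float latent weight: what the optimizer updates in full precision.
w_latent = torch.randn(64, 32, requires_grad=True)
opt = torch.optim.SGD([w_latent], lr=1e-2)

x = torch.randn(8, 32)
scale = w_latent.detach().abs().mean().clamp(min=1e-5)
w_q = (w_latent / scale).round().clamp(-1, 1)                   # discrete {-1, 0, +1}
w_ste = (w_q - w_latent / scale).detach() + w_latent / scale    # STE: forward uses w_q, grad flows to w_latent

out = F.linear(x, w_ste) * scale    # low-bit matmul, rescaled afterwards
loss = out.pow(2).mean()            # dummy loss for the sketch
loss.backward()                     # gradients accumulate on the float latent weight
opt.step()                          # optimizer step stays in full precision
```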
What You Can Modify
The BitLinear module (lines 38-115) in custom_pretrain.py:
- `weight_quant(weight)` -- quantizes float latent weights to discrete values, returns `(quantized_weight, scale)`
- `activation_quant(x)` -- optional activation quantization, returns `(quantized_x, scale)`
- `BitLinear` class -- linear layer that uses the above functions
Notes on the interface:
- `BitLinear.__init__(self, in_features, out_features, bias=True)` must keep `self.weight` as a Parameter
- `BitLinear.forward(self, x) -> output`, where `x` has shape `(..., in_features)` and output has shape `(..., out_features)`
- The quantization is applied in every forward pass (both training and eval) -- there is no separate train/eval path
- `weight_quant` should return `(quantized_weight, scale)` where `quantized_weight * scale` approximates the original weight
- `activation_quant` should return `(quantized_x, scale)` similarly
- All model linear layers (attention, MLP, lm_head) use `BitLinear`
- You may add helper classes (autograd Functions, learned parameters) alongside the existing functions
- Must be compatible with `torch.compile` (no `@torch.compiler.disable`); a minimal interface sketch follows this list
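Under those constraints, a minimal skeleton of the interface might look like the sketch below (a hedged illustration, not the submitted solution; the sign-based quantizer is just a placeholder, and activation quantization is left as a pass-through):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def weight_quant(weight):
    # Placeholder quantizer: absmean scale + sign(); real code may use ternary levels.
    scale = weight.detach().abs().mean().clamp(min=1e-5)
    w_q = torch.sign(weight / scale)
    w_q = (w_q - weight / scale).detach() + weight / scale  # straight-through estimator
    return w_q, scale

def activation_quant(x):
    # Optional; an identity pass-through still honors the (quantized_x, scale) contract.
    return x, x.new_ones(())

class BitLinear(nn.Module):
    def __init__(self, in_features, out_features, bias=True):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.weight = nn.Parameter(torch.empty(out_features, in_features))  # kept as a Parameter
        self.bias = nn.Parameter(torch.zeros(out_features)) if bias else None
        nn.init.normal_(self.weight, mean=0.0, std=0.02)

    def forward(self, x):  # x: (..., in_features) -> (..., out_features)
        w_q, w_scale = weight_quant(self.weight)
        x_q, x_scale = activation_quant(x)
        out = F.linear(x_q, w_q) * (w_scale * x_scale)  # quantized in every forward pass
        if self.bias is not None:
            out = out + self.bias
        return out
```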
Evaluation
- **Primary metric**: validation loss (val_loss)
Tool Input
1{
2 "filename": "nanoGPT/custom_pretrain.py",
3 "content": "def weight_quant(weight):\n \"\"\"L2-Optimal Ternary Quantization with Weight Centralization.\n \n Computes the analytically optimal scale that minimizes L2 quantization error \n for the given ternary assignments, rather than using a static mean absolute value.\n \"\"\"\n # 1. Centralize weights per output channel\n mean = weight.mean(dim=1, keepdim=True)\n w_c = weight - mean\n \n # 2. Initial scale estimation to determine assignments\n scale_est = w_c.detach().abs().mean(dim=1, keepdim=True).clamp(min=1e-12)\n w_normed = w_c / scale_est\n \n # 3. Get ternary assignments {-1, 0, 1}\n w_q = w_normed.clamp(-1, 1).round()\n \n # 4. Compute L2-optimal scale per channel dynamically\n num = (w_c.detach() * w_q).sum(dim=1, keepdim=True)\n den = (w_q * w_q).sum(dim=1, keepdim=True).clamp(min=1.0)\n opt_scale = num / den\n \n # 5. Straight-Through Estimator (STE)\n w_q_ste = (w_q - w_normed).detach() + w_normed\n \n return w_q_ste, opt_scale\n\n\ndef activation_quant(x):\n \"\"\"Token-wise L2-Optimal 8-bit Quantization.\"\"\"\n Qb = 127.0\n # 1. Per-token initial scale estimation\n scale_est = x.detach().abs().max(dim=-1, keepdim=True)[0].clamp(min=1e-12) / Qb\n x_normed = x / scale_est\n \n # 2. 8-bit assignments\n x_q = x_normed.round().clamp(-Qb, Qb)\n \n # 3. Compute L2-optimal scale per token\n num = (x.detach() * x_q).sum(dim=-1, keepdim=True)\n den = (x_q * x_q).sum(dim=-1, keepdim=True).clamp(min=1.0)\n opt_scale = num / den\n \n # 4. STE\n x_q_ste = (x_q - x_normed).detach() + x_normed\n \n return x_q_ste, opt_scale\n\n\nclass BitLinear(nn.Module):\n \"\"\"Linear layer with L2-Optimal Native Low-Bit Weights.\"\"\"\n def __init__(self, in_features, out_features, bias=True):\n super().__init__()\n self.in_features = in_features\n self.out_features = out_features\n self.weight = nn.Parameter(torch.empty(out_features, in_features))\n if bias:\n self.bias = nn.Parameter(torch.zeros(out_features))\n else:\n self.bias = None\n nn.init.normal_(self.weight, mean=0.0, std=0.02)\n\n def forward(self, x):\n w_q, w_scale = weight_quant(self.weight)\n x_q, x_scale = activation_quant(x)\n \n # Core low-bit matmul\n out = F.linear(x_q, w_q, None)\n \n # Rescale output: w_scale is [O, 1], x_scale is [..., 1]\n out = out * (w_scale.view(-1) * x_scale)\n \n if self.bias is not None:\n out = out + self.bias\n return out\n",
4 "end_line": 115,
5 "op": "replace",
6 "start_line": 38
7}
OK: Replaced lines 38..115 with 75 line(s) in nanoGPT/custom_pretrain.py. Editable range: 38–112, 325–325.
[Current file: nanoGPT/custom_pretrain.py | editable: 38–112 | total: 515 lines]
35| return F.layer_norm(input, self.weight.shape, self.weight, self.bias, 1e-5)
36|
37| # -- Native Low-Bit Linear (BitLinear) Module ---------------------------------
38 def weight_quant(weight):
39 """L2-Optimal Ternary Quantization with Weight Centralization.
40
41 Computes the analytically optimal scale that minimizes L2 quantization error
42 for the given ternary assignments, rather than using a static mean absolute value.
43 """
44 # 1. Centralize weights per output channel
45 mean = weight.mean(dim=1, keepdim=True)
46 w_c = weight - mean
47
48 # 2. Initial scale estimation to determine assignments
49 scale_est = w_c.detach().abs().mean(dim=1, keepdim=True).clamp(min=1
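For reference, the "L2-optimal scale" named in the docstring above is the per-channel least-squares solution: for fixed ternary assignments q, the scale α minimizing ||w − αq||² is α* = (w·q)/(q·q), which is what the `num / den` step computes. A small self-contained sanity check of that closed form (illustrative, not part of the task file):

```python
import torch

# For fixed ternary assignments q, the scale minimizing ||w - a*q||^2 per
# output channel is a* = (w . q) / (q . q); verify by perturbing a*.
w = torch.randn(4, 16)
q = (w / w.abs().mean(dim=1, keepdim=True)).clamp(-1, 1).round()
a_star = (w * q).sum(dim=1, keepdim=True) / (q * q).sum(dim=1, keepdim=True).clamp(min=1.0)

err = lambda a: ((w - a * q) ** 2).sum(dim=1)   # per-channel squared error
assert torch.all(err(a_star) <= err(a_star * 1.01) + 1e-6)
assert torch.all(err(a_star) <= err(a_star * 0.99) + 1e-6)
```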
Tool Input
1{
2 "end_line": 325,
3 "content": " CONFIG_OVERRIDES = {'weight_decay': 0.0}",
4 "start_line": 325,
5 "op": "replace",
6 "filename": "nanoGPT/custom_pretrain.py"
7}
OK: Replaced lines 325..325 with 1 line(s) in nanoGPT/custom_pretrain.py. Editable range: 38–112, 325–325.
[Current file: nanoGPT/custom_pretrain.py | editable: 38–112 | total: 515 lines]
35| return F.layer_norm(input, self.weight.shape, self.weight, self.bias, 1e-5)
36|
37| # -- Native Low-Bit Linear (BitLinear) Module ---------------------------------
38 def weight_quant(weight):
39 """L2-Optimal Ternary Quantization with Weight Centralization.
40
41 Computes the analytically optimal scale that minimizes L2 quantization error
42 for the given ternary assignments, rather than using a static mean absolute value.
43 """
44 # 1. Centralize weights per output channel
45 mean = weight.mean(dim=1, keepdim=True)
46 w_c = weight - mean
47
48 # 2. Initial scale estimation to determine assignments
49 scale_est = w_c.detach().abs().mean(dim=1, keepdim=True).clamp(min=1
[Test #1] (2 tests remaining; call submit(n=N) to choose which test result to submit as final)
### gpt-345m (scripts/gpt_345m.sh)
[COMMAND FAILED exit=-1] [output file not found: /scratch/gpfs/CHIJ/bohan/MLS-Bench/logs/llm-pretrain-bitlinear/gemini-3.1-pro-preview_20260412_102253/20260412_102712/group_1/gpt-345m_s42.out]
### lm-eval-345m (scripts/lm_eval_345m.sh)
[COMMAND FAILED exit=1] ERROR: Checkpoint not found: /scratch/gpfs/CHIJ/bohan/MLS-Bench/.saves/llm-pretrain-bitlinear/gemini-3.1-pro-preview_20260412_102253/seed_42/ckpt_gpt-345m.pt
Tool Input
1{
2 "op": "replace",
3 "content": "def weight_quant(weight):\n \"\"\"L2-Optimal Ternary Quantization with Weight Centralization.\"\"\"\n # 1. Centralize weights per output channel\n mean = weight.mean(dim=1, keepdim=True)\n w_c = weight - mean\n \n # 2. Initial scale estimation (detached)\n scale_est = w_c.detach().abs().mean(dim=1, keepdim=True).clamp(min=1e-12)\n w_normed = w_c / scale_est\n \n # 3. Discrete assignments (detached)\n w_q_det = w_normed.detach().clamp(-1, 1).round()\n \n # 4. Compute L2-optimal scale per channel dynamically (detached)\n num = (w_c.detach() * w_q_det).sum(dim=1, keepdim=True)\n den = (w_q_det * w_q_det).sum(dim=1, keepdim=True).clamp(min=1.0)\n opt_scale = num / den\n \n # 5. Straight-Through Estimator (STE)\n w_q_ste = (w_q_det - w_normed).detach() + w_normed\n \n return w_q_ste, opt_scale\n\n\ndef activation_quant(x):\n \"\"\"Token-wise L2-Optimal 8-bit Quantization.\"\"\"\n Qb = 127.0\n # 1. Per-token initial scale estimation (detached)\n scale_est = x.detach().abs().amax(dim=-1, keepdim=True).clamp(min=1e-12) / Qb\n x_normed = x / scale_est\n \n # 2. Discrete 8-bit assignments (detached)\n x_q_det = x_normed.detach().round().clamp(-Qb, Qb)\n \n # 3. Compute L2-optimal scale per token (detached)\n num = (x.detach() * x_q_det).sum(dim=-1, keepdim=True)\n den = (x_q_det * x_q_det).sum(dim=-1, keepdim=True).clamp(min=1.0)\n opt_scale = num / den\n \n # 4. STE\n x_q_ste = (x_q_det - x_normed).detach() + x_normed\n \n return x_q_ste, opt_scale\n\n\nclass BitLinear(nn.Module):\n \"\"\"Linear layer with L2-Optimal Native Low-Bit Weights.\"\"\"\n def __init__(self, in_features, out_features, bias=True):\n super().__init__()\n self.in_features = in_features\n self.out_features = out_features\n self.weight = nn.Parameter(torch.empty(out_features, in_features))\n if bias:\n self.bias = nn.Parameter(torch.zeros(out_features))\n else:\n self.bias = None\n nn.init.normal_(self.weight, mean=0.0, std=0.02)\n\n def forward(self, x):\n w_q, w_scale = weight_quant(self.weight)\n x_q, x_scale = activation_quant(x)\n \n # Core low-bit matmul\n out = F.linear(x_q, w_q, None)\n \n # Rescale output sequentially to avoid large intermediate broadcast tensors\n out = out * w_scale.view(-1)\n out = out * x_scale\n \n if self.bias is not None:\n out = out + self.bias\n return out\n",
4 "start_line": 38,
5 "filename": "nanoGPT/custom_pretrain.py",
6 "end_line": 112
7}
OK: Replaced lines 38..112 with 72 line(s) in nanoGPT/custom_pretrain.py. Editable range: 38–109, 322–322.
[Current file: nanoGPT/custom_pretrain.py | editable: 38–109 | total: 512 lines]
35| return F.layer_norm(input, self.weight.shape, self.weight, self.bias, 1e-5)
36|
37| # -- Native Low-Bit Linear (BitLinear) Module ---------------------------------
38 def weight_quant(weight):
39 """L2-Optimal Ternary Quantization with Weight Centralization."""
40 # 1. Centralize weights per output channel
41 mean = weight.mean(dim=1, keepdim=True)
42 w_c = weight - mean
43
44 # 2. Initial scale estimation (detached)
45 scale_est = w_c.detach().abs().mean(dim=1, keepdim=True).clamp(min=1e-12)
46 w_normed = w_c / scale_est
47
48 # 3. Discrete assignments (detached)
49 w_q_det = w_normed.detach().clamp(-1, 1).round()
50
51 # 4. Compute L2-optimal scale pe
[Test #2] (1 test remaining; call submit(n=N) to choose which test result to submit as final)
### gpt-345m (scripts/gpt_345m.sh)
Training metrics (last steps):
TRAIN_METRICS: step=9000, train_loss=2.8681, val_loss=2.5806
TRAIN_METRICS: step=10000, train_loss=2.8492, val_loss=2.5511
TRAIN_METRICS: step=11000, train_loss=2.8291, val_loss=2.5380
TRAIN_METRICS: step=12000, train_loss=2.8099, val_loss=2.5220
TRAIN_METRICS: step=13000, train_loss=2.7946, val_loss=2.5093
Final metrics (gpt-345m): val_loss_gpt-345m=2.5040, wikitext2_ppl_gpt-345m=57.3300, lambada_ppl_gpt-345m=86.7200
### lm-eval-345m (scripts/lm_eval_345m.sh)
Final metrics (lm-eval-345m): arc_easy_lm-eval-345m=50.2500, hellaswag_lm-eval-345m=30.8600, piqa_lm-eval-345m=61.6400, winogrande_lm-eval-345m=50.5100
[Leaderboard] Results saved: {'val_loss_gpt-345m': 2.504, 'wikitext2_ppl_gpt-345m': 57.33, 'lambada_ppl_gpt-345m': 86.72, 'elapsed_gpt-345m': 24767, 'arc_easy_lm-eval-345m': 50.25, 'hellaswag_lm-eval-345m': 30.86, 'piqa_
Tool Input
1{
2 "n": 2
3}
[submit] Submitting result from test #2 as final.
### gpt-345m (scripts/gpt_345m.sh)
Training metrics (last steps):
TRAIN_METRICS: step=9000, train_loss=2.8681, val_loss=2.5806
TRAIN_METRICS: step=10000, train_loss=2.8492, val_loss=2.5511
TRAIN_METRICS: step=11000, train_loss=2.8291, val_loss=2.5380
TRAIN_METRICS: step=12000, train_loss=2.8099, val_loss=2.5220
TRAIN_METRICS: step=13000, train_loss=2.7946, val_loss=2.5093
Final metrics (gpt-345m): val_loss_gpt-345m=2.5040, wikitext2_ppl_gpt-345m=57.3300, lambada_ppl_gpt-345m=86.7200
### lm-eval-345m (scripts/lm_eval_345m.sh)
Final metrics (lm-eval-345m): arc_easy_lm-eval-345m=50.2500, hellaswag_lm-eval-345m=30.8600, piqa_lm-eval-345m=61.6400, winogrande_lm-eval-345m=50.5100
[submit] Finalized test #2 as final.
[Leaderboard] Results saved: {'val_loss_gpt-345m': 2.504, 'wikitext2_ppl_gpt-345m': 57.33, 'lambada_ppl_gpt-345m': 86.72, 'elapsed_gpt-345m': 24767, 'arc_easy_lm-eval-345m': 50.25, 'hellaswag_lm-eval-345m': 30.86, 'piqa_lm-eva