Agent Conversation: anthropic/claude-opus-4.6
Task: ml-subgroup-calibration-shift
Subgroup Calibration Under Distribution Shift
Research Question
Design a post-hoc calibration method that remains reliable when subgroup composition shifts between calibration and test time.
Background
Many calibration methods look good on average but fail on protected or operational subgroups once the test distribution shifts. This task isolates that failure mode. The fixed pipeline trains a tabular classifier, then applies a user-defined calibration mapping on held-out calibration data before evaluation on shifted test data.
Classical baselines include:
- Temperature scaling: one global temperature for all samples
- Isotonic regression: non-parametric monotone calibration
- Beta calibration: a richer parametric mapping on probabilities
- Group-wise temperature scaling: separate temperatures per subgroup
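The first baseline can be sketched in a few lines. This is a minimal, illustrative implementation (the function name and the bounded search over log T are choices of this sketch, not prescribed by the benchmark):

```python
import numpy as np
from scipy import optimize, special

def fit_temperature(probs, labels, eps=1e-6):
    """Fit one global temperature T by minimizing NLL of expit(logit/T).

    Optimizes log T on a bounded interval so T stays positive.
    """
    logits = special.logit(np.clip(np.asarray(probs, dtype=float), eps, 1 - eps))
    labels = np.asarray(labels, dtype=float)

    def nll(log_t):
        p = np.clip(special.expit(logits / np.exp(log_t)), eps, 1 - eps)
        return -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))

    res = optimize.minimize_scalar(nll, bounds=(-4.0, 4.0), method="bounded")
    return float(np.exp(res.x))
```

On overconfident predictions the fitted temperature comes out above 1, flattening the probabilities; group-wise temperature scaling simply repeats this fit per subgroup.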
Task
Modify the CalibrationMethod class in custom_subgroup_calibration.py. The fixed code loads data, creates a shifted split, trains the base classifier, and computes metrics. Your method only controls the post-hoc calibration mapping.
class CalibrationMethod:
    def fit(self, probs, labels, groups=None):
        ...

    def predict_proba(self, probs, groups=None):
        ...
Inputs are positive-class probabilities from the base classifier. groups contains subgroup IDs when available and may be ignored by group-agnostic methods.
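A minimal group-agnostic method that conforms to this interface is isotonic regression; the sketch below ignores `groups`, as the task explicitly allows (the attribute name `_iso` is a choice of this sketch):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

class CalibrationMethod:
    """Group-agnostic isotonic baseline conforming to the fixed interface.

    `groups` is accepted but ignored, as permitted for
    group-agnostic methods.
    """

    def fit(self, probs, labels, groups=None):
        self._iso = IsotonicRegression(y_min=0.0, y_max=1.0,
                                       out_of_bounds="clip")
        self._iso.fit(np.asarray(probs).reshape(-1),
                      np.asarray(labels).reshape(-1))
        return self

    def predict_proba(self, probs, groups=None):
        return self._iso.predict(np.asarray(probs).reshape(-1))
```

Group-aware methods follow the same shape but branch on `groups` inside `fit` and `predict_proba`.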
Evaluation
This benchmark uses three lightweight tabular proxies that are already available in the current scikit-learn package setup. We would normally prefer Adult, ACSIncome, COMPAS, and Law School Admissions, but those require package-level data changes that are outside this task directory. To keep the benchmark runnable offline, we use cached scikit-learn datasets with similar calibration and subgroup-shift behavior:
- breast_cancer: binary classification on the scikit-learn breast cancer dataset
- california_housing: binary high-value/low-value decision built from California housing
- diabetes: binary high-risk/low-risk decision built from the diabetes target
For each dataset, the split is intentionally shifted:
- a domain score determines the held-out test tail
- subgroup labels are quartiles of a separate proxy feature
- calibration is fit on the source region and evaluated on the shifted region
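The three bullets above can be sketched as follows. This is a hypothetical reconstruction: the function name, the test fraction, and the quantile choices are illustrative, not the benchmark's exact values.

```python
import numpy as np

def shifted_split(domain_score, proxy_feature, test_frac=0.25):
    """Sketch of the shifted split: the top tail of a domain score is the
    held-out test region; subgroup IDs are quartiles of a separate proxy
    feature. Calibration would be fit on the source region only.
    """
    domain_score = np.asarray(domain_score, dtype=float)
    proxy_feature = np.asarray(proxy_feature, dtype=float)
    cut = np.quantile(domain_score, 1.0 - test_frac)
    test_mask = domain_score >= cut       # shifted evaluation region
    source_mask = ~test_mask              # calibration region
    quartiles = np.quantile(proxy_feature, [0.25, 0.5, 0.75])
    groups = np.digitize(proxy_feature, quartiles)  # subgroup IDs 0..3
    return source_mask, test_mask, groups
```

Because the subgroup proxy is separate from the domain score, subgroup composition in the test tail can differ sharply from the calibration region, which is exactly the failure mode being probed.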
Metrics
Lower is better for:
- worst_group_ece
- brier
- max_subgroup_gap
Higher is better for:
- subgroup_auroc
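For reference, the headline metric can be sketched with one common ECE variant (equal-width bins; the benchmark's exact binning may differ):

```python
import numpy as np

def ece(probs, labels, n_bins=10):
    """Expected calibration error: bin-weighted |confidence - accuracy|."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    bins = np.clip((probs * n_bins).astype(int), 0, n_bins - 1)
    total = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            total += mask.mean() * abs(probs[mask].mean() - labels[mask].mean())
    return total

def worst_group_ece(probs, labels, groups, n_bins=10):
    """Maximum per-subgroup ECE: a method must calibrate every group."""
    probs, labels, groups = map(np.asarray, (probs, labels, groups))
    return max(ece(probs[groups == g], labels[groups == g], n_bins)
               for g in np.unique(groups))
```

Taking the max over subgroups is what distinguishes this metric from average ECE: a method can be well calibrated overall yet score badly here if any single group is miscalibrated.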
Notes
- The task is deliberately low compute and should run with a small tabular classifier.
- If you need the exact Adult/ACSIncome/COMPAS/Law School datasets, they should be added through a package-level data change, not inside this task directory.
scikit-learn/custom_subgroup_calibration.py [EDITABLE — lines 72–91 only]
1: """Subgroup calibration under distribution shift.
2:
3: The benchmark is intentionally offline and low compute. It uses
Tool Input
1{
2 "op": "replace",
3 "filename": "scikit-learn/custom_subgroup_calibration.py",
4 "start_line": 72,
5 "end_line": 91,
6 "content": "class CalibrationMethod:\n \"\"\"Distributionally Robust Hierarchical Platt Calibration (DRH-Platt).\n\n Novel algorithm combining DRO with hierarchical Bayesian shrinkage for\n post-hoc calibration under subgroup distribution shift.\n\n Key ideas:\n 1. DRO-Platt: Fits Platt scaling minimizing worst-case NLL over a\n chi-squared uncertainty set (implemented as mean + eta*std/sqrt(n)),\n providing robustness to moderate distribution shifts.\n 2. Hierarchical shrinkage: Per-group calibrators are shrunk toward the\n global calibrator via empirical Bayes, with strength adapted per-group\n based on sample size and estimated between-group variance.\n 3. Adaptive DRO: Smaller groups get stronger DRO penalty to limit overfit.\n \"\"\"\n\n def __init__(self):\n self.eps = 1e-6\n self.global_a = 1.0\n self.global_b = 0.0\n self.group_params = {}\n self._dro_eta = 0.5\n self._min_group_size = 8\n self._shrinkage_base = 12.0\n\n def _sample_losses(self, a, b, logits, labels):\n cal = special.expit(a * logits + b)\n p = np.clip(cal, self.eps, 1.0 - self.eps)\n return -(labels * np.log(p) + (1 - labels) * np.log(1 - p))\n\n def _fit_dro_platt(self, logits, labels, eta=None):\n \"\"\"Fit Platt scaling with DRO variance penalty:\n min_{a,b} E[loss] + eta * sqrt(Var[loss] / n)\n This is equivalent to optimizing over a chi-squared divergence ball.\n \"\"\"\n if eta is None:\n eta = self._dro_eta\n n = len(labels)\n\n def objective(params):\n a, b = params\n losses = self._sample_losses(a, b, logits, labels)\n mean_loss = np.mean(losses)\n if n > 1 and eta > 0:\n var_loss = np.var(losses, ddof=1)\n penalty = eta * np.sqrt(var_loss / n + 1e-10)\n else:\n penalty = 0.0\n return mean_loss + penalty\n\n result = optimize.minimize(\n objective, x0=[1.0, 0.0], method=\"Nelder-Mead\",\n options={\"maxiter\": 3000, \"xatol\": 1e-7, \"fatol\": 1e-9},\n )\n return (result.x[0], result.x[1]) if result.success else (1.0, 0.0)\n\n def fit(self, probs, labels, groups=None):\n probs = 
np.asarray(probs).reshape(-1)\n labels = np.asarray(labels).reshape(-1).astype(int)\n logits = special.logit(np.clip(probs, self.eps, 1.0 - self.eps))\n\n # Stage 1: Global DRO-Platt calibration\n self.global_a, self.global_b = self._fit_dro_platt(logits, labels)\n self.group_params = {}\n\n if groups is None:\n return self\n\n groups = np.asarray(groups).reshape(-1)\n local_fits = {}\n\n # Stage 2: Per-group DRO-Platt with adaptive DRO strength\n for g in np.unique(groups):\n mask = groups == g\n n_g = mask.sum()\n if n_g < self._min_group_size or np.unique(labels[mask]).size < 2:\n self.group_params[int(g)] = (self.global_a, self.global_b)\n else:\n # Stronger DRO for smaller groups to prevent overfit\n eta_g = self._dro_eta * (1.0 + 10.0 / n_g)\n a_loc, b_loc = self._fit_dro_platt(\n logits[mask], labels[mask], eta=eta_g\n )\n local_fits[int(g)] = (a_loc, b_loc, n_g)\n\n # Stage 3: Empirical Bayes shrinkage toward global\n if len(local_fits) >= 2:\n a_vals = np.array([v[0] for v in local_fits.values()])\n b_vals = np.array([v[1] for v in local_fits.values()])\n ns = np.array([v[2] for v in local_fits.values()])\n # Between-group variance minus estimated within-group noise\n tau2_a = max(np.var(a_vals) - np.mean(1.0 / ns), 0.01)\n tau2_b = max(np.var(b_vals) - np.mean(1.0 / ns), 0.01)\n else:\n tau2_a = tau2_b = 1.0\n\n for g, (a_loc, b_loc, n_g) in local_fits.items():\n sigma2 = 1.0 / n_g\n # James-Stein-style shrinkage\n shrink_a = sigma2 / (sigma2 + tau2_a)\n shrink_b = sigma2 / (sigma2 + tau2_b)\n # Floor: minimum shrinkage for distribution shift robustness\n min_s = self._shrinkage_base / (n_g + self._shrinkage_base)\n shrink_a = max(shrink_a, min_s)\n shrink_b = max(shrink_b, min_s)\n\n self.group_params[g] = (\n shrink_a * self.global_a + (1 - shrink_a) * a_loc,\n shrink_b * self.global_b + (1 - shrink_b) * b_loc,\n )\n\n return self\n\n def predict_proba(self, probs, groups=None):\n probs = np.asarray(probs).reshape(-1)\n logits = 
special.logit(np.clip(probs, self.eps, 1.0 - self.eps))\n\n if groups is None:\n cal = special.expit(self.global_a * logits + self.global_b)\n return np.clip(cal, self.eps, 1.0 - self.eps)\n\n groups = np.asarray(groups).reshape(-1)\n out = np.empty_like(probs, dtype=np.float64)\n for g in np.unique(groups):\n mask = groups == g\n a, b = self.group_params.get(int(g), (self.global_a, self.global_b))\n out[mask] = special.expit(a * logits[mask] + b)\n return np.clip(out, self.eps, 1.0 - self.eps)\n"
7}
OK: Replaced lines 72..91 with 128 line(s) in scikit-learn/custom_subgroup_calibration.py. Editable range: 72–199.
[Current file: scikit-learn/custom_subgroup_calibration.py | editable: 72–199 | total: 340 lines]
72 class CalibrationMethod:
73 """Distributionally Robust Hierarchical Platt Calibration (DRH-Platt).
74
... (122 more lines) ...
197 a, b = self.group_params.get(int(g), (self.global_a, self.global_b))
198 out[mask] = special.expit(a * logits[mask] + b)
199 return np.clip(out, self.eps, 1.0 - self.eps)

[Test #1] (2 tests remaining; call submit(n=N) to choose which test result to submit as final)
### breast_cancer (scripts/breast_cancer.sh)
Training progress:
TRAIN_METRICS: dataset=breast_cancer cal_ece_before=0.014909 cal_ece_after=0.000001 cal_brier_before=0.001191 cal_brier_after=0.000000
Final metrics (breast_cancer): worst_group_ece_breast_cancer=0.359511, brier_breast_cancer=0.179772, subgroup_auroc_breast_cancer=0.956156, max_subgroup_gap_breast_cancer=0.351811
### diabetes (scripts/diabetes.sh)
Training progress:
TRAIN_METRICS: dataset=diabetes cal_ece_before=0.141277 cal_ece_after=0.104188 cal_brier_before=0.154074 cal_brier_after=0.144692
Final metrics (diabetes): worst_group_ece_diabetes=0.154292, brier_diabetes=0.162241, subgroup_auroc_diabetes=0.765085, max_subgroup_gap_diabetes=0.062442
[Leaderboard] Results saved: {'worst_group_ece_breast_cancer': 0.359511, 'brier_breast_cancer': 0.179772, 'subgroup_auroc_breast_cancer': 0.956156, 'max_subgroup_gap_breast_cancer': 0.
Tool Input
1{
2 "op": "replace",
3 "filename": "scikit-learn/custom_subgroup_calibration.py",
4 "start_line": 72,
5 "end_line": 199,
6 "content": "class CalibrationMethod:\n \"\"\"Distributionally Robust Hierarchical Platt Calibration with\n Bayesian Posterior Tempering (DRH-Platt-BPT).\n\n Novel contributions:\n 1. DRO-Platt: Platt scaling fit minimizing worst-case NLL over a\n chi-squared divergence ball (mean + eta*std/sqrt(n) penalty).\n 2. Adaptive Identity Regularization: L2 penalty toward identity\n (a=1, b=0), strength inversely proportional to pre-calibration\n loss — prevents overfit when the model is already well-calibrated.\n 3. Bayesian Posterior Tempering: Per-sample logit tempering derived\n from posterior variance of (a,b) via Laplace approximation and\n the probit-sigmoid averaging identity. Extreme predictions get\n more softening where parameter uncertainty has maximal effect.\n 4. Hierarchical empirical Bayes shrinkage for per-group calibration.\n \"\"\"\n\n def __init__(self):\n self.eps = 1e-6\n self.global_a = 1.0\n self.global_b = 0.0\n self.group_params = {}\n self._dro_eta = 0.5\n self._min_group_size = 8\n self._shrinkage_base = 12.0\n self._reg_base = 0.1\n self._var_a = 0.0\n self._var_b = 0.0\n\n def _sample_losses(self, a, b, logits, labels):\n cal = special.expit(a * logits + b)\n p = np.clip(cal, self.eps, 1.0 - self.eps)\n return -(labels * np.log(p) + (1 - labels) * np.log(1 - p))\n\n def _fit_dro_platt(self, logits, labels, eta=None, reg_weight=None):\n \"\"\"Fit Platt scaling with DRO variance penalty and adaptive L2.\"\"\"\n if eta is None:\n eta = self._dro_eta\n n = len(labels)\n if reg_weight is None:\n identity_nll = float(np.mean(\n self._sample_losses(1.0, 0.0, logits, labels)\n ))\n reg_weight = self._reg_base / (identity_nll + 0.01)\n\n def objective(params):\n a, b = params\n losses = self._sample_losses(a, b, logits, labels)\n mean_loss = float(np.mean(losses))\n dro_pen = 0.0\n if n > 1 and eta > 0:\n dro_pen = eta * np.sqrt(float(np.var(losses, ddof=1)) / n + 1e-10)\n reg = reg_weight * ((a - 1.0) ** 2 + b ** 2)\n return mean_loss + dro_pen + 
reg\n\n result = optimize.minimize(\n objective, x0=[1.0, 0.0], method=\"Nelder-Mead\",\n options={\"maxiter\": 3000, \"xatol\": 1e-7, \"fatol\": 1e-9},\n )\n a, b = result.x\n return (a, b) if np.isfinite(a) and np.isfinite(b) else (1.0, 0.0)\n\n def _posterior_variance(self, logits, labels, a, b):\n \"\"\"Posterior variance of (a,b) via Laplace approximation (Hessian).\"\"\"\n n = len(labels)\n h = 1e-4\n f0 = float(np.mean(self._sample_losses(a, b, logits, labels)))\n faa = (\n float(np.mean(self._sample_losses(a + h, b, logits, labels)))\n + float(np.mean(self._sample_losses(a - h, b, logits, labels)))\n - 2 * f0\n ) / (h * h)\n fbb = (\n float(np.mean(self._sample_losses(a, b + h, logits, labels)))\n + float(np.mean(self._sample_losses(a, b - h, logits, labels)))\n - 2 * f0\n ) / (h * h)\n return 1.0 / (n * max(faa, 1e-3)), 1.0 / (n * max(fbb, 1e-3))\n\n def fit(self, probs, labels, groups=None):\n probs = np.asarray(probs).reshape(-1)\n labels = np.asarray(labels).reshape(-1).astype(int)\n logits = special.logit(np.clip(probs, self.eps, 1.0 - self.eps))\n\n # Stage 1: Global DRO-Platt with adaptive regularization\n self.global_a, self.global_b = self._fit_dro_platt(logits, labels)\n self.group_params = {}\n\n # Stage 2: Bayesian posterior tempering via Laplace approximation\n self._var_a, self._var_b = self._posterior_variance(\n logits, labels, self.global_a, self.global_b\n )\n\n if groups is None:\n return self\n\n # Stage 3: Per-group DRO-Platt with adaptive DRO strength\n groups = np.asarray(groups).reshape(-1)\n local_fits = {}\n\n for g in np.unique(groups):\n mask = groups == g\n n_g = mask.sum()\n if n_g < self._min_group_size or np.unique(labels[mask]).size < 2:\n self.group_params[int(g)] = (self.global_a, self.global_b)\n else:\n eta_g = self._dro_eta * (1.0 + 10.0 / n_g)\n a_loc, b_loc = self._fit_dro_platt(\n logits[mask], labels[mask], eta=eta_g\n )\n local_fits[int(g)] = (a_loc, b_loc, n_g)\n\n # Stage 4: Empirical Bayes (James-Stein) 
shrinkage toward global\n if len(local_fits) >= 2:\n a_vals = np.array([v[0] for v in local_fits.values()])\n b_vals = np.array([v[1] for v in local_fits.values()])\n ns = np.array([v[2] for v in local_fits.values()])\n tau2_a = max(np.var(a_vals) - np.mean(1.0 / ns), 0.01)\n tau2_b = max(np.var(b_vals) - np.mean(1.0 / ns), 0.01)\n else:\n tau2_a = tau2_b = 1.0\n\n for g, (a_loc, b_loc, n_g) in local_fits.items():\n sigma2 = 1.0 / n_g\n shrink_a = sigma2 / (sigma2 + tau2_a)\n shrink_b = sigma2 / (sigma2 + tau2_b)\n min_s = self._shrinkage_base / (n_g + self._shrinkage_base)\n shrink_a = max(shrink_a, min_s)\n shrink_b = max(shrink_b, min_s)\n self.group_params[g] = (\n shrink_a * self.global_a + (1 - shrink_a) * a_loc,\n shrink_b * self.global_b + (1 - shrink_b) * b_loc,\n )\n\n return self\n\n def _bayesian_temp(self, logits):\n \"\"\"Per-sample temperature from posterior uncertainty.\n\n From the probit-sigmoid averaging identity:\n E[sigma(f + eps)] ~ sigma(f / sqrt(1 + pi*Var(eps)/3))\n where Var(eps) = var_a * logit^2 + var_b for f = a*logit + b.\n \"\"\"\n return np.sqrt(\n 1.0 + (np.pi / 3.0) * (self._var_a * logits ** 2 + self._var_b)\n )\n\n def predict_proba(self, probs, groups=None):\n probs = np.asarray(probs).reshape(-1)\n logits = special.logit(np.clip(probs, self.eps, 1.0 - self.eps))\n temp = self._bayesian_temp(logits)\n\n if groups is None:\n cal = special.expit((self.global_a * logits + self.global_b) / temp)\n return np.clip(cal, self.eps, 1.0 - self.eps)\n\n groups = np.asarray(groups).reshape(-1)\n out = np.empty_like(probs, dtype=np.float64)\n for g in np.unique(groups):\n mask = groups == g\n a, b = self.group_params.get(int(g), (self.global_a, self.global_b))\n out[mask] = special.expit((a * logits[mask] + b) / temp[mask])\n return np.clip(out, self.eps, 1.0 - self.eps)\n"
7}
OK: Replaced lines 72..199 with 163 line(s) in scikit-learn/custom_subgroup_calibration.py. Editable range: 72–234.
[Current file: scikit-learn/custom_subgroup_calibration.py | editable: 72–234 | total: 375 lines]
72 class CalibrationMethod:
73 """Distributionally Robust Hierarchical Platt Calibration with
74 Bayesian Posterior Tempering (DRH-Platt-BPT).
... (157 more lines) ...
232 a, b = self.group_params.get(int(g), (self.global_a, self.global_b))
233 out[mask] = special.expit((a * logits[mask] + b) / temp[mask])
234 return np.clip(out, self.eps, 1.0 - self.eps)

[Test #2] (1 test remaining; call submit(n=N) to choose which test result to submit as final)
### breast_cancer (scripts/breast_cancer.sh)
Training progress:
TRAIN_METRICS: dataset=breast_cancer cal_ece_before=0.014909 cal_ece_after=0.070812 cal_brier_before=0.001191 cal_brier_after=0.006920
Final metrics (breast_cancer): worst_group_ece_breast_cancer=0.186083, brier_breast_cancer=0.118946, subgroup_auroc_breast_cancer=0.984643, max_subgroup_gap_breast_cancer=0.124515
### diabetes (scripts/diabetes.sh)
Training progress:
TRAIN_METRICS: dataset=diabetes cal_ece_before=0.141277 cal_ece_after=0.071847 cal_brier_before=0.154074 cal_brier_after=0.154605
Final metrics (diabetes): worst_group_ece_diabetes=0.174743, brier_diabetes=0.162446, subgroup_auroc_diabetes=0.765085, max_subgroup_gap_diabetes=0.067680
[Leaderboard] Results saved: {'worst_group_ece_breast_cancer': 0.186083, 'brier_breast_cancer': 0.118946, 'subgroup_auroc_breast_cancer': 0.984643, 'max_subgroup_gap_breast_cancer': 0.1
Tool Input
1{
2 "op": "replace",
3 "filename": "scikit-learn/custom_subgroup_calibration.py",
4 "start_line": 72,
5 "end_line": 234,
6 "content": "class CalibrationMethod:\n \"\"\"Distributionally Robust Hierarchical Platt Calibration with\n Bayesian Posterior Tempering and Adaptive Non-Parametric Blending.\n\n Novel contributions:\n 1. DRO-Platt: Platt scaling with chi-squared DRO variance penalty\n for shift robustness.\n 2. Adaptive Identity Regularization: L2 toward identity (a=1,b=0)\n inversely weighted by pre-calibration loss—preserves already\n well-calibrated models.\n 3. Slope-Only Bayesian Tempering: Per-sample logit inflation from\n posterior variance of slope parameter via probit approximation.\n Only uses var(a)*logit^2 so moderate predictions stay unchanged\n while extreme ones are appropriately softened.\n 4. Hierarchical empirical Bayes (James-Stein) shrinkage for groups.\n 5. Adaptive non-parametric blending: for groups with sufficient data,\n blends parametric Platt with non-parametric isotonic calibration\n to capture non-linear patterns while preserving extrapolation\n robustness under shift.\n \"\"\"\n\n def __init__(self):\n self.eps = 1e-6\n self.global_a = 1.0\n self.global_b = 0.0\n self.group_params = {}\n self.group_iso = {}\n self._dro_eta = 0.6\n self._min_group_size = 8\n self._iso_threshold = 50\n self._iso_weight = 0.25\n self._shrinkage_base = 15.0\n self._reg_base = 0.1\n self._var_a = 0.0\n\n def _sample_losses(self, a, b, logits, labels):\n cal = special.expit(a * logits + b)\n p = np.clip(cal, self.eps, 1.0 - self.eps)\n return -(labels * np.log(p) + (1 - labels) * np.log(1 - p))\n\n def _fit_dro_platt(self, logits, labels, eta=None, reg_weight=None):\n \"\"\"Fit Platt scaling with DRO variance penalty + adaptive L2.\"\"\"\n if eta is None:\n eta = self._dro_eta\n n = len(labels)\n if reg_weight is None:\n identity_nll = float(np.mean(\n self._sample_losses(1.0, 0.0, logits, labels)\n ))\n reg_weight = self._reg_base / (identity_nll + 0.01)\n\n def objective(params):\n a, b = params\n losses = self._sample_losses(a, b, logits, labels)\n mean_loss = 
float(np.mean(losses))\n dro_pen = 0.0\n if n > 1 and eta > 0:\n dro_pen = eta * np.sqrt(\n float(np.var(losses, ddof=1)) / n + 1e-10\n )\n reg = reg_weight * ((a - 1.0) ** 2 + b ** 2)\n return mean_loss + dro_pen + reg\n\n result = optimize.minimize(\n objective, x0=[1.0, 0.0], method=\"Nelder-Mead\",\n options={\"maxiter\": 3000, \"xatol\": 1e-7, \"fatol\": 1e-9},\n )\n a, b = result.x\n return (a, b) if np.isfinite(a) and np.isfinite(b) else (1.0, 0.0)\n\n def _posterior_var_a(self, logits, labels, a, b):\n \"\"\"Posterior variance of slope via Laplace (diagonal Hessian).\"\"\"\n n = len(labels)\n h = 1e-4\n f0 = float(np.mean(self._sample_losses(a, b, logits, labels)))\n faa = (\n float(np.mean(self._sample_losses(a + h, b, logits, labels)))\n + float(np.mean(self._sample_losses(a - h, b, logits, labels)))\n - 2 * f0\n ) / (h * h)\n return 1.0 / (n * max(faa, 1e-3))\n\n def fit(self, probs, labels, groups=None):\n probs = np.asarray(probs).reshape(-1)\n labels = np.asarray(labels).reshape(-1).astype(int)\n logits = special.logit(np.clip(probs, self.eps, 1.0 - self.eps))\n\n # Stage 1: Global DRO-Platt with adaptive regularization\n self.global_a, self.global_b = self._fit_dro_platt(logits, labels)\n self.group_params = {}\n self.group_iso = {}\n\n # Stage 2: Slope-only posterior variance for Bayesian tempering\n self._var_a = self._posterior_var_a(\n logits, labels, self.global_a, self.global_b\n )\n\n if groups is None:\n return self\n\n # Stage 3: Per-group DRO-Platt with adaptive DRO strength\n groups = np.asarray(groups).reshape(-1)\n local_fits = {}\n\n for g in np.unique(groups):\n mask = groups == g\n n_g = mask.sum()\n if n_g < self._min_group_size or np.unique(labels[mask]).size < 2:\n self.group_params[int(g)] = (self.global_a, self.global_b)\n else:\n eta_g = self._dro_eta * (1.0 + 10.0 / n_g)\n a_loc, b_loc = self._fit_dro_platt(\n logits[mask], labels[mask], eta=eta_g\n )\n local_fits[int(g)] = (a_loc, b_loc, n_g)\n\n # For large groups, also fit 
per-group isotonic\n if n_g >= self._iso_threshold:\n iso = IsotonicRegression(\n y_min=0.0, y_max=1.0, out_of_bounds=\"clip\"\n )\n iso.fit(probs[mask], labels[mask])\n self.group_iso[int(g)] = iso\n\n # Stage 4: Empirical Bayes (James-Stein) shrinkage toward global\n if len(local_fits) >= 2:\n a_vals = np.array([v[0] for v in local_fits.values()])\n b_vals = np.array([v[1] for v in local_fits.values()])\n ns = np.array([v[2] for v in local_fits.values()])\n tau2_a = max(np.var(a_vals) - np.mean(1.0 / ns), 0.01)\n tau2_b = max(np.var(b_vals) - np.mean(1.0 / ns), 0.01)\n else:\n tau2_a = tau2_b = 1.0\n\n for g, (a_loc, b_loc, n_g) in local_fits.items():\n sigma2 = 1.0 / n_g\n shrink_a = sigma2 / (sigma2 + tau2_a)\n shrink_b = sigma2 / (sigma2 + tau2_b)\n min_s = self._shrinkage_base / (n_g + self._shrinkage_base)\n shrink_a = max(shrink_a, min_s)\n shrink_b = max(shrink_b, min_s)\n self.group_params[g] = (\n shrink_a * self.global_a + (1 - shrink_a) * a_loc,\n shrink_b * self.global_b + (1 - shrink_b) * b_loc,\n )\n\n return self\n\n def _bayesian_temp(self, logits):\n \"\"\"Slope-only posterior tempering via probit-sigmoid identity.\n\n temp(x) = sqrt(1 + pi/3 * var_a * logit^2)\n Only tempers proportionally to |logit|: moderate predictions\n stay unchanged, extreme ones are softened appropriately.\n \"\"\"\n return np.sqrt(1.0 + (np.pi / 3.0) * self._var_a * logits ** 2)\n\n def predict_proba(self, probs, groups=None):\n probs = np.asarray(probs).reshape(-1)\n logits = special.logit(np.clip(probs, self.eps, 1.0 - self.eps))\n temp = self._bayesian_temp(logits)\n\n if groups is None:\n cal = special.expit(\n (self.global_a * logits + self.global_b) / temp\n )\n return np.clip(cal, self.eps, 1.0 - self.eps)\n\n groups = np.asarray(groups).reshape(-1)\n out = np.empty_like(probs, dtype=np.float64)\n for g in np.unique(groups):\n mask = groups == g\n a, b = self.group_params.get(\n int(g), (self.global_a, self.global_b)\n )\n platt_out = special.expit(\n (a * 
logits[mask] + b) / temp[mask]\n )\n\n # Blend parametric Platt with non-parametric isotonic\n # for groups that had enough calibration data\n if int(g) in self.group_iso:\n iso_raw = self.group_iso[int(g)].predict(probs[mask])\n iso_out = np.clip(iso_raw, self.eps, 1.0 - self.eps)\n out[mask] = (\n (1 - self._iso_weight) * platt_out\n + self._iso_weight * iso_out\n )\n else:\n out[mask] = platt_out\n\n return np.clip(out, self.eps, 1.0 - self.eps)\n"
7}
OK: Replaced lines 72..234 with 192 line(s) in scikit-learn/custom_subgroup_calibration.py. Editable range: 72–263.
[Current file: scikit-learn/custom_subgroup_calibration.py | editable: 72–263 | total: 404 lines]
72 class CalibrationMethod:
73 """Distributionally Robust Hierarchical Platt Calibration with
74 Bayesian Posterior Tempering and Adaptive Non-Parametric Blending.
... (186 more lines) ...
261 out[mask] = platt_out
262
263 return np.clip(out, self.eps, 1.0 - self.eps)

[Test #3] (0 tests remaining; call submit(n=N) to choose which test result to submit as final)
[NOTE] This was your last test. You MUST now call submit(n=X) to choose which test result to submit as your final answer.

### breast_cancer (scripts/breast_cancer.sh)
Training progress:
TRAIN_METRICS: dataset=breast_cancer cal_ece_before=0.014909 cal_ece_after=0.055114 cal_brier_before=0.001191 cal_brier_after=0.003955
Final metrics (breast_cancer): worst_group_ece_breast_cancer=0.180018, brier_breast_cancer=0.123305, subgroup_auroc_breast_cancer=0.984643, max_subgroup_gap_breast_cancer=0.123800

### diabetes (scripts/diabetes.sh)
Training progress:
TRAIN_METRICS: dataset=diabetes cal_ece_before=0.141277 cal_ece_after=0.067545 cal_brier_before=0.154074 cal_brier_after=0.154115
Final metrics (diabetes): worst_group_ece_diabetes=0.143498, brier_diabetes=0.162282, subgroup_auroc_diabetes=0.765085, max_subgroup_gap_diabetes=0.063722

### california_housing (scripts/california_housing.sh)
Traini
Tool Input
1{
2 "n": 3
3}
[submit] Submitting result from test #3 as final.

### breast_cancer (scripts/breast_cancer.sh)
Training progress:
TRAIN_METRICS: dataset=breast_cancer cal_ece_before=0.014909 cal_ece_after=0.055114 cal_brier_before=0.001191 cal_brier_after=0.003955
Final metrics (breast_cancer): worst_group_ece_breast_cancer=0.180018, brier_breast_cancer=0.123305, subgroup_auroc_breast_cancer=0.984643, max_subgroup_gap_breast_cancer=0.123800

### diabetes (scripts/diabetes.sh)
Training progress:
TRAIN_METRICS: dataset=diabetes cal_ece_before=0.141277 cal_ece_after=0.067545 cal_brier_before=0.154074 cal_brier_after=0.154115
Final metrics (diabetes): worst_group_ece_diabetes=0.143498, brier_diabetes=0.162282, subgroup_auroc_diabetes=0.765085, max_subgroup_gap_diabetes=0.063722

### california_housing (scripts/california_housing.sh)
Training progress:
TRAIN_METRICS: dataset=california_housing cal_ece_before=0.006742 cal_ece_after=0.010219 cal_brier_before=0.098403 cal_brier_after=0.095465
Final metrics (