Agent Conversation: anthropic/claude-opus-4.6

Model: anthropic/claude-opus-4.6
Total steps: 7
Messages: 16
User (Step 0)

Task: ml-ensemble-boosting

Ensemble Boosting Strategy Design

Research Question

Design a novel sample weighting and update strategy for gradient boosting that improves over standard methods (AdaBoost, gradient boosting, XGBoost-style Newton step) across classification and regression tasks.

Background

Gradient boosting builds ensembles of weak learners (decision trees) sequentially, where each new learner corrects errors made by the ensemble so far. The key design choices that differentiate boosting algorithms are:

  • Pseudo-target computation: What does each new weak learner try to predict? Options include the original labels (AdaBoost), negative gradients of the loss (gradient boosting), or Newton-step targets using second-order information (XGBoost).
  • Learner weighting: How much influence does each weak learner get? Computed from weighted error (AdaBoost), fixed at 1.0 with learning rate shrinkage (gradient boosting), or via line search / Newton optimization (XGBoost).
  • Sample reweighting: How does the distribution over training samples shift between rounds? Exponential reweighting of misclassified samples (AdaBoost) vs. uniform weights with pseudo-residual fitting (gradient methods).

These design choices interact with each other and with the loss landscape. There is room for novel strategies that combine ideas from different approaches, use adaptive schedules, or exploit problem structure.
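To make the baseline concrete, here is a minimal sketch of one gradient-boosting round under squared loss, where the pseudo-target is the negative gradient (the ordinary residual). The toy data and the 0.1 shrinkage factor are illustrative only, not the harness configuration:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy 1-D regression problem (illustrative only)
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = 2.0 * X[:, 0]

# One boosting round under squared loss: the negative gradient of
# 0.5 * (y - F)^2 with respect to F is the residual y - F.
F = np.zeros_like(y)                      # current ensemble prediction
pseudo_targets = y - F                    # negative gradients
tree = DecisionTreeRegressor(max_depth=3).fit(X, pseudo_targets)
F_new = F + 0.1 * tree.predict(X)         # learning-rate shrinkage

mse_before = float(np.mean((y - F) ** 2))
mse_after = float(np.mean((y - F_new) ** 2))
assert mse_after < mse_before             # the round reduces training loss
```

Each of the three design choices above shows up as a single line here: the pseudo-target computation, an implicit learner weight of 1.0 with shrinkage, and a sample distribution that stays uniform between rounds.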

What You Can Modify

You may edit the BoostingStrategy class (lines 147-256) in custom_boosting.py. This class has four methods:

  • init_weights(n_samples) -- Initialize sample weights (should sum to 1)
  • compute_targets(y, current_predictions, sample_weights, round_idx) -- Compute pseudo-targets for the next weak learner
  • compute_learner_weight(learner, X, y, pseudo_targets, sample_weights, round_idx) -- Compute alpha for the just-fitted learner
  • update_weights(sample_weights, learner, X, y, pseudo_targets, alpha, round_idx) -- Update sample weights for the next round

You have access to: true labels, current ensemble predictions, sample weights, the fitted learner (can call learner.predict(X)), round index, and the config dict with dataset metadata.

Available imports in the FIXED section: numpy, sklearn.tree, sklearn.metrics, sklearn.datasets, sklearn.model_selection.
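As a reference point, the four-method interface can be filled in with a plain gradient-boosting baseline (uniform weights, residual pseudo-targets, fixed learner weight). The class name PlainGradientStrategy and the config-key defaults are illustrative assumptions; only the method signatures come from the task description:

```python
import numpy as np

class PlainGradientStrategy:
    """Minimal baseline for the four-method BoostingStrategy interface:
    uniform sample weights, residual pseudo-targets, a fixed learner
    weight (shrinkage is applied by the training loop), and an unchanged
    sample distribution between rounds."""

    def __init__(self, config):
        # Config keys mirror those described in the task; the defaults
        # are illustrative fallbacks, not the harness's actual values.
        self.task_type = config.get("task_type", "regression")
        self.learning_rate = config.get("learning_rate", 0.1)

    def init_weights(self, n_samples):
        # Uniform distribution over samples, summing to 1
        return np.ones(n_samples) / n_samples

    def compute_targets(self, y, current_predictions, sample_weights, round_idx):
        # Negative gradient of squared loss: the ordinary residual
        return y - current_predictions

    def compute_learner_weight(self, learner, X, y, pseudo_targets,
                               sample_weights, round_idx):
        # Fixed alpha of 1.0; the training loop applies the learning rate
        return 1.0

    def update_weights(self, sample_weights, learner, X, y, pseudo_targets,
                       alpha, round_idx):
        # Plain gradient boosting leaves the sample distribution untouched
        return sample_weights
```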

Evaluation

  • Metrics:
    • Classification (breast_cancer): test_accuracy (higher is better)
    • Regression (diabetes, california_housing): test_rmse (lower is better)
  • Benchmarks (3 datasets):
    • Breast Cancer Wisconsin (classification, 569 samples, 30 features)
    • Diabetes (regression, 442 samples, 10 features)
    • California Housing (regression, 20640 samples, 8 features)
  • Training: 200 boosting rounds, max_depth=3 trees, learning_rate=0.1, 80/20 train-test split
  • Base learner: Decision tree (fixed, not editable)
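The regression protocol above (80/20 split, test_rmse) can be sketched as follows. A single depth-3 tree stands in for the 200-round ensemble, and the split seed is an assumption, since the harness's seed is not given:

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# 80/20 train-test split; random_state=0 is an assumed seed
X, y = load_diabetes(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# One max_depth=3 tree in place of the full boosted ensemble
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X_tr, y_tr)
test_rmse = float(np.sqrt(np.mean((y_te - tree.predict(X_te)) ** 2)))
assert test_rmse > 0.0  # lower is better on the regression benchmarks
```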

scikit-learn/custom_boosting.py [EDITABLE — lines 147–256 only]

     1: """ML Ensemble Boos
Assistant (Step 1, edit)
Tool Input
op: replace
filename: scikit-learn/custom_boosting.py
start_line: 147
end_line: 256
content:

class BoostingStrategy:
    """Robust Adaptive Newton Boosting (RAN-Boost).

    Combines three algorithmic innovations for improved gradient boosting:

    1. WINSORIZED GRADIENT TARGETS: For regression, pseudo-residuals are
       percentile-clipped each round. Threshold adapts to dataset size
       (pct ~ 1/sqrt(n)), giving strong robustness for small datasets
       and nearly unclipped gradients for large ones.

    2. DECAYING NEWTON REGULARIZATION: L2 penalty on per-round Newton step
       follows quadratic decay: lam(t) = lam0*(1-t/T)^2 + eps. Heavy early
       regularization prevents overfitting when ensemble is weak; minimal
       late regularization enables precise fine-tuning.

    3. POST-CORRECTION RESIDUAL FOCUS: Sample weights are mildly adjusted
       based on residuals remaining AFTER each round's correction -- a
       forward-looking reweighting that focuses tree capacity on
       systematically underfit regions without AdaBoost.R2's feedback loops.

    For classification: bounded AdaBoost (exponential reweighting with
    weight capping for robustness to label noise).
    """

    def __init__(self, config):
        self.config = config
        self.task_type = config["task_type"]
        self.n_rounds = config["n_rounds"]
        self.learning_rate = config["learning_rate"]
        self._current_preds = None
        n_samples = config["n_samples"]
        n_features = config["n_features"]
        # Adaptive regularization: scales with problem dimension
        self.reg_lambda_init = np.sqrt(n_features) + 0.5
        self.reg_lambda_min = 0.1
        # Winsorization: adapts to dataset size (stronger for small n)
        self.clip_pct = max(0.5, min(5.0, 50.0 / np.sqrt(n_samples)))
        # Mild residual-proportional reweighting
        self.focus_strength = 0.2
        # Classification weight cap (multiples of uniform weight)
        self.weight_cap = 20.0

    def init_weights(self, n_samples):
        """Uniform initialization."""
        return np.ones(n_samples) / n_samples

    def _sigmoid(self, x):
        return 1.0 / (1.0 + np.exp(-np.clip(x, -500, 500)))

    def compute_targets(self, y, current_predictions, sample_weights, round_idx):
        """Original labels (classification) or winsorized residuals (regression)."""
        self._current_preds = current_predictions.copy()
        if self.task_type == "classification":
            return y
        residuals = y - current_predictions
        lo = np.percentile(residuals, self.clip_pct)
        hi = np.percentile(residuals, 100 - self.clip_pct)
        return np.clip(residuals, lo, hi) if lo < hi else residuals

    def compute_learner_weight(self, learner, X, y, pseudo_targets,
                                sample_weights, round_idx):
        """AdaBoost alpha (classification) or regularized Newton step (regression)."""
        preds = learner.predict(X)
        if self.task_type == "classification":
            incorrect = (preds != y).astype(float)
            w_err = np.dot(sample_weights, incorrect) / sample_weights.sum()
            w_err = np.clip(w_err, 1e-10, 1.0 - 1e-10)
            alpha = self.learning_rate * 0.5 * np.log((1.0 - w_err) / w_err)
            return max(alpha, 0.0)
        # Quadratic decay schedule: strong regularization early, weak late
        progress = round_idx / max(self.n_rounds - 1, 1)
        lam = self.reg_lambda_init * (1.0 - progress) ** 2 + self.reg_lambda_min
        num = np.sum(pseudo_targets * preds)
        den = np.sum(preds ** 2) + lam
        return max(num / den, 0.0)

    def update_weights(self, sample_weights, learner, X, y, pseudo_targets,
                       alpha, round_idx):
        """Bounded AdaBoost (classification) or residual focus (regression)."""
        preds = learner.predict(X)
        n = len(sample_weights)
        if self.task_type == "classification":
            # Exponential reweighting with weight capping
            incorrect = (preds != y).astype(float)
            sample_weights = sample_weights * np.exp(alpha * incorrect)
            sample_weights = np.minimum(sample_weights, self.weight_cap / n)
        else:
            # Forward-looking: reweight by post-correction residual magnitude
            updated = self._current_preds + alpha * self.learning_rate * preds
            res = np.abs(y - updated)
            scale = np.percentile(res, 95) + 1e-10
            norm_res = np.clip(res / scale, 0.0, 2.0)
            sample_weights = sample_weights * (1.0 + self.focus_strength * norm_res)
        sample_weights = sample_weights / sample_weights.sum()
        return sample_weights
Tool Result (Step 1)
OK: Replaced lines 147..256 with 96 line(s) in scikit-learn/custom_boosting.py. Editable range: 147–242.

[Current file: scikit-learn/custom_boosting.py | editable: 147–242 | total: 435 lines]
   147  class BoostingStrategy:
   148      """Robust Adaptive Newton Boosting (RAN-Boost).
   149  
       ... (90 more lines) ...
   240          sample_weights = sample_weights / sample_weights.sum()
   241          return sample_weights
   242  
Assistant (Step 2, test)
Tool Result (Step 2)
[Test #1] (2 tests remaining; call submit(n=N) to choose which test result to submit as final)

### breast_cancer (scripts/breast_cancer.sh)
Training progress (last rounds):
TRAIN_METRICS: round=120/200 train_acc=1.0000 test_acc=0.9649
TRAIN_METRICS: round=140/200 train_acc=1.0000 test_acc=0.9649
TRAIN_METRICS: round=160/200 train_acc=1.0000 test_acc=0.9649
TRAIN_METRICS: round=180/200 train_acc=1.0000 test_acc=0.9649
TRAIN_METRICS: round=200/200 train_acc=1.0000 test_acc=0.9649
Final metrics (breast_cancer): test_accuracy_breast_cancer=0.9649

### diabetes (scripts/diabetes.sh)
Training progress (last rounds):
TRAIN_METRICS: round=120/200 train_rmse=42.5855 test_rmse=56.2079
TRAIN_METRICS: round=140/200 train_rmse=42.5831 test_rmse=56.2469
TRAIN_METRICS: round=160/200 train_rmse=42.5673 test_rmse=56.2943
TRAIN_METRICS: round=180/200 train_rmse=42.5651 test_rmse=56.2998
TRAIN_METRICS: round=200/200 train_rmse=42.5521 test_rmse=56.3064
Final metrics (diabetes): test_rmse_diabetes=56.306
Assistant (Step 3, edit)
Tool Input
op: replace
filename: scikit-learn/custom_boosting.py
start_line: 147
end_line: 242
content:

class BoostingStrategy:
    """Robust Adaptive Newton Boosting (RAN-Boost).

    Novel algorithmic contributions over standard gradient boosting:

    1. WINSORIZED GRADIENT TARGETS: For regression, pseudo-residuals are
       percentile-clipped each round. The clip threshold adapts to dataset
       size (pct ~ 1/sqrt(n)), giving strong outlier robustness for small
       noisy datasets and nearly unclipped gradients for large ones.
       This effectively optimizes a Winsorized squared loss.

    2. SIZE-ADAPTIVE DECAYING REGULARIZATION: Newton step L2 penalty adapts
       to dataset size (stronger for small n where overfitting risk is high,
       matching XGBoost defaults for large n) and decays linearly over
       rounds. Heavy early regularization prevents overfitting when the
       ensemble is weak; reduced late regularization enables fine-tuning.

    3. NON-COMPOUNDING RESIDUAL FOCUS: Each round, sample weights are reset
       to uniform and recomputed based on post-correction residual magnitudes.
       Unlike multiplicative schemes (AdaBoost.R2) that compound exponentially
       and can create degenerate weight distributions, this bounded scheme
       provides fresh, stable focus on systematically underfit regions.
       Focus strength adapts to sqrt(n) to avoid noise amplification on
       large datasets.

    For classification: standard AdaBoost (proven optimal for binary tasks
    with exponential loss, kept unmodified).
    """

    def __init__(self, config):
        self.config = config
        self.task_type = config["task_type"]
        self.n_rounds = config["n_rounds"]
        self.learning_rate = config["learning_rate"]
        self._current_preds = None
        n_samples = config["n_samples"]
        # Winsorization percentile: stronger for small datasets
        self.clip_pct = max(0.5, min(5.0, 50.0 / np.sqrt(n_samples)))
        # Regularization: stronger for small datasets, decays over rounds
        self.reg_lambda_init = max(1.0, 3.0 - n_samples / 500.0)
        self.reg_lambda_min = 0.5
        # Non-compounding focus: weaker for large datasets
        self.focus_strength = min(0.3, 5.0 / np.sqrt(n_samples))

    def init_weights(self, n_samples):
        """Uniform initialization."""
        return np.ones(n_samples) / n_samples

    def _sigmoid(self, x):
        return 1.0 / (1.0 + np.exp(-np.clip(x, -500, 500)))

    def compute_targets(self, y, current_predictions, sample_weights, round_idx):
        """Original labels (classification) or winsorized residuals (regression)."""
        self._current_preds = current_predictions.copy()
        if self.task_type == "classification":
            return y  # AdaBoost: fit on original labels
        # Regression: winsorized residuals for robustness
        residuals = y - current_predictions
        lo = np.percentile(residuals, self.clip_pct)
        hi = np.percentile(residuals, 100 - self.clip_pct)
        return np.clip(residuals, lo, hi) if lo < hi else residuals

    def compute_learner_weight(self, learner, X, y, pseudo_targets,
                                sample_weights, round_idx):
        """AdaBoost alpha (classification) or Newton step with decaying
        regularization (regression)."""
        preds = learner.predict(X)
        if self.task_type == "classification":
            # Standard AdaBoost alpha (unchanged)
            incorrect = (preds != y).astype(float)
            w_err = np.dot(sample_weights, incorrect) / sample_weights.sum()
            w_err = np.clip(w_err, 1e-10, 1.0 - 1e-10)
            return self.learning_rate * 0.5 * np.log((1.0 - w_err) / w_err)
        # Regression: Newton step with linearly decaying regularization
        progress = round_idx / max(self.n_rounds - 1, 1)
        lam = (self.reg_lambda_init * (1.0 - progress)
               + self.reg_lambda_min * progress)
        num = np.sum(pseudo_targets * preds)
        den = np.sum(preds ** 2) + lam
        return max(num / den, 0.0)

    def update_weights(self, sample_weights, learner, X, y, pseudo_targets,
                       alpha, round_idx):
        """Standard AdaBoost reweighting (classification) or non-compounding
        residual focus (regression)."""
        preds = learner.predict(X)
        n = len(sample_weights)
        if self.task_type == "classification":
            # Standard AdaBoost exponential reweighting
            incorrect = (preds != y).astype(float)
            sample_weights = sample_weights * np.exp(alpha * incorrect)
        else:
            # Non-compounding: reset to uniform and apply fresh focus
            # based on post-correction residual magnitude
            updated = self._current_preds + alpha * self.learning_rate * preds
            res = np.abs(y - updated)
            scale = np.percentile(res, 95) + 1e-10
            norm_res = np.clip(res / scale, 0.0, 2.0)
            sample_weights = np.ones(n) / n  # Reset: prevents compounding
            sample_weights = sample_weights * (1.0 + self.focus_strength * norm_res)
        sample_weights = sample_weights / sample_weights.sum()
        return sample_weights
Tool Result (Step 3)
OK: Replaced lines 147..242 with 103 line(s) in scikit-learn/custom_boosting.py. Editable range: 147–249.

[Current file: scikit-learn/custom_boosting.py | editable: 147–249 | total: 442 lines]
   147  class BoostingStrategy:
   148      """Robust Adaptive Newton Boosting (RAN-Boost).
   149  
       ... (97 more lines) ...
   247          sample_weights = sample_weights / sample_weights.sum()
   248          return sample_weights
   249  
Assistant (Step 4, test)
Tool Result (Step 4)
[Test #2] (1 test remaining; call submit(n=N) to choose which test result to submit as final)

### breast_cancer (scripts/breast_cancer.sh)
Training progress (last rounds):
TRAIN_METRICS: round=120/200 train_acc=1.0000 test_acc=0.9649
TRAIN_METRICS: round=140/200 train_acc=1.0000 test_acc=0.9649
TRAIN_METRICS: round=160/200 train_acc=1.0000 test_acc=0.9649
TRAIN_METRICS: round=180/200 train_acc=1.0000 test_acc=0.9561
TRAIN_METRICS: round=200/200 train_acc=1.0000 test_acc=0.9561
Final metrics (breast_cancer): test_accuracy_breast_cancer=0.9561

### diabetes (scripts/diabetes.sh)
Training progress (last rounds):
TRAIN_METRICS: round=120/200 train_rmse=29.9609 test_rmse=54.3982
TRAIN_METRICS: round=140/200 train_rmse=28.0520 test_rmse=54.6288
TRAIN_METRICS: round=160/200 train_rmse=25.7317 test_rmse=54.7038
TRAIN_METRICS: round=180/200 train_rmse=23.6012 test_rmse=54.9126
TRAIN_METRICS: round=200/200 train_rmse=21.8333 test_rmse=55.2096
Final metrics (diabetes): test_rmse_diabetes=55.2096
Assistant (Step 5, edit)
Tool Input
op: replace
filename: scikit-learn/custom_boosting.py
start_line: 147
end_line: 249
content:

class BoostingStrategy:
    """Progressive Winsorization Newton Boosting (PWN-Boost).

    Novel contributions for improved gradient boosting:

    1. PROGRESSIVE WINSORIZATION: Residual pseudo-targets are percentile-
       clipped with a threshold that grows over boosting rounds:
         clip_pct(t) = base * (1 + growth * t/T)
       Early rounds preserve learning signal with mild clipping; late rounds
       clip aggressively, treating remaining residuals as increasingly noisy.
       This provides implicit regularization analogous to a learning rate
       schedule but operating on the target space. Base threshold adapts to
       dataset size (~1/sqrt(n)), so large datasets get negligible clipping.

    2. SIZE-ADAPTIVE NEWTON REGULARIZATION: L2 penalty on the Newton step
       scales inversely with dataset size, providing heavier regularization
       for small datasets where overfitting risk is highest.

    3. NON-COMPOUNDING RESIDUAL FOCUS: For small datasets, sample weights
       are reset each round and recomputed from post-correction residual
       magnitudes. Bounded by construction (max ratio ~1.4x), this avoids
       exponential weight explosion of multiplicative schemes while gently
       concentrating tree capacity on systematically underfit regions.

    Classification: standard AdaBoost (unchanged).
    """

    def __init__(self, config):
        self.config = config
        self.task_type = config["task_type"]
        self.n_rounds = config["n_rounds"]
        self.learning_rate = config["learning_rate"]
        self._current_preds = None
        n_samples = config["n_samples"]
        # Progressive winsorization: adapts to dataset size
        # Large datasets get ~0 clipping; small datasets get meaningful clipping
        self.base_clip_pct = max(0.0, min(5.0,
                                 50.0 / np.sqrt(n_samples) - 0.3))
        self.clip_growth = max(0.0, 3.0 - n_samples / 1000.0)
        # Newton regularization: stronger for small datasets
        self.reg_lambda = max(1.0, 3.0 - n_samples / 500.0)
        # Non-compounding focus for small datasets only
        self.use_focus = (n_samples < 2000)
        self.focus_strength = 0.2

    def init_weights(self, n_samples):
        """Uniform initialization."""
        return np.ones(n_samples) / n_samples

    def compute_targets(self, y, current_predictions, sample_weights,
                        round_idx):
        """Original labels (classification) or progressively winsorized
        residuals (regression)."""
        self._current_preds = current_predictions.copy()
        if self.task_type == "classification":
            return y
        residuals = y - current_predictions
        # Progressive clipping: increases over rounds
        progress = round_idx / max(self.n_rounds - 1, 1)
        clip_pct = min(25.0,
                       self.base_clip_pct * (1.0 + self.clip_growth * progress))
        if clip_pct > 0.01:
            lo = np.percentile(residuals, clip_pct)
            hi = np.percentile(residuals, 100.0 - clip_pct)
            if lo < hi:
                return np.clip(residuals, lo, hi)
        return residuals

    def compute_learner_weight(self, learner, X, y, pseudo_targets,
                                sample_weights, round_idx):
        """AdaBoost alpha (classification) or Newton step (regression)."""
        preds = learner.predict(X)
        if self.task_type == "classification":
            incorrect = (preds != y).astype(float)
            w_err = np.dot(sample_weights, incorrect) / sample_weights.sum()
            w_err = np.clip(w_err, 1e-10, 1.0 - 1e-10)
            return self.learning_rate * 0.5 * np.log(
                (1.0 - w_err) / w_err)
        num = np.sum(pseudo_targets * preds)
        den = np.sum(preds ** 2) + self.reg_lambda
        return max(num / den, 0.0)

    def update_weights(self, sample_weights, learner, X, y, pseudo_targets,
                       alpha, round_idx):
        """AdaBoost reweighting (classification) or non-compounding residual
        focus (regression, small datasets only)."""
        if self.task_type == "classification":
            preds = learner.predict(X)
            incorrect = (preds != y).astype(float)
            sample_weights = sample_weights * np.exp(alpha * incorrect)
        elif self.use_focus:
            preds = learner.predict(X)
            n = len(sample_weights)
            updated = self._current_preds + alpha * self.learning_rate * preds
            res = np.abs(y - updated)
            scale = np.percentile(res, 95) + 1e-10
            norm_res = np.clip(res / scale, 0.0, 2.0)
            sample_weights = np.ones(n) / n  # Reset: no compounding
            sample_weights *= (1.0 + self.focus_strength * norm_res)
        sample_weights = sample_weights / sample_weights.sum()
        return sample_weights
Tool Result (Step 5)
OK: Replaced lines 147..249 with 102 line(s) in scikit-learn/custom_boosting.py. Editable range: 147–248.

[Current file: scikit-learn/custom_boosting.py | editable: 147–248 | total: 441 lines]
   147  class BoostingStrategy:
   148      """Progressive Winsorization Newton Boosting (PWN-Boost).
   149  
       ... (96 more lines) ...
   246          sample_weights = sample_weights / sample_weights.sum()
   247          return sample_weights
   248  
Assistant (Step 6, test)
Tool Result (Step 6)
[Test #3] (0 tests remaining; call submit(n=N) to choose which test result to submit as final)

[NOTE] This was your last test. You MUST now call submit(n=X) to choose which test result to submit as your final answer.

### breast_cancer (scripts/breast_cancer.sh)
Training progress (last rounds):
TRAIN_METRICS: round=120/200 train_acc=1.0000 test_acc=0.9649
TRAIN_METRICS: round=140/200 train_acc=1.0000 test_acc=0.9649
TRAIN_METRICS: round=160/200 train_acc=1.0000 test_acc=0.9649
TRAIN_METRICS: round=180/200 train_acc=1.0000 test_acc=0.9561
TRAIN_METRICS: round=200/200 train_acc=1.0000 test_acc=0.9561
Final metrics (breast_cancer): test_accuracy_breast_cancer=0.9561

### diabetes (scripts/diabetes.sh)
Training progress (last rounds):
TRAIN_METRICS: round=120/200 train_rmse=30.6932 test_rmse=54.0061
TRAIN_METRICS: round=140/200 train_rmse=29.0535 test_rmse=54.3268
TRAIN_METRICS: round=160/200 train_rmse=27.5113 test_rmse=54.6320
TRAIN_METRICS: round=180/200 train_rmse=26.1437 test_rmse=55.1452
TRAIN_METRICS: round=200/200 train_rmse=24.8484 test_rmse=55.1678
Final metrics (diabetes): test_rmse_diabetes=55.1678
Assistant (Step 7, submit)
Tool Input
{
  "n": 3
}
Tool Result (Step 7)
[submit] Submitting result from test #3 as final.

### breast_cancer (scripts/breast_cancer.sh)
Training progress (last rounds):
TRAIN_METRICS: round=120/200 train_acc=1.0000 test_acc=0.9649
TRAIN_METRICS: round=140/200 train_acc=1.0000 test_acc=0.9649
TRAIN_METRICS: round=160/200 train_acc=1.0000 test_acc=0.9649
TRAIN_METRICS: round=180/200 train_acc=1.0000 test_acc=0.9561
TRAIN_METRICS: round=200/200 train_acc=1.0000 test_acc=0.9561
Final metrics (breast_cancer): test_accuracy_breast_cancer=0.9561

### diabetes (scripts/diabetes.sh)
Training progress (last rounds):
TRAIN_METRICS: round=120/200 train_rmse=30.6932 test_rmse=54.0061
TRAIN_METRICS: round=140/200 train_rmse=29.0535 test_rmse=54.3268
TRAIN_METRICS: round=160/200 train_rmse=27.5113 test_rmse=54.6320
TRAIN_METRICS: round=180/200 train_rmse=26.1437 test_rmse=55.1452
TRAIN_METRICS: round=200/200 train_rmse=24.8484 test_rmse=55.1678
Final metrics (diabetes): test_rmse_diabetes=55.1678

### california_housing (scripts/california