Agent Conversation: gpt-5.4-pro

Model: gpt-5.4-pro
Total steps: 6
Messages: 13
User (Step 0)

Task: quant-concept-drift

Concept Drift Adaptation in Stock Prediction on CSI300

Objective

Design and implement a stock prediction model that handles concept drift and temporal distribution shift in CSI300 stocks. Your code goes in custom_model.py. Three reference implementations (TRA, AdaRNN, LightGBM) are provided as read-only.

Evaluation

Signal quality: IC, ICIR, Rank IC. Portfolio (TopkDropout strategy, top 50, drop 5): Annualized Return, Max Drawdown, Information Ratio. All metrics are computed automatically via qlib's workflow.
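IC and Rank IC here are daily cross-sectional correlations between predictions and realized labels, averaged over the test days; ICIR is the mean of that daily series divided by its standard deviation. A minimal stand-alone sketch of daily Rank IC on toy data (illustrative only, not qlib's own implementation):

```python
import pandas as pd

def daily_rank_ic(pred: pd.Series, label: pd.Series) -> pd.Series:
    """Daily cross-sectional Spearman correlation of prediction vs. label.

    Both series share a (datetime, instrument) MultiIndex, as in qlib.
    """
    df = pd.DataFrame({"pred": pred, "label": label})
    return df.groupby(level=0).apply(
        lambda g: g["pred"].corr(g["label"], method="spearman")
    )

# Toy example: two days, three stocks each.
idx = pd.MultiIndex.from_product(
    [pd.to_datetime(["2020-01-02", "2020-01-03"]), ["A", "B", "C"]]
)
pred = pd.Series([0.1, 0.2, 0.3, 0.3, 0.2, 0.1], index=idx)
label = pd.Series([0.01, 0.02, 0.03, 0.01, 0.02, 0.03], index=idx)
ic = daily_rank_ic(pred, label)   # day 1 agrees perfectly, day 2 is reversed
rank_ic, rank_icir = ic.mean(), ic.mean() / ic.std()
```

Plain IC is the same construction with Pearson correlation in place of Spearman.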

Evaluation uses three fixed temporal regimes on the same CSI300 universe:

  • csi300: long-horizon split ending in the 2017-2020 regime
  • csi300_shifted: shifted split with a 2016-2018 test regime
  • csi300_recent (hidden): the most recent 2019-2020 regime

This task is about temporal drift adaptation, not cross-universe transfer.

Workflow Configuration

workflow_config.yaml lines 13-26 and 32-45 are editable. This covers the model plus dataset adapter/processor configuration needed by methods like TRA. Instruments, date ranges, train/valid/test splits, and evaluation settings are fixed.

qlib/custom_model.py [EDITABLE — lines 16–103 only]

     1: # Custom stock prediction model for MLS-Bench (concept drift adaptation)
     2: #
     3: # EDITABLE section: CustomModel class with fit() and predict() methods.
     4: # FIXED sections: imports below.
     5: import numpy as np
     6: import pandas as pd
     7: import torch
     8: import torch.nn as nn
     9: import torch.nn.functional as F
    10: from qlib.model.base import Model
    11: from qlib.data.dataset import DatasetH
    12: from qlib.data.dataset.handler import DataHandlerLP
    13: 
    14: DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
    15: 
    16: # =====================================================================
    17: # EDITABLE: CustomModel — implement your stock prediction model here
    18: # =====================================================================
    19: class CustomModel(Model):
    20:     """Custom stock prediction model for concept drift adaptation.
    21: 
    22:     You must implement:
    23:         fit(dataset)    — train the model on the training data
    24:         predict(dataset, segment="test") — return predictions as pd.Series
    25: 
    26:     The dataset is a qlib DatasetH with Alpha158 features (158 engineered
    27:     features per stock per day). Alpha158 computes rolling statistics over
    28:     multiple windows (5, 10, 20, 30, 60 days) from raw OHLCV data:
    29:       - Rolling mean, std, and max/min of returns and volume
    30:       - Momentum indicators (ROC at various horizons)
    31:       - K-line ratios (KLEN, KLOW, KSFT, etc.)
    32:       - Rolling correlation/covariance between price and volume (CORR, CORD)
    33:       - Volatility measures (VSTD, WVMA, residual-based RESI, RSQR)
    34:     Features are pre-normalized (RobustZScoreNorm) and NaN-filled.
    35: 
 
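Although the docstring notes the features arrive pre-normalized (RobustZScoreNorm), the agent's Step 2 and Step 4 edits additionally re-standardize each day's cross-section before fitting and scoring. A stand-alone sketch of that per-day z-scoring (toy matrix; the 1e-3 variance floor and ±6 clip mirror the edits, not any qlib API):

```python
import numpy as np

def cross_sectional_zscore(x: np.ndarray, clip: float = 6.0) -> np.ndarray:
    """Standardize one trading day's feature matrix across stocks (rows)."""
    x = np.nan_to_num(np.asarray(x, np.float32), nan=0.0, posinf=0.0, neginf=0.0)
    mu = x.mean(axis=0, keepdims=True)
    sd = x.std(axis=0, keepdims=True) + 1e-3   # floor guards zero-variance columns
    return np.clip((x - mu) / sd, -clip, clip)

# Toy day: three stocks, two features on very different scales.
day = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
z = cross_sectional_zscore(day)
# Each column now has zero mean and near-unit spread, so no single
# feature's scale dominates the downstream fit.
```

Per-day standardization also makes predictions comparable across regimes, which matters when test-period return dispersion differs from training.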
Assistant (Step 1: edit)
Tool Input
1{
2  "op": "replace",
3  "filename": "qlib/workflow_config.yaml",
4  "content": "          infer_processors: []",
5  "after_line": 0,
6  "start_line": 32,
7  "end_line": 39
8}
Tool Result (Step 1)
OK: Replaced lines 32..39 with 1 line(s) in qlib/workflow_config.yaml. Editable range: 13–26, 32–38.

[Current file: qlib/workflow_config.yaml | editable: 13–26 | total: 76 lines]
    10|   rel_path:
    11|     - "."           # So custom_model.py is importable via module_path
    12| 
    13  task:
    14    model:
    15      class: CustomModel
    16      module_path: custom_model
    17      kwargs: {}
    18  
    19    dataset:
    20      class: DatasetH
    21      module_path: qlib.data.dataset
    22      kwargs:
    23        handler:
    24          class: Alpha158
    25          module_path: qlib.contrib.data.handler
    26          kwargs:
    27|           start_time: "2008-01-01"
    28|           end_time: "2020-08-01"
    29|           fit_start_time: "2008-01-01"
Assistant (Step 2: edit)
Tool Input
1{
2  "op": "replace",
3  "filename": "qlib/custom_model.py",
4  "content": "# =====================================================================\n# EDITABLE: CustomModel — implement your stock prediction model here\n# =====================================================================\nclass CustomModel(Model):\n    def __init__(self):\n        super().__init__()\n        from sklearn.linear_model import Ridge\n        self.Ridge = Ridge\n        self.halflives = (10**9, 504, 126)\n        self.alpha = 3.0\n        self.models = []\n        self.mem_states = self.mem_ic = self.state_mu = self.state_sd = None\n        self.fitted = False\n\n    def _bounds(self, index):\n        d = pd.Index(index.get_level_values(0)).values\n        if len(d) == 0:\n            return np.array([], int), np.array([], int)\n        cut = np.r_[0, np.flatnonzero(d[1:] != d[:-1]) + 1, len(d)]\n        return cut[:-1], cut[1:]\n\n    def _csz(self, x, index):\n        x = np.nan_to_num(np.asarray(x, np.float32), nan=0.0, posinf=0.0, neginf=0.0)\n        s, e = self._bounds(index)\n        for a, b in zip(s, e):\n            g = x[a:b]\n            x[a:b] = (g - g.mean(0, keepdims=True)) / (g.std(0, keepdims=True) + 1e-3)\n        return np.clip(x, -6, 6)\n\n    def _states(self, x, index):\n        s, e = self._bounds(index)\n        st = np.stack([np.nan_to_num(x[a:b].mean(0), nan=0.0) for a, b in zip(s, e)]) if len(s) else np.zeros((0, x.shape[1]), np.float32)\n        return st.astype(np.float32), s, e\n\n    def _fit_experts(self, x, y, index):\n        s, e = self._bounds(index)\n        code = np.empty(len(index), int)\n        for i, (a, b) in enumerate(zip(s, e)):\n            code[a:b] = i\n        age = code.max() - code\n        ms = []\n        for hl in self.halflives:\n            w = np.ones_like(y, float) if hl > 1e8 else np.exp(-age / hl)\n            m = self.Ridge(alpha=self.alpha, fit_intercept=False)\n            m.fit(x, y, sample_weight=w)\n            ms.append(m)\n        return ms\n\n    def _daily_corr(self, pred, 
y, s, e):\n        out = np.zeros(len(s), np.float32)\n        for i, (a, b) in enumerate(zip(s, e)):\n            p, t = pred[a:b], y[a:b]\n            p, t = p - p.mean(), t - t.mean()\n            out[i] = (p * t).mean() / (np.sqrt((p * p).mean() * (t * t).mean()) + 1e-6)\n        return out\n\n    def fit(self, dataset: DatasetH):\n        df_tr, df_va = dataset.prepare([\"train\", \"valid\"], col_set=[\"feature\", \"label\"], data_key=DataHandlerLP.DK_L)\n        xtr0, ytr = df_tr[\"feature\"].values, df_tr[\"label\"].values.reshape(-1)\n        xva0, yva = df_va[\"feature\"].values, df_va[\"label\"].values.reshape(-1)\n        mtr, mva = ~np.isnan(ytr), ~np.isnan(yva)\n        idxtr, idxva = df_tr.index[mtr], df_va.index[mva]\n        xtr, ytr = self._csz(xtr0[mtr].copy(), idxtr), ytr[mtr].astype(np.float32)\n        xva, yva = self._csz(xva0[mva].copy(), idxva), yva[mva].astype(np.float32)\n        gate_models = self._fit_experts(xtr, ytr, idxtr)\n        states, s, e = self._states(np.nan_to_num(xva0[mva].astype(np.float32)), idxva)\n        self.mem_ic = np.stack([self._daily_corr(m.predict(xva), yva, s, e) for m in gate_models], 1)\n        self.mem_states = states\n        self.state_mu = states.mean(0, keepdims=True)\n        self.state_sd = states.std(0, keepdims=True) + 1e-3\n        xall0 = np.vstack([xtr0[mtr], xva0[mva]])\n        yall = np.r_[ytr, yva]\n        idxall = idxtr.append(idxva)\n        self.models = self._fit_experts(self._csz(xall0.copy(), idxall), yall, idxall)\n        self.fitted = True\n\n    def predict(self, dataset: DatasetH, segment=\"test\"):\n        if not self.fitted:\n            raise ValueError(\"Model is not fitted yet!\")\n        feat = dataset.prepare(segment, col_set=\"feature\", data_key=DataHandlerLP.DK_I)\n        x0, idx = feat.values.astype(np.float32), feat.index\n        x = self._csz(x0.copy(), idx)\n        expert = np.column_stack([m.predict(x) for m in self.models]).astype(np.float32)\n        states, 
s, e = self._states(np.nan_to_num(x0), idx)\n        mem = (self.mem_states - self.state_mu) / self.state_sd\n        mem /= np.linalg.norm(mem, axis=1, keepdims=True) + 1e-6\n        cur = (states - self.state_mu) / self.state_sd\n        cur /= np.linalg.norm(cur, axis=1, keepdims=True) + 1e-6\n        base = self.mem_ic.mean(0)\n        out = np.zeros(len(idx), np.float32)\n        for i, (a, b) in enumerate(zip(s, e)):\n            sim = np.maximum(mem @ cur[i], 0.0) ** 2 + 1e-6\n            local = (sim[:, None] * self.mem_ic).sum(0) / sim.sum()\n            drift = 1.0 - sim.max()\n            z = 8.0 * (0.5 * base + local + drift * np.array([-0.03, 0.0, 0.03], np.float32))\n            w = np.exp(z - z.max())\n            w /= w.sum()\n            p = expert[a:b] @ w\n            out[a:b] = (p - p.mean()) / (p.std() + 1e-6)\n        return pd.Series(out, index=idx, name=\"score\")",
5  "after_line": 0,
6  "start_line": 16,
7  "end_line": 103
8}
Tool Result (Step 2)
OK: Replaced lines 16..103 with 100 line(s) in qlib/custom_model.py. Editable range: 16–115.

[Current file: qlib/custom_model.py | editable: 16–115 | total: 115 lines]
    13| 
    14| DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
    15| 
    16  # =====================================================================
    17  # EDITABLE: CustomModel — implement your stock prediction model here
    18  # =====================================================================
    19  class CustomModel(Model):
    20      def __init__(self):
    21          super().__init__()
    22          from sklearn.linear_model import Ridge
    23          self.Ridge = Ridge
    24          self.halflives = (10**9, 504, 126)
    25          self.alpha = 3.0
    26          self.models = []
    27          self.mem_states = self.mem_ic = self.state_mu = self.state_sd = None
    28          self.fitted = False
    29  
    30      def _bounds(self, index):
    31          d = pd.Index(index.g
Assistant (Step 3: test)
Tool Result (Step 3)
[Test #1] (2 tests remaining; call submit(n=1) to submit this result)

### csi300 (scripts/csi300.sh)
Signal analysis (csi300):
SIGNAL_METRIC IC=0.012471
SIGNAL_METRIC ICIR=0.075429
SIGNAL_METRIC Rank_IC=0.028634
SIGNAL_METRIC Rank_ICIR=0.170885
Portfolio analysis (csi300):
PORTFOLIO_METRIC annualized_return=-0.086113
PORTFOLIO_METRIC max_drawdown=-0.360830
PORTFOLIO_METRIC information_ratio=-0.822899

### csi300_shifted (scripts/csi300_shifted.sh)
Signal analysis (csi300_shifted):
SIGNAL_METRIC IC=0.020478
SIGNAL_METRIC ICIR=0.118403
SIGNAL_METRIC Rank_IC=0.034356
SIGNAL_METRIC Rank_ICIR=0.190270
Portfolio analysis (csi300_shifted):
PORTFOLIO_METRIC annualized_return=-0.049962
PORTFOLIO_METRIC max_drawdown=-0.353752
PORTFOLIO_METRIC information_ratio=-0.456598

[Leaderboard] Results saved: {'ic_csi300': 0.012471, 'icir_csi300': 0.075429, 'rank_ic_csi300': 0.028634, 'rank_icir_csi300': 0.170885, 'annualized_return_csi300': -0.086113, 'max_drawdown_csi300': -0.36083, 'information_ratio_
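Both model edits in this transcript weight training samples by day age using an exponential decay keyed to the `halflives` tuple. Note that `exp(-age / hl)` treats `hl` as an e-folding time rather than a strict half-life (that would be `0.5 ** (age / hl)`). A minimal sketch of the scheme, with the huge sentinel value standing in for the long-memory expert:

```python
import numpy as np

def recency_weights(day_age: np.ndarray, halflife: float) -> np.ndarray:
    """Exponentially decayed sample weights by day age.

    Mirrors the transcript's convention: exp(-age / hl), so `halflife` is
    really an e-folding time; a huge value (e.g. 10**9) means "no decay".
    """
    if halflife > 1e8:
        return np.ones_like(day_age, dtype=float)
    return np.exp(-np.asarray(day_age, dtype=float) / halflife)

age = np.array([0, 126, 252, 504])        # trading days since each sample
w_fast = recency_weights(age, 126.0)      # roughly [1, e**-1, e**-2, e**-4]
w_flat = recency_weights(age, 10**9)      # uniform: the long-memory expert
```

Training one expert per decay rate, then gating among them at prediction time, is what lets the ensemble trade off stability against drift responsiveness.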
Assistant (Step 4: edit)
Tool Input
1{
2  "op": "replace",
3  "filename": "qlib/custom_model.py",
4  "content": "# =====================================================================\n# EDITABLE: CustomModel — implement your stock prediction model here\n# =====================================================================\nclass CustomModel(Model):\n    def __init__(self):\n        super().__init__()\n        self.params = {\n            \"objective\": \"mse\", \"colsample_bytree\": 0.8879, \"learning_rate\": 0.2,\n            \"subsample\": 0.8789, \"lambda_l1\": 205.6999, \"lambda_l2\": 580.9768,\n            \"max_depth\": 8, \"num_leaves\": 210, \"num_threads\": 20, \"verbosity\": -1,\n        }\n        self.halflives = (10**9, 504, 126, 42)\n        self.recency_bias = np.array([-0.03, -0.01, 0.01, 0.03], np.float32)\n        self.tail_gamma = 0.5\n        self.num_boost_round, self.early_stopping_rounds = 500, 50\n        self.models = []\n        self.mem_states = self.mem_ic = self.state_mu = self.state_sd = self.base_ic = None\n        self.fitted = False\n\n    def _bounds(self, index):\n        d = pd.Index(index.get_level_values(0)).values\n        if len(d) == 0:\n            return np.array([], int), np.array([], int)\n        cut = np.r_[0, np.flatnonzero(d[1:] != d[:-1]) + 1, len(d)]\n        return cut[:-1], cut[1:]\n\n    def _state(self, x, index):\n        x = np.nan_to_num(np.asarray(x, np.float32), nan=0.0, posinf=0.0, neginf=0.0)\n        s, e = self._bounds(index)\n        st = np.stack([np.r_[x[a:b].mean(0), x[a:b].std(0)] for a, b in zip(s, e)]) if len(s) else np.zeros((0, x.shape[1] * 2), np.float32)\n        return np.clip(st, -10, 10).astype(np.float32), s, e\n\n    def _day_norm(self, p, s, e):\n        p = np.asarray(p, np.float32).copy()\n        for a, b in zip(s, e):\n            g = p[a:b]\n            p[a:b] = (g - g.mean(0, keepdims=True)) / (g.std(0, keepdims=True) + 1e-6)\n        return p\n\n    def _rank_ic(self, p, y, s, e):\n        out = np.zeros(len(s), np.float32)\n        for i, (a, b) in enumerate(zip(s, 
e)):\n            rp = p[a:b].argsort().argsort().astype(np.float32)\n            rt = y[a:b].argsort().argsort().astype(np.float32)\n            rp -= rp.mean(); rt -= rt.mean()\n            out[i] = (rp * rt).mean() / (np.sqrt((rp * rp).mean() * (rt * rt).mean()) + 1e-6)\n        return out\n\n    def _fit_experts(self, x, y, index, x_val=None, y_val=None, best_rounds=None):\n        import lightgbm as lgb\n        s, e = self._bounds(index)\n        code = np.empty(len(index), int)\n        for i, (a, b) in enumerate(zip(s, e)):\n            code[a:b] = i\n        age = code.max() - code\n        tail = 1.0 + self.tail_gamma * np.abs(y)\n        models, best = [], []\n        for j, hl in enumerate(self.halflives):\n            w = tail if hl > 1e8 else tail * np.exp(-age / hl)\n            dtr = lgb.Dataset(x, label=y, weight=w, free_raw_data=False)\n            if best_rounds is None:\n                dva = lgb.Dataset(x_val, label=y_val, reference=dtr, free_raw_data=False)\n                m = lgb.train(self.params, dtr, num_boost_round=self.num_boost_round, valid_sets=[dva], callbacks=[lgb.early_stopping(self.early_stopping_rounds, verbose=False)])\n                best.append(int(m.best_iteration or self.num_boost_round))\n            else:\n                m = lgb.train(self.params, dtr, num_boost_round=int(best_rounds[j]))\n            models.append(m)\n        return models, (best if best_rounds is None else best_rounds)\n\n    def fit(self, dataset: DatasetH):\n        df_tr, df_va = dataset.prepare([\"train\", \"valid\"], col_set=[\"feature\", \"label\"], data_key=DataHandlerLP.DK_L)\n        xtr, ytr = df_tr[\"feature\"].values.astype(np.float32), df_tr[\"label\"].values.reshape(-1).astype(np.float32)\n        xva, yva = df_va[\"feature\"].values.astype(np.float32), df_va[\"label\"].values.reshape(-1).astype(np.float32)\n        mtr, mva = ~np.isnan(ytr), ~np.isnan(yva)\n        xtr, ytr, idxtr = xtr[mtr], ytr[mtr], df_tr.index[mtr]\n        xva, yva, 
idxva = xva[mva], yva[mva], df_va.index[mva]\n        gate_models, best = self._fit_experts(xtr, ytr, idxtr, xva, yva)\n        states, s, e = self._state(xva, idxva)\n        val_pred = self._day_norm(np.column_stack([m.predict(xva) for m in gate_models]), s, e)\n        self.mem_ic = np.stack([self._rank_ic(val_pred[:, j], yva, s, e) for j in range(val_pred.shape[1])], 1)\n        self.mem_states = states\n        self.state_mu = states.mean(0, keepdims=True)\n        self.state_sd = states.std(0, keepdims=True) + 1e-3\n        self.base_ic = self.mem_ic.mean(0)\n        xall, yall, idxall = np.vstack([xtr, xva]), np.r_[ytr, yva], idxtr.append(idxva)\n        self.models, _ = self._fit_experts(xall, yall, idxall, best_rounds=best)\n        self.fitted = True\n\n    def predict(self, dataset: DatasetH, segment=\"test\"):\n        if not self.fitted:\n            raise ValueError(\"Model is not fitted yet!\")\n        feat = dataset.prepare(segment, col_set=\"feature\", data_key=DataHandlerLP.DK_I)\n        x, idx = feat.values.astype(np.float32), feat.index\n        states, s, e = self._state(x, idx)\n        pred = self._day_norm(np.column_stack([m.predict(x) for m in self.models]), s, e)\n        mem = (self.mem_states - self.state_mu) / self.state_sd\n        cur = (states - self.state_mu) / self.state_sd\n        mem /= np.linalg.norm(mem, axis=1, keepdims=True) + 1e-6\n        cur /= np.linalg.norm(cur, axis=1, keepdims=True) + 1e-6\n        out = np.zeros(len(idx), np.float32)\n        for i, (a, b) in enumerate(zip(s, e)):\n            sim = np.maximum(mem @ cur[i], 0.0)\n            if len(sim) > 64:\n                keep = np.argpartition(sim, -64)[-64:]\n                sim, ic = sim[keep], self.mem_ic[keep]\n            else:\n                ic = self.mem_ic\n            sim = sim * sim + 1e-6\n            local_ic = (sim[:, None] * ic).sum(0) / sim.sum()\n            drift = 1.0 - np.sqrt(sim.max())\n            z = 10.0 * (0.4 * self.base_ic + 
local_ic + drift * self.recency_bias)\n            w = np.exp(z - z.max()); w /= w.sum()\n            out[a:b] = pred[a:b] @ w\n            out[a:b] = (out[a:b] - out[a:b].mean()) / (out[a:b].std() + 1e-6)\n        return pd.Series(out, index=idx, name=\"score\")",
5  "after_line": 0,
6  "start_line": 16,
7  "end_line": 115
8}
Tool Result (Step 4)
OK: Replaced lines 16..115 with 115 line(s) in qlib/custom_model.py. Editable range: 16–130.

[Current file: qlib/custom_model.py | editable: 16–130 | total: 130 lines]
    13| 
    14| DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
    15| 
    16  # =====================================================================
    17  # EDITABLE: CustomModel — implement your stock prediction model here
    18  # =====================================================================
    19  class CustomModel(Model):
    20      def __init__(self):
    21          super().__init__()
    22          self.params = {
    23              "objective": "mse", "colsample_bytree": 0.8879, "learning_rate": 0.2,
    24              "subsample": 0.8789, "lambda_l1": 205.6999, "lambda_l2": 580.9768,
    25              "max_depth": 8, "num_leaves": 210, "num_threads": 20, "verbosity": -1,
    26          }
    27          self.halflives = (10**9, 504, 126, 42)
    28          self.recency_bias = np
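The drift-adaptation core of the Step 4 `predict` is a softmax gate over experts: each expert's per-day validation rank ICs are re-weighted by the current day's similarity to remembered validation states, and a drift term nudges weight toward the shorter-halflife experts when nothing in memory looks similar. An isolated sketch with toy numbers (the 10.0 temperature and 0.4 base-IC blend mirror the edit above; the helper name and inputs are illustrative):

```python
import numpy as np

def gate_weights(sim, mem_ic, base_ic, recency_bias, tau=10.0):
    """Blend global and similarity-local expert skill into softmax weights.

    sim          : (n_mem,)   cosine similarity of today's state to each
                              remembered validation day (may be negative)
    mem_ic       : (n_mem, k) per-day rank IC of each of the k experts
    base_ic      : (k,)       mean validation IC per expert
    recency_bias : (k,)       tilt toward short-halflife experts under drift
    """
    s = np.maximum(sim, 0.0) ** 2 + 1e-6           # emphasize close matches
    local_ic = (s[:, None] * mem_ic).sum(0) / s.sum()
    drift = 1.0 - np.sqrt(s.max())                 # low similarity => drift
    z = tau * (0.4 * base_ic + local_ic + drift * recency_bias)
    w = np.exp(z - z.max())                        # numerically stable softmax
    return w / w.sum()

sim = np.array([0.9, 0.1])                         # one close day, one far
mem_ic = np.array([[0.02, 0.05], [0.04, 0.01]])    # expert ICs on those days
w = gate_weights(sim, mem_ic, mem_ic.mean(0), np.array([-0.03, 0.03]))
# Weights sum to 1 and favor expert 2, which scored better on the similar day.
```

Because the gate is recomputed per test day, expert weights can shift as the test regime drifts away from anything seen in validation.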
Assistant (Step 5: test)
Tool Result (Step 5)
[Test #2] (1 test remaining; call submit(n=2) to submit this result)

### csi300 (scripts/csi300.sh)
Signal analysis (csi300):
SIGNAL_METRIC IC=0.029213
SIGNAL_METRIC ICIR=0.197607
SIGNAL_METRIC Rank_IC=0.050610
SIGNAL_METRIC Rank_ICIR=0.367592
Portfolio analysis (csi300):
PORTFOLIO_METRIC annualized_return=-0.010686
PORTFOLIO_METRIC max_drawdown=-0.133923
PORTFOLIO_METRIC information_ratio=-0.149388

### csi300_shifted (scripts/csi300_shifted.sh)
Signal analysis (csi300_shifted):
SIGNAL_METRIC IC=0.049605
SIGNAL_METRIC ICIR=0.344048
SIGNAL_METRIC Rank_IC=0.067318
SIGNAL_METRIC Rank_ICIR=0.501053
Portfolio analysis (csi300_shifted):
PORTFOLIO_METRIC annualized_return=0.167210
PORTFOLIO_METRIC max_drawdown=-0.039962
PORTFOLIO_METRIC information_ratio=2.152695

[Leaderboard] Results saved: {'ic_csi300': 0.029213, 'icir_csi300': 0.197607, 'rank_ic_csi300': 0.05061, 'rank_icir_csi300': 0.367592, 'annualized_return_csi300': -0.010686, 'max_drawdown_csi300': -0.133923, 'information_ratio_csi
Assistant (Step 6: submit)
Tool Input
1{
2  "n": 2
3}
Tool Result (Step 6)
[submit] Submitting result from test #2 as final.

### csi300 (scripts/csi300.sh)
Signal analysis (csi300):
SIGNAL_METRIC IC=0.029213
SIGNAL_METRIC ICIR=0.197607
SIGNAL_METRIC Rank_IC=0.050610
SIGNAL_METRIC Rank_ICIR=0.367592
Portfolio analysis (csi300):
PORTFOLIO_METRIC annualized_return=-0.010686
PORTFOLIO_METRIC max_drawdown=-0.133923
PORTFOLIO_METRIC information_ratio=-0.149388

### csi300_shifted (scripts/csi300_shifted.sh)
Signal analysis (csi300_shifted):
SIGNAL_METRIC IC=0.049605
SIGNAL_METRIC ICIR=0.344048
SIGNAL_METRIC Rank_IC=0.067318
SIGNAL_METRIC Rank_ICIR=0.501053
Portfolio analysis (csi300_shifted):
PORTFOLIO_METRIC annualized_return=0.167210
PORTFOLIO_METRIC max_drawdown=-0.039962
PORTFOLIO_METRIC information_ratio=2.152695

### csi300_recent (scripts/csi300_recent.sh)
Signal analysis (csi300_recent):
SIGNAL_METRIC IC=0.019905
SIGNAL_METRIC ICIR=0.147652
SIGNAL_METRIC Rank_IC=0.043928
SIGNAL_METRIC Rank_ICIR=0.312804
Portfolio analysis (csi300_recent):
PORTFOLIO_METRIC