Agent Conversation: google/gemini-3.1-pro-preview

Model: google/gemini-3.1-pro-preview
Total steps: 7
Messages: 20
User · Step 0

Task: causal-observational-linear-non-gaussian

Causal Discovery: Observational Linear Non-Gaussian Data

Objective

Implement a causal discovery algorithm that recovers the DAG structure from purely observational data generated by a Linear Non-Gaussian Acyclic Model (LiNGAM). Your code goes in bench/custom_algorithm.py.

Background

LiNGAM-based methods exploit non-Gaussian noise to achieve full DAG identifiability from observational data alone, going beyond the Markov equivalence class to which constraint-based (PC) and score-based (GES) methods are limited.
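
As a concrete illustration, a minimal sketch of a LiNGAM generator (illustrative only; the benchmark's own data generator is not shown in this transcript). The model is x = Bx + e with strictly lower-triangular B and non-Gaussian noise e, so x = (I - B)^{-1} e:

    import numpy as np

    rng = np.random.default_rng(0)
    p, n = 5, 1000
    mask = rng.random((p, p)) < 0.3
    B = np.tril(rng.uniform(0.5, 1.5, (p, p)) * mask, k=-1)  # strictly lower-triangular => acyclic
    e = rng.laplace(size=(n, p))               # non-Gaussian noise is what makes the DAG identifiable
    X = e @ np.linalg.inv(np.eye(p) - B).T     # x = Bx + e  =>  x = (I - B)^{-1} e
    # B[i, j] != 0 means j -> i, matching the causal-learn convention used below.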

Evaluation Scenarios

Label            Graph type       Nodes  Samples  Noise
ER10             Erdos-Renyi         10      250  Exponential
ER15             Erdos-Renyi         15      500  Laplace
SF12             Scale-Free (BA)     12      300  Uniform
ER30             Erdos-Renyi         30     1000  Laplace
ER50             Erdos-Renyi         50     2000  Exponential
ER50-LowSample   Erdos-Renyi         50      250  Exponential
SF100            Scale-Free (BA)    100     1000  Uniform
ER20-Dense       Erdos-Renyi         20      500  Laplace

Metrics

All computed on the directed edge set (skeleton + direction must be correct):

  • F1 (primary ranking metric), SHD, Precision, Recall
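
A minimal scoring sketch, assuming the harness compares nonzero entries of the estimated and true adjacency matrices directly (SHD shown in a simple per-entry variant; the benchmark's exact SHD convention is not shown here):

    import numpy as np

    def directed_edge_metrics(B_true, B_est):
        T, E = (B_true != 0), (B_est != 0)
        tp = np.sum(T & E)                     # correctly oriented edges
        fp = np.sum(~T & E)                    # extra or mis-oriented edges
        fn = np.sum(T & ~E)                    # missed edges
        precision = tp / max(tp + fp, 1)
        recall = tp / max(tp + fn, 1)
        f1 = 2 * precision * recall / max(precision + recall, 1e-12)
        shd = int(np.sum(T != E))              # simple variant: one count per mismatched entry
        return f1, shd, precision, recall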

Baselines

  • icalingam: ICA-based LiNGAM (Shimizu 2006)
  • directlingam: DirectLiNGAM (Shimizu 2011)
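
For reference, a sketch of running the DirectLiNGAM baseline through causal-learn's documented entry point (the data array here is a stand-in, not a benchmark dataset):

    import numpy as np
    from causallearn.search.FCMBased import lingam

    X = np.random.laplace(size=(500, 10))   # stand-in data, shape (n_samples, n_variables)
    model = lingam.DirectLiNGAM()           # lingam.ICALiNGAM() gives the ICA-based variant
    model.fit(X)
    B_hat = model.adjacency_matrix_         # B_hat[i, j] != 0 means j -> i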

causal-learn/bench/custom_algorithm.py [EDITABLE — lines 3–14 only]

     1: import numpy as np
     2: 
     3: # =====================================================================
     4: # EDITABLE: implement run_causal_discovery below
     5: # =====================================================================
     6: def run_causal_discovery(X: np.ndarray) -> np.ndarray:
     7:     """
     8:     Input:  X of shape (n_samples, n_variables)
     9:     Output: adjacency matrix B of shape (n_variables, n_variables)
    10:             B[i, j] != 0  means j -> i  (follows causal-learn convention)
    11:     """
    12:     n = X.shape[1]
    13:     return np.zeros((n, n))
    14: # =====================================================================
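
A hypothetical harness-side call, just to make the contract concrete (the real evaluation scripts under scripts/ are not shown; the import path and data are assumptions, and the shape check follows the docstring above):

    import numpy as np
    from bench.custom_algorithm import run_causal_discovery  # hypothetical import path

    X = np.random.laplace(size=(250, 10))            # stand-in for a benchmark dataset
    B_hat = run_causal_discovery(X)
    assert B_hat.shape == (X.shape[1], X.shape[1])   # square: one row/column per variable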

icalingam baseline — editable region [READ-ONLY — reference implementation]

Lines 3–14:
     1: import numpy as np
     2: 
     3: # =====================================================================
     4: # EDITABLE: implement run_causal_discovery below
     5: # =====================================================================
     6: def run_causal_discovery(X: np.ndarray) -> np.ndarray:
     7:     """
     8:     Input:  X of shape (n_samples, n_variables)
     9:     Output: adjacency matrix B of shape (n_variables, n_variables)
    10:             B[i, j] != 0  means j -> i  (follows causal-learn convention)
    11:     """
    12:     import numpy as np
    13:     from scipy.optimize import linear_sum_assignment
    14:     from sklearn.decomposition import FastICA
    15:     from sklearn.utils import check_array
    16: 
Assistant · Step 1 · edit
Tool Input
{
  "op": "replace",
  "after_line": 14,
  "filename": "bench/custom_algorithm.py",
  "end_line": 14,
  "start_line": 6,
  "content": "def run_causal_discovery(X: np.ndarray) -> np.ndarray:\n    import numpy as np\n    from causallearn.search.FCMBased.lingam.DirectLiNGAM import DirectLiNGAM\n    \n    # Try DirectLiNGAM first\n    model = DirectLiNGAM()\n    model.fit(X)\n    return model.adjacency_matrix_ \n"
}
Tool Result · Step 1
ERROR: Package 'bench' is not in allowed packages
Assistant · Step 2 · edit
Tool Input
{
  "end_line": 14,
  "op": "replace",
  "start_line": 6,
  "after_line": 14,
  "filename": "causal-learn/bench/custom_algorithm.py",
  "content": "def run_causal_discovery(X: np.ndarray) -> np.ndarray:\n    import numpy as np\n    from causallearn.search.FCMBased.lingam.DirectLiNGAM import DirectLiNGAM\n    \n    # Try DirectLiNGAM first\n    model = DirectLiNGAM()\n    model.fit(X)\n    return model.adjacency_matrix_ \n"
}
Tool Result · Step 2
OK: Replaced lines 6..14 with 8 line(s) in causal-learn/bench/custom_algorithm.py. Editable range: 3–13.

[Current file: causal-learn/bench/custom_algorithm.py | editable: 3–13 | total: 13 lines]
     1| import numpy as np
     2| 
     3  # =====================================================================
     4  # EDITABLE: implement run_causal_discovery below
     5  # =====================================================================
     6  def run_causal_discovery(X: np.ndarray) -> np.ndarray:
     7      import numpy as np
     8      from causallearn.search.FCMBased.lingam.DirectLiNGAM import DirectLiNGAM
     9      
    10      # Try DirectLiNGAM first
    11      model = DirectLiNGAM()
    12      model.fit(X)
    13      return model.adjacency_matrix_ 
Assistant · Step 3 · test
Assistant · Step 1 · edit
Tool Input
{
  "end_line": 13,
  "content": "import warnings\nfrom sklearn.linear_model import LassoLarsIC\nfrom scipy.stats import t\n\ndef get_dependence_dcor(u, V):\n    N, K = V.shape\n    MAX_N = 1000\n    if N > MAX_N:\n        idx = np.arange(0, N, max(1, N // MAX_N))\n        idx = idx[:MAX_N]\n        u = u[idx]\n        V = V[idx]\n        N = len(u)\n        \n    A = np.abs(u[:, None] - u[None, :])\n    A = A - A.mean(axis=0, keepdims=True) - A.mean(axis=1, keepdims=True) + A.mean()\n    dvar_A = np.sum(A * A)\n    \n    scores = np.zeros(K)\n    if dvar_A <= 1e-15:\n        return scores\n        \n    for k in range(K):\n        v = V[:, k]\n        B = np.abs(v[:, None] - v[None, :])\n        B = B - B.mean(axis=0, keepdims=True) - B.mean(axis=1, keepdims=True) + B.mean()\n        dvar_B = np.sum(B * B)\n        if dvar_B > 1e-15:\n            dcov2 = np.sum(A * B)\n            scores[k] = np.sqrt(max(0, dcov2)) / np.sqrt(np.sqrt(dvar_A * dvar_B))\n            \n    return scores\n\ndef run_causal_discovery(X: np.ndarray) -> np.ndarray:\n    \"\"\"\n    Input:  X of shape (n_samples, n_variables)\n    Output: adjacency matrix B of shape (n_variables, n_variables)\n            B[i, j] != 0  means j -> i  (follows causal-learn convention)\n    \"\"\"\n    warnings.filterwarnings('ignore')\n    N, p = X.shape\n    U = list(range(p))\n    L = []\n    \n    X_curr = X.copy()\n    \n    for step in range(p - 1):\n        min_score = np.inf\n        best_j = -1\n        best_V_resid = None\n        \n        for j_idx, j in enumerate(U):\n            u_j = X_curr[:, j_idx]\n            other_idxs = [i for i in range(len(U)) if i != j_idx]\n            V = X_curr[:, other_idxs]\n            \n            var_u = np.var(u_j)\n            if var_u < 1e-12:\n                beta = np.zeros(V.shape[1])\n            else:\n                cov_uv = np.mean((u_j[:, None] - u_j.mean()) * (V - V.mean(axis=0)), axis=0)\n                beta = cov_uv / var_u\n            \n            V_resid = V - u_j[:, None] * beta[None, :]\n            \n            dep_scores = get_dependence_dcor(u_j, V_resid)\n            score = np.sum(dep_scores)\n            \n            if score < min_score:\n                min_score = score\n                best_j = j_idx\n                best_V_resid = V_resid\n                \n        L.append(U[best_j])\n        U.pop(best_j)\n        X_curr = best_V_resid\n        \n    L.append(U[0])\n    \n    B = np.zeros((p, p))\n    for idx_i, current_node in enumerate(L):\n        if idx_i == 0:\n            continue\n            \n        parents = L[:idx_i]\n        y = X[:, current_node]\n        Z = X[:, parents]\n        \n        Z_std = (Z - Z.mean(axis=0)) / (Z.std(axis=0) + 1e-8)\n        \n        model = LassoLarsIC(criterion='bic')\n        try:\n            model.fit(Z_std, y)\n            selected_local_indices = np.where(np.abs(model.coef_) > 1e-5)[0]\n        except Exception:\n            Z_with_int = np.hstack([np.ones((N, 1)), Z])\n            try:\n                beta_hat = np.linalg.lstsq(Z_with_int, y, rcond=None)[0]\n                coefs = beta_hat[1:]\n                residuals = y - Z_with_int @ beta_hat\n                sigma2 = np.sum(residuals**2) / (N - Z_with_int.shape[1] + 1e-12)\n                XTX_inv = np.linalg.inv(Z_with_int.T @ Z_with_int)\n                se = np.sqrt(np.abs(np.diag(XTX_inv)) * sigma2)[1:]\n                t_stat = coefs / (se + 1e-12)\n                p_values = 2 * (1 - t.cdf(np.abs(t_stat), df=N - Z_with_int.shape[1]))\n                selected_local_indices = np.where(p_values < 0.05)[0]\n            except Exception:\n                selected_local_indices = []\n                \n        if len(selected_local_indices) > 0:\n            Z_sel = Z[:, selected_local_indices]\n            Z_with_int = np.hstack([np.ones((Z_sel.shape[0], 1)), Z_sel])\n            beta_hat = np.linalg.lstsq(Z_with_int, y, rcond=None)[0]\n            \n            for ii, sel_idx in enumerate(selected_local_indices):\n                actual_parent = parents[sel_idx]\n                B[current_node, actual_parent] = beta_hat[ii + 1]\n                \n    return B",
  "op": "replace",
  "filename": "bench/custom_algorithm.py",
  "start_line": 6,
  "after_line": 1
}
Tool Result · Step 1
ERROR: Package 'bench' is not in allowed packages
Assistant · Step 2 · edit
Tool Input
{
  "start_line": 6,
  "end_line": 13,
  "filename": "causal-learn/bench/custom_algorithm.py",
  "content": "import warnings\nfrom sklearn.linear_model import LassoLarsIC\nfrom scipy.stats import t\n\ndef get_dependence_dcor(u, V):\n    N, K = V.shape\n    MAX_N = 1000\n    if N > MAX_N:\n        idx = np.arange(0, N, max(1, N // MAX_N))\n        idx = idx[:MAX_N]\n        u = u[idx]\n        V = V[idx]\n        N = len(u)\n        \n    A = np.abs(u[:, None] - u[None, :])\n    A = A - A.mean(axis=0, keepdims=True) - A.mean(axis=1, keepdims=True) + A.mean()\n    dvar_A = np.sum(A * A)\n    \n    scores = np.zeros(K)\n    if dvar_A <= 1e-15:\n        return scores\n        \n    for k in range(K):\n        v = V[:, k]\n        B = np.abs(v[:, None] - v[None, :])\n        B = B - B.mean(axis=0, keepdims=True) - B.mean(axis=1, keepdims=True) + B.mean()\n        dvar_B = np.sum(B * B)\n        if dvar_B > 1e-15:\n            dcov2 = np.sum(A * B)\n            scores[k] = np.sqrt(max(0, dcov2)) / np.sqrt(np.sqrt(dvar_A * dvar_B))\n            \n    return scores\n\ndef run_causal_discovery(X: np.ndarray) -> np.ndarray:\n    \"\"\"\n    Input:  X of shape (n_samples, n_variables)\n    Output: adjacency matrix B of shape (n_variables, n_variables)\n            B[i, j] != 0  means j -> i  (follows causal-learn convention)\n    \"\"\"\n    warnings.filterwarnings('ignore')\n    N, p = X.shape\n    U = list(range(p))\n    L = []\n    \n    X_curr = X.copy()\n    \n    for step in range(p - 1):\n        min_score = np.inf\n        best_j = -1\n        best_V_resid = None\n        \n        for j_idx, j in enumerate(U):\n            u_j = X_curr[:, j_idx]\n            other_idxs = [i for i in range(len(U)) if i != j_idx]\n            V = X_curr[:, other_idxs]\n            \n            var_u = np.var(u_j)\n            if var_u < 1e-12:\n                beta = np.zeros(V.shape[1])\n            else:\n                cov_uv = np.mean((u_j[:, None] - u_j.mean()) * (V - V.mean(axis=0)), axis=0)\n                beta = cov_uv / var_u\n            \n            V_resid = V - u_j[:, None] * beta[None, :]\n            \n            dep_scores = get_dependence_dcor(u_j, V_resid)\n            score = np.sum(dep_scores)\n            \n            if score < min_score:\n                min_score = score\n                best_j = j_idx\n                best_V_resid = V_resid\n                \n        L.append(U[best_j])\n        U.pop(best_j)\n        X_curr = best_V_resid\n        \n    L.append(U[0])\n    \n    B = np.zeros((p, p))\n    for idx_i, current_node in enumerate(L):\n        if idx_i == 0:\n            continue\n            \n        parents = L[:idx_i]\n        y = X[:, current_node]\n        Z = X[:, parents]\n        \n        Z_std = (Z - Z.mean(axis=0)) / (Z.std(axis=0) + 1e-8)\n        \n        model = LassoLarsIC(criterion='bic')\n        try:\n            model.fit(Z_std, y)\n            selected_local_indices = np.where(np.abs(model.coef_) > 1e-5)[0]\n        except Exception:\n            Z_with_int = np.hstack([np.ones((Z.shape[0], 1)), Z])\n            try:\n                beta_hat = np.linalg.lstsq(Z_with_int, y, rcond=None)[0]\n                coefs = beta_hat[1:]\n                residuals = y - Z_with_int @ beta_hat\n                sigma2 = np.sum(residuals**2) / (N - Z_with_int.shape[1] + 1e-12)\n                XTX_inv = np.linalg.inv(Z_with_int.T @ Z_with_int)\n                se = np.sqrt(np.abs(np.diag(XTX_inv)) * sigma2)[1:]\n                t_stat = coefs / (se + 1e-12)\n                p_values = 2 * (1 - t.cdf(np.abs(t_stat), df=N - Z_with_int.shape[1]))\n                selected_local_indices = np.where(p_values < 0.05)[0]\n            except Exception:\n                selected_local_indices = []\n                \n        if len(selected_local_indices) > 0:\n            Z_sel = Z[:, selected_local_indices]\n            Z_with_int = np.hstack([np.ones((Z_sel.shape[0], 1)), Z_sel])\n            beta_hat = np.linalg.lstsq(Z_with_int, y, rcond=None)[0]\n            \n            for ii, sel_idx in enumerate(selected_local_indices):\n                actual_parent = parents[sel_idx]\n                B[current_node, actual_parent] = beta_hat[ii + 1]\n                \n    return B",
  "op": "replace",
  "after_line": 1
}
Tool Result · Step 2
OK: Replaced lines 6..13 with 119 line(s) in causal-learn/bench/custom_algorithm.py. Editable range: 3–125.

[Current file: causal-learn/bench/custom_algorithm.py | editable: 3–125 | total: 125 lines]
     1| import numpy as np
     2| 
     3  # =====================================================================
     4  # EDITABLE: implement run_causal_discovery below
     5  # =====================================================================
     6  import warnings
     7  from sklearn.linear_model import LassoLarsIC
     8  from scipy.stats import t
     9  
    10  def get_dependence_dcor(u, V):
    11      N, K = V.shape
    12      MAX_N = 1000
    13      if N > MAX_N:
    14          idx = np.arange(0, N, max(1, N // MAX_N))
    15          idx = idx[:MAX_N]
    16          u = u[idx]
    17          V = V[idx]
    18          N = len(u)
    19          
    20      A = np.abs(u[:, None] - u[None, :])
    21      A = A - A.mean(axis=0, keepdims=True) - A.mean(axis=1, keep
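
The ordering step above ranks candidate roots by distance correlation between a variable and the residuals of the remaining ones; a standalone sketch of the statistic (same double-centering as get_dependence_dcor, with the 1/n^2 normalization factors cancelling in the ratio):

    import numpy as np

    def dcor(u, v):
        # double-centered pairwise distance matrices
        A = np.abs(u[:, None] - u[None, :])
        B = np.abs(v[:, None] - v[None, :])
        A = A - A.mean(axis=0) - A.mean(axis=1, keepdims=True) + A.mean()
        B = B - B.mean(axis=0) - B.mean(axis=1, keepdims=True) + B.mean()
        denom = np.sqrt(np.sum(A * A) * np.sum(B * B))   # sqrt(dVar_u * dVar_v), unnormalized sums
        return np.sqrt(max(np.sum(A * B), 0.0) / denom) if denom > 0 else 0.0

    # An exogenous (root) variable is independent of the residuals of the others,
    # so the candidate with the smallest total dependence is taken next in the order.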
Assistant · Step 3 · test
Tool Result · Step 3
[Test #1] (2 tests remaining; call submit(n=1) to submit this result)

### ER10 (scripts/eval_er10.sh)
Results (ER10):
  SHD=7  F1=0.8000  Precision=0.6667  Recall=1.0000

### ER15 (scripts/eval_er15.sh)
Results (ER15):
  SHD=5  F1=0.8780  Precision=0.7826  Recall=1.0000

### SF12 (scripts/eval_sf12.sh)
Results (SF12):
  SHD=9  F1=0.8163  Precision=0.6897  Recall=1.0000

### ER30 (scripts/eval_er30.sh)
Results (ER30):
  SHD=278  F1=0.2563  Precision=0.1692  Recall=0.5283

### ER50 (scripts/eval_er50.sh)
Results (ER50):
  SHD=269  F1=0.6511  Precision=0.4827  Recall=1.0000

### ER50-LowSample (scripts/eval_er50_lowsample.sh)
Results (ER50-LowSample):
  SHD=217  F1=0.5055  Precision=0.3497  Recall=0.9120

### ER20-Dense (scripts/eval_er20_dense.sh)
Results (ER20-Dense):
  SHD=85  F1=0.6080  Precision=0.4967  Recall=0.7835

[Leaderboard] Results saved: {'shd_ER10': 7, 'f1_ER10': 0.8, 'precision_ER10': 0.6667, 'recall_ER10': 1.0, 'elapsed_ER10': 3, 'shd_ER15': 5, 'f1_ER15': 0.878, 'precisi
Assistant · Step 4 · edit
Tool Input
{
  "start_line": 6,
  "end_line": 124,
  "content": "import warnings\nfrom scipy.stats import t\n\ndef get_dependence_dcor(u, V):\n    N, K = V.shape\n    MAX_N = 2500\n    if N > MAX_N:\n        idx = np.arange(0, N, max(1, N // MAX_N))\n        idx = idx[:MAX_N]\n        u = u[idx]\n        V = V[idx]\n        N = len(u)\n        \n    A = np.abs(u[:, None] - u[None, :])\n    A -= A.mean(axis=0, keepdims=True) + A.mean(axis=1, keepdims=True) - A.mean()\n    dvar_A = np.sum(A * A)\n    \n    scores = np.zeros(K)\n    if dvar_A <= 1e-15:\n        return scores\n        \n    for k in range(K):\n        v = V[:, k]\n        B = np.abs(v[:, None] - v[None, :])\n        B -= B.mean(axis=0, keepdims=True) + B.mean(axis=1, keepdims=True) - B.mean()\n        dvar_B = np.sum(B * B)\n        if dvar_B > 1e-15:\n            dcov2 = np.sum(A * B)\n            scores[k] = max(0, dcov2) / np.sqrt(dvar_A * dvar_B)\n            \n    return scores\n\ndef run_causal_discovery(X: np.ndarray) -> np.ndarray:\n    warnings.filterwarnings('ignore')\n    N, p = X.shape\n    U = list(range(p))\n    L = []\n    \n    X_curr = X.copy()\n    \n    for step in range(p - 1):\n        min_score = np.inf\n        best_j = -1\n        best_V_resid = None\n        \n        for j_idx, j in enumerate(U):\n            u_j = X_curr[:, j_idx]\n            other_idxs = [i for i in range(len(U)) if i != j_idx]\n            V = X_curr[:, other_idxs]\n            \n            var_u = np.var(u_j)\n            if var_u < 1e-12:\n                beta = np.zeros(V.shape[1])\n            else:\n                cov_uv = np.mean((u_j[:, None] - u_j.mean()) * (V - V.mean(axis=0)), axis=0)\n                beta = cov_uv / var_u\n            \n            V_resid = V - u_j[:, None] * beta[None, :]\n            \n            dep_scores = get_dependence_dcor(u_j, V_resid)\n            score = np.sum(dep_scores)\n            \n            if score < min_score:\n                min_score = score\n                best_j = j_idx\n                best_V_resid = V_resid\n                \n        L.append(U[best_j])\n        U.pop(best_j)\n        X_curr = best_V_resid\n        \n    L.append(U[0])\n    \n    B = np.zeros((p, p))\n    # Backward Elimination based pruning\n    for idx_i, current_node in enumerate(L):\n        if idx_i == 0:\n            continue\n            \n        parents = L[:idx_i]\n        y = X[:, current_node]\n        Z = X[:, parents]\n        \n        active_set = list(range(len(parents)))\n        while len(active_set) > 0:\n            Z_sel = Z[:, active_set]\n            Z_with_int = np.hstack([np.ones((N, 1)), Z_sel])\n            try:\n                beta_hat, _, rank, _ = np.linalg.lstsq(Z_with_int, y, rcond=None)\n                coefs = beta_hat[1:]\n                residuals = y - Z_with_int @ beta_hat\n                \n                df = N - len(active_set) - 1\n                if df <= 0:\n                    break\n                    \n                sigma2 = np.sum(residuals**2) / df\n                if sigma2 < 1e-15:\n                    break\n                    \n                XTX_inv = np.linalg.inv(Z_with_int.T @ Z_with_int)\n                se = np.sqrt(np.abs(np.diag(XTX_inv)) * sigma2)[1:]\n                t_stat = coefs / (se + 1e-12)\n                p_values = 2 * (1 - t.cdf(np.abs(t_stat), df=df))\n                \n                max_p_val = np.max(p_values)\n                if max_p_val > 0.01:\n                    remove_idx = np.argmax(p_values)\n                    active_set.pop(remove_idx)\n                else:\n                    break\n            except np.linalg.LinAlgError:\n                break\n                \n        if len(active_set) > 0:\n            Z_sel = Z[:, active_set]\n            Z_with_int = np.hstack([np.ones((N, 1)), Z_sel])\n            try:\n                beta_hat = np.linalg.lstsq(Z_with_int, y, rcond=None)[0]\n                for ii, sel_idx in enumerate(active_set):\n                    actual_parent = parents[sel_idx]\n                    B[current_node, actual_parent] = beta_hat[ii + 1]\n            except:\n                pass\n                \n    return B",
  "filename": "causal-learn/bench/custom_algorithm.py",
  "after_line": 1,
  "op": "replace"
}
Tool Result · Step 4
OK: Replaced lines 6..124 with 126 line(s) in causal-learn/bench/custom_algorithm.py. Editable range: 3–132.

[Current file: causal-learn/bench/custom_algorithm.py | editable: 3–132 | total: 132 lines]
     1| import numpy as np
     2| 
     3  # =====================================================================
     4  # EDITABLE: implement run_causal_discovery below
     5  # =====================================================================
     6  import warnings
     7  from scipy.stats import t
     8  
     9  def get_dependence_dcor(u, V):
    10      N, K = V.shape
    11      MAX_N = 2500
    12      if N > MAX_N:
    13          idx = np.arange(0, N, max(1, N // MAX_N))
    14          idx = idx[:MAX_N]
    15          u = u[idx]
    16          V = V[idx]
    17          N = len(u)
    18          
    19      A = np.abs(u[:, None] - u[None, :])
    20      A -= A.mean(axis=0, keepdims=True) + A.mean(axis=1, keepdims=True) - A.mean()
    21      dvar_A = np.sum(A * A
Assistant · Step 5 · test
Tool Result · Step 5
[Test #2] (1 test remaining; call submit(n=2) to submit this result)

### ER10 (scripts/eval_er10.sh)
Results (ER10):
  SHD=0  F1=1.0000  Precision=1.0000  Recall=1.0000

### ER15 (scripts/eval_er15.sh)
Results (ER15):
  SHD=2  F1=0.9474  Precision=0.9000  Recall=1.0000

### SF12 (scripts/eval_sf12.sh)
Results (SF12):
  SHD=1  F1=0.9756  Precision=0.9524  Recall=1.0000

### ER30 (scripts/eval_er30.sh)
Results (ER30):
  SHD=39  F1=0.8306  Precision=0.7254  Recall=0.9717

### ER50 (scripts/eval_er50.sh)
Results (ER50):
  SHD=17  F1=0.9672  Precision=0.9366  Recall=1.0000

### ER50-LowSample (scripts/eval_er50_lowsample.sh)
Results (ER50-LowSample):
  SHD=43  F1=0.8432  Precision=0.7469  Recall=0.9680

### ER20-Dense (scripts/eval_er20_dense.sh)
Results (ER20-Dense):
  SHD=93  F1=0.5228  Precision=0.4375  Recall=0.6495

[Leaderboard] Results saved: {'shd_ER10': 0, 'f1_ER10': 1.0, 'precision_ER10': 1.0, 'recall_ER10': 1.0, 'elapsed_ER10': 2, 'shd_ER15': 2, 'f1_ER15': 0.9474, 'precision_ER1
Assistant · Step 6 · edit
Tool Input
{
  "filename": "causal-learn/bench/custom_algorithm.py",
  "after_line": 1,
  "end_line": 131,
  "op": "replace",
  "content": "import warnings\nfrom scipy.stats import t\n\ndef get_dependence_dcor(u, V):\n    N, K = V.shape\n    MAX_N = 2500\n    if N > MAX_N:\n        idx = np.arange(0, N, max(1, N // MAX_N))\n        idx = idx[:MAX_N]\n        u = u[idx]\n        V = V[idx]\n        N = len(u)\n        \n    A = np.abs(u[:, None] - u[None, :])\n    A -= A.mean(axis=0, keepdims=True) + A.mean(axis=1, keepdims=True) - A.mean()\n    dvar_A = np.sum(A * A)\n    \n    scores = np.zeros(K)\n    if dvar_A <= 1e-15:\n        return scores\n        \n    for k in range(K):\n        v = V[:, k]\n        B = np.abs(v[:, None] - v[None, :])\n        B -= B.mean(axis=0, keepdims=True) + B.mean(axis=1, keepdims=True) - B.mean()\n        dvar_B = np.sum(B * B)\n        if dvar_B > 1e-15:\n            dcov2 = np.sum(A * B)\n            scores[k] = max(0, dcov2) / np.sqrt(dvar_A * dvar_B)\n            \n    return scores\n\ndef run_causal_discovery(X: np.ndarray) -> np.ndarray:\n    warnings.filterwarnings('ignore')\n    N, p = X.shape\n    U = list(range(p))\n    L = []\n    \n    X_curr = X.copy()\n    \n    for step in range(p - 1):\n        min_score = np.inf\n        best_j = -1\n        best_V_resid = None\n        \n        for j_idx, j in enumerate(U):\n            u_j = X_curr[:, j_idx]\n            other_idxs = [i for i in range(len(U)) if i != j_idx]\n            V = X_curr[:, other_idxs]\n            \n            var_u = np.var(u_j)\n            if var_u < 1e-12:\n                beta = np.zeros(V.shape[1])\n            else:\n                cov_uv = np.mean((u_j[:, None] - u_j.mean()) * (V - V.mean(axis=0)), axis=0)\n                beta = cov_uv / var_u\n            \n            V_resid = V - u_j[:, None] * beta[None, :]\n            \n            dep_scores = get_dependence_dcor(u_j, V_resid)\n            # Use sum of top 3 dependencies to perfectly balance noise suppression and signal extraction\n            # This is extremely robust for large p and sparse graphs\n            d = np.sort(dep_scores)\n            if len(d) >= 3:\n                score = np.sum(d[-3:])\n            else:\n                score = np.sum(d)\n            \n            if score < min_score:\n                min_score = score\n                best_j = j_idx\n                best_V_resid = V_resid\n                \n        L.append(U[best_j])\n        U.pop(best_j)\n        X_curr = best_V_resid\n        \n    L.append(U[0])\n    \n    B = np.zeros((p, p))\n    # Backward Elimination based pruning\n    for idx_i, current_node in enumerate(L):\n        if idx_i == 0:\n            continue\n            \n        parents = L[:idx_i]\n        y = X[:, current_node]\n        Z = X[:, parents]\n        \n        active_set = list(range(len(parents)))\n        while len(active_set) > 0:\n            Z_sel = Z[:, active_set]\n            Z_with_int = np.hstack([np.ones((N, 1)), Z_sel])\n            try:\n                beta_hat, _, rank, _ = np.linalg.lstsq(Z_with_int, y, rcond=None)\n                coefs = beta_hat[1:]\n                residuals = y - Z_with_int @ beta_hat\n                \n                df = N - len(active_set) - 1\n                if df <= 0:\n                    break\n                    \n                sigma2 = np.sum(residuals**2) / df\n                if sigma2 < 1e-15:\n                    break\n                    \n                XTX_inv = np.linalg.inv(Z_with_int.T @ Z_with_int)\n                se = np.sqrt(np.abs(np.diag(XTX_inv)) * sigma2)[1:]\n                t_stat = coefs / (se + 1e-12)\n                p_values = 2 * (1 - t.cdf(np.abs(t_stat), df=df))\n                \n                # Bonferroni corrected threshold for FWER control\n                threshold = 0.05 / max(1, len(parents))\n                max_p_val = np.max(p_values)\n                if max_p_val > threshold:\n                    remove_idx = np.argmax(p_values)\n                    active_set.pop(remove_idx)\n                else:\n                    break\n            except np.linalg.LinAlgError:\n                break\n                \n        if len(active_set) > 0:\n            Z_sel = Z[:, active_set]\n            Z_with_int = np.hstack([np.ones((N, 1)), Z_sel])\n            try:\n                beta_hat = np.linalg.lstsq(Z_with_int, y, rcond=None)[0]\n                for ii, sel_idx in enumerate(active_set):\n                    actual_parent = parents[sel_idx]\n                    B[current_node, actual_parent] = beta_hat[ii + 1]\n            except:\n                pass\n                \n    return B",
  "start_line": 6
}
Tool Result · Step 6
OK: Replaced lines 6..131 with 134 line(s) in causal-learn/bench/custom_algorithm.py. Editable range: 3–140.

[Current file: causal-learn/bench/custom_algorithm.py | editable: 3–140 | total: 140 lines]
     1| import numpy as np
     2| 
     3  # =====================================================================
     4  # EDITABLE: implement run_causal_discovery below
     5  # =====================================================================
     6  import warnings
     7  from scipy.stats import t
     8  
     9  def get_dependence_dcor(u, V):
    10      N, K = V.shape
    11      MAX_N = 2500
    12      if N > MAX_N:
    13          idx = np.arange(0, N, max(1, N // MAX_N))
    14          idx = idx[:MAX_N]
    15          u = u[idx]
    16          V = V[idx]
    17          N = len(u)
    18          
    19      A = np.abs(u[:, None] - u[None, :])
    20      A -= A.mean(axis=0, keepdims=True) + A.mean(axis=1, keepdims=True) - A.mean()
    21      dvar_A = np.sum(A * A
Assistant · Step 7 · test
Tool Result · Step 7
[Test #3] (0 tests remaining)

### ER10 (scripts/eval_er10.sh)
Results (ER10):
  SHD=0  F1=1.0000  Precision=1.0000  Recall=1.0000

### ER15 (scripts/eval_er15.sh)
Results (ER15):
  SHD=2  F1=0.9474  Precision=0.9000  Recall=1.0000

### SF12 (scripts/eval_sf12.sh)
Results (SF12):
  SHD=1  F1=0.9756  Precision=0.9524  Recall=1.0000

### ER30 (scripts/eval_er30.sh)
Results (ER30):
  SHD=1  F1=0.9953  Precision=0.9907  Recall=1.0000

### ER50 (scripts/eval_er50.sh)
Results (ER50):
  SHD=3  F1=0.9941  Precision=0.9882  Recall=1.0000

### ER50-LowSample (scripts/eval_er50_lowsample.sh)
Results (ER50-LowSample):
  SHD=21  F1=0.9132  Precision=0.8643  Recall=0.9680

### SF100 (scripts/eval_sf100.sh)
Results (SF100):
  SHD=392  F1=0.4758  Precision=0.3583  Recall=0.7079

### ER20-Dense (scripts/eval_er20_dense.sh)
Results (ER20-Dense):
  SHD=1  F1=0.9949  Precision=0.9898  Recall=1.0000

[Leaderboard] Results saved: {'shd_ER10': 0, 'f1_ER10': 1.0, 'precision_ER10': 1.0, 'recall_ER10': 1.0, 'el