ml-feature-selection

Classical ML · scikit-learn · rigorous codebase

Description

Feature Selection Method Design

Research Question

Design a novel univariate feature scoring method that identifies the most informative features for classification, generalizing across diverse data modalities (text, vision, tabular).

Background

Feature selection is a fundamental preprocessing step in machine learning. By removing irrelevant or redundant features, it can improve model accuracy, reduce overfitting, and speed up training. Classical univariate methods score each feature independently based on its relationship with the target variable:

  • Chi-squared test: Measures departure from independence between feature and target using contingency tables. Works best with non-negative, count-like features.
  • ANOVA F-value (f_classif): Computes the ratio of between-class variance to within-class variance. Effective for normally-distributed features with different means per class.
  • Mutual Information: Estimates the mutual information between each feature and the target via k-nearest neighbors. Captures non-linear dependencies but is computationally expensive.
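All three scorers share the same call shape in scikit-learn: each takes `(X, y)` and returns one score per feature (chi2 and f_classif also return p-values). A minimal comparison on synthetic data, purely illustrative and not part of the benchmark:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import chi2, f_classif, mutual_info_classif

# Small synthetic problem: 10 features, 3 of them informative.
X, y = make_classification(n_samples=200, n_features=10, n_informative=3,
                           random_state=0)
X = X - X.min(axis=0)  # chi2 requires non-negative features

chi2_scores, _ = chi2(X, y)                       # contingency-table statistic
f_scores, _ = f_classif(X, y)                     # between/within variance ratio
mi_scores = mutual_info_classif(X, y, random_state=0)  # kNN-based MI estimate

print(chi2_scores.shape, f_scores.shape, mi_scores.shape)  # each is (10,)
```

Note that the three scores live on different scales (statistics vs. nats), which is exactly why combining them robustly is non-trivial.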

Each method has strengths and weaknesses depending on the data distribution. The task is to design a scoring function that performs robustly across different data types and class structures.

Task

Implement the score_features(X, y) function in custom_featsel.py. Given a training feature matrix X and integer class labels y, return a 1-D numpy array of non-negative importance scores (one per feature). The top-k features (by score) will be selected and used to train a LogisticRegression classifier.

Interface

def score_features(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """
    Args:
        X: (n_samples, n_features) non-negative float array
        y: (n_samples,) integer class labels

    Returns:
        scores: (n_features,) non-negative float array
    """

Available imports (already at top of file): numpy, scipy (via sklearn), sklearn.feature_selection (mutual_info_classif, chi2, f_classif), sklearn.preprocessing, sklearn.metrics.

Evaluation

Submissions are evaluated on three classification benchmarks spanning different data modalities:

  • 20newsgroups: 10,000 TF-IDF text features, 20 classes, top-500 selected
  • MNIST: 784 pixel intensity features, 10 digit classes, top-200 selected
  • Madelon: 500 synthetic features (20 informative + 480 noisy), binary classification, top-20 selected

Metric: test classification accuracy using LogisticRegression on the selected features (higher is better).
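The select-then-classify protocol can be sketched as follows. This uses synthetic data and an assumed `score_features` (here just the ANOVA F-value) as stand-ins; the real harness runs the three benchmarks above with their respective top-k values.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def score_features(X, y):
    # Placeholder scorer for the sketch; the task is to improve on this.
    scores, _ = f_classif(X, y)
    return np.nan_to_num(scores)

X, y = make_classification(n_samples=400, n_features=50, n_informative=5,
                           random_state=0)
X = X - X.min(axis=0)  # keep features non-negative, as the interface assumes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

k = 10
# Score on the training split only, then keep the k highest-scoring features.
top_k = np.argsort(score_features(X_tr, y_tr))[-k:]
clf = LogisticRegression(max_iter=1000).fit(X_tr[:, top_k], y_tr)
acc = accuracy_score(y_te, clf.predict(X_te[:, top_k]))
print(f"accuracy with top-{k} features: {acc:.3f}")
```

Scoring on the training split alone mirrors the evaluation: test data influences neither feature selection nor classifier fitting.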

Code

custom_featsel.py
# Custom feature selection method for MLS-Bench
#
# EDITABLE section: score_features() function.
# FIXED sections: everything else (data loading, classifier, evaluation).
import os
import warnings
import numpy as np
from pathlib import Path

from sklearn.datasets import fetch_20newsgroups, fetch_openml
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.metrics import accuracy_score

Results

| Model                         | Type     | accuracy 20newsgroups | accuracy mnist | accuracy madelon |
|-------------------------------|----------|-----------------------|----------------|------------------|
| chi2                          | baseline | 0.556                 | 0.889          | 0.594            |
| f_classif                     | baseline | 0.547                 | 0.898          | 0.588            |
| mutual_info                   | baseline | 0.468                 | 0.896          | 0.612            |
| deepseek-reasoner             | vanilla  | 0.521                 | 0.898          | 0.610            |
| google/gemini-3.1-pro-preview | vanilla  | 0.528                 | 0.892          | 0.604            |
| openai/gpt-5.4                | vanilla  | 0.471                 | 0.890          | 0.613            |
| qwen/qwen3.6-plus             | vanilla  | 0.537                 | 0.892          | 0.594            |
| deepseek-reasoner             | agent    | 0.543                 | 0.892          | 0.637            |
| google/gemini-3.1-pro-preview | agent    | 0.557                 | 0.893          | 0.613            |
| openai/gpt-5.4                | agent    | 0.553                 | 0.893          | 0.623            |
| qwen/qwen3.6-plus             | agent    | 0.554                 | 0.893          | 0.595            |

Agent Conversations