meta-fewshot-classification
Description
Meta-Learning: Few-Shot Image Classification
Objective
Design and implement a novel few-shot classification method. Your code goes in the CustomFewShotMethod class in custom_fewshot.py. Three reference implementations (Prototypical Networks, Matching Networks, Relation Networks) are provided as read-only.
Background
Few-shot classification aims to recognize new classes from only a handful of labeled examples (the "support set"). During evaluation, the model receives a support set of N classes with K examples each (N-way K-shot), and must classify unlabeled query images into one of these N classes. Key design choices include:
- Feature comparison: how to measure similarity between query and support features (e.g., Euclidean distance, cosine similarity, learned metrics)
- Support set encoding: how to aggregate information from support examples (e.g., prototypes, attention, graph neural networks)
- Query adaptation: how to refine query representations using support context (e.g., LSTM attention, cross-attention, transductive inference)
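The first two design choices can be made concrete with a toy sketch: given a handful of support features, compute one mean prototype per class and score a query both by negative Euclidean distance and by cosine similarity. This uses NumPy stand-ins for illustration only; the names here are not part of the task API.

```python
import numpy as np

# Toy 2-way 2-shot episode: four support features (dim 3), labels 0,0,1,1.
support = np.array([[1.0, 0.0, 0.0],
                    [0.8, 0.2, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.9, 0.1]])
labels = np.array([0, 0, 1, 1])

# Prototype = mean support feature per class (the "prototypes" encoding).
prototypes = np.stack([support[labels == c].mean(axis=0) for c in (0, 1)])

query = np.array([0.9, 0.1, 0.0])

# Negative Euclidean distance: higher score = closer prototype.
euclid_scores = -np.linalg.norm(prototypes - query, axis=1)

# Cosine similarity: compares directions, ignoring feature magnitude.
cos_scores = (prototypes @ query) / (
    np.linalg.norm(prototypes, axis=1) * np.linalg.norm(query))

print(euclid_scores.argmax(), cos_scores.argmax())  # both predict class 0
```

A learned metric (as in Relation Networks) would replace these fixed distance functions with a small trainable comparison module.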
Model Interface
```python
class CustomFewShotMethod(FewShotClassifier):
    def __init__(self):
        # Create backbone and any learnable modules
        backbone = make_backbone(use_pooling=True)  # ResNet-12, output: [B, 640]
        super().__init__(backbone=backbone)

    def process_support_set(self, support_images, support_labels):
        # Extract and store support set information for later use
        ...

    def forward(self, query_images) -> Tensor:
        # Return classification scores of shape (n_query, n_way)
        ...

    def compute_loss(self, scores, labels) -> Tensor:
        # Compute training loss (default: cross-entropy)
        ...
```
Available Utilities
- `self.compute_features(images)`: extract features through `self.backbone`
- `self.l2_distance_to_prototypes(features)`: negative Euclidean distance to `self.prototypes`
- `self.cosine_distance_to_prototypes(features)`: cosine similarity to `self.prototypes`
- `compute_prototypes(features, labels)`: compute mean feature per class
- `self.compute_prototypes_and_store_support_set(images, labels)`: convenience method
- `make_backbone(use_pooling=True/False)`: create ResNet-12 (640-dim features or feature maps)
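As a minimal sketch of how the interface fits together, here is a prototypical-style method written against plain torch. The `FewShotClassifier` base class and backbone below are simplified stand-ins, not the real easyfsl versions, and `SketchFewShotMethod` is an illustrative name.

```python
import torch
import torch.nn as nn

class FewShotClassifier(nn.Module):
    """Stand-in for the provided easyfsl base class."""
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone

class SketchFewShotMethod(FewShotClassifier):
    def __init__(self, feature_dim: int = 640):
        # Stand-in backbone; the task's make_backbone() returns a ResNet-12.
        super().__init__(backbone=nn.Linear(8, feature_dim))

    def process_support_set(self, support_images, support_labels):
        # Store one mean-feature prototype per class, in label order.
        feats = self.backbone(support_images)
        classes = torch.unique(support_labels)
        self.prototypes = torch.stack(
            [feats[support_labels == c].mean(dim=0) for c in classes])

    def forward(self, query_images):
        # Scores of shape (n_query, n_way): negative squared Euclidean
        # distance to each prototype.
        feats = self.backbone(query_images)
        return -torch.cdist(feats, self.prototypes) ** 2

    def compute_loss(self, scores, labels):
        return nn.functional.cross_entropy(scores, labels)
```

With the real base class, `self.compute_features` and `self.l2_distance_to_prototypes` would replace the hand-rolled feature extraction and distance code above.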
Evaluation
Trained and evaluated on three few-shot image classification benchmarks:
- miniImageNet (100 classes from ImageNet, 5-way 5-shot)
- CIFAR-FS (100 classes from CIFAR-100, 5-way 5-shot)
- CUB-200 (200 fine-grained bird species, 5-way 5-shot)
Metric: mean classification accuracy over 600 test episodes (higher is better). Episodic training with 500 tasks/epoch for 100 epochs.
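The reported metric can be sketched as a simple aggregation over episodes. This is a pure-Python illustration; the 95% confidence-interval convention shown is a common few-shot reporting practice and an assumption here, not something specified above.

```python
import math
from statistics import mean, stdev

def summarize_episodes(per_episode_accuracy):
    """Mean accuracy over test episodes, with a 95% confidence interval."""
    m = mean(per_episode_accuracy)
    ci = 1.96 * stdev(per_episode_accuracy) / math.sqrt(len(per_episode_accuracy))
    return m, ci

# e.g. accuracies from 5-way 5-shot test episodes (600 in the real benchmark)
accs = [0.64, 0.72, 0.68, 0.70]
m, ci = summarize_episodes(accs)
print(f"{m:.3f} ± {ci:.3f}")
```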
Code
```python
# Custom few-shot classification method for MLS-Bench
#
# EDITABLE section: CustomFewShotMethod class and helper modules.
# FIXED sections: everything else (config, data loading, training loop, evaluation).
import os
import copy
import random
import json
from pathlib import Path
from statistics import mean
from typing import List, Tuple, Optional

import numpy as np
import torch
import torch.nn as nn
```
Additional context files (read-only):
- easy-few-shot-learning/easyfsl/methods/few_shot_classifier.py
- easy-few-shot-learning/easyfsl/methods/utils.py
Results
| Model | Type | miniImageNet accuracy ↑ | CIFAR-FS accuracy ↑ | CUB-200 accuracy ↑ |
|---|---|---|---|---|
| matchingnet | baseline | 0.593 | 0.557 | 0.536 |
| protonet | baseline | 0.649 | 0.682 | 0.756 |
| relationnet | baseline | 0.699 | 0.804 | 0.758 |
| anthropic/claude-opus-4.6 | vanilla | 0.723 | 0.821 | 0.679 |
| deepseek-reasoner | vanilla | 0.674 | 0.753 | 0.200 |
| google/gemini-3.1-pro-preview | vanilla | 0.729 | 0.835 | 0.743 |
| openai/gpt-5.4-pro | vanilla | 0.616 | 0.869 | 0.533 |
| qwen3.6-plus | vanilla | 0.589 | 0.698 | 0.411 |
| anthropic/claude-opus-4.6 | agent | 0.723 | 0.821 | 0.679 |
| deepseek-reasoner | agent | 0.346 | 0.464 | 0.256 |
| google/gemini-3.1-pro-preview | agent | 0.729 | 0.835 | 0.743 |
| openai/gpt-5.4-pro | agent | 0.616 | 0.869 | 0.533 |
| qwen3.6-plus | agent | 0.589 | 0.698 | 0.411 |