rl-offline-off2on

Tags: Reinforcement Learning, CORL, rigorous codebase

Description

Offline-to-Online RL: Preventing Catastrophic Forgetting in Fine-Tuning

Objective

Design and implement an offline-to-online RL algorithm that pretrains from an offline dataset (1M steps), then fine-tunes with online interaction (1M steps) without catastrophic forgetting or Q-value collapse. Your code goes in custom_finetune.py. Three reference implementations (AWAC, SPOT, Cal-QL) are provided as read-only.
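The transition between the two phases is where forgetting tends to happen, so the schedule that relaxes offline regularization matters. One common recipe (a sketch under assumptions, not the task's prescribed solution) keeps a behavior-cloning term at full strength during offline pretraining and anneals it linearly over the first online steps; the `bc_weight` helper below is hypothetical:

```python
def bc_weight(step: int, offline_steps: int = 1_000_000,
              anneal_steps: int = 100_000) -> float:
    """Hypothetical schedule: full-strength behavior-cloning regularization
    during the 1M offline steps, then a linear decay over the first online
    steps so the policy departs from the dataset gradually instead of via
    an abrupt objective switch."""
    if step < offline_steps:
        return 1.0  # offline phase: stay close to the dataset policy
    t = (step - offline_steps) / anneal_steps
    return max(0.0, 1.0 - t)  # online phase: relax the constraint linearly
```

For example, `bc_weight(1_050_000)` is 0.5 halfway through a 100k-step anneal; the exact schedule shape and anneal length are design choices left to the implementation.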

Background

The critical challenge is the offline-to-online transition: naive fine-tuning often causes Q-value collapse (conservative estimates become overoptimistic) and catastrophic forgetting. The Adroit cloned-v1 datasets mix expert and noisy demonstrations, making this transition particularly challenging.
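Cal-QL, one of the provided references, targets exactly this collapse by calibrating the conservative penalty: out-of-distribution Q-values are pushed down, but never below a reference value such as the Monte-Carlo return of the behavior policy, so pretraining cannot make the critic arbitrarily pessimistic. A minimal sketch of that clamp, with hypothetical names:

```python
import numpy as np

def calibrate(q_ood: np.ndarray, mc_returns: np.ndarray) -> np.ndarray:
    """Cal-QL-style calibration sketch (names are illustrative): values fed
    to the conservative push-down term are floored at the Monte-Carlo
    returns, so online TD updates do not first have to climb the critic out
    of a deep pessimistic hole before fine-tuning can make progress."""
    return np.maximum(q_ood, mc_returns)
```

E.g. with `q_ood = [-5.0, 3.0]` and `mc_returns = [0.0, 0.0]`, the calibrated values are `[0.0, 3.0]`: the collapsed estimate is lifted to the reference, the healthy one is untouched.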

Constraints

  • Network dimensions are fixed at 256. All MLP hidden layers must use 256 units. A _mlp() factory function is provided in the FIXED section for convenience. You may define custom network classes but hidden widths must remain 256.
  • Total parameter count is enforced. The training loop checks that total trainable parameters do not exceed 1.2x the largest baseline architecture. Focus on algorithmic innovations (loss functions, regularization, training procedures), not network capacity.
  • Do NOT simply copy a reference implementation with minor changes.
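The parameter budget can be checked by hand. The helper below is a hypothetical sketch (the actual check lives in the FIXED training loop): it counts weights and biases of an MLP whose hidden layers are all 256 units wide, then compares a candidate architecture against the 1.2x budget.

```python
def mlp_param_count(in_dim: int, out_dim: int, n_hidden: int = 2,
                    width: int = 256) -> int:
    """Count trainable parameters (weights + biases) of a fully connected
    MLP with `n_hidden` hidden layers, all `width` units wide."""
    dims = [in_dim] + [width] * n_hidden + [out_dim]
    return sum(dims[i] * dims[i + 1] + dims[i + 1] for i in range(len(dims) - 1))

# Illustrative dimensions (assumption, not the actual Adroit shapes):
baseline = mlp_param_count(45, 24)               # two 256-wide hidden layers
candidate = mlp_param_count(45, 24, n_hidden=3)  # one extra 256-wide layer
within_budget = candidate <= 1.2 * baseline      # an extra 256x256 layer alone blows the budget
```

This is why the constraint steers effort toward losses and training procedures: a single added 256x256 hidden layer costs ~65k parameters, which already exceeds a 20% margin over a small two-layer baseline.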

Evaluation

Trained and evaluated on Pen, Door, Hammer using Adroit cloned-v1 datasets. Additional held-out environments (not shown during intermediate testing) are used to assess generalization. Metric: D4RL normalized score (0 = random, 100 = expert), evaluated throughout both phases.
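The D4RL normalized score linearly rescales raw episode returns so that a random policy maps to 0 and the expert reference maps to 100. A sketch of the formula (the reference returns used below are placeholders; the real ones ship with each dataset):

```python
def d4rl_normalized_score(raw_return: float, random_return: float,
                          expert_return: float) -> float:
    """100 * (raw - random) / (expert - random): 0 = random, 100 = expert.
    Scores above 100 (better than the expert reference) are possible."""
    return 100.0 * (raw_return - random_return) / (expert_return - random_return)

# Placeholder reference returns, not the actual Adroit values:
score = d4rl_normalized_score(1500.0, 100.0, 2900.0)  # -> 50.0
```

Because the scale is per-environment, scores are comparable across Pen, Door, and Hammer even though their raw return magnitudes differ widely.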

Code

custom_finetune.py
# Custom offline-to-online RL algorithm for MLS-Bench — Adroit fine-tuning
#
# EDITABLE section: network definitions + OfflineOnlineAlgorithm class.
# FIXED sections: everything else (config, utilities, data, eval, training loop).
import os
import random
import uuid
from copy import deepcopy
from dataclasses import dataclass
from typing import Any, Dict, List, Optional, Tuple, Union

import d4rl
import gym
import numpy as np
import pyrallis

Results

D4RL normalized scores per environment (dashes mark missing results):

Model                          | Type     | pen-cloned-v1 | hammer-cloned-v1 | hammer-expert-v1
awac                           | baseline | 73.377        | 0.207            | 126.890
iql                            | baseline | 98.194        | 2.637            | 118.209
spot                           | baseline | 77.466        | 2.444            | 74.058
anthropic/claude-opus-4.6      | vanilla  | 80.928        | 1.298            | 125.061
deepseek-reasoner              | vanilla  | 32.736        | 0.240            | 102.163
google/gemini-3.1-pro-preview  | vanilla  | 40.102        | 4.427            | 51.188
openai/gpt-5.4-pro             | vanilla  | 74.896        | 1.563            | 88.278
qwen/qwen3.6-plus              | vanilla  | 22.497        | 0.262            | 52.642
anthropic/claude-opus-4.6      | agent    | 89.276        | 4.637            | 98.176
deepseek-reasoner              | agent    | 38.255        | 0.165            | 120.705
google/gemini-3.1-pro-preview  | agent    | 71.798        | 5.147            | 129.376
openai/gpt-5.4-pro             | agent    | -             | -                | 82.010
qwen/qwen3.6-plus              | agent    | 22.497        | 0.262            | 52.642