rl-offline-continuous

Tags: Reinforcement Learning · CORL · rigorous codebase

Description

Offline RL: Q-Value Overestimation Suppression in Continuous Control

Objective

Design and implement an offline RL algorithm that suppresses Q-value overestimation while learning from static datasets. Your code goes in custom.py. Four reference implementations (BC, TD3+BC, IQL, CQL) are provided as read-only.

Background

In offline RL, standard Q-learning tends to overestimate Q-values for out-of-distribution actions: the max operator in the Bellman backup exploits approximation errors, and because the agent cannot collect new data, those errors are never corrected, leading to poor policy performance.
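A toy illustration of this failure mode (an assumption for exposition, not part of the task code): even when Q-value estimation errors are zero-mean, taking the max over actions is biased upward.

```python
import numpy as np

# True Q-value of every action is 0; estimates carry zero-mean noise.
rng = np.random.default_rng(0)
true_q = np.zeros(10)                            # 10 actions, all worth 0
noise = rng.normal(0.0, 1.0, size=(10_000, 10))  # zero-mean estimation error

# Average of max over noisy estimates is well above the true max of 0:
# the max operator systematically picks the most over-estimated action.
estimated_max = (true_q + noise).max(axis=1).mean()
print(estimated_max, true_q.max())  # estimated_max is well above 0.0
```

Online RL corrects these errors by visiting the over-valued actions; with a static dataset, the algorithm itself has to suppress them.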

Constraints

  • Network dimensions are fixed at 256. All MLP hidden layers must use 256 units. A _mlp() factory function is provided in the FIXED section for convenience. You may define custom network classes but hidden widths must remain 256.
  • Total parameter count is enforced. The training loop checks that total trainable parameters do not exceed 1.2x the largest baseline architecture. Focus on algorithmic innovations (loss functions, regularization, training procedures), not network capacity.
  • Do NOT simply copy a reference implementation with minor changes.
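To make the parameter budget concrete, here is a back-of-the-envelope check (the helper name and the single-network comparison are illustrative assumptions; the real check sums all trainable parameters against the largest baseline):

```python
# Hypothetical helper: parameter count of a fully-connected net given its
# layer widths (weights in*out plus biases out for each linear layer).
def mlp_param_count(dims):
    return sum(i * o + o for i, o in zip(dims, dims[1:]))

obs_dim, act_dim = 17, 6  # e.g. HalfCheetah observation/action sizes

# Two hidden layers of 256 units (typical baseline actor) vs. three.
baseline = mlp_param_count([obs_dim, 256, 256, act_dim])        # 71,942
custom = mlp_param_count([obs_dim, 256, 256, 256, act_dim])     # 137,734

# Even one extra 256-unit hidden layer blows past the 1.2x budget.
print(custom <= 1.2 * baseline)  # False
```

The point of the constraint is that gains must come from the loss and training procedure, not from quietly enlarging the networks.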

Evaluation

Trained and evaluated on HalfCheetah, Hopper, Walker2d using D4RL MuJoCo medium-v2 datasets. Additional held-out environments (not shown during intermediate testing) are used to assess generalization. Metric: D4RL normalized score (0 = random, 100 = expert).
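The normalized score is a linear rescaling of episodic return between a random and an expert policy. A minimal sketch (the reference returns below are approximate HalfCheetah-like values for illustration, not the official D4RL constants):

```python
# D4RL normalized score: 0 = random-policy return, 100 = expert return.
def d4rl_normalized_score(ret, random_ret, expert_ret):
    return 100.0 * (ret - random_ret) / (expert_ret - random_ret)

# Illustrative reference returns (assumed, not the official constants).
random_ret, expert_ret = -280.0, 12135.0
print(d4rl_normalized_score(4000.0, random_ret, expert_ret))  # ~34.5
```

In practice the official reference returns are baked into the dataset metadata, so scores are comparable across papers.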

Code

custom.py
# Custom offline RL algorithm for MLS-Bench
#
# EDITABLE section: network definitions + OfflineAlgorithm class.
# FIXED sections: everything else (config, utilities, data, eval, training loop).
import os
import random
import uuid
from copy import deepcopy
from dataclasses import dataclass
from typing import Any, Dict, List, Optional, Tuple, Union

import d4rl
import gym
import numpy as np
import pyrallis
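One way to fill the EDITABLE section is a conservatism term in the style of the provided CQL baseline: push down Q-values at sampled out-of-distribution actions relative to dataset actions. A minimal sketch in NumPy (the function name, argument shapes, and use of NumPy rather than the codebase's actual training framework are all assumptions):

```python
import numpy as np

# CQL-style conservative penalty for one batch.
#   q_ood:  (batch, n_samples) Q-values at sampled/policy actions
#   q_data: (batch,)           Q-values at the dataset actions
# logsumexp over sampled actions acts as a soft max, so minimizing the
# penalty shrinks Q-values outside the data support toward the data.
def conservative_penalty(q_ood, q_data):
    logsumexp = np.log(np.exp(q_ood).sum(axis=1))
    return (logsumexp - q_data).mean()

# Inflated OOD Q-values yield a larger penalty than calibrated ones.
low = conservative_penalty(np.zeros((2, 4)), np.zeros(2))
high = conservative_penalty(np.full((2, 4), 5.0), np.zeros(2))
print(low < high)  # True
```

Adding such a term (scaled by a temperature/weight) to the standard TD loss is one route to the overestimation suppression the task asks for; others include policy constraints (TD3+BC) and expectile regression (IQL).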

Results

D4RL normalized scores per environment:

| Model | Type | halfcheetah-medium-v2 | maze2d-medium-v1 | walker2d-medium-v2 |
|---|---|---|---|---|
| iql | baseline | 48.102 | 33.731 | 80.462 |
| rebrac | baseline | 63.347 | 93.949 | 87.536 |
| td3_bc | baseline | 48.328 | 50.293 | 85.141 |
| anthropic/claude-opus-4.6 | vanilla | 61.890 | 74.120 | 86.525 |
| deepseek-reasoner | vanilla | - | - | - |
| google/gemini-3.1-pro-preview | vanilla | 56.352 | 90.997 | 83.606 |
| gpt-5.4-pro | vanilla | 47.464 | 30.804 | 81.220 |
| qwen3.6-plus | vanilla | - | - | - |
| anthropic/claude-opus-4.6 | agent | 16.034 | 45.159 | 56.793 |
| deepseek-reasoner | agent | 51.523 | 35.080 | 81.133 |
| google/gemini-3.1-pro-preview | agent | 61.123 | 99.386 | 49.880 |
| gpt-5.4-pro | agent | 47.464 | 30.804 | 81.220 |
| qwen3.6-plus | agent | 48.678 | 36.972 | 85.250 |

Agent Conversations