researchcodebench@1.0

Coding

ResearchCodeBench evaluates AI agents’ ability to implement algorithms from academic papers. It contains 212 code-implementation tasks across 20 ML/AI research problems drawn from top-tier venues (ICLR, NeurIPS, CVPR, COLM), with 1,449 lines of reference code in total. The tasks test paper comprehension, algorithm understanding, and precise code implementation.


Run this task

CLI:

inspect eval inspect_harbor/researchcodebench_1_0 --model openai/gpt-5

Python:

from inspect_ai import eval
from inspect_harbor import researchcodebench_1_0

eval(researchcodebench_1_0(), model="openai/gpt-5")

Dataset information

Harbor registry researchcodebench@1.0
Inspect task researchcodebench_1_0
Version 1.0
Samples 212
Paper arXiv

See Task Parameters for the parameter set shared across all Harbor tasks.
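For a quick smoke test before a full run, a sketch using Inspect's standard `--limit` option (a general Inspect CLI flag, assumed to apply to this task like any other; not a Harbor-specific parameter):

```shell
# Run only the first 10 of the 212 samples as a smoke test.
inspect eval inspect_harbor/researchcodebench_1_0 --model openai/gpt-5 --limit 10
```

The same option can be passed in Python as `eval(researchcodebench_1_0(), model="openai/gpt-5", limit=10)`.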