rexbench@1.0

Coding

A benchmark that evaluates the ability of AI agents to extend existing AI research by implementing research experiments.


Run this task

CLI:

inspect eval inspect_harbor/rexbench_1_0 --model openai/gpt-5

Python:

from inspect_ai import eval
from inspect_harbor import rexbench_1_0

eval(rexbench_1_0(), model="openai/gpt-5")

Dataset information

Harbor registry: rexbench@1.0
Inspect task: rexbench_1_0
Version: 1.0
Samples: 2
Paper: arXiv

See Task Parameters for the parameter set shared across all Harbor tasks.
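As a sketch of how such shared options are typically passed, the snippet below uses `limit` and `max_connections`, which are standard `inspect_ai.eval` arguments (not parameters specific to this task — consult the Task Parameters page for the actual rexbench parameter set):

```python
from inspect_ai import eval
from inspect_harbor import rexbench_1_0

# Run only the first sample and cap concurrent model connections.
# These are generic inspect_ai eval() options, shown here for illustration.
eval(
    rexbench_1_0(),
    model="openai/gpt-5",
    limit=1,
    max_connections=4,
)
```

The equivalent CLI flags are `--limit 1` and `--max-connections 4` on `inspect eval`.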