code-contests@1.0
Coding
A competitive programming benchmark from DeepMind that evaluates AI agents' ability to solve problems spanning algorithms, data structures, and competition-style challenges.
Run this task
CLI:
```bash
inspect eval inspect_harbor/code_contests_1_0 --model openai/gpt-5
```

Python:

```python
from inspect_ai import eval
from inspect_harbor import code_contests_1_0

eval(code_contests_1_0(), model="openai/gpt-5")
```

Dataset information
| Field | Value |
| --- | --- |
| Harbor registry | code-contests@1.0 |
| Inspect task | code_contests_1_0 |
| Version | 1.0 |
| Samples | 9644 |
| Paper | arxiv |
See Task Parameters for the parameter set shared across all Harbor tasks.