aime@1.0

Mathematics

American Invitational Mathematics Examination (AIME) benchmark for evaluating mathematical reasoning and problem-solving capabilities. Contains 60 competition-level problems drawn from the AIME 2024, 2025-I, and 2025-II competitions.


Run this task

CLI:

inspect eval inspect_harbor/aime_1_0 --model openai/gpt-5
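
For a quicker smoke test, the standard Inspect CLI options apply as well; for example, --limit caps how many samples are evaluated. A minimal sketch, relying only on the stock inspect CLI flag rather than anything specific to this task:

inspect eval inspect_harbor/aime_1_0 --model openai/gpt-5 --limit 5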

Python:

from inspect_ai import eval
from inspect_harbor import aime_1_0

eval(aime_1_0(), model="openai/gpt-5")
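
The same options are available from Python, since inspect_ai's eval() accepts keyword arguments such as limit and epochs. A minimal sketch, assuming only the standard inspect_ai API:

from inspect_ai import eval
from inspect_harbor import aime_1_0

# Evaluate just the first 5 of the 60 samples while iterating
eval(aime_1_0(), model="openai/gpt-5", limit=5)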

Dataset information

Harbor registry: aime@1.0
Inspect task: aime_1_0
Version: 1.0
Samples: 60

See Task Parameters for the parameter set shared across all Harbor tasks.
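
Task parameters are passed as keyword arguments in Python, or with -T on the CLI (standard Inspect behavior). The parameter name below is a placeholder for illustration only, not a documented aime@1.0 parameter; substitute one from the shared Task Parameters documentation:

inspect eval inspect_harbor/aime_1_0 --model openai/gpt-5 -T example_param=value

from inspect_ai import eval
from inspect_harbor import aime_1_0

# example_param is hypothetical; replace it with a real parameter
# from the shared Harbor Task Parameters documentation.
eval(aime_1_0(example_param="value"), model="openai/gpt-5")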