# deveval@1.0
Coding
DevEval is a benchmark for comprehensive evaluation of LLMs across the software development lifecycle (implementation, unit testing, and acceptance testing), spanning 21 real-world repositories in Python, C++, Java, and JavaScript.
## Run this task
CLI:

```bash
inspect eval inspect_harbor/deveval_1_0 --model openai/gpt-5
```

Python:

```python
from inspect_ai import eval
from inspect_harbor import deveval_1_0

eval(deveval_1_0(), model="openai/gpt-5")
```

## Dataset information
| Field | Value |
| --- | --- |
| Harbor registry | `deveval@1.0` |
| Inspect task | `deveval_1_0` |
| Version | 1.0 |
| Samples | 63 |
| Paper | arXiv |
See Task Parameters for the parameter set shared across all Harbor tasks.
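
Task parameters can be combined with standard `inspect_ai` options in a single call. The sketch below is a minimal illustration, assuming the shared Harbor parameters are exposed as keyword arguments on the task constructor; `some_param` is a hypothetical placeholder rather than a documented parameter, while `model` and `limit` are standard `inspect_ai` arguments.

```python
from inspect_ai import eval
from inspect_harbor import deveval_1_0

# "some_param" is a hypothetical placeholder; substitute a real name from
# the Task Parameters documentation. limit=5 evaluates only the first 5 of
# the benchmark's 63 samples, which is handy for a quick smoke test.
eval(
    deveval_1_0(some_param="value"),
    model="openai/gpt-5",
    limit=5,
)
```

On the CLI, the equivalents are `-T name=value` for task parameters and `--limit` for restricting the number of samples.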