satbench@1.0
Reasoning
SATBench is a benchmark for evaluating the logical reasoning capabilities of LLMs through logical puzzles derived from Boolean satisfiability (SAT) problems.
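To make the underlying idea concrete, here is a minimal illustrative sketch of a Boolean satisfiability check on a small CNF formula. This is not SATBench's own puzzle-generation pipeline (that is described in the paper); the formula and helper below are hypothetical, for illustration only.

```python
from itertools import product

# A CNF formula as a list of clauses; each clause is a list of literals,
# where a positive int i denotes variable x_i and -i denotes its negation.
# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
formula = [[1, 2], [-1, 3], [-2, -3]]

def is_satisfiable(clauses, n_vars):
    """Brute-force SAT check: try every truth assignment."""
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assign[l] if l > 0 else not assign[-l] for l in clause)
               for clause in clauses):
            return True
    return False

print(is_satisfiable(formula, 3))  # True: e.g. x1=False, x2=True, x3=False
```

SATBench turns instances like this into natural-language logical puzzles, so a model's answer can be verified against the formula's ground-truth satisfiability.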
Run this task
CLI:
```bash
inspect eval inspect_harbor/satbench_1_0 --model openai/gpt-5
```

Python:

```python
from inspect_ai import eval
from inspect_harbor import satbench_1_0

eval(satbench_1_0(), model="openai/gpt-5")
```

Dataset information
| Field | Value |
| --- | --- |
| Harbor registry | satbench@1.0 |
| Inspect task | satbench_1_0 |
| Version | 1.0 |
| Samples | 2100 |
| Paper | arXiv |
See Task Parameters for the parameter set shared across all Harbor tasks.