strongreject@parity

Safeguards

The StrongReject benchmark evaluates LLM safety and jailbreak resistance. This parity subset contains 150 samples (50 prompts × 3 jailbreak variants).


Run this task

CLI:

inspect eval inspect_harbor/strongreject_parity --model openai/gpt-5

Python:

from inspect_ai import eval
from inspect_harbor import strongreject_parity

eval(strongreject_parity(), model="openai/gpt-5")

Dataset information

Harbor registry: strongreject@parity
Inspect task: strongreject_parity
Version: parity
Samples: 150
Paper: arXiv

See Task Parameters for the parameter set shared across all Harbor tasks.
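As a sketch of how those shared parameters combine with this task, the standard Inspect CLI options can be appended to the command shown above; `--limit` below is a generic Inspect eval option (not specific to this task) that caps the number of samples run:

```shell
# Smoke-test the task on the first 10 of its 150 samples
# before committing to a full run.
inspect eval inspect_harbor/strongreject_parity \
  --model openai/gpt-5 \
  --limit 10
```

The same cap is available from Python by passing `limit=10` to `eval()`.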