scout scan

Scan transcripts and read results.

Pass a FILE that is either a Python script containing @scanner or @scanjob decorated functions, or a YAML or JSON config file that adheres to the ScanJobConfig schema.
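A minimal invocation, as a sketch (the scanner script name and transcripts directory are placeholders, not values mandated by the CLI):

```bash
# Run the scanners defined in a hypothetical scanners.py against the
# transcripts in ./logs; results are written to the default ./scans directory.
scout scan scanners.py -T ./logs
```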

Usage

scout scan [OPTIONS] COMMAND [ARGS]...

Options

| Name | Type | Description | Default |
|------|------|-------------|---------|
| -S | text | One or more scanjob or scanner arguments (e.g. -S arg=value) | |
| -T, --transcripts | text | One or more transcript sources (e.g. -T ./logs) | |
| --results | text | Location to write scan results to. | ./scans |
| --worklist | path | Transcript ids to process for each scanner (JSON or YAML file). | |
| -V, --validation | text | One or more validation sets to apply for scanners (e.g. -V myscanner:deception.csv) | |
| --model | text | Model used by default for LLM scanners. | |
| --model-base-url | text | Base URL for model API. | |
| -M | text | One or more native model arguments (e.g. -M arg=value) | |
| --model-config | text | YAML or JSON config file with model arguments. | |
| --model-role | text | Named model role with model name or YAML/JSON config, e.g. --model-role critic=openai/gpt-4o or --model-role grader="{model: mockllm/model, temperature: 0.5}" | |
| --max-transcripts | integer | Maximum number of transcripts to scan concurrently (defaults to 25). | |
| --max-processes | integer | Number of worker processes. Defaults to multiprocessing.cpu_count(). | |
| --limit | integer | Limit the number of transcripts to scan. | |
| --shuffle | text | Shuffle the order of transcripts (pass a seed to make the order deterministic). | |
| --tags | text | Tags to associate with this scan job (comma separated). | |
| --metadata | text | Metadata to associate with this scan job (more than one --metadata argument can be specified). | |
| --batch | text | Batch requests together to reduce API calls when using a model that supports batching (no batching by default). Pass --batch to batch with the default configuration, a batch size (e.g. --batch=1000 for batches of 1000 requests), or the path to a YAML or JSON config file with batch configuration. | |
| --max-connections | integer | Maximum number of concurrent connections to the model API (defaults to max_transcripts). | |
| --max-retries | integer | Maximum number of times to retry model API requests (defaults to unlimited). | |
| --timeout | integer | Model API request timeout in seconds (defaults to no timeout). | |
| --max-tokens | integer | Maximum number of tokens that can be generated in the completion (default is model specific). | |
| --temperature | float | Sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. | |
| --top-p | float | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. | |
| --top-k | integer | Randomly sample the next word from the top_k most likely next words. Anthropic, Google, HuggingFace, and vLLM only. | |
| --reasoning-effort | choice (minimal \| low \| medium \| high) | Constrains effort on reasoning for reasoning models (defaults to medium). OpenAI o-series and gpt-5 models only. | |
| --reasoning-tokens | integer | Maximum number of tokens to use for reasoning. Anthropic Claude models only. | |
| --reasoning-summary | choice (concise \| detailed \| auto) | Provide a summary of reasoning steps (defaults to no summary). Use 'auto' to access the most detailed summarizer available for the current model. OpenAI reasoning models only. | |
| --reasoning-history | choice (none \| all \| last \| auto) | Include reasoning in the chat message history sent to generate (defaults to 'auto', which uses the recommended default for each provider). | |
| --response-schema | text | JSON schema for the desired response format (output should still be validated). OpenAI, Google, and Mistral only. | |
| --display | choice (rich \| plain \| none) | Set the display type (defaults to 'rich'). | rich |
| --log-level | choice (debug \| trace \| http \| info \| warning \| error \| critical \| notset) | Set the log level (defaults to 'warning'). | warning |
| --debug | boolean | Wait to attach debugger. | False |
| --debug-port | integer | Port number for debugger. | 5678 |
| --help | boolean | Show this message and exit. | False |
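As a sketch of how several of these options combine in practice (the scanner file, scanner argument name, and tag values below are illustrative placeholders; the flags themselves are documented above):

```bash
# Hypothetical invocation: choose a model, pass a scanner argument
# (argument name is made up), limit the number of transcripts, batch
# requests in groups of 1000, and tag the scan job.
scout scan scanners.py \
  -T ./logs \
  --model openai/gpt-4o \
  -S threshold=0.8 \
  --limit 100 \
  --batch=1000 \
  --tags nightly,deception
```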

Subcommands

| Name | Description |
|------|-------------|
| complete | Complete a scan which is incomplete due to errors (errors are not retried). |
| list | List the scans within the scans dir. |
| resume | Resume a scan which is incomplete due to interruption or errors (errors are retried). |

scout scan complete

Complete a scan which is incomplete due to errors (errors are not retried).

Usage

scout scan complete [OPTIONS] SCAN_LOCATION

Options

| Name | Type | Description | Default |
|------|------|-------------|---------|
| --display | choice (rich \| plain \| none) | Set the display type (defaults to 'rich'). | rich |
| --log-level | choice (debug \| trace \| http \| info \| warning \| error \| critical \| notset) | Set the log level (defaults to 'warning'). | warning |
| --debug | boolean | Wait to attach debugger. | False |
| --debug-port | integer | Port number for debugger. | 5678 |
| --help | boolean | Show this message and exit. | False |
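For example (the scan directory name is a placeholder):

```bash
# Complete a scan that previously stopped with errors; errored
# transcripts are not retried. The path is hypothetical.
scout scan complete ./scans/my-scan
```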

scout scan list

List the scans within the scans dir.

Usage

scout scan list [OPTIONS] [SCANS_DIR]

Options

| Name | Type | Description | Default |
|------|------|-------------|---------|
| --display | choice (rich \| plain \| none) | Set the display type (defaults to 'rich'). | rich |
| --log-level | choice (debug \| trace \| http \| info \| warning \| error \| critical \| notset) | Set the log level (defaults to 'warning'). | warning |
| --debug | boolean | Wait to attach debugger. | False |
| --debug-port | integer | Port number for debugger. | 5678 |
| --help | boolean | Show this message and exit. | False |
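For example, assuming results were written to the default location:

```bash
# List scans under ./scans (the default results directory for scout scan).
scout scan list ./scans
```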

scout scan resume

Resume a scan which is incomplete due to interruption or errors (errors are retried).

Usage

scout scan resume [OPTIONS] SCAN_LOCATION

Options

| Name | Type | Description | Default |
|------|------|-------------|---------|
| --display | choice (rich \| plain \| none) | Set the display type (defaults to 'rich'). | rich |
| --log-level | choice (debug \| trace \| http \| info \| warning \| error \| critical \| notset) | Set the log level (defaults to 'warning'). | warning |
| --debug | boolean | Wait to attach debugger. | False |
| --debug-port | integer | Port number for debugger. | 5678 |
| --help | boolean | Show this message and exit. | False |
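For example (the scan directory name is a placeholder):

```bash
# Resume a scan that was interrupted or stopped with errors; errored
# transcripts are retried. The path is hypothetical.
scout scan resume ./scans/my-scan
```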
