Overview

The Boltz CLI (boltz-api) provides a convenient command-line interface for submitting predictions, checking status, listing jobs, and downloading results.

Installation

The CLI is included with the Python SDK:
pip install boltz-api
Verify installation:
boltz-api --help

Configuration

The CLI reads configuration from:
  • $XDG_CONFIG_HOME/boltz-api/config.json (if XDG_CONFIG_HOME is set), otherwise
  • ~/.config/boltz-api/config.json
You can also override config with environment variables:
  • BOLTZ_API_KEY
  • BOLTZ_API_ENDPOINT
  • BOLTZ_SIGNUP_URL
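The lookup order above (XDG path, then the home-directory fallback, with environment variables taking precedence over the file) can be sketched in Python. This is a simplified illustration, not the CLI's actual implementation; the config-file key name api_key is an assumption (api_endpoint is the documented key for the endpoint).

```python
import json
import os
from pathlib import Path

def config_path() -> Path:
    """Resolve the config file: $XDG_CONFIG_HOME if set, else ~/.config."""
    base = os.environ.get("XDG_CONFIG_HOME") or str(Path.home() / ".config")
    return Path(base) / "boltz-api" / "config.json"

def resolve_setting(env_var: str, config_key: str):
    """Environment variables override values from the config file."""
    if env_var in os.environ:
        return os.environ[env_var]
    path = config_path()
    if path.exists():
        return json.loads(path.read_text()).get(config_key)
    return None

# An explicit env var wins over anything stored on disk
os.environ["BOLTZ_API_KEY"] = "sk-example"
print(resolve_setting("BOLTZ_API_KEY", "api_key"))  # sk-example
```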

config

Manage API credentials and settings.
# Set API key
boltz-api config --api-key "sk-..."

# Set custom API endpoint (base URL)
boltz-api config --endpoint "https://app.boltz.bio"

# Set custom signup URL (used for interactive prompts)
boltz-api config --signup-url "https://app.boltz.bio"

# View current configuration
boltz-api config --show

Options

  • --api-key TEXT: Set your Boltz Lab API key
  • --endpoint TEXT: Set the API base URL (stored as api_endpoint)
  • --signup-url TEXT: Set the signup URL used by interactive prompts
  • --show: Display current configuration

Prediction Management

predict

Submit a prediction job from a YAML file (local or via URL). By default, the CLI waits for completion and downloads results.
# Basic usage - submit, wait, and download
boltz-api predict job.yaml

# Submit from URL
boltz-api predict https://example.com/job.yaml

# Choose output directory (default: current directory ".")
boltz-api predict job.yaml --output ./results

# Download results as JSON format
boltz-api predict job.yaml --format json

# Submit without waiting (fire-and-forget)
boltz-api predict job.yaml --no-wait

# Wait but don't download
boltz-api predict job.yaml --no-download

# With custom name and priority
boltz-api predict job.yaml --name "My Prediction" --priority high

Arguments

  • YAML_FILE: Path to a YAML file, or a URL, containing the job specification

Options

  • --name TEXT (default: none): Human-readable prediction name
  • --priority TEXT (choice, default: low): Priority: low or high
  • --no-wait (flag): Submit without waiting for completion (implies --no-download)
  • --no-download (flag): Wait for completion but don’t download
  • --output, -o PATH (default: .): Output directory for downloaded results
  • --format, -f TEXT (choice, default: archive): Output format: archive (tar.gz) or json
  • --polling-interval INTEGER (default: 5): Seconds between status checks
  • --timeout INTEGER (default: none): Maximum wait time in seconds
  • --api-key TEXT (default: none): Override API key (otherwise uses config/env var)
  • --api-url TEXT (default: none): Override API base URL (otherwise uses config/env var)
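To make the --polling-interval and --timeout semantics concrete, here is a sketch of the kind of wait loop the CLI performs after submitting a job. It is illustrative only, not the CLI's actual code; the terminal status names are taken from the list command's status enum below.

```python
import time

def wait_for_completion(get_status, polling_interval=5, timeout=None):
    """Poll a status callable until a terminal state is reached.

    Mirrors what --polling-interval and --timeout control on
    `boltz-api predict`: returns the final status, or raises
    TimeoutError once `timeout` seconds have elapsed.
    """
    terminal = {"COMPLETED", "FAILED", "CANCELLED", "TIMED_OUT"}
    start = time.monotonic()
    while True:
        status = get_status()
        if status in terminal:
            return status
        if timeout is not None and time.monotonic() - start >= timeout:
            raise TimeoutError(f"gave up after {timeout}s (last status: {status})")
        time.sleep(polling_interval)

# Simulated status source: RUNNING twice, then COMPLETED
states = iter(["RUNNING", "RUNNING", "COMPLETED"])
print(wait_for_completion(lambda: next(states), polling_interval=0.01))  # COMPLETED
```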

Prediction Flag Options

The CLI exposes additional optional model flags that are forwarded to the API:
  • --use_potentials (flag, default: False): Use potentials for steering
  • --recycling_steps INTEGER (default: 3): Number of recycling steps
  • --diffusion_samples INTEGER (default: 1): Number of diffusion samples
  • --sampling_steps INTEGER (default: 200): Number of sampling steps
  • --step_scale FLOAT (default: 1.5): Step size / diffusion temperature scaling
  • --subsample_msa BOOL (default: True): Whether to subsample the MSA
  • --num_subsampled_msa INTEGER (default: 1024): Number of MSA sequences to subsample

status

Check the status of a prediction job.
boltz-api status <prediction-id>

Arguments

  • PREDICTION_ID: Unique identifier for the prediction

Options

  • --api-key TEXT: Override API key
  • --api-url TEXT: Override API base URL

Output

The command prints a JSON object containing status and timestamps (fields depend on what the API returns).

list

List prediction jobs with optional filtering.
# List all predictions (default limit=20, offset=0)
boltz-api list

# Filter by status (enum values shown below)
boltz-api list --status RUNNING

# Pagination
boltz-api list --limit 10 --offset 10

Options

  • --status TEXT (choice, default: none): Filter by status: PENDING, CREATED, RUNNING, COMPLETED, FAILED, CANCELLED, TIMED_OUT
  • --limit INTEGER (default: 20): Maximum number of results
  • --offset INTEGER (default: 0): Pagination offset
  • --api-key TEXT (default: none): Override API key
  • --api-url TEXT (default: none): Override API base URL

Output

The command prints JSON in the shape:
{
  "total": 123,
  "predictions": [
    {
      "prediction_id": "…",
      "prediction_status": "…",
      "created_at": "…"
    }
  ]
}
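Because list prints JSON in this shape, its output is easy to post-process in a script. The sketch below filters for completed predictions (e.g. to feed their IDs to `boltz-api download`); the payload values are placeholders, not real output.

```python
import json

# Example payload in the shape documented above (values are placeholders)
raw = """
{
  "total": 2,
  "predictions": [
    {"prediction_id": "abc123", "prediction_status": "COMPLETED",
     "created_at": "2024-01-01T00:00:00Z"},
    {"prediction_id": "def456", "prediction_status": "RUNNING",
     "created_at": "2024-01-02T00:00:00Z"}
  ]
}
"""

data = json.loads(raw)
# Collect IDs of completed predictions, ready for `boltz-api download <id>`
completed = [p["prediction_id"] for p in data["predictions"]
             if p["prediction_status"] == "COMPLETED"]
print(completed)  # ['abc123']
```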

download

Download results for a prediction.
# Download as tar.gz archive (default)
boltz-api download <prediction-id>

# Download to specific directory (default: ".")
boltz-api download <prediction-id> --output ./results

# Download as JSON format
boltz-api download <prediction-id> --format json

# Custom filename (without extension)
boltz-api download <prediction-id> --filename my_prediction_results

Arguments

  • PREDICTION_ID: Unique identifier for the prediction

Options

  • --output, -o PATH (default: .): Output directory
  • --format, -f TEXT (choice, default: archive): Format: archive (tar.gz) or json
  • --filename TEXT (default: auto-generated): Custom filename (without extension)
  • --api-key TEXT (default: none): Override API key
  • --api-url TEXT (default: none): Override API base URL

Global Options

These options are available for all commands:
  • --debug: Enable debug logging, including HTTP requests
  • --help: Show help message and exit

Exit Codes

The CLI exits with:
  • 0 on success
  • 1 on any error (a single general exit code is used for all failures)
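These exit codes make the CLI easy to drive from automation: check the return code and branch. The sketch below shows the pattern with Python's subprocess module; to keep it self-contained it runs a stand-in command that exits 1 instead of invoking boltz-api itself.

```python
import subprocess
import sys

# Stand-in for ["boltz-api", "predict", "job.yaml"]; this trivial command
# exits with 1, the CLI's general error code (0 would mean success).
cmd = [sys.executable, "-c", "raise SystemExit(1)"]

result = subprocess.run(cmd)
if result.returncode == 0:
    print("success: results are in the output directory")
else:
    print(f"boltz-api failed with exit code {result.returncode}")
```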

Troubleshooting

“No API key found”

Solution: Set your API key:
boltz-api config --api-key "sk-..."
# or
export BOLTZ_API_KEY="sk-..."

“Connection refused” / “Server not found”

Solution: Check your endpoint configuration:
boltz-api config --show
boltz-api config --endpoint "https://app.boltz.bio"

“Request timeout”

Solution: Increase timeout or check network:
boltz-api predict job.yaml --timeout 1800  # 30 minutes