## Documentation Index
Fetch the complete documentation index at: https://docs.boltz.bio/llms.txt
Use this file to discover all available pages before exploring further.
## Overview

The Boltz CLI (`boltz-lab`) provides a convenient command-line interface for submitting predictions, checking status, listing jobs, and downloading results.
## Installation
The CLI is included with the Python SDK:
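The install command itself is missing here; a plausible sketch, assuming the SDK is published on PyPI under the same name as the CLI (`boltz-lab` — this package name is an assumption, check the index page above for the actual name):

```shell
# Assumed package name; verify against the official docs
pip install boltz-lab
```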
Verify installation:
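The verification command is also missing; `--help` is listed under Global Options below, so checking that the binary is on your `PATH` might look like:

```shell
boltz-lab --help
```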
## Configuration
The CLI reads configuration from:

1. `$XDG_CONFIG_HOME/boltz-lab/config.json` (if `XDG_CONFIG_HOME` is set), otherwise
2. `~/.config/boltz-lab/config.json`
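A minimal `config.json` sketch: the `api_endpoint` key is named in the `config` options below; `api_key` and `signup_url` are assumed to follow the same naming convention:

```json
{
  "api_key": "sk-...",
  "api_endpoint": "https://app.boltz.bio",
  "signup_url": "https://app.boltz.bio"
}
```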
You can also override config with environment variables:

- `BOLTZ_API_KEY`
- `BOLTZ_API_ENDPOINT`
- `BOLTZ_SIGNUP_URL`
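These can be exported for the current shell session, for example:

```shell
# Environment variables take precedence over the config file
export BOLTZ_API_KEY="sk-..."
export BOLTZ_API_ENDPOINT="https://app.boltz.bio"
```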
### config
Manage API credentials and settings.
```shell
# Set API key
boltz-lab config --api-key "sk-..."

# Set custom API endpoint (base URL)
boltz-lab config --endpoint "https://app.boltz.bio"

# Set custom signup URL (used for interactive prompts)
boltz-lab config --signup-url "https://app.boltz.bio"

# View current configuration
boltz-lab config --show
```
#### Options

| Option | Description |
|---|---|
| `--api-key TEXT` | Set your Boltz Lab API key |
| `--endpoint TEXT` | Set the API base URL (stored as `api_endpoint`) |
| `--signup-url TEXT` | Set the signup URL used by interactive prompts |
| `--show` | Display current configuration |
## Prediction Management
### predict
Submit a prediction job from a YAML file (local or via URL). By default, the CLI waits for completion and downloads results.
```shell
# Basic usage - submit, wait, and download
boltz-lab predict job.yaml

# Submit from URL
boltz-lab predict https://example.com/job.yaml

# Choose output directory (default: current directory ".")
boltz-lab predict job.yaml --output ./results

# Download results in JSON format
boltz-lab predict job.yaml --format json

# Submit without waiting (fire-and-forget)
boltz-lab predict job.yaml --no-wait

# Wait but don't download
boltz-lab predict job.yaml --no-download
```
#### Arguments

| Argument | Description |
|---|---|
| `YAML_FILE` | Path to YAML file or URL containing the job specification |
#### Options

| Option | Type | Default | Description |
|---|---|---|---|
| `--name TEXT` | String | None | Human-readable prediction name |
| `--no-wait` | Flag | False | Submit without waiting for completion (implies `--no-download`) |
| `--no-download` | Flag | False | Wait for completion but don't download |
| `--output, -o PATH` | Path | `.` | Output directory for downloaded results |
| `--format, -f TEXT` | Choice | `archive` | Output format: `archive` (tar.gz) or `json` |
| `--polling-interval INTEGER` | Integer | 5 | Seconds between status checks |
| `--timeout INTEGER` | Integer | None | Maximum wait time in seconds |
| `--api-key TEXT` | String | None | Override API key (otherwise uses config/env var) |
| `--api-url TEXT` | String | None | Override API base URL (otherwise uses config/env var) |
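The waiting and download flags compose into an asynchronous workflow — submit now, fetch results later — using only the commands documented on this page:

```shell
# Submit without blocking
boltz-lab predict job.yaml --no-wait

# ... later, once the job reports COMPLETED:
boltz-lab status <prediction-id>
boltz-lab download <prediction-id> --output ./results
```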
#### Prediction Flag Options

The CLI exposes additional optional model flags that are forwarded to the API:

| Option | Type | Description |
|---|---|---|
| `--use_potentials` | Flag | Whether to use potentials for steering (default: False) |
| `--recycling_steps INTEGER` | Integer | Number of recycling steps (default: 3) |
| `--diffusion_samples INTEGER` | Integer | Number of diffusion samples (default: 1) |
| `--sampling_steps INTEGER` | Integer | Number of sampling steps (default: 200) |
| `--step_scale FLOAT` | Float | Step size / diffusion temperature scaling (default: 1.5) |
| `--subsample_msa BOOL` | Bool | Whether to subsample the MSA (default: True) |
| `--num_subsampled_msa INTEGER` | Integer | Number of MSA sequences to subsample (default: 1024) |
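For example, a run that draws several diffusion samples and enables potential steering (the values here are arbitrary illustrations, not recommended settings):

```shell
boltz-lab predict job.yaml --diffusion_samples 5 --recycling_steps 6 --use_potentials
```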
### status
Check the status of a prediction job.
```shell
boltz-lab status <prediction-id>
```
#### Arguments

| Argument | Description |
|---|---|
| `PREDICTION_ID` | Unique identifier for the prediction |
#### Options

| Option | Description |
|---|---|
| `--api-key TEXT` | Override API key |
| `--api-url TEXT` | Override API base URL |
#### Output
The command prints a JSON object containing status and timestamps (fields depend on what the API returns).
### list
List prediction jobs with optional filtering.
```shell
# List all predictions (default limit=20, offset=0)
boltz-lab list

# Filter by status (enum values shown below)
boltz-lab list --status RUNNING

# Pagination
boltz-lab list --limit 10 --offset 10
```
#### Options

| Option | Type | Default | Description |
|---|---|---|---|
| `--status TEXT` | Choice | None | Filter by status: PENDING, CREATED, RUNNING, COMPLETED, FAILED, CANCELLED, TIMED_OUT |
| `--limit INTEGER` | Integer | 20 | Maximum number of results |
| `--offset INTEGER` | Integer | 0 | Pagination offset |
| `--api-key TEXT` | String | None | Override API key |
| `--api-url TEXT` | String | None | Override API base URL |
#### Output

The command prints JSON in the shape:

```json
{
  "total": 123,
  "predictions": [
    {
      "prediction_id": "…",
      "prediction_status": "…",
      "created_at": "…"
    }
  ]
}
```
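Because the output is JSON, it can be consumed in scripts; a sketch using `jq` (assumed to be installed — it is not part of this CLI), demonstrated here on an inline sample in the documented shape rather than real API output:

```shell
# In practice you would pipe the CLI directly, e.g.:
#   boltz-lab list --status COMPLETED | jq -r '.predictions[].prediction_id'
# Demonstrated on an illustrative inline payload:
sample='{"total": 1, "predictions": [{"prediction_id": "pred-123", "prediction_status": "COMPLETED", "created_at": "2024-01-01T00:00:00Z"}]}'
printf '%s' "$sample" | jq -r '.predictions[].prediction_id'
# prints: pred-123
```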
### download
Download results for a prediction.
```shell
# Download as tar.gz archive (default)
boltz-lab download <prediction-id>

# Download to a specific directory (default: ".")
boltz-lab download <prediction-id> --output ./results

# Download in JSON format
boltz-lab download <prediction-id> --format json

# Custom filename (without extension)
boltz-lab download <prediction-id> --filename my_prediction_results
```
#### Arguments

| Argument | Description |
|---|---|
| `PREDICTION_ID` | Unique identifier for the prediction |
#### Options

| Option | Type | Default | Description |
|---|---|---|---|
| `--output, -o PATH` | Path | `.` | Output directory |
| `--format, -f TEXT` | Choice | `archive` | Format: `archive` (tar.gz) or `json` |
| `--filename TEXT` | String | Auto-generated | Custom filename (without extension) |
| `--api-key TEXT` | String | None | Override API key |
| `--api-url TEXT` | String | None | Override API base URL |
## Global Options
These options are available for all commands:
| Option | Description |
|---|---|
| `--debug` | Enable debug logging, including HTTP requests |
| `--help` | Show help message and exit |
## Exit Codes
The CLI exits with:

- `0` on success
- `1` on error (all error cases use a general exit code)
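Since every failure maps to exit code `1`, scripts can only branch on success versus failure; a small generic helper sketch (the helper name is ours, not part of the CLI):

```shell
# Generic wrapper: run any command and report a nonzero exit code on stderr.
# With this CLI it might be used as: run_and_report boltz-lab predict job.yaml
run_and_report() {
  "$@"
  status=$?
  if [ "$status" -ne 0 ]; then
    echo "command failed with exit code $status" >&2
  fi
  return "$status"
}
```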
## Troubleshooting
### "No API key found"

Solution: Set your API key:

```shell
boltz-lab config --api-key "sk-..."
# or
export BOLTZ_API_KEY="sk-..."
```
### "Connection refused" / "Server not found"

Solution: Check your endpoint configuration:

```shell
boltz-lab config --show
boltz-lab config --endpoint "https://app.boltz.bio"
```
### "Request timeout"

Solution: Increase the timeout or check your network:

```shell
boltz-lab predict job.yaml --timeout 1800  # 30 minutes
```