Prerequisites: Codex CLI installed and a Lightweight API key.
## Setup
### Install Codex CLI
Follow the official OpenAI Codex documentation to install the CLI on your system.
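The exact commands are in the OpenAI docs; recent releases are distributed via npm (package name assumed current, other install methods exist):

```shell
# Install the Codex CLI globally via npm (see the official docs for
# Homebrew and other installation methods).
npm install -g @openai/codex

# Verify the install succeeded
codex --version
```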
### Create config.toml
Create the Codex configuration directory and file, then add your provider settings to `~/.codex/config.toml`.
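The configuration snippet did not survive extraction here. A minimal sketch of a `model_providers` entry, using Codex CLI's config.toml schema together with the `base_url` and `LIGHTWEIGHT_API_KEY` variable named in Troubleshooting below; the provider id `lightweight` and the default model are assumptions:

```toml
# Sketch only -- provider id and default model are assumptions.
model = "gpt-5.4"
model_provider = "lightweight"

[model_providers.lightweight]
name = "Lightweight"
base_url = "https://api.lightweight.one/v1"
env_key = "LIGHTWEIGHT_API_KEY"
```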
### Set your API key

- Bash / macOS / Linux: `export LIGHTWEIGHT_API_KEY="<your-key>"`
- PowerShell / Windows: `$env:LIGHTWEIGHT_API_KEY = "<your-key>"`

Add the export line to `~/.bashrc` or `~/.zshrc` to persist across sessions.

### Alternative Configuration
If you encounter issues with the `model_providers` approach, use a simpler configuration that bypasses the `model_providers` mechanism entirely and routes all requests through the gateway.
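The simpler snippet is missing from the extracted text. One common pattern for OpenAI-compatible gateways is to point the CLI's built-in OpenAI provider at the gateway via environment variables; the variable names below are an assumption, not confirmed by this guide:

```shell
# Assumed alternative: reuse the default OpenAI provider but point it at
# the gateway, so no [model_providers] section is needed in config.toml.
export OPENAI_BASE_URL="https://api.lightweight.one/v1"
export OPENAI_API_KEY="$LIGHTWEIGHT_API_KEY"
```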
## Available Models
| Model | Context | Description | Best For |
|---|---|---|---|
| `gpt-5.4` | 400K | Latest GPT model with reasoning + vision | General coding tasks (default) |
| `gpt-5.4-mini` | 400K | Smaller, faster GPT model | Quick edits, simple tasks |
| `gpt-5.3-codex` | 400K | Specialized for code, 128K output | Large refactors, code generation |
| `o3` | 200K | Reasoning model with chain-of-thought | Complex problem solving |
| `o4-mini` | 200K | Compact reasoning model | Balanced reasoning + speed |
| `claude-opus-4.6` | 1M | Large context model with nuanced reasoning | Long-context, nuanced tasks |
| `claude-sonnet-4.6` | 1M | Fast, efficient model | Quick responses |
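To try a model from the table for a single run, pass it with the CLI's `--model` flag (the prompt below is just an illustration), or set it as the default in config:

```shell
# Pick a model per invocation...
codex --model gpt-5.3-codex "refactor the auth module"

# ...or set a default in ~/.codex/config.toml:
#   model = "gpt-5.3-codex"
```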
## Troubleshooting
- “Model not found” errors: Check that the model name matches one in the table above. The gateway resolves common aliases, but invalid names return a 404.
- Connection refused: Verify `base_url` ends with `/v1` and the URL is `https://api.lightweight.one/v1`.
- 429 rate limit: The gateway applies per-user rate limiting. Wait and retry, or contact your administrator.
- Authentication errors: Verify the `LIGHTWEIGHT_API_KEY` environment variable is set and the key is valid.
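For any of the above, a quick connectivity check against the gateway can narrow things down. This assumes the gateway exposes the standard OpenAI-compatible `/v1/models` endpoint, which this guide does not confirm:

```shell
# Expect HTTP 200 and a JSON model list. A 401 indicates a bad key;
# a connection error indicates a wrong base_url.
curl -sS -H "Authorization: Bearer $LIGHTWEIGHT_API_KEY" \
  https://api.lightweight.one/v1/models
```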