Lightweight API is an AI gateway that provides unified access to multiple AI providers through a single endpoint. By configuring the Codex CLI to route through the gateway, you get one API key that works across all supported models — no need to manage separate provider credentials.
Prerequisites: Codex CLI installed and a Lightweight API key.

Setup

1. Install Codex CLI

Follow the official OpenAI Codex documentation to install the CLI on your system.
2. Create config.toml

Create the Codex configuration directory and file:
mkdir -p ~/.codex
Add the following to ~/.codex/config.toml:
model = "gpt-5.4"
model_provider = "lightweight"

[model_providers.lightweight]
name = "Lightweight API"
base_url = "https://api.lightweight.one/v1"
env_key = "LIGHTWEIGHT_API_KEY"
wire_api = "responses"
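The directory and file above can also be written in one shell step with a heredoc; `CODEX_HOME` below is just a convenience variable for this snippet (Codex itself reads `~/.codex/config.toml`):

```shell
# Create the config directory and write the full config in one step.
# CODEX_HOME is only a local convenience variable, not a Codex setting.
CODEX_HOME="${CODEX_HOME:-$HOME/.codex}"
mkdir -p "$CODEX_HOME"
cat > "$CODEX_HOME/config.toml" <<'EOF'
model = "gpt-5.4"
model_provider = "lightweight"

[model_providers.lightweight]
name = "Lightweight API"
base_url = "https://api.lightweight.one/v1"
env_key = "LIGHTWEIGHT_API_KEY"
wire_api = "responses"
EOF
```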
3. Set your API key

export LIGHTWEIGHT_API_KEY="your-api-key-here"
Add this line to ~/.bashrc or ~/.zshrc to persist across sessions.
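If you prefer to script that step, the sketch below appends the export to a bash profile and skips the append when the line is already present (assumes bash; substitute ~/.zshrc for zsh):

```shell
# Append the export to the shell profile, avoiding duplicate entries.
# PROFILE is a local convenience variable; it defaults to ~/.bashrc.
PROFILE="${PROFILE:-$HOME/.bashrc}"
KEY_LINE='export LIGHTWEIGHT_API_KEY="your-api-key-here"'
grep -qxF "$KEY_LINE" "$PROFILE" 2>/dev/null || echo "$KEY_LINE" >> "$PROFILE"
```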
4. Verify connection

Run the following command to confirm the gateway is reachable:
curl -s https://api.lightweight.one/v1/models \
  -H "Authorization: Bearer $LIGHTWEIGHT_API_KEY" | head -c 200
You should see a JSON response containing model objects. Once verified, run codex in a terminal to start using it.
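To turn the raw JSON into a readable list, a small helper can pull out just the model IDs. This is a sketch that assumes python3 is installed and that the endpoint returns the standard OpenAI-style `{"data": [{"id": ...}]}` shape:

```shell
# list_models: read a /v1/models JSON body on stdin, print one model ID per line.
list_models() {
  python3 -c 'import json, sys
for m in json.load(sys.stdin)["data"]:
    print(m["id"])'
}

# Usage against the gateway:
#   curl -s https://api.lightweight.one/v1/models \
#     -H "Authorization: Bearer $LIGHTWEIGHT_API_KEY" | list_models
```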

Alternative Configuration

If you encounter issues with the model_providers approach, use this simpler configuration instead:
openai_base_url = "https://api.lightweight.one/v1"
model = "gpt-5.4"
This bypasses the model_providers mechanism entirely and routes all requests through the gateway.
Known issue: The VS Code extension and some CLI versions may not respect custom model_providers (#4558). If you experience unexpected behavior, switch to the openai_base_url approach above.

Available Models

| Model | Context | Description | Best For |
|---|---|---|---|
| gpt-5.4 | 400K | Latest GPT model with reasoning + vision | General coding tasks (default) |
| gpt-5.4-mini | 400K | Smaller, faster GPT model | Quick edits, simple tasks |
| gpt-5.3-codex | 400K | Specialized for code, 128K output | Large refactors, code generation |
| o3 | 200K | Reasoning model with chain-of-thought | Complex problem solving |
| o4-mini | 200K | Compact reasoning model | Balanced reasoning + speed |
| claude-opus-4.6 | 1M | Large context model with nuanced reasoning | Long-context, nuanced tasks |
| claude-sonnet-4.6 | 1M | Fast, efficient model | Quick responses |
Model aliases like codex, gpt-codex, and gpt5.4 also work. The gateway resolves common aliases automatically.
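To switch Codex to one of these models, change only the `model` line in `~/.codex/config.toml`; for example, to use the code-specialized model:

```toml
# Any model name (or alias) from the table above works here.
model = "gpt-5.3-codex"
```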

Troubleshooting

  • “Model not found” errors: Check that the model name matches one in the table above; the gateway resolves common aliases, but unrecognized names return a 404.
  • Connection refused: Verify base_url ends with /v1 and the URL is https://api.lightweight.one/v1.
  • 429 rate limit: The gateway applies per-user rate limiting. Wait and retry, or contact your administrator.
  • Authentication errors: Verify the LIGHTWEIGHT_API_KEY environment variable is set and the key is valid.
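Two of these checks (the base URL and the environment variable) can be scripted locally; `check_setup` below is an invented helper name for illustration, and it makes no network calls:

```shell
# check_setup: verify the API key variable is set and the base URL looks right.
# Usage: check_setup https://api.lightweight.one/v1
check_setup() {
  base_url="$1"
  if [ -z "$LIGHTWEIGHT_API_KEY" ]; then
    echo "LIGHTWEIGHT_API_KEY is not set"
    return 1
  fi
  case "$base_url" in
    https://*/v1) echo "base_url looks OK" ;;
    *) echo "base_url must use https and end with /v1"; return 1 ;;
  esac
}
```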

Preview Environment

For testing, use the preview gateway by replacing the base_url value in ~/.codex/config.toml:
base_url = "https://preview.api.lightweight.one/v1"