The IDE and CLI share `~/.codex/config.toml`. If you completed the CLI setup, the IDE is already configured.

Prerequisites
- Codex VS Code extension — install from the VS Code Marketplace
- Lightweight API key — obtain a key from your gateway administrator
- CLI setup completed — follow the Codex CLI guide first to create `config.toml`
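For reference, a minimal gateway-pointing `config.toml` might look like the sketch below. The provider id `lightweight`, the base URL, and the key names are illustrative assumptions drawn from this guide's other examples, not a verified configuration:

```toml
# Sketch only: the provider id "lightweight" and base_url are assumptions.
model = "gpt-5-codex"
model_provider = "lightweight"

[model_providers.lightweight]
name = "Lightweight gateway"
base_url = "https://api.lightweight.one/v1"
env_key = "LIGHTWEIGHT_API_KEY"   # environment variable holding the API key
wire_api = "responses"            # Codex speaks the Responses wire format
```

If your CLI setup already produced an equivalent file, no IDE-specific changes are needed.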
Known Issue
VS Code Settings
Optional settings available in VS Code (Settings > Extensions > Codex):
- `chat.fontSize` — Controls chat text size
- `chat.editor.fontSize` — Controls code snippet text size
- `chatgpt.openOnStartup` — Auto-focus Codex sidebar on launch
- `chatgpt.runCodexInWindowsSubsystemForLinux` — Enable WSL mode on Windows
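In `settings.json` form, these options might look like the sketch below; the values are illustrative, not recommendations:

```json
{
  "chat.fontSize": 14,
  "chat.editor.fontSize": 13,
  "chatgpt.openOnStartup": true,
  "chatgpt.runCodexInWindowsSubsystemForLinux": false
}
```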
These settings affect the extension UI only; model and gateway routing remain controlled by `config.toml`.
Verify It Works
Alternative: curl verification
If the IDE is not available, verify the gateway accepts Codex-format requests directly with `curl`: a successful streaming response emits the Codex SSE event sequence (`response.created`, `response.output_text.delta`, `response.completed`).
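A minimal check might look like the following sketch. The `/v1/responses` path and the request body shape are assumptions inferred from the gateway's `/v1/models` URL used elsewhere in this guide and from the Codex Responses wire format; adjust to match your gateway:

```shell
# Assumed endpoint: /v1/responses, inferred from the /v1/models URL above.
# -N disables output buffering so SSE events print as they arrive.
curl -N https://api.lightweight.one/v1/responses \
  -H "Authorization: Bearer $LIGHTWEIGHT_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-5-codex", "stream": true, "input": "Reply with OK"}'
```

Expect `response.created` first and `response.completed` last; a 404 or a JSON error body instead points at a routing or model-ID problem.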
Platform Notes
Troubleshooting
- Extension sends wrong model ID: The gateway has broad alias resolution; common variants like `gpt-5-codex`, `gpt5.4`, and `gpt-54` all resolve correctly. If you see 404 errors, check the exact model ID in VS Code's Output panel (Codex channel) and verify it appears in the gateway's model list.
- Model picker doesn't show gateway models (#6963): Try adding a local model catalog. Run `curl -s https://api.lightweight.one/v1/models -H "Authorization: Bearer $LIGHTWEIGHT_API_KEY" > ~/.codex/gateway-models.json`, then add `model_catalog_json = "~/.codex/gateway-models.json"` to `config.toml`.
- Cloud tasks bypass gateway: Tasks delegated to "Codex Cloud" connect directly to OpenAI, not through the gateway. Use local/agent mode for gateway-routed tasks.
curl -s https://api.lightweight.one/v1/models -H "Authorization: Bearer $LIGHTWEIGHT_API_KEY" > ~/.codex/gateway-models.jsonthen addmodel_catalog_json = "~/.codex/gateway-models.json"toconfig.toml. - Cloud tasks bypass gateway: Tasks delegated to “Codex Cloud” connect directly to OpenAI, not through the gateway. Use local/agent mode for gateway-routed tasks.