LiteLLM is a lightweight Python library for calling 100+ LLM APIs in the same format. You can point it at Lightweight using CLI flags or environment variables.
CLI Usage
litellm --model openai/claude-sonnet-4.5 \
  --api_base https://api.lightweight.one/v1 \
  --api_key lw_sk_your-key-here
LiteLLM requires the openai/ prefix when pointing at an OpenAI-compatible provider: it tells LiteLLM to send the request in OpenAI format to the custom api_base.
Python Usage
import litellm

response = litellm.completion(
    model="openai/gpt-5.4",
    api_base="https://api.lightweight.one/v1",
    api_key="lw_sk_your-key-here",
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
print(response.choices[0].message.content)
Environment Variables
Alternatively, set environment variables so you don’t need to pass credentials on every call:
Bash / macOS / Linux:
export OPENAI_API_BASE="https://api.lightweight.one/v1"
export OPENAI_API_KEY="lw_sk_your-key-here"

PowerShell / Windows:
$env:OPENAI_API_BASE = "https://api.lightweight.one/v1"
$env:OPENAI_API_KEY = "lw_sk_your-key-here"
Then call LiteLLM without explicit credentials:
response = litellm.completion(
    model="openai/claude-sonnet-4.5",
    messages=[{"role": "user", "content": "Hello!"}]
)
LiteLLM supports streaming, function calling, and all other OpenAI-compatible features through Lightweight. See the full Models catalog.