# Subscription Login

OpenAI is the first subscription provider we support. More subscription providers will be added in future releases.
A ready-to-run example is available in the Ready-to-run Example section below.

Use your existing ChatGPT Plus or Pro subscription to access OpenAI's Codex models without consuming API credits. The SDK handles OAuth authentication, credential caching, and automatic token refresh.
## How It Works

### Call `subscription_login()`
The `LLM.subscription_login()` class method handles the entire authentication flow and caches the resulting credentials in `~/.openhands/auth/` for future use.
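As a minimal sketch of the login step (this page does not show the method's exact signature, so the `model=` keyword is an assumption):

```python
def subscription_llm():
    """Authenticate via the ChatGPT subscription OAuth flow.

    On first run this opens a browser for OAuth; afterwards the SDK
    reuses the credentials cached in ~/.openhands/auth/.
    Requires the openhands SDK to be installed.
    """
    from openhands.sdk import LLM

    # `model=` is an assumed keyword; "gpt-5.2-codex" is the default model
    # from the table below. Depending on SDK version the name may need a
    # provider prefix (LiteLLM convention, e.g. "openai/gpt-5.2-codex").
    return LLM.subscription_login(model="gpt-5.2-codex")
```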
### Use the LLM

Once authenticated, use the LLM with your agent as usual. The SDK automatically refreshes tokens when they expire.
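For instance, wiring the authenticated LLM into an agent might look like the sketch below; the `Agent`/`Conversation` usage is illustrative, so check the SDK reference for exact parameters:

```python
def run_example() -> None:
    # Requires the openhands SDK; the wiring below follows the SDK's
    # usual pattern but is illustrative, not a definitive recipe.
    from openhands.sdk import LLM, Agent, Conversation

    llm = LLM.subscription_login(model="gpt-5.2-codex")  # cached after first login
    agent = Agent(llm=llm, tools=[])
    conversation = Conversation(agent=agent)
    conversation.send_message("Summarize the README in this repository.")
    conversation.run()  # tokens are refreshed automatically if they expire
```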
## Supported Models

The following models are available via ChatGPT subscription:

| Model | Description |
|---|---|
| `gpt-5.2-codex` | Latest Codex model (default) |
| `gpt-5.2` | GPT-5.2 base model |
| `gpt-5.1-codex-max` | High-capacity Codex model |
| `gpt-5.1-codex-mini` | Lightweight Codex model |
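For convenience in client code, the table above can be mirrored as a small lookup (a sketch for your own code, not an SDK API):

```python
# Models available via ChatGPT subscription (from the table above).
SUPPORTED_MODELS = {
    "gpt-5.2-codex": "Latest Codex model (default)",
    "gpt-5.2": "GPT-5.2 base model",
    "gpt-5.1-codex-max": "High-capacity Codex model",
    "gpt-5.1-codex-mini": "Lightweight Codex model",
}

DEFAULT_MODEL = "gpt-5.2-codex"


def is_supported(model: str) -> bool:
    """Return True if `model` is one of the subscription models above."""
    return model in SUPPORTED_MODELS
```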
## Configuration Options

### Force Fresh Login
If your cached credentials become stale, or you want to switch accounts, you can force a fresh login.
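One documented way to force a fresh login is to clear the credential cache described under Credential Storage below; a small helper sketch (the SDK may also expose a dedicated flag, which this page does not show):

```python
from pathlib import Path

# Cached-credential location documented under Credential Storage.
AUTH_DIR = Path.home() / ".openhands" / "auth"


def clear_cached_credentials() -> None:
    """Delete cached credential files so the next subscription_login()
    call starts a fresh OAuth flow (e.g., to switch accounts)."""
    if AUTH_DIR.is_dir():
        for path in AUTH_DIR.iterdir():
            if path.is_file():
                path.unlink()
```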
### Disable Browser Auto-Open

For headless environments, or when you prefer to open the URL manually, you can disable the automatic browser launch.
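This page does not show the exact option, so the sketch below assumes a hypothetical `open_browser=False` keyword that would make the SDK print the OAuth URL instead of launching a browser:

```python
def headless_login():
    from openhands.sdk import LLM  # requires the openhands SDK

    return LLM.subscription_login(
        model="gpt-5.2-codex",
        open_browser=False,  # hypothetical flag: print the URL, don't launch a browser
    )
```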
### Check Subscription Mode

You can verify that the LLM is using subscription-based authentication.
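The SDK's exact introspection API is not shown on this page; a defensive check might probe a hypothetical boolean attribute such as `uses_subscription`:

```python
def is_subscription_llm(llm) -> bool:
    """Return True if `llm` reports subscription-based authentication.

    `uses_subscription` is a hypothetical attribute name; consult the
    SDK reference for the real flag.
    """
    return bool(getattr(llm, "uses_subscription", False))
```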
## Credential Storage

Credentials are stored securely in `~/.openhands/auth/`. To clear cached credentials and force a fresh login, delete the files in this directory.
## Ready-to-run Example
This example is available on GitHub: `examples/01_standalone_sdk/35_subscription_login.py`
The model name should follow the LiteLLM convention: `provider/model_name` (e.g., `anthropic/claude-sonnet-4-5-20250929`, `openai/gpt-4o`).
The `LLM_API_KEY` should be the API key for your chosen provider.

## Next Steps
- LLM Registry - Manage multiple LLM configurations
- LLM Streaming - Stream responses token-by-token
- LLM Reasoning - Access model reasoning traces

