Provider Commands
Manage LLM providers and their configurations. Providers are API endpoints that serve language models.
Command: lc providers (alias: lc p)
Subcommands
Add Provider
Add a new provider to your configuration.
lc providers add <name> <endpoint> [OPTIONS]
lc p a <name> <endpoint> [OPTIONS]
Options:
-m, --models-path <PATH> - Custom models endpoint (default: /models)
-c, --chat-path <PATH> - Custom chat endpoint (default: /chat/completions)
Examples:
# Standard OpenAI-compatible provider
lc providers add openai https://api.openai.com/v1
# Provider with custom endpoints
lc providers add github https://models.github.ai \
--models-path /catalog/models \
--chat-path /inference/chat/completions
# Short form
lc p a together https://api.together.xyz/v1
List Providers
Show all configured providers.
lc providers list
lc p l
Output example:
Configured providers:
• openai (https://api.openai.com/v1) [key set]
• claude (https://api.anthropic.com/v1) [key set]
• together (https://api.together.xyz/v1) [no key]
List Models
Show available models from a specific provider.
lc providers models <provider>
lc p m <provider>
Example:
lc providers models openai
# Output:
# Available models for openai:
# • gpt-4-turbo-preview
# • gpt-4
# • gpt-3.5-turbo
# • text-embedding-3-small
# • text-embedding-3-large
Update Provider
Update a provider's endpoint URL.
lc providers update <name> <endpoint>
lc p u <name> <endpoint>
Example:
lc providers update openai https://api.openai.com/v1
Manage Headers
Add, list, or delete custom headers for providers.
Add Header
lc providers headers <provider> add <header> <value>
lc p h <provider> a <header> <value>
List Headers
lc providers headers <provider> list
lc p h <provider> l
Delete Header
lc providers headers <provider> delete <header>
lc p h <provider> d <header>
Set Token URL
Configure a custom token URL for providers that use a separate endpoint for token exchange.
lc providers token-url <provider> <url>
lc p t <provider> <url>
Remove Provider
Remove a provider from your configuration.
lc providers remove <name>
lc p r <name>
Example:
lc providers remove old-provider
Example: Token URL Setup
# Set a custom token URL for a provider
lc providers token-url custom-provider https://api.custom.com/auth/token
Custom Headers
Some providers require additional headers beyond the standard Authorization header.
Example: Anthropic Claude Setup
# Add Claude provider
lc providers add claude https://api.anthropic.com/v1 -c /messages
# Add required headers
lc providers headers claude add x-api-key sk-ant-api03-...
lc providers headers claude add anthropic-version 2023-06-01
# Verify headers
lc providers headers claude list
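Conceptually, custom headers are merged into each request alongside (or instead of) the standard Authorization header. A minimal sketch of that merge, illustrative only and not lc's actual implementation:

```python
def build_headers(api_key, custom_headers=None):
    """Merge custom provider headers with the default Authorization header.

    Illustrative sketch; the merge behavior shown here is an assumption,
    not lc internals.
    """
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    # Custom headers take precedence; a provider that uses x-api-key
    # supplies its key this way instead of via Authorization.
    headers.update(custom_headers or {})
    return headers

claude_headers = build_headers(
    api_key=None,
    custom_headers={
        "x-api-key": "sk-ant-api03-...",
        "anthropic-version": "2023-06-01",
    },
)
```

With no standard key set, the request carries only the custom headers, which is exactly the Claude setup above.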
Common Provider Configurations
OpenAI
lc providers add openai https://api.openai.com/v1
lc keys add openai
Anthropic Claude
lc providers add claude https://api.anthropic.com/v1 -c /messages
lc providers headers claude add x-api-key <your-key>
lc providers headers claude add anthropic-version 2023-06-01
OpenRouter
lc providers add openrouter https://openrouter.ai/api/v1
lc keys add openrouter
Together AI
lc providers add together https://api.together.xyz/v1
lc keys add together
GitHub Models
lc providers add github https://models.github.ai \
-m /catalog/models \
-c /inference/chat/completions
lc keys add github
Local Ollama
lc providers add ollama http://localhost:11434/v1
# No API key needed for local providers
Hugging Face Router
lc providers add hf https://router.huggingface.co/v1
lc keys add hf
Google Vertex AI (Service Account JWT)
Vertex AI on Google Cloud uses OAuth 2.0 with a Service Account (SA). lc supports first-class auth using the JWT Bearer flow, with automatic token mint/refresh and path templating for project/location/model.
Quickstart
# 1) Add Vertex AI provider (endpoint auto-detects google_sa_jwt)
lc providers add vertex_google https://aiplatform.googleapis.com \
-c /v1/projects/{project}/locations/{location}/publishers/google/models/{model}:generateContent
# 2) Provide project/location via provider vars
lc providers vars vertex_google set project <your-project-id>
lc providers vars vertex_google set location <your-location> # e.g., us-central1 or global
# 3) Add Service Account JSON (paste as base64; stored encrypted)
lc keys add vertex_google
# When prompted, paste the base64 version: cat sa.json | base64
# 4) (Optional) Override token URL (defaults to https://oauth2.googleapis.com/token)
lc providers token-url vertex_google https://oauth2.googleapis.com/token
# 5) Use a Vertex model
lc -p vertex_google -m gemini-2.5-pro "Hello from Vertex"
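Before pasting in step 3, you can check locally that your base64 blob decodes to a service-account JSON with the required fields. A small stdlib-only sketch (the required field names follow the Notes below; the helper itself is hypothetical):

```python
import base64
import json

def validate_sa_b64(b64_text):
    """Decode a base64 service-account blob and check required fields.

    Sketch only: mirrors the minimal fields this doc lists
    (type, client_email, private_key).
    """
    data = json.loads(base64.b64decode(b64_text))
    for field in ("type", "client_email", "private_key"):
        if field not in data:
            raise ValueError(f"missing field: {field}")
    if data["type"] != "service_account":
        raise ValueError("type must be 'service_account'")
    return data

# Simulate `cat sa.json | base64` on a minimal service-account file
sa = {
    "type": "service_account",
    "client_email": "bot@my-proj.iam.gserviceaccount.com",
    "private_key": "-----BEGIN PRIVATE KEY-----\n...",
}
blob = base64.b64encode(json.dumps(sa).encode()).decode()
info = validate_sa_b64(blob)
```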
Notes
- Service Account JSON must minimally include:
- type=service_account
- client_email
- private_key
- lc mints an RS256-signed JWT with claims:
- iss=sub=client_email, aud=token_url, scope=https://www.googleapis.com/auth/cloud-platform
- iat, exp (~1 hour)
- lc exchanges the assertion at the token URL for an access_token, caches it with a safety skew, and automatically refreshes when needed.
- The chat path is templated:
- {project} and {location} come from provider vars
- {model} comes from the runtime -m flag
- For Gemini API (non-Vertex) providers using x-goog-api-key, continue to use standard API key flows. Vertex AI flows use Bearer tokens obtained via the SA JWT exchange.
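The mint-and-cache flow described in the notes can be sketched as follows. This is an illustration of the claim set and the safety-skew refresh logic, not lc's code; real code signs the claims with the SA's RSA key (RS256) before exchanging the assertion at the token URL:

```python
import time

SCOPE = "https://www.googleapis.com/auth/cloud-platform"
TOKEN_URL = "https://oauth2.googleapis.com/token"

def jwt_claims(client_email, token_url=TOKEN_URL, lifetime=3600):
    """Build the claim set for the SA JWT Bearer flow (per the notes above)."""
    now = int(time.time())
    return {
        "iss": client_email,       # issuer and subject are both the SA email
        "sub": client_email,
        "aud": token_url,          # audience is the token endpoint
        "scope": SCOPE,
        "iat": now,
        "exp": now + lifetime,     # ~1 hour lifetime
    }

class TokenCache:
    """Cache an access token; refresh before expiry with a safety skew."""
    def __init__(self, fetch, skew=60):
        self.fetch, self.skew = fetch, skew
        self.token, self.expires_at = None, 0.0
    def get(self):
        if self.token is None or time.time() >= self.expires_at - self.skew:
            self.token, ttl = self.fetch()   # fetch returns (token, ttl_seconds)
            self.expires_at = time.time() + ttl
        return self.token
```

The skew ensures a token near expiry is refreshed proactively rather than failing mid-request.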
Troubleshooting
- "Missing provider vars"
- Set vars: lc providers vars vertex_google set project
<id>
; lc providers vars vertex_google set location<loc>
- List vars: lc providers vars vertex_google list
- Set vars: lc providers vars vertex_google set project
- "Invalid service account JSON" or "Invalid base64 format"
- Re-run: lc keys add vertex_google and paste the base64 version: cat sa.json | base64
- "Authentication failed"
- Ensure the Service Account has Vertex AI permissions (e.g., Vertex AI User) and the
project/location
are correct - If using a VPC-SC or restricted org policy, confirm token audience and scopes are permitted
- Ensure the Service Account has Vertex AI permissions (e.g., Vertex AI User) and the
Provider Features
Custom Endpoints
Some providers use non-standard paths for their endpoints:
- Models Path: Where to fetch available models (default: /models)
- Chat Path: Where to send chat requests (default: /chat/completions)
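Putting the pieces together, the request URL is the provider endpoint joined with the (possibly custom) path, and for providers like Vertex AI the path may also carry {project}/{location}/{model} placeholders. An illustrative sketch of that composition (not lc internals):

```python
def build_url(endpoint, path="/chat/completions", **vars):
    """Join a provider endpoint with a chat path and fill {placeholders}.

    Illustrative only; placeholder names mirror the provider-vars docs.
    """
    url = endpoint.rstrip("/") + path
    for key, value in vars.items():
        url = url.replace("{" + key + "}", value)
    return url

# Standard OpenAI-compatible provider (defaults)
openai_url = build_url("https://api.openai.com/v1")

# Vertex AI with templated project/location/model
vertex_url = build_url(
    "https://aiplatform.googleapis.com",
    "/v1/projects/{project}/locations/{location}"
    "/publishers/google/models/{model}:generateContent",
    project="my-proj", location="us-central1", model="gemini-2.5-pro",
)
```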
Response Format Support
LLM Client automatically detects and handles multiple response formats:
- OpenAI Format (most providers)
- Llama API Format (Meta)
- Cohere Format
- Anthropic Format (Claude)
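Detection can be as simple as inspecting which top-level keys a response carries. A rough sketch — the key names below are typical of each API family, but treat both the keys and the function as assumptions, not lc's exact logic:

```python
def detect_format(resp: dict) -> str:
    """Guess a chat response's format from its top-level shape.

    Key names are typical of each family; this is an illustration,
    not lc's actual detection code.
    """
    if "choices" in resp:                                  # OpenAI-compatible
        return "openai"
    if resp.get("type") == "message" and "content" in resp:  # Anthropic
        return "anthropic"
    if "completion_message" in resp:                       # Llama API (Meta)
        return "llama"
    if "text" in resp or "generations" in resp:            # Cohere
        return "cohere"
    return "unknown"
```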
Special Provider: Hugging Face Router
The HF router expands models with their available providers:
lc providers models hf
# Output shows:
# • Qwen/Qwen3-32B:groq
# • Qwen/Qwen3-32B:hyperbolic
# • meta-llama/Llama-3.3-70B-Instruct:together
Use the full model:provider format when prompting:
lc --provider hf -m "Qwen/Qwen3-32B:groq" "Hello"
Troubleshooting
"Provider not found"
- Check spelling: lc providers list
- Ensure the provider is added: lc providers add <name> <url>
"Invalid endpoint"
- Verify the URL includes a protocol: https:// or http://
- Check if custom paths are needed: -m and -c flags
"Authentication failed"
- Verify API key: lc keys add <provider>
- Check if custom headers are needed
- Some providers use x-api-key instead of Authorization
See Also
- API Key Management - Secure key storage
- Models Command - Advanced model filtering
- Advanced Features - Vector database and more