# Troubleshooting

Common issues and their solutions when using LLM Client.
## Installation Issues

### Rust Installation Fails

**Problem:** Can't install Rust, or `cargo` commands are not found.

**Solutions:**

- Ensure you have a C compiler installed:
  - Linux: `sudo apt install build-essential`
  - macOS: `xcode-select --install`
  - Windows: install Visual Studio Build Tools

- Try the official installer:

  ```bash
  curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
  ```

- Add cargo to your PATH:

  ```bash
  source $HOME/.cargo/env
  ```
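If the install appears to succeed but `cargo` is still not found, the current shell may simply not see it yet. A minimal sketch to check (the `have` helper is illustrative, not part of `lc` or rustup):

```shell
# Sketch: check whether a command is reachable on PATH.
# `have` is an illustrative helper, not part of lc or rustup.
have() { command -v "$1" >/dev/null 2>&1; }

if have cargo; then
  echo "cargo found at: $(command -v cargo)"
else
  echo "cargo not on PATH; try: . \"\$HOME/.cargo/env\""
fi
```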
### Build Errors

**Problem:** `cargo build --release` fails.

**Solutions:**

- Update Rust:

  ```bash
  rustup update
  ```

- Clean and rebuild:

  ```bash
  cargo clean
  cargo build --release
  ```

- Check the error messages for missing dependencies
## Provider Issues

### "No providers configured"

**Problem:** Getting this error when trying to use `lc`.

**Solution:**

```bash
# Add a provider first
lc providers add openai https://api.openai.com/v1

# Verify it's added
lc providers list
```
"Provider not found"
Problem: Provider name not recognized
Solutions:
-
Check exact spelling:
lc providers list
-
Ensure provider is added:
lc providers add <name> <url>
"Invalid endpoint URL"
Problem: Provider endpoint rejected
Solutions:
- Include protocol:
https://
orhttp://
- Don't include trailing paths unless needed
- For custom endpoints, use
-m
and-c
flags
## API Key Issues

### "No API key found"

**Problem:** The provider requires an API key.

**Solution:**

```bash
lc keys add <provider>
# Enter key when prompted
```

### "Authentication failed"

**Problem:** API key rejected.

**Solutions:**

- Verify the key is correct:

  ```bash
  lc keys remove <provider>
  lc keys add <provider>
  ```

- Check if the provider needs custom headers:

  ```bash
  # For Claude
  lc providers headers claude add x-api-key <key>
  lc providers headers claude add anthropic-version 2023-06-01
  ```

- Ensure you have API credits/quota remaining
## Model Issues

### "Model not found"

**Problem:** The specified model doesn't exist.

**Solutions:**

- List available models:

  ```bash
  lc providers models <provider>
  # or
  lc models
  ```

- Use the exact model name from the list
- For the HF router, use the `model:provider` format

### "Context length exceeded"

**Problem:** Input is too long for the model.

**Solutions:**

- Use a model with a larger context window:

  ```bash
  lc models --ctx 128k
  ```

- Reduce the input length
- Split the input into multiple prompts
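Splitting can be done mechanically before sending each piece. A rough sketch using POSIX `split` (file names and the 100-byte chunk size are illustrative; real chunking should respect token boundaries, not bytes):

```shell
# Illustrative only: break a long prompt file into fixed-size pieces.
# The 100-byte chunk size is far below any real context limit.
printf 'x%.0s' $(seq 1 250) > prompt.txt   # 250-byte sample input
split -b 100 prompt.txt chunk_             # produces chunk_aa, chunk_ab, chunk_ac
ls chunk_*
```

Each chunk could then be sent as its own prompt, e.g. `lc -m <model> "$(cat chunk_aa)"`.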
## Vector Database Issues

### "Database not found"

**Problem:** The vector database doesn't exist.

**Solutions:**

- Check available databases:

  ```bash
  lc vectors list
  ```

- Create the database by adding content:

  ```bash
  lc embed -m text-embedding-3-small -v <name> "content"
  ```

### "Dimension mismatch"

**Problem:** Embedding dimensions don't match the database.

**Solutions:**

- Check which model the database uses:

  ```bash
  lc vectors info <database>
  ```

- Use the same embedding model for all operations
- Delete and recreate the database with a consistent model:

  ```bash
  lc vectors delete <database>
  lc embed -m <model> -v <database> "content"
  ```

### "No similar content found"

**Problem:** Similarity search returns nothing.

**Solutions:**

- Verify the database has content:

  ```bash
  lc vectors info <database>
  ```

- Try different search terms
- Check whether the stored content is relevant to the query
## Chat Issues

### "Session not found"

**Problem:** Can't continue a chat session.

**Solutions:**

- List recent sessions:

  ```bash
  lc logs recent
  ```

- Use the correct session ID:

  ```bash
  lc chat -m <model> --cid <session-id>
  ```

### "Chat history lost"

**Problem:** Previous messages are not remembered.

**Solutions:**

- Ensure you're in chat mode:

  ```bash
  lc chat -m <model>
  ```

- Don't use `/clear` unless you want to reset the session
- Check the logs database:

  ```bash
  lc logs stats
  ```
## Performance Issues

### Slow Response Times

**Solutions:**

- Use a faster model:

  ```bash
  lc -m gpt-3.5-turbo "prompt"
  ```

- Check your network connection
- Try a different provider

### High Token Usage

**Solutions:**

- Use concise prompts
- Set up system prompts for consistency
- Use smaller models when appropriate
## Sync Issues

### "Sync failed"

**Problem:** Can't sync configuration.

**Solutions:**

- Check the provider configuration:

  ```bash
  lc sync configure s3 show
  ```

- Verify credentials:

  ```bash
  lc sync configure s3 setup
  ```

- Check network/firewall settings

### "Decryption failed"

**Problem:** Can't decrypt synced files.

**Solution:** Use the same password that was used for encryption.
## Proxy Issues

### Proxy `-h` Conflict

**Problem:** Using the `-h` flag with the proxy command doesn't work as expected.

**Cause:** The `-h` flag is ambiguous between the `--host` and `--help` options.

**Solutions:**

- Use full flag names to avoid ambiguity:

  ```bash
  # Instead of: lc proxy -h 0.0.0.0
  lc proxy --host 0.0.0.0
  ```

- Use `--help` instead of `-h` for help:

  ```bash
  lc proxy --help
  ```

**Reference:** See the Proxy Command Documentation for all available flags.
### Web Chat Proxy Port Issues

**Problem:** "Port already in use" or "Address already in use" errors.

**Solutions:**

- Check what's using the port:

  ```bash
  netstat -tlnp | grep :8080
  # or on macOS
  lsof -i :8080
  ```

- Use a different port:

  ```bash
  lc web-chat-proxy start anthropic --port 3000
  ```

- Stop existing proxy servers:

  ```bash
  lc web-chat-proxy list
  lc web-chat-proxy stop anthropic
  ```

- Kill the process using the port (if needed):

  ```bash
  # Replace PID with the actual process ID from netstat/lsof
  kill -9 <PID>
  ```
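As an alternative to `netstat`/`lsof`, a quick scripted check is possible with bash's `/dev/tcp` redirection (a bash-only feature; `port_in_use` is an illustrative helper, not an `lc` command):

```shell
# Sketch, assumes bash: succeeds if something accepts connections on the port.
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if port_in_use 8080; then
  echo "8080 is busy; start the proxy with --port <other>"
else
  echo "8080 looks free"
fi
```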
**Reference:** See the Web Chat Proxy Documentation for more details.
## Sync Provider Authentication Errors

**Problem:** "Authentication failed" when syncing with cloud providers.

**Solutions by provider:**

**AWS:**

- Check AWS credentials:

  ```bash
  cat ~/.aws/credentials
  ```

- Set environment variables:

  ```bash
  export AWS_ACCESS_KEY_ID=your-key
  export AWS_SECRET_ACCESS_KEY=your-secret
  ```

- Verify IAM permissions for S3 bucket access
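The environment-variable step above can be sanity-checked with a tiny helper that reports whether a variable is set in the current shell (`is_set` is illustrative, not an `lc` feature):

```shell
# Sketch: report whether the named environment variable is non-empty.
is_set() { eval "[ -n \"\${$1:-}\" ]"; }

for v in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY; do
  if is_set "$v"; then echo "$v is set"; else echo "$v is NOT set"; fi
done
```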
**Azure:**

- Log in to the Azure CLI:

  ```bash
  az login
  ```

- Check the current account:

  ```bash
  az account show
  ```

**GCP:**

- Authenticate with a service account:

  ```bash
  gcloud auth activate-service-account --key-file=key.json
  ```

- Or log in interactively:

  ```bash
  gcloud auth login
  ```

**General steps:**

- Reconfigure the provider:

  ```bash
  lc sync configure <provider>
  ```

- Test connectivity:

  ```bash
  lc sync providers
  ```

- Check network/firewall settings

**Reference:** See the Sync Command Documentation for provider-specific setup.
## Debug Mode

For detailed error information, enable debug logging:

```bash
export RUST_LOG=debug
lc -m gpt-4 "test prompt"
```
## Getting Help

If you're still having issues:

- Check the FAQ
- Search GitHub Issues
- Create a new issue with:
  - The error message
  - Steps to reproduce
  - System information
  - Debug logs
## Common Error Messages

| Error | Meaning | Solution |
|---|---|---|
| "No providers configured" | No providers added | Add a provider |
| "API request failed" | Network or API error | Check connection and API key |
| "Model not found" | Invalid model name | Use exact model name |
| "Rate limit exceeded" | Too many requests | Wait and retry |
| "Invalid API key" | Wrong or expired key | Update API key |
| "Context length exceeded" | Input too long | Use shorter input or larger model |