# Commands Overview

LLM Client provides a comprehensive set of commands organized by functionality. Most commands have short aliases for faster usage.
## Command Structure

```bash
lc [COMMAND] [SUBCOMMAND] [OPTIONS] [ARGS]
```
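As an illustration, here is how two invocations from later in this page map onto that structure (the annotations are ours; the commands and flags themselves are the ones documented below):

```bash
# COMMAND     SUBCOMMAND   ARGS
# providers   add          openai https://api.openai.com/v1
lc providers add openai https://api.openai.com/v1

# A direct prompt has no COMMAND or SUBCOMMAND, only OPTIONS and ARGS:
# OPTIONS: -m openai:gpt-4    ARGS: "Write a Python function"
lc -m openai:gpt-4 "Write a Python function"
```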
## Command Categories

### Core Commands

| Command | Alias | Description |
|---|---|---|
| `lc "prompt"` | - | Send a direct prompt using defaults |
| `lc chat` | `lc c` | Start interactive chat session |
| `lc providers` | `lc p` | Manage LLM providers |
| `lc models` | `lc m` | List and filter available models |
| `lc keys` | `lc k` | Manage API keys |
| `lc config` | `lc co` | Configure defaults |
| `lc logs` | `lc l` | View and manage chat history |
| `lc completions` | - | Generate shell completion scripts |
### Audio Commands

| Command | Alias | Description |
|---|---|---|
| `lc transcribe` | `lc tr` | Convert audio to text |
| `lc tts` | - | Convert text to speech |
### Advanced Commands

| Command | Alias | Description |
|---|---|---|
| `lc embed` | `lc e` | Generate and store embeddings |
| `lc vectors` | `lc v` | Manage vector databases |
| `lc similar` | `lc s` | Search for similar content |
| `lc search` | `lc se` | Web search integration |
| `lc sync` | `lc sy` | Sync configuration to cloud |
| `lc mcp` | - | Manage MCP servers |
| `lc alias` | `lc a` | Manage model aliases |
| `lc templates` | `lc t` | Manage templates |
| `lc proxy` | `lc pr` | Run proxy server |
| `lc web-chat-proxy` | `lc w` | Run web chat proxy |
## Direct Prompts

The simplest way to use `lc`:

```bash
# Using defaults
lc "What is the capital of France?"

# Specify model
lc -m openai:gpt-4 "Write a Python function"

# Specify both provider and model
lc --provider openrouter -m "claude-3.5-sonnet" "Explain quantum computing"

# With vector database context (RAG)
lc -v knowledge "What do you know about machine learning?"

# With MCP tools
lc -t fetch "What's the latest news about AI?"

# With web search
lc --use-search brave "What are the latest AI developments?"

# With audio attachments
lc "What is being discussed?" --audio meeting.mp3

# Transcribe audio
lc transcribe recording.wav

# Text to speech
lc tts "Hello world" --output greeting.mp3
```
## Global Options

These options work with most commands:

- `-p, --provider <PROVIDER>` - Specify provider
- `-m, --model <MODEL>` - Specify model
- `-s, --system <SYSTEM_PROMPT>` - Set system prompt
- `--max-tokens <MAX_TOKENS>` - Maximum number of tokens
- `--temperature <TEMPERATURE>` - Adjust response randomness
- `-a, --attach <ATTACHMENTS>` - Attach files
- `-u, --audio <AUDIO_FILES>` - Attach audio files for transcription
- `-t, --tools <TOOLS>` - Include MCP tools (comma-separated)
- `-v, --vectordb <VECTORDB>` - Use vector database for context
- `-d, --debug` - Enable debug mode
- `-c, --continue` - Continue previous session
- `--cid <CHAT_ID>` - Specify chat ID
- `--use-search <SEARCH>` - Use search results as context
- `-h, --help` - Show help information
- `-V, --version` - Show version
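These options can be combined freely in a single invocation. The following is a hypothetical example assembled purely from the flags listed above, not a prescribed usage:

```bash
# Continue the previous session with a specific model, a system prompt,
# a capped response length, and debug output enabled
lc -c -m openai:gpt-4 \
   -s "You are a concise assistant." \
   --max-tokens 200 \
   --temperature 0.2 \
   -d "Summarize our discussion so far"
```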
## Command Aliases

LLM Client uses intuitive aliases to speed up your workflow:

### Single Letter Aliases

- `c` → `chat`
- `p` → `providers`
- `m` → `models`
- `k` → `keys`
- `l` → `logs`
- `e` → `embed`
- `v` → `vectors`
- `s` → `similar`

### Two Letter Aliases

- `co` → `config`
- `sy` → `sync`
- `se` → `search`
- `pr` → `proxy`

### Subcommand Aliases

- `a` → `add`
- `r` → `remove`, `recent`, or `refresh`
- `l` → `list`
- `s` → `show`, `setup`, or `stats`
- `u` → `update`
- `d` → `delete` or `dump`
- `i` → `info`
## Examples

### Quick Provider Setup

```bash
# Long form
lc providers add openai https://api.openai.com/v1
lc keys add openai
lc providers models openai

# Short form (same result)
lc p a openai https://api.openai.com/v1
lc k a openai
lc p m openai
```
### Chat Workflow

```bash
# Start chat
lc c -m gpt-4

# View recent chats
lc l r

# Get last answer
lc l r a

# Extract code from last answer
lc l r a c
```
### Vector Database Workflow

```bash
# Create embeddings
lc e -m text-embedding-3-small -v docs "Important information"

# Search similar content
lc s -v docs "related query"

# Use in chat
lc c -v docs -m gpt-4
```
### MCP Tools Workflow

```bash
# Add MCP server
lc mcp add fetch "uvx mcp-server-fetch" --type stdio

# List functions
lc mcp functions fetch

# Use in prompt
lc -t fetch "Get current weather in Tokyo"

# Use in chat
lc c -m gpt-4 -t fetch
```
### Search Integration Workflow

```bash
# Add search providers (auto-detected from URL)
lc search provider add brave https://api.search.brave.com/res/v1/web/search
lc search provider add ddg https://api.duckduckgo.com/  # Free option!
lc search provider add jina https://s.jina.ai/

# Set API keys (DuckDuckGo doesn't need one)
lc search provider set brave X-Subscription-Token YOUR_API_KEY
lc search provider set jina Authorization YOUR_API_KEY

# Direct search
lc search query brave "latest AI news" -f json
lc search query ddg "free search query" -f json

# Advanced Jina features
lc search provider set jina X-Engine direct  # Enable full content reading
lc search query jina "comprehensive topic" -f json

# Use in prompts
lc --use-search brave "What's happening in AI today?"
lc --use-search ddg "Free search integration test"
lc --use-search jina "Research with full content"
```
### Audio Workflow

```bash
# Transcribe audio files
lc transcribe interview.mp3 --format json
lc tr podcast.wav --language en

# Text to speech
lc tts "Welcome message" --voice nova
lc tts --file script.txt --output narration.mp3

# Use audio in chat
lc "Summarize this meeting" --audio meeting_recording.mp3
lc c -m gpt-4 --audio interview.wav
```
### Shell Completions Setup

```bash
# Set up completions for your shell
lc completions zsh >> ~/.zshrc    # Zsh
lc completions bash >> ~/.bashrc  # Bash
lc completions fish > ~/.config/fish/completions/lc.fish  # Fish

# Or use eval for dynamic completions (recommended)
echo 'eval "$(lc completions zsh)"' >> ~/.zshrc
source ~/.zshrc

# Now enjoy tab completion!
lc -p <TAB>         # Shows your configured providers
lc -m <TAB>         # Shows available models
lc providers <TAB>  # Shows: add, remove, list, models
```
## Getting Help

Every command has built-in help:

```bash
# General help
lc --help

# Command help
lc providers --help
lc p --help

# Subcommand help
lc providers add --help
lc p a --help
```
## Next Steps

Explore the documentation for each specific command: