Commands Overview

LLM Client provides a comprehensive set of commands organized by functionality. Most commands have short aliases for faster usage.

Command Structure

lc [COMMAND] [SUBCOMMAND] [OPTIONS] [ARGS]
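
For example, the provider setup and chat commands used later on this page map onto this structure (the provider name, URL, and model are only the placeholders from those examples):

# COMMAND = providers, SUBCOMMAND = add, ARGS = provider name and base URL
lc providers add openai https://api.openai.com/v1

# COMMAND = chat, OPTIONS = -m to pick the model (no subcommand or args)
lc chat -m gpt-4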

Command Categories

Core Commands

Command          Alias   Description
lc "prompt"      -       Send a direct prompt using defaults
lc chat          lc c    Start interactive chat session
lc providers     lc p    Manage LLM providers
lc models        lc m    List and filter available models
lc keys          lc k    Manage API keys
lc config        lc co   Configure defaults
lc logs          lc l    View and manage chat history
lc completions   -       Generate shell completion scripts

Audio Commands

Command         Alias   Description
lc transcribe   lc tr   Convert audio to text
lc tts          -       Convert text to speech

Advanced Commands

Command             Alias   Description
lc embed            lc e    Generate and store embeddings
lc vectors          lc v    Manage vector databases
lc similar          lc s    Search for similar content
lc search           lc se   Web search integration
lc sync             lc sy   Sync configuration to cloud
lc mcp              -       Manage MCP servers
lc alias            lc a    Manage model aliases
lc templates        lc t    Manage templates
lc proxy            lc pr   Run proxy server
lc web-chat-proxy   lc w    Web chat proxy

Direct Prompts

The simplest way to use lc:

# Using defaults
lc "What is the capital of France?"

# Specify model
lc -m openai:gpt-4 "Write a Python function"

# Specify both
lc --provider openrouter -m "claude-3.5-sonnet" "Explain quantum computing"

# With vector database context (RAG)
lc -v knowledge "What do you know about machine learning?"

# With MCP tools
lc -t fetch "What's the latest news about AI?"

# With web search
lc --use-search brave "What are the latest AI developments?"

# With audio attachments
lc "What is being discussed?" --audio meeting.mp3

# Transcribe audio
lc transcribe recording.wav

# Text to speech
lc tts "Hello world" --output greeting.mp3

Global Options

These options work with most commands:

  • -p, --provider <PROVIDER> - Specify provider
  • -m, --model <MODEL> - Specify model
  • -s, --system <SYSTEM_PROMPT> - Set system prompt
  • --max-tokens <MAX_TOKENS> - Maximum number of tokens
  • --temperature <TEMPERATURE> - Adjust response randomness
  • -a, --attach <ATTACHMENTS> - Attach files
  • -u, --audio <AUDIO_FILES> - Attach audio files for transcription
  • -t, --tools <TOOLS> - Include MCP tools (comma-separated)
  • -v, --vectordb <VECTORDB> - Use vector database for context
  • -d, --debug - Enable debug mode
  • -c, --continue - Continue previous session
  • --cid <CHAT_ID> - Specify chat ID
  • --use-search <SEARCH> - Use search results as context
  • -h, --help - Show help information
  • -V, --version - Show version
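
Most of these options can be combined in a single invocation. The sketch below only illustrates the flags listed above; the provider, model, system prompt, and values are placeholders:

# Combine provider, model, system prompt, and sampling options on one prompt
lc -p openai -m gpt-4 -s "You are a concise assistant" --temperature 0.2 --max-tokens 200 "Summarize this project"

# Continue the previous session with debug output enabled
lc -c -d "Expand on that"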

Command Aliases

LLM Client uses intuitive aliases to speed up your workflow:

Single Letter Aliases

  • c → chat
  • p → providers
  • m → models
  • k → keys
  • l → logs
  • e → embed
  • v → vectors
  • s → similar

Two Letter Aliases

  • co → config
  • sy → sync
  • se → search
  • pr → proxy

Subcommand Aliases

  • a → add
  • r → remove, recent, or refresh
  • l → list
  • s → show, setup, or stats
  • u → update
  • d → delete or dump
  • i → info

Examples

Quick Provider Setup

# Long form
lc providers add openai https://api.openai.com/v1
lc keys add openai
lc providers models openai

# Short form (same result)
lc p a openai https://api.openai.com/v1
lc k a openai
lc p m openai

Chat Workflow

# Start chat
lc c -m gpt-4

# View recent chats
lc l r

# Get last answer
lc l r a

# Extract code from last answer
lc l r a c

Vector Database Workflow

# Create embeddings
lc e -m text-embedding-3-small -v docs "Important information"

# Search similar content
lc s -v docs "related query"

# Use in chat
lc c -v docs -m gpt-4

MCP Tools Workflow

# Add MCP server
lc mcp add fetch "uvx mcp-server-fetch" --type stdio

# List functions
lc mcp functions fetch

# Use in prompt
lc -t fetch "Get current weather in Tokyo"

# Use in chat
lc c -m gpt-4 -t fetch

Search Integration Workflow

# Add search providers (auto-detected from URL)
lc search provider add brave https://api.search.brave.com/res/v1/web/search
lc search provider add ddg https://api.duckduckgo.com/ # Free option!
lc search provider add jina https://s.jina.ai/

# Set API keys (DuckDuckGo doesn't need one)
lc search provider set brave X-Subscription-Token YOUR_API_KEY
lc search provider set jina Authorization YOUR_API_KEY

# Direct search
lc search query brave "latest AI news" -f json
lc search query ddg "free search query" -f json

# Advanced Jina features
lc search provider set jina X-Engine direct # Enable full content reading
lc search query jina "comprehensive topic" -f json

# Use in prompts
lc --use-search brave "What's happening in AI today?"
lc --use-search ddg "Free search integration test"
lc --use-search jina "Research with full content"

Audio Workflow

# Transcribe audio files
lc transcribe interview.mp3 --format json
lc tr podcast.wav --language en

# Text to speech
lc tts "Welcome message" --voice nova
lc tts --file script.txt --output narration.mp3

# Use audio in chat
lc "Summarize this meeting" --audio meeting_recording.mp3
lc c -m gpt-4 --audio interview.wav

Shell Completions Setup

# Setup completions for your shell
lc completions zsh >> ~/.zshrc # Zsh
lc completions bash >> ~/.bashrc # Bash
lc completions fish > ~/.config/fish/completions/lc.fish # Fish

# Or use eval for dynamic completions (recommended)
echo 'eval "$(lc completions zsh)"' >> ~/.zshrc
source ~/.zshrc

# Now enjoy tab completion!
lc -p <TAB> # Shows your configured providers
lc -m <TAB> # Shows available models
lc providers <TAB> # Shows: add, remove, list, models

Getting Help

Every command has built-in help:

# General help
lc --help

# Command help
lc providers --help
lc p --help

# Subcommand help
lc providers add --help
lc p a --help

Next Steps

Explore the documentation for each specific command to learn about its subcommands and options.