Clother gives you one install and one command pattern to switch between Claude Code providers instantly. Instead of editing environment variables, endpoints, and model names by hand every time you want to try a different backend, Clother handles all of that at launch time.
The problem
Switching Claude Code providers normally means:
- Setting `ANTHROPIC_BASE_URL` to the new endpoint
- Swapping `ANTHROPIC_AUTH_TOKEN` for the right API key
- Remembering (or looking up) the correct model name
- Updating any launcher scripts or shell aliases you rely on
How Clother solves it
Clother is a single Go binary. At install time it creates a `clother-<provider>` symlink for every supported provider. At runtime it reads the symlink name, loads the matching profile and API key, sets the required environment variables, then transparently hands off to the real Claude binary.
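The dispatch mechanism can be sketched in a few lines of shell. This is an illustration of the argv[0] technique, not Clother's actual Go source:

```sh
#!/bin/sh
# Illustrative sketch: recover the provider from the name this program
# was invoked as (argv[0]), the same trick busybox-style multicall
# binaries use.
name="$(basename "$0")"       # e.g. "clother-zai"
provider="${name#clother-}"   # strip the prefix -> "zai"
# ...load the profile and API key for "$provider", export the matching
# ANTHROPIC_* variables, then replace this process with the real CLI:
exec claude "$@"
```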
For example, launching `clother-zai` is equivalent to:
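The sketch below spells that out; the base URL and model name are illustrative placeholders, since the real values come from the Z.AI profile and the secrets file:

```sh
# Illustrative values only; substitute whatever the Z.AI profile defines.
ANTHROPIC_BASE_URL="https://api.z.ai/api/anthropic" \
ANTHROPIC_AUTH_TOKEN="$ZAI_API_KEY" \
ANTHROPIC_MODEL="glm-4.6" \
  claude
```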
Key features
One install, many providers
A single binary manages every provider. Add or switch providers without reinstalling anything.
Provider-aware launchers
Each provider gets its own `clother-<provider>` command. Run them side by side, script them, or wire them into your editor.
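For instance, assuming launchers for Z.AI and DeepSeek were created at install time:

```sh
clother-zai          # Claude Code backed by Z.AI
clother-deepseek     # the same workflow on DeepSeek
# or wrap one in a shell alias for quick access:
alias cc-zai='clother-zai'
```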
Resume compatibility
Clother preserves the `claude --resume` workflow across providers, including cross-provider session handoff.
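A sketch of the cross-provider handoff, assuming launcher arguments pass straight through to the underlying claude binary (which the hand-off design implies):

```sh
clother-zai                # start a session on Z.AI
clother-deepseek --resume  # pick the same session up on DeepSeek later
```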
Local backend support
Ollama, LM Studio, and llama.cpp work out of the box, no API key required.
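A quick way to exercise a local backend; the launcher name here is an assumption that follows the documented `clother-<provider>` pattern:

```sh
ollama serve &       # Ollama listens on its default port, 11434
clother-ollama       # assumed launcher name per the clother-<provider> pattern
```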
OpenRouter access
Route to 100+ models through OpenRouter using custom aliases and the `clother-or-<alias>` pattern.
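Once an alias is configured (alias setup is covered elsewhere in these docs), the launcher name follows directly from the pattern; `sonnet` here is a hypothetical alias:

```sh
clother-or-sonnet    # "sonnet" is a user-defined alias, named here for illustration
```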
Secure secret storage
API keys are stored in `~/.local/share/clother/secrets.env` with permissions set to 600.
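You can verify the file mode yourself; 600 means only the owning user can read or write it:

```sh
ls -l ~/.local/share/clother/secrets.env
# expected: -rw------- 1 you you ... secrets.env
```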
Supported provider categories
- Cloud — Anthropic (native), Z.AI, MiniMax, Kimi, Moonshot AI, DeepSeek, Xiaomi MiMo, Alibaba Coding Plan
- China endpoints — Z.AI China, MiniMax China, Volcengine, Alibaba China
- Local backends — Ollama (port 11434), LM Studio (port 1234), llama.cpp (port 8000)
- OpenRouter — Any model available on OpenRouter, accessed via custom aliases