# LLM providers
eunha supports three providers. All descriptions use the same prompt contract regardless of provider.
## OpenAI

- Recommended model: `gpt-4o-mini` (fast, cheap, reliable JSON mode)
- Requires `OPENAI_API_KEY` or a token in config
- Uses the Chat Completions API with `response_format: { type: "json_object" }`
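With these settings, the request eunha sends can be sketched roughly as follows. The endpoint, headers, and `response_format` field are the documented Chat Completions shapes; the helper names (`build_payload`, `describe`) and the prompt handling are illustrative assumptions, not eunha's actual code:

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(prompt: str, model: str = "gpt-4o-mini") -> dict:
    """Build a JSON-mode request body: response_format forces valid JSON output."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "response_format": {"type": "json_object"},
    }

def describe(prompt: str) -> dict:
    """POST the payload and parse the JSON object the model returns."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # With json_object mode, the message content is guaranteed to parse as JSON.
    return json.loads(body["choices"][0]["message"]["content"])
```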
```toml
[llm]
provider = "openai"
api_key = "sk-..."
model = "gpt-4o-mini"
```

## Anthropic
- Recommended model: `claude-haiku-4-5-20251001` (fast + affordable)
- Uses the Messages API
- JSON output is enforced via the prompt; Claude follows the format reliably
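Because the Messages API has no `response_format` switch, the JSON requirement lives in the prompt itself. A rough sketch, where the endpoint and headers follow Anthropic's documented API but the JSON-enforcing instruction is an assumption (not eunha's actual prompt):

```python
import json
import os
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"

def build_payload(prompt: str, model: str = "claude-haiku-4-5-20251001") -> dict:
    """The prompt itself instructs Claude to answer with a single JSON object."""
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": [{
            "role": "user",
            "content": prompt + "\n\nRespond with a single JSON object and nothing else.",
        }],
    }

def describe(prompt: str) -> dict:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={
            "x-api-key": os.environ["ANTHROPIC_API_KEY"],
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The Messages API returns a list of content blocks; the text is in block 0.
    return json.loads(body["content"][0]["text"])
```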
```toml
[llm]
provider = "anthropic"
api_key = "sk-ant-..."
model = "claude-haiku-4-5-20251001"
```

## Ollama (local / offline)
Ollama lets you run open-source models locally: privacy-first, no API costs, and it works offline.
Requirements:
- Ollama installed and running
- A model with JSON mode support (Ollama's `--format json`)
Tested models: `llama3.2`, `mistral`, `qwen2.5`
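A local call can be sketched as follows. The `/api/generate` endpoint and the `"format": "json"` field (the API equivalent of `--format json`) are part of Ollama's documented API; the helper names and payload details are illustrative assumptions:

```python
import json
import urllib.request

def build_payload(prompt: str, model: str = "llama3.2") -> dict:
    return {
        "model": model,
        "prompt": prompt,
        "format": "json",   # same effect as `--format json` on the CLI
        "stream": False,    # return one complete response instead of chunks
    }

def describe(prompt: str, base_url: str = "http://localhost:11434") -> dict:
    req = urllib.request.Request(
        f"{base_url}/api/generate",
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Non-streaming responses put the model's full text in "response".
    return json.loads(body["response"])
```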
```toml
[llm]
provider = "ollama"
model = "llama3.2"
base_url = "http://localhost:11434"
```

## Switching providers
Edit `~/.eunha/config.toml` and change `provider`. Existing descriptions are not affected. After switching, re-describe with `shift-D` (single) or `shift-A` (all stale).