config.toml reference
eunha stores all configuration in ~/.eunha/config.toml. The file is created on first launch with permissions 0600, readable and writable only by you.
Never commit this file to version control. API keys are written here instead of the system keychain by design (no external dependencies).
Schema

```toml
[github]
token = "ghp_..."  # required — GitHub PAT with read:user scope

[llm]
provider = "openai"                  # "openai" | "anthropic" | "ollama"
api_key = "sk-..."                   # required for openai/anthropic; omit for ollama
model = "gpt-4o-mini"                # optional — provider default used if omitted
base_url = "http://localhost:11434"  # ollama only, optional
output_language = "en"               # optional — language for LLM descriptions (default: en)
```

Field details
github.token

A GitHub Personal Access Token. A classic token with the read:user scope is sufficient. Fine-grained tokens also work if they have read access to your starred repositories.
llm.provider

Selects the LLM backend. One of openai, anthropic, or ollama.
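For example, selecting the Anthropic backend (the key value is a placeholder):

```toml
[llm]
provider = "anthropic"
api_key = "sk-ant-..."  # placeholder; use your own key
```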
llm.model

Which model to use. Defaults:

- OpenAI: gpt-4o-mini
- Anthropic: claude-haiku-4-5-20251001
- Ollama: required — no default
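Because Ollama has no default, model must be set explicitly when provider = "ollama". A minimal example; the model name llama3.1 is only an illustration, any locally pulled model works:

```toml
[llm]
provider = "ollama"
model = "llama3.1"  # required: ollama has no provider default
```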
llm.output_language

The language for LLM-generated descriptions. Accepts any natural language name (en, ko, ja, etc.). When set, the prompt instructs the LLM to respond in that language.
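For example, to have descriptions generated in Korean:

```toml
[llm]
output_language = "ko"
```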
llm.base_url

Ollama only. The URL of your Ollama server. Default: http://localhost:11434.
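For example, pointing at an Ollama server on another machine (the address and model name are illustrative):

```toml
[llm]
provider = "ollama"
model = "llama3.1"                      # example model name
base_url = "http://192.168.0.10:11434"  # remote Ollama host
```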