config.toml reference

eunha stores all configuration in ~/.eunha/config.toml. The file is created on first launch with permissions 0600 — readable only by you.

Never store this file in version control. Keys are written here instead of the system keychain by design (no external dependencies).

```toml
[github]
token = "ghp_..."        # required — GitHub PAT with read:user scope

[llm]
provider = "openai"      # "openai" | "anthropic" | "ollama"
api_key = "sk-..."       # required for openai/anthropic; omit for ollama
model = "gpt-4o-mini"    # optional — provider default used if omitted
base_url = "http://localhost:11434"  # ollama only, optional
output_language = "en"   # optional — language for LLM descriptions (default: en)
```

`github.token`: A GitHub Personal Access Token. A classic token with the `read:user` scope is sufficient. Fine-grained tokens also work, provided they grant read access to your starred repositories.

`llm.provider`: Selects the LLM backend. One of `openai`, `anthropic`, or `ollama`.

`llm.model`: Which model to use. Defaults:

  • OpenAI: gpt-4o-mini
  • Anthropic: claude-haiku-4-5-20251001
  • Ollama: required — no default
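
Since Ollama has no default, a minimal Ollama configuration must name a model explicitly. A sketch (the model name `llama3` is only an illustration; use whatever model your Ollama server has pulled):

```toml
[llm]
provider = "ollama"
model = "llama3"   # required for ollama — no default is applied
# api_key is omitted: ollama needs no key
```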

`llm.output_language`: The language for LLM-generated descriptions. Accepts any natural-language code or name (`en`, `ko`, `ja`, etc.). When set, the prompt instructs the LLM to respond in that language.

`llm.base_url`: Ollama only. The URL of your Ollama server. Default: `http://localhost:11434`.
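
Putting the options together, a complete `config.toml` pointing at a remote Ollama server could look like the sketch below. The hostname and model are placeholders, not defaults:

```toml
[github]
token = "ghp_..."                        # PAT with read:user scope

[llm]
provider = "ollama"
model = "llama3"                         # required for ollama
base_url = "http://gpu-box.local:11434"  # non-default Ollama host (placeholder)
output_language = "ko"                   # descriptions in Korean
```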