# AGENTS.md
Garble is a CLI that accepts text on stdin, sends it to an LLM for transformation based on user directions, and prints the result to stdout. Single request/response—no conversation history.
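A typical invocation might look like the following (a sketch only: how directions are supplied, positional argument vs. flag, is an assumption not confirmed by this document):

```
# Hypothetical usage — the argument shape is an assumption for illustration.
echo "hello world" | garble "rewrite this as a haiku"
```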
## Commands
```sh
make build    # Build the project
make test     # Run tests
make check    # Check formatting (CI-style)
make format   # Auto-format source files
make install  # Install to ~/.local/bin (override with PREFIX=)
make clean    # Remove build artifacts
```
## Architecture
```
stdin → garble.gleam → provider dispatch → stdout
              ↓
        config.gleam (TOML from ~/.config/garble/config.toml)
              ↓
        providers.gleam (fetches provider list from catwalk.charm.sh)
              ↓
        openai/anthropic/gemini via starlet,
        or openai_compat.gleam for custom endpoints
```
Flow:
- CLI args parsed with glint, merged with config file (CLI wins)
- stdin read entirely into memory
- `prompts.gleam` wraps input in `<raw>` tags, directions in `<directions>` tags
- Request sent to provider; response code block extracted
- Output printed to stdout
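The wrapping step can be sketched as follows (a sketch only: the helper name and the tag ordering are assumptions; the real construction lives in `prompts.gleam`):

```gleam
// Hypothetical sketch of the message shape described above.
// The helper name and tag ordering are assumptions.
fn build_user_message(input: String, directions: String) -> String {
  "<directions>" <> directions <> "</directions>\n"
  <> "<raw>" <> input <> "</raw>"
}
```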
Module responsibilities:
- `garble.gleam`: Entry point, CLI definition, provider dispatch, API key resolution
- `config.gleam`: TOML config loading from XDG paths, CLI/config merging
- `providers.gleam`: Fetches provider/model list from remote API, validation
- `prompts.gleam`: System prompt, user message construction, code block extraction
- `openai_compat.gleam`: Manual HTTP client for OpenAI-compatible endpoints (when starlet doesn't apply)
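A minimal config sketch (only `api_key` and `api_key_cmd` are named in this document; every other key here is an illustrative assumption):

```toml
# Hypothetical ~/.config/garble/config.toml — key names other than
# api_key / api_key_cmd are assumptions for illustration.
provider = "anthropic"

# Checked first: a shell command whose output is the API key.
api_key_cmd = "secret-tool lookup service anthropic"

# Checked second: a literal key (otherwise the provider's env var is used).
# api_key = "…"
```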
## Key Libraries
| Package | Purpose |
|---|---|
| starlet | LLM API client (OpenAI, Anthropic, Gemini) |
| glint | CLI arg/flag parsing |
| tom | TOML parsing |
| shellout | Running shell commands (for api_key_cmd) |
| stdin | Reading from stdin |
| gleescript | Bundling into escript for installation |
These are documented on Context7 under their respective org/project paths.
## Conventions
- File size limit: Keep files under 200 lines. If approaching that, split by responsibility.
- Stdlib first: Prefer gleam_stdlib and official gleam-lang packages. Use community packages when stdlib lacks functionality. Never use raw Erlang/Elixir FFI without a Gleam wrapper.
- Error handling: Use `Result` types throughout. The `halt/1` FFI call exits with a status code on fatal errors.
- Config precedence: CLI flags → config file → environment variables
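The Result-plus-halt convention might look like this at the entry point (a sketch under assumptions: `run` and its `String` error type are hypothetical; only the `erlang:halt` binding is described in this document):

```gleam
import gleam/io

// Sketch of the error-handling convention: propagate Result values
// through the pipeline and call the halt FFI only at the edge.
@external(erlang, "erlang", "halt")
fn halt(code: Int) -> a

pub fn main() {
  case run() {
    Ok(output) -> io.println(output)
    Error(message) -> {
      io.println_error(message)
      halt(1)
    }
  }
}

// Hypothetical placeholder for the real stdin → provider → stdout pipeline.
fn run() -> Result(String, String) {
  Error("not implemented")
}
```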
## Testing
Tests use gleeunit. Test files mirror the source structure in `test/`. Functions ending in `_test` are discovered automatically.
`prompts_test.gleam` demonstrates the pattern: test public functions, use `should` assertions, and keep each test focused on one behavior.
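That pattern can be sketched like this (assuming `extract_code_block` takes and returns a `String`; check `prompts.gleam` for the real signature):

````gleam
// Minimal sketch of the prompts_test.gleam pattern; the exact
// signature of extract_code_block is an assumption.
import gleeunit/should
import prompts

pub fn extract_code_block_test() {
  // One behavior per test: a fenced response is unwrapped.
  "```\nhello\n```"
  |> prompts.extract_code_block
  |> should.equal("hello")
}
````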
## Gotchas
- External halt: `garble.gleam` uses `@external(erlang, "erlang", "halt")` for non-zero exit codes since Gleam lacks this natively.
- Provider list is remote: `providers.gleam` fetches from `https://catwalk.charm.sh/v2/providers` at runtime; network errors are possible.
- Code block extraction: The system prompt instructs models to wrap output in fenced code blocks; `prompts.extract_code_block` strips them. If the model doesn't comply, raw output passes through.
- API key resolution order: `api_key_cmd` (shell command) → `api_key` (literal) → environment variable from provider config
- Custom OpenAI-compat client: We use our own `openai_compat.gleam` instead of starlet's `openai.with_url` because most OpenAI-compatible providers don't implement the `/responses` endpoint that starlet expects; they only support the traditional `/chat/completions` endpoint.