<!--
SPDX-FileCopyrightText: Amolith <amolith@secluded.site>

SPDX-License-Identifier: CC0-1.0
-->

# AGENTS.md

Garble is a CLI that accepts text on stdin, sends it to an LLM for transformation based on user directions, and prints the result to stdout. Single request/response; no conversation history.

## Commands

```sh
make build     # Build the project
make test      # Run tests
make check     # Check formatting (CI-style)
make format    # Auto-format source files
make install   # Install to ~/.local/bin (override with PREFIX=)
make clean     # Remove build artifacts
```

## Architecture

```
stdin → garble.gleam → provider dispatch → stdout
          config.gleam (TOML from ~/.config/garble/config.toml)
          providers.gleam (fetches provider list from catwalk.charm.sh)
          openai/anthropic/gemini via starlet, or openai_compat.gleam for custom endpoints
```

**Flow:**
1. CLI args parsed with glint, merged with config file (CLI wins)
2. stdin read entirely into memory
3. `prompts.gleam` wraps input in `<raw>` tags, directions in `<directions>` tags
4. Request sent to provider; code block extracted from the response
5. Output printed to stdout

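The wrapping in step 3 can be sketched in shell. The `<raw>`/`<directions>` tag names come from this file; the exact layout and tag ordering are assumptions, and `prompts.gleam` is authoritative:

```sh
# Sketch of the user message built in step 3 (layout is an assumption)
build_user_message() {
  directions="$1" raw="$2"
  printf '<directions>%s</directions>\n<raw>%s</raw>\n' "$directions" "$raw"
}

build_user_message 'rewrite as a limerick' 'text read from stdin'
```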
**Module responsibilities:**
- `garble.gleam` — Entry point, CLI definition, provider dispatch, API key resolution
- `config.gleam` — TOML config loading from XDG paths, CLI/config merging
- `providers.gleam` — Fetches provider/model list from remote API, validation
- `prompts.gleam` — System prompt, user message construction, code block extraction
- `openai_compat.gleam` — Manual HTTP client for OpenAI-compatible endpoints (when starlet doesn't apply)

## Key Libraries

| Package | Purpose |
|---------|---------|
| starlet | LLM API client (OpenAI, Anthropic, Gemini) |
| glint | CLI arg/flag parsing |
| tom | TOML parsing |
| shellout | Running shell commands (for `api_key_cmd`) |
| stdin | Reading from stdin |
| gleescript | Bundling into escript for installation |

These are documented on Context7 under their respective org/project paths.

## Conventions

- **File size limit:** Keep files under 200 lines. If approaching that, split by responsibility.
- **Stdlib first:** Prefer gleam_stdlib and official gleam-lang packages. Use community packages when stdlib lacks functionality. Never use raw Erlang/Elixir FFI without a Gleam wrapper.
- **Error handling:** Use `Result` types throughout. The `halt/1` FFI call exits with status code on fatal errors.
- **Config precedence:** CLI flags → config file → environment variables

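For orientation, a config file following the precedence rules above might look like this. Only `api_key_cmd` and `api_key` are key names confirmed elsewhere in this file; `provider` and `model` are guesses, so check `config.gleam` for the real schema:

```toml
# Hypothetical ~/.config/garble/config.toml; key names unverified
provider = "anthropic"                   # assumed key name
model = "example-model"                  # assumed key name; placeholder value
api_key_cmd = "pass show api/anthropic"  # key name confirmed in Gotchas below
```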
## Testing

Tests use gleeunit. Test files mirror source structure in `test/`. Functions ending in `_test` are automatically discovered.

`prompts_test.gleam` demonstrates the pattern: test public functions, use `should` assertions, keep tests focused on one behavior each.

## Gotchas

- **External halt:** `garble.gleam` uses `@external(erlang, "erlang", "halt")` for non-zero exit codes since Gleam lacks this natively.
- **Provider list is remote:** `providers.gleam` fetches from `https://catwalk.charm.sh/v2/providers` at runtime, so network errors are possible.
- **Code block extraction:** The system prompt instructs models to wrap output in fenced code blocks; `prompts.extract_code_block` strips them. If the model doesn't comply, raw output passes through.
- **API key resolution order:** `api_key_cmd` (shell command) → `api_key` (literal) → environment variable from provider config
- **Custom OpenAI-compat client:** We use our own `openai_compat.gleam` instead of starlet's `openai.with_url` because most OpenAI-compatible providers implement only the traditional `/chat/completions` endpoint, not the `/responses` endpoint that starlet expects.
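
The code-block-extraction gotcha can be illustrated with a rough shell equivalent. The real logic lives in `prompts.extract_code_block`; note that this sketch prints nothing when no fence is present, whereas the real function passes raw output through:

````sh
# Toggle a flag on fence lines; print only lines between fences
extract_code_block() {
  awk '/^```/ { inside = !inside; next } inside'
}

printf '%s\n' 'Sure! Here is the result:' '```' 'TRANSFORMED TEXT' '```' \
  | extract_code_block
# prints: TRANSFORMED TEXT
````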
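The API-key resolution order can be sketched as follows; the function and variable names here are illustrative, not Garble's actual identifiers:

```sh
# First api_key_cmd (run via shell), then api_key, then the provider's env var
resolve_api_key() {
  key_cmd="$1" key_literal="$2" env_name="$3"
  if [ -n "$key_cmd" ]; then
    sh -c "$key_cmd"
  elif [ -n "$key_literal" ]; then
    printf '%s\n' "$key_literal"
  else
    eval "printf '%s\n' \"\${$env_name:-}\""
  fi
}

resolve_api_key '' 'sk-example' 'ANTHROPIC_API_KEY'
# prints: sk-example
```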
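For context on that last point, the widely supported request shape that `openai_compat.gleam` targets looks roughly like this. Field values are placeholders; the exact JSON Garble sends is not documented in this file:

```sh
# Classic chat-completions request shape (placeholders, not Garble's literal payload)
cat <<'EOF'
POST {base_url}/chat/completions
{
  "model": "example-model",
  "messages": [
    {"role": "system", "content": "<system prompt>"},
    {"role": "user", "content": "<directions and raw input>"}
  ]
}
EOF
```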