AGENTS.md

Garble is a CLI that accepts text on stdin, sends it to an LLM for transformation based on user directions, and prints the result to stdout. Single request/response—no conversation history.
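
A typical invocation might look like this (flag names are illustrative; garble.gleam defines the real CLI surface, so check garble --help):

```shell
# Pipe text in; pass the transformation directions as an argument.
echo "Hello, world" | garble "translate to French"

# Hypothetical flags for selecting a provider/model explicitly.
cat notes.txt | garble --provider openai --model gpt-4o-mini "summarize this"
```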

Commands

make build     # Build the project
make test      # Run tests
make check     # Check formatting (CI-style)
make format    # Auto-format source files
make install   # Install to ~/.local/bin (override with PREFIX=)
make clean     # Remove build artifacts

Architecture

stdin → garble.gleam → provider dispatch → stdout
             ↓
         config.gleam (TOML from ~/.config/garble/config.toml)
             ↓
       providers.gleam (fetches provider list from catwalk.charm.sh)
             ↓
    openai/anthropic/gemini via starlet, or openai_compat.gleam for custom endpoints

Flow:

  1. CLI args parsed with glint, merged with config file (CLI wins)
  2. stdin read entirely into memory
  3. prompts.gleam wraps input in <raw> tags, directions in <directions> tags
  4. Request sent to provider; response code block extracted
  5. Output printed to stdout
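
Concretely, the user message built in step 3 might look like this sketch (tag order and exact wording live in prompts.gleam):

```
<directions>
rewrite in a formal tone
</directions>

<raw>
hey, can u fix this by tmrw?
</raw>
```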

Module responsibilities:

  • garble.gleam — Entry point, CLI definition, provider dispatch, API key resolution
  • config.gleam — TOML config loading from XDG paths, CLI/config merging
  • providers.gleam — Fetches provider/model list from remote API, validation
  • prompts.gleam — System prompt, user message construction, code block extraction
  • openai_compat.gleam — Manual HTTP client for OpenAI-compatible endpoints (when starlet doesn't apply)

Key Libraries

  Package      Purpose
  starlet      LLM API client (OpenAI, Anthropic, Gemini)
  glint        CLI arg/flag parsing
  tom          TOML parsing
  shellout     Running shell commands (for api_key_cmd)
  stdin        Reading from stdin
  gleescript   Bundling into escript for installation

These are documented on Context7 under their respective org/project paths.

Conventions

  • File size limit: Keep files under 200 lines. If approaching that, split by responsibility.
  • Stdlib first: Prefer gleam_stdlib and official gleam-lang packages. Use community packages when stdlib lacks functionality. Never use raw Erlang/Elixir FFI without a Gleam wrapper.
  • Error handling: Use Result types throughout. The halt/1 FFI call exits with status code on fatal errors.
  • Config precedence: CLI flags → config file → environment variables
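
A minimal config file might look like the following sketch (api_key_cmd is mentioned under Gotchas; the other key names are assumptions, so treat config.gleam as the authoritative schema):

```toml
# ~/.config/garble/config.toml
provider = "anthropic"                         # assumed key name
model = "claude-sonnet-4"                      # assumed key name
api_key_cmd = "pass show anthropic/api-key"    # shell command that prints the key
```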

Testing

Tests use gleeunit. Test files mirror source structure in test/. Functions ending in _test are automatically discovered.

test/prompts_test.gleam demonstrates the pattern: test public functions, use should assertions, and keep each test focused on one behavior.
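
A test in that style might look like this sketch (the extract_code_block signature is an assumption; check prompts.gleam for the real one):

````gleam
import gleeunit/should
import prompts

// One behavior per test: fenced output is unwrapped.
pub fn extract_code_block_strips_fences_test() {
  "```\nhello\n```"
  |> prompts.extract_code_block
  |> should.equal("hello")
}
````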

Gotchas

  • External halt: garble.gleam uses @external(erlang, "erlang", "halt") for non-zero exit codes since Gleam lacks this natively.
  • Provider list is remote: providers.gleam fetches from https://catwalk.charm.sh/v2/providers at runtime—network errors are possible.
  • Code block extraction: The system prompt instructs models to wrap output in fenced code blocks; prompts.extract_code_block strips them. If the model doesn't comply, raw output passes through.
  • API key resolution order: api_key_cmd (shell command) → api_key (literal) → environment variable from provider config
  • Custom OpenAI-compat client: We use our own openai_compat.gleam instead of starlet's openai.with_url because most OpenAI-compatible providers don't implement the /responses endpoint that starlet expects—they only support the traditional /chat/completions endpoint.
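
For reference, the request shape openai_compat.gleam targets is the standard chat-completions body (values illustrative):

```
POST {base_url}/chat/completions
Content-Type: application/json
Authorization: Bearer <api key>

{
  "model": "gpt-4o-mini",
  "messages": [
    { "role": "system", "content": "<system prompt>" },
    { "role": "user", "content": "<directions and raw input>" }
  ]
}
```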