Author: Amolith
CLI that accepts text on stdin, sends it to an LLM for transformation
based on user directions, and prints the result to stdout.
- glint for CLI argument parsing
- TOML config from ~/.config/garble/config.toml
- Provider discovery from catwalk.charm.sh
- Support for OpenAI, Anthropic, Gemini via starlet
- Custom OpenAI-compatible endpoints
Assisted-by: Claude Opus 4.5 via Crush
.gitignore | 4 ++++
AGENTS.md | 73 +++++++++++++
Makefile | 36 ++++++
README.md | 110 ++++++++++++++++++++
crush.json | 9 +
gleam.toml | 36 ++++++
manifest.toml | 46 ++++++++
src/config.gleam | 96 +++++++++++++++++
src/garble.gleam | 233 +++++++++++++++++++++++++++++++++++++++++++
src/openai_compat.gleam | 82 +++++++++++++++
src/prompts.gleam | 58 ++++++++++
src/providers.gleam | 114 +++++++++++++++++++++
test/garble_test.gleam | 13 ++
test/prompts_test.gleam | 82 +++++++++++++++
14 files changed, 992 insertions(+)
diff --git a/.gitignore b/.gitignore
new file mode 100644
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,4 @@
+*.beam
+*.ez
+/build
+erl_crash.dump
diff --git a/AGENTS.md b/AGENTS.md
new file mode 100644
--- /dev/null
+++ b/AGENTS.md
@@ -0,0 +1,73 @@
+# AGENTS.md
+
+Garble is a CLI that accepts text on stdin, sends it to an LLM for transformation based on user directions, and prints the result to stdout. Single request/response; no conversation history.
+
+## Commands
+
+```sh
+make build # Build the project
+make test # Run tests
+make check # Check formatting (CI-style)
+make format # Auto-format source files
+make install # Install to ~/.local/bin (override with PREFIX=)
+make clean # Remove build artifacts
+```
+
+## Architecture
+
+```
+stdin → garble.gleam → provider dispatch → stdout
+              ↓
+        config.gleam (TOML from ~/.config/garble/config.toml)
+              ↓
+        providers.gleam (fetches provider list from catwalk.charm.sh)
+              ↓
+        openai/anthropic/gemini via starlet, or openai_compat.gleam for custom endpoints
+```
+
+**Flow:**
+1. CLI args parsed with glint, merged with config file (CLI wins)
+2. stdin read entirely into memory
+3. `prompts.gleam` wraps input in `<raw>` tags, directions in `<directions>` tags
+4. Request sent to provider; response code block extracted
+5. Output printed to stdout
+
+**Module responsibilities:**
+- `garble.gleam` – Entry point, CLI definition, provider dispatch, API key resolution
+- `config.gleam` – TOML config loading from XDG paths, CLI/config merging
+- `providers.gleam` – Fetches provider/model list from remote API, validation
+- `prompts.gleam` – System prompt, user message construction, code block extraction
+- `openai_compat.gleam` – Manual HTTP client for OpenAI-compatible endpoints (when starlet doesn't apply)
+
+## Key Libraries
+
+| Package | Purpose |
+|---------|---------|
+| starlet | LLM API client (OpenAI, Anthropic, Gemini) |
+| glint | CLI arg/flag parsing |
+| tom | TOML parsing |
+| shellout | Running shell commands (for `api_key_cmd`) |
+| stdin | Reading from stdin |
+| gleescript | Bundling into escript for installation |
+
+These are documented on Context7 under their respective org/project paths.
+
+## Conventions
+
+- **File size limit:** Keep files under 200 lines. If approaching that, split by responsibility.
+- **Stdlib first:** Prefer gleam_stdlib and official gleam-lang packages. Use community packages when stdlib lacks functionality. Never use raw Erlang/Elixir FFI without a Gleam wrapper.
+- **Error handling:** Use `Result` types throughout. The `halt/1` FFI call exits with status code on fatal errors.
+- **Config precedence:** CLI flags → config file → environment variables
+
+## Testing
+
+Tests use gleeunit. Test files mirror source structure in `test/`. Functions ending in `_test` are automatically discovered.
+
+The `prompts_test.gleam` demonstrates the pattern: test public functions, use `should` assertions, keep tests focused on one behavior each.
+
+## Gotchas
+
+- **External halt:** `garble.gleam` uses `@external(erlang, "erlang", "halt")` for non-zero exit codes since Gleam lacks this natively.
+- **Provider list is remote:** `providers.gleam` fetches from `https://catwalk.charm.sh/v2/providers` at runtime; network errors are possible.
+- **Code block extraction:** The system prompt instructs models to wrap output in fenced code blocks; `prompts.extract_code_block` strips them. If the model doesn't comply, raw output passes through.
+- **API key resolution order:** `api_key_cmd` (shell command) → `api_key` (literal) → environment variable from provider config
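
For reference, the user message assembled by `prompts.build_user_message` for input `hello` and directions `make it loud` has this shape (the same strings the unit tests in `test/prompts_test.gleam` assert on):

```
<raw>
hello
</raw>

<directions>
make it loud
</directions>

Transform the text in <raw> according to the directions above.
```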
diff --git a/Makefile b/Makefile
new file mode 100644
--- /dev/null
+++ b/Makefile
@@ -0,0 +1,36 @@
+.PHONY: build test check format install clean help
+
+PREFIX ?= $(HOME)/.local
+
+build:
+ gleam build
+
+test:
+ gleam test
+
+check:
+ gleam format --check
+
+format:
+ gleam format
+
+install: build
+ gleam run -m gleescript -- --out=$(PREFIX)/bin
+
+clean:
+ rm -rf build
+
+help:
+ @echo "Usage: make [target]"
+ @echo ""
+ @echo "Targets:"
+ @echo " build Build the project"
+ @echo " test Run tests"
+ @echo " check Check formatting"
+ @echo " format Format source files"
+ @echo " install Install to PREFIX/bin (default: ~/.local/bin)"
+ @echo " clean Remove build artifacts"
+ @echo " help Show this help"
+ @echo ""
+ @echo "Variables:"
+ @echo " PREFIX Installation prefix (default: ~/.local)"
diff --git a/README.md b/README.md
new file mode 100644
--- /dev/null
+++ b/README.md
@@ -0,0 +1,110 @@
+<!--
+SPDX-FileCopyrightText: Amolith <amolith@secluded.site>
+
+SPDX-License-Identifier: CC0-1.0
+-->
+
+# garble
+
+[Package on Hex](https://hex.pm/packages/garble)
+[Documentation](https://hexdocs.pm/garble/)
+[REUSE status](https://api.reuse.software/info/git.secluded.site/garble)
+[Donate on Liberapay](https://liberapay.com/Amolith/)
+
+Transform stdin with an LLM. Pipe text in, get transformed text out.
+
+## tl;dr
+
+```bash
+# Fix typos and grammar
+echo "teh quikc brown fox" | garble --directions "fix typos"
+
+# Translate
+cat letter.txt | garble --directions "translate to Pirate"
+
+# Reformat
+pbpaste | garble --directions "convert to markdown table" | pbcopy
+```
+
+## Installation
+
+```bash
+# Clone and install to ~/.local/bin
+git clone https://git.secluded.site/garble
+cd garble
+make install
+
+# Or install elsewhere
+make install PREFIX=/usr/local
+```
+
+Requires Erlang/OTP and the Gleam toolchain.
+
+## Usage
+
+```bash
+garble [--provider PROVIDER] [--model MODEL] [--directions "..."]
+```
+
+All flags are optional if configured in `~/.config/garble/config.toml`.
+
+### Flags
+
+- `--provider` – Provider ID (e.g. `openai`, `anthropic`, `google`)
+- `--model` – Model ID (e.g. `gpt-4o`, `claude-3-opus`, `gemini-1.5-pro`)
+- `--directions` – Instructions for how to transform the input
+
+### Configuration
+
+Create `~/.config/garble/config.toml`:
+
+```toml
+provider = "anthropic"
+model = "claude-sonnet-4-20250514"
+directions = "fix grammar and spelling"
+
+# API key options (in order of precedence):
+# 1. Run a command to get the key
+api_key_cmd = "op read 'op://Private/Anthropic/credential'"
+
+# 2. Or set it directly (not recommended)
+# api_key = "sk-..."
+```
+
+If neither `api_key_cmd` nor `api_key` is set, garble falls back to the
+provider's environment variable (e.g. `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`).
+
+CLI flags override config file values.
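
The key lookup order can be sketched in shell. The variable names here are illustrative stand-ins for the config.toml keys, not how garble actually reads its config:

```sh
# Illustrative sketch of garble's API key lookup order.
api_key_cmd='printf from-cmd'
api_key='from-config'
ANTHROPIC_API_KEY='from-env'

if [ -n "$api_key_cmd" ]; then
  key=$(sh -c "$api_key_cmd")   # 1. run the configured command
elif [ -n "$api_key" ]; then
  key=$api_key                  # 2. literal key from the config file
else
  key=$ANTHROPIC_API_KEY        # 3. provider's environment variable
fi
printf '%s\n' "$key"            # prints "from-cmd"
```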
+
+## Contributions
+
+Patch requests are in [amolith/llm-projects] on [pr.pico.sh]. You don't
+need a new account to contribute, you don't need to fork this repo, you
+don't need to fiddle with `git send-email`, you don't need to faff with
+your email client to get `git request-pull` working...
+
+You just need:
+
+- Git
+- SSH
+- An SSH key
+
+```sh
+# Clone this repo, make your changes, and commit them
+# Create a new patch request with
+git format-patch origin/main --stdout | ssh pr.pico.sh pr create amolith/llm-projects
+# After potential feedback, submit a revision to an existing patch request with
+git format-patch origin/main --stdout | ssh pr.pico.sh pr add {prID}
+# List patch requests
+ssh pr.pico.sh pr ls amolith/llm-projects
+```
+
+See "How do Patch Requests work?" on [pr.pico.sh]'s home page for a more
+complete example workflow.
+
+[amolith/llm-projects]: https://pr.pico.sh/r/amolith/llm-projects
+[pr.pico.sh]: https://pr.pico.sh
+
+## License
+
+AGPL-3.0-or-later
diff --git a/crush.json b/crush.json
new file mode 100644
--- /dev/null
+++ b/crush.json
@@ -0,0 +1,9 @@
+{
+ "$schema": "https://charm.land/crush.json",
+ "lsp": {
+ "gleam": {
+ "command": "gleam",
+ "args": ["lsp"]
+ }
+ }
+}
diff --git a/gleam.toml b/gleam.toml
new file mode 100644
--- /dev/null
+++ b/gleam.toml
@@ -0,0 +1,36 @@
+name = "garble"
+version = "1.0.0"
+
+# Fill out these fields if you intend to generate HTML documentation or publish
+# your project to the Hex package manager.
+#
+# description = ""
+# licences = ["Apache-2.0"]
+# repository = { type = "github", user = "", repo = "" }
+# links = [{ title = "Website", href = "" }]
+#
+# For a full reference of all the available options, you can have a look at
+# https://gleam.run/writing-gleam/gleam-toml/.
+description = "Garble stdin to stdout with an LLM"
+licences = ["AGPL-3.0-or-later"]
+repository = { type = "custom", url = "https://git.secluded.site/garble" }
+
+[dependencies]
+gleam_stdlib = ">= 0.44.0 and < 2.0.0"
+stdin = ">= 2.0.2 and < 3.0.0"
+gleam_yielder = ">= 1.1.0 and < 2.0.0"
+argv = ">= 1.0.2 and < 2.0.0"
+glint = ">= 1.2.1 and < 2.0.0"
+gleam_httpc = ">= 5.0.0 and < 6.0.0"
+gleam_http = ">= 4.3.0 and < 5.0.0"
+gleam_json = ">= 3.1.0 and < 4.0.0"
+starlet = ">= 1.0.1 and < 2.0.0"
+envoy = ">= 1.1.0 and < 2.0.0"
+tom = ">= 2.0.0 and < 3.0.0"
+simplifile = ">= 2.3.2 and < 3.0.0"
+filepath = ">= 1.1.2 and < 2.0.0"
+shellout = ">= 1.7.0 and < 2.0.0"
+
+[dev-dependencies]
+gleeunit = ">= 1.0.0 and < 2.0.0"
+gleescript = ">= 1.5.2 and < 2.0.0"
diff --git a/manifest.toml b/manifest.toml
new file mode 100644
--- /dev/null
+++ b/manifest.toml
@@ -0,0 +1,46 @@
+# This file was generated by Gleam
+# You typically do not need to edit this file
+
+packages = [
+ { name = "argv", version = "1.0.2", build_tools = ["gleam"], requirements = [], otp_app = "argv", source = "hex", outer_checksum = "BA1FF0929525DEBA1CE67256E5ADF77A7CDDFE729E3E3F57A5BDCAA031DED09D" },
+ { name = "envoy", version = "1.1.0", build_tools = ["gleam"], requirements = ["gleam_stdlib"], otp_app = "envoy", source = "hex", outer_checksum = "850DA9D29D2E5987735872A2B5C81035146D7FE19EFC486129E44440D03FD832" },
+ { name = "filepath", version = "1.1.2", build_tools = ["gleam"], requirements = ["gleam_stdlib"], otp_app = "filepath", source = "hex", outer_checksum = "B06A9AF0BF10E51401D64B98E4B627F1D2E48C154967DA7AF4D0914780A6D40A" },
+ { name = "gleam_community_ansi", version = "1.4.3", build_tools = ["gleam"], requirements = ["gleam_community_colour", "gleam_regexp", "gleam_stdlib"], otp_app = "gleam_community_ansi", source = "hex", outer_checksum = "8A62AE9CC6EA65BEA630D95016D6C07E4F9973565FA3D0DE68DC4200D8E0DD27" },
+ { name = "gleam_community_colour", version = "2.0.2", build_tools = ["gleam"], requirements = ["gleam_json", "gleam_stdlib"], otp_app = "gleam_community_colour", source = "hex", outer_checksum = "E34DD2C896AC3792151EDA939DA435FF3B69922F33415ED3C4406C932FBE9634" },
+ { name = "gleam_erlang", version = "1.3.0", build_tools = ["gleam"], requirements = ["gleam_stdlib"], otp_app = "gleam_erlang", source = "hex", outer_checksum = "1124AD3AA21143E5AF0FC5CF3D9529F6DB8CA03E43A55711B60B6B7B3874375C" },
+ { name = "gleam_http", version = "4.3.0", build_tools = ["gleam"], requirements = ["gleam_stdlib"], otp_app = "gleam_http", source = "hex", outer_checksum = "82EA6A717C842456188C190AFB372665EA56CE13D8559BF3B1DD9E40F619EE0C" },
+ { name = "gleam_httpc", version = "5.0.0", build_tools = ["gleam"], requirements = ["gleam_erlang", "gleam_http", "gleam_stdlib"], otp_app = "gleam_httpc", source = "hex", outer_checksum = "C545172618D07811494E97AAA4A0FB34DA6F6D0061FDC8041C2F8E3BE2B2E48F" },
+ { name = "gleam_json", version = "3.1.0", build_tools = ["gleam"], requirements = ["gleam_stdlib"], otp_app = "gleam_json", source = "hex", outer_checksum = "44FDAA8847BE8FC48CA7A1C089706BD54BADCC4C45B237A992EDDF9F2CDB2836" },
+ { name = "gleam_regexp", version = "1.1.1", build_tools = ["gleam"], requirements = ["gleam_stdlib"], otp_app = "gleam_regexp", source = "hex", outer_checksum = "9C215C6CA84A5B35BB934A9B61A9A306EC743153BE2B0425A0D032E477B062A9" },
+ { name = "gleam_stdlib", version = "0.68.0", build_tools = ["gleam"], requirements = [], otp_app = "gleam_stdlib", source = "hex", outer_checksum = "EEC7E7A18B8A53B7A28B7F0A2198CE53BAFF05D45479E4806C387EDF26DA842D" },
+ { name = "gleam_time", version = "1.6.0", build_tools = ["gleam"], requirements = ["gleam_stdlib"], otp_app = "gleam_time", source = "hex", outer_checksum = "0DF3834D20193F0A38D0EB21F0A78D48F2EC276C285969131B86DF8D4EF9E762" },
+ { name = "gleam_yielder", version = "1.1.0", build_tools = ["gleam"], requirements = ["gleam_stdlib"], otp_app = "gleam_yielder", source = "hex", outer_checksum = "8E4E4ECFA7982859F430C57F549200C7749823C106759F4A19A78AEA6687717A" },
+ { name = "gleescript", version = "1.5.2", build_tools = ["gleam"], requirements = ["argv", "filepath", "gleam_erlang", "gleam_stdlib", "simplifile", "snag", "tom"], otp_app = "gleescript", source = "hex", outer_checksum = "27AC58481742ED29D9B37C506F78958A8AD798750A79ED08C8F8AFBA8F23563B" },
+ { name = "gleeunit", version = "1.9.0", build_tools = ["gleam"], requirements = ["gleam_stdlib"], otp_app = "gleeunit", source = "hex", outer_checksum = "DA9553CE58B67924B3C631F96FE3370C49EB6D6DC6B384EC4862CC4AAA718F3C" },
+ { name = "glint", version = "1.2.1", build_tools = ["gleam"], requirements = ["gleam_community_ansi", "gleam_community_colour", "gleam_stdlib", "snag"], otp_app = "glint", source = "hex", outer_checksum = "2214C7CEFDE457CEE62140C3D4899B964E05236DA74E4243DFADF4AF29C382BB" },
+ { name = "jscheam", version = "2.0.0", build_tools = ["gleam"], requirements = ["gleam_json", "gleam_stdlib"], otp_app = "jscheam", source = "hex", outer_checksum = "F8825DD6BC0C5B2BBB41A7B01A0D0AC89876B2305DF05CBB92A450E3B7734FC6" },
+ { name = "shellout", version = "1.7.0", build_tools = ["gleam"], requirements = ["gleam_stdlib"], otp_app = "shellout", source = "hex", outer_checksum = "1BDC03438FEB97A6AF3E396F4ABEB32BECF20DF2452EC9A8C0ACEB7BDDF70B14" },
+ { name = "simplifile", version = "2.3.2", build_tools = ["gleam"], requirements = ["filepath", "gleam_stdlib"], otp_app = "simplifile", source = "hex", outer_checksum = "E049B4DACD4D206D87843BCF4C775A50AE0F50A52031A2FFB40C9ED07D6EC70A" },
+ { name = "snag", version = "1.2.0", build_tools = ["gleam"], requirements = ["gleam_stdlib"], otp_app = "snag", source = "hex", outer_checksum = "274F41D6C3ECF99F7686FDCE54183333E41D2C1CA5A3A673F9A8B2C7A4401077" },
+ { name = "starlet", version = "1.0.1", build_tools = ["gleam"], requirements = ["gleam_http", "gleam_httpc", "gleam_json", "gleam_stdlib", "jscheam"], otp_app = "starlet", source = "hex", outer_checksum = "4E288CB970EFF9BE11EB476A0449A7A9BFAA993BE25CAA7B7E8A0A5B3A76AE5C" },
+ { name = "stdin", version = "2.0.2", build_tools = ["gleam"], requirements = ["gleam_stdlib", "gleam_yielder"], otp_app = "stdin", source = "hex", outer_checksum = "437084939CE094E06D32D07EB19B74473D08895AB817C03550CEDFDBF0DFDC19" },
+ { name = "tom", version = "2.0.0", build_tools = ["gleam"], requirements = ["gleam_stdlib", "gleam_time"], otp_app = "tom", source = "hex", outer_checksum = "74D0C5A3761F7A7D06994755D4D5AD854122EF8E9F9F76A3E7547606D8C77091" },
+]
+
+[requirements]
+argv = { version = ">= 1.0.2 and < 2.0.0" }
+envoy = { version = ">= 1.1.0 and < 2.0.0" }
+filepath = { version = ">= 1.1.2 and < 2.0.0" }
+gleam_http = { version = ">= 4.3.0 and < 5.0.0" }
+gleam_httpc = { version = ">= 5.0.0 and < 6.0.0" }
+gleam_json = { version = ">= 3.1.0 and < 4.0.0" }
+gleam_stdlib = { version = ">= 0.44.0 and < 2.0.0" }
+gleam_yielder = { version = ">= 1.1.0 and < 2.0.0" }
+gleescript = { version = ">= 1.5.2 and < 2.0.0" }
+gleeunit = { version = ">= 1.0.0 and < 2.0.0" }
+glint = { version = ">= 1.2.1 and < 2.0.0" }
+shellout = { version = ">= 1.7.0 and < 2.0.0" }
+simplifile = { version = ">= 2.3.2 and < 3.0.0" }
+starlet = { version = ">= 1.0.1 and < 2.0.0" }
+stdin = { version = ">= 2.0.2 and < 3.0.0" }
+tom = { version = ">= 2.0.0 and < 3.0.0" }
diff --git a/src/config.gleam b/src/config.gleam
new file mode 100644
--- /dev/null
+++ b/src/config.gleam
@@ -0,0 +1,96 @@
+import envoy
+import filepath
+import gleam/dict.{type Dict}
+import gleam/result
+import simplifile
+import tom.{type Toml}
+
+pub type Config {
+ Config(
+ provider: String,
+ model: String,
+ api_key: String,
+ api_key_cmd: String,
+ directions: String,
+ )
+}
+
+pub fn default() -> Config {
+ Config(provider: "", model: "", api_key: "", api_key_cmd: "", directions: "")
+}
+
+/// Load config from XDG_CONFIG_HOME/garble/config.toml or ~/.config/garble/config.toml
+pub fn load() -> Config {
+ case config_path() {
+ Error(_) -> default()
+ Ok(path) ->
+ case simplifile.read(path) {
+ Error(_) -> default()
+ Ok(content) ->
+ case tom.parse(content) {
+ Error(_) -> default()
+ Ok(parsed) -> parse_config(parsed)
+ }
+ }
+ }
+}
+
+fn config_path() -> Result(String, Nil) {
+ let config_dir = case envoy.get("XDG_CONFIG_HOME") {
+ Ok(xdg) -> xdg
+ Error(_) ->
+ case envoy.get("HOME") {
+ Ok(home) -> filepath.join(home, ".config")
+ Error(_) -> ""
+ }
+ }
+
+ case config_dir {
+ "" -> Error(Nil)
+ dir -> {
+ let path = filepath.join(dir, "garble/config.toml")
+ case simplifile.is_file(path) {
+ Ok(True) -> Ok(path)
+ _ -> Error(Nil)
+ }
+ }
+ }
+}
+
+fn parse_config(parsed: Dict(String, Toml)) -> Config {
+ Config(
+ provider: get_string(parsed, "provider"),
+ model: get_string(parsed, "model"),
+ api_key: get_string(parsed, "api_key"),
+ api_key_cmd: get_string(parsed, "api_key_cmd"),
+ directions: get_string(parsed, "directions"),
+ )
+}
+
+fn get_string(parsed: Dict(String, Toml), key: String) -> String {
+ tom.get_string(parsed, [key])
+ |> result.unwrap("")
+}
+
+/// Merge CLI flags over config values. CLI takes precedence when non-empty.
+pub fn merge(
+ cfg: Config,
+ cli_provider cli_provider: String,
+ cli_model cli_model: String,
+ cli_directions cli_directions: String,
+) -> Config {
+ Config(
+ provider: prefer_nonempty(cli_provider, cfg.provider),
+ model: prefer_nonempty(cli_model, cfg.model),
+ api_key: cfg.api_key,
+ api_key_cmd: cfg.api_key_cmd,
+ directions: prefer_nonempty(cli_directions, cfg.directions),
+ )
+}
+
+fn prefer_nonempty(cli: String, fallback: String) -> String {
+ case cli {
+ "" -> fallback
+ val -> val
+ }
+}
diff --git a/src/garble.gleam b/src/garble.gleam
new file mode 100644
--- /dev/null
+++ b/src/garble.gleam
@@ -0,0 +1,233 @@
+import argv
+import config
+import envoy
+import gleam/int
+import gleam/io
+import gleam/option.{None, Some}
+import gleam/result
+import gleam/string
+import gleam/yielder
+import glint
+import openai_compat
+import prompts
+import providers.{type Provider}
+import shellout
+import starlet
+import starlet/anthropic
+import starlet/gemini
+import starlet/openai
+import stdin
+
+@external(erlang, "erlang", "halt")
+fn halt(status: Int) -> Nil
+
+pub fn main() {
+ glint.new()
+ |> glint.with_name("garble")
+ |> glint.pretty_help(glint.default_pretty_help())
+ |> glint.add(at: [], do: garble_command())
+ |> glint.run(argv.load().arguments)
+}
+
+fn garble_command() -> glint.Command(Nil) {
+ use <- glint.command_help("Transform stdin with an LLM")
+ use directions <- glint.flag(
+ glint.string_flag("directions")
+ |> glint.flag_default("")
+ |> glint.flag_help("Directions for how to transform the input"),
+ )
+ use model <- glint.flag(
+ glint.string_flag("model")
+ |> glint.flag_default("")
+ |> glint.flag_help("Model to use (e.g. gpt-4o, claude-3-opus)"),
+ )
+ use provider <- glint.flag(
+ glint.string_flag("provider")
+ |> glint.flag_default("")
+ |> glint.flag_help("Provider (e.g. openai, anthropic)"),
+ )
+ use _, _args, flags <- glint.command()
+
+ // Load config file (if present) and merge with CLI flags
+ let cfg = config.load()
+ let assert Ok(directions_cli) = directions(flags)
+ let assert Ok(model_cli) = model(flags)
+ let assert Ok(provider_cli) = provider(flags)
+ let merged =
+ config.merge(
+ cfg,
+ cli_provider: provider_cli,
+ cli_model: model_cli,
+ cli_directions: directions_cli,
+ )
+
+ // Read all stdin into a single string
+ let input =
+ stdin.read_lines()
+ |> yielder.to_list()
+ |> string.join("")
+
+ // Build the user message with raw input and directions
+ let user_message = prompts.build_user_message(input, merged.directions)
+
+ case providers.get_provider(merged.provider) {
+ Ok(provider_info) -> {
+ case send_request(provider_info, merged, prompts.system(), user_message) {
+ Ok(response) -> io.print(prompts.extract_code_block(response))
+ Error(msg) -> {
+ io.println_error(msg)
+ halt(1)
+ }
+ }
+ }
+ Error(providers.FetchError(msg)) -> {
+ io.println_error("Error fetching providers: " <> msg)
+ halt(1)
+ }
+ Error(providers.ProviderNotFound(id)) -> {
+ io.println_error("Unknown provider: " <> id)
+ halt(1)
+ }
+ Error(providers.ModelNotFound(provider, model)) -> {
+ io.println_error(
+ "Unknown model '" <> model <> "' for provider '" <> provider <> "'",
+ )
+ halt(1)
+ }
+ }
+}
+
+fn send_request(
+ provider: Provider,
+ cfg: config.Config,
+ system: String,
+ user_prompt: String,
+) -> Result(String, String) {
+ use api_key <- result.try(get_api_key(provider, cfg))
+
+ case provider.provider_type {
+ "openai" -> send_openai(api_key, None, cfg.model, system, user_prompt)
+ "anthropic" -> send_anthropic(api_key, None, cfg.model, system, user_prompt)
+ "google" -> send_gemini(api_key, cfg.model, system, user_prompt)
+ "openai-compat" -> {
+ case provider.api_endpoint {
+ Some(endpoint) ->
+ openai_compat.send(endpoint, api_key, cfg.model, system, user_prompt)
+ None -> Error("No endpoint configured for " <> provider.id)
+ }
+ }
+ other -> Error("Unsupported provider type: " <> other)
+ }
+}
+
+fn send_openai(
+ api_key: String,
+ base_url: option.Option(String),
+ model: String,
+ system_prompt: String,
+ user_prompt: String,
+) -> Result(String, String) {
+ let client = case base_url {
+ Some(url) -> openai.new_with_base_url(api_key, url)
+ None -> openai.new(api_key)
+ }
+
+ starlet.chat(client, model)
+ |> starlet.system(system_prompt)
+ |> starlet.user(user_prompt)
+ |> starlet.send()
+ |> result.map(fn(resp) { starlet.text(resp.1) })
+ |> result.map_error(format_starlet_error)
+}
+
+fn send_anthropic(
+ api_key: String,
+ base_url: option.Option(String),
+ model: String,
+ system_prompt: String,
+ user_prompt: String,
+) -> Result(String, String) {
+ let client = case base_url {
+ Some(url) -> anthropic.new_with_base_url(api_key, url)
+ None -> anthropic.new(api_key)
+ }
+
+ starlet.chat(client, model)
+ |> starlet.system(system_prompt)
+ |> starlet.user(user_prompt)
+ |> starlet.send()
+ |> result.map(fn(resp) { starlet.text(resp.1) })
+ |> result.map_error(format_starlet_error)
+}
+
+fn send_gemini(
+ api_key: String,
+ model: String,
+ system_prompt: String,
+ user_prompt: String,
+) -> Result(String, String) {
+ let client = gemini.new(api_key)
+
+ starlet.chat(client, model)
+ |> starlet.system(system_prompt)
+ |> starlet.user(user_prompt)
+ |> starlet.send()
+ |> result.map(fn(resp) { starlet.text(resp.1) })
+ |> result.map_error(format_starlet_error)
+}
+
+fn format_starlet_error(err: starlet.StarletError) -> String {
+ case err {
+ starlet.Transport(msg) -> "Network error: " <> msg
+ starlet.Http(status, body) ->
+ "HTTP " <> int.to_string(status) <> ": " <> body
+ starlet.Decode(msg) -> "Parse error: " <> msg
+ starlet.Provider(name, msg, _) -> name <> " error: " <> msg
+ starlet.Tool(_error) -> "Tool error"
+ starlet.RateLimited(retry_after) -> {
+ case retry_after {
+ Some(secs) -> "Rate limited, retry after " <> int.to_string(secs) <> "s"
+ None -> "Rate limited"
+ }
+ }
+ }
+}
+
+fn get_api_key(provider: Provider, cfg: config.Config) -> Result(String, String) {
+ // Precedence: api_key_cmd > api_key > environment variable
+ case cfg.api_key_cmd {
+ "" ->
+ case cfg.api_key {
+ "" -> get_api_key_from_env(provider)
+ key -> Ok(key)
+ }
+ cmd -> run_api_key_cmd(cmd)
+ }
+}
+
+fn run_api_key_cmd(cmd: String) -> Result(String, String) {
+ case shellout.command(run: "sh", with: ["-c", cmd], in: ".", opt: []) {
+ Ok(output) -> Ok(string.trim(output))
+ Error(#(_status, msg)) -> Error("api_key_cmd failed: " <> msg)
+ }
+}
+
+fn get_api_key_from_env(provider: Provider) -> Result(String, String) {
+ case provider.api_key_env {
+ Some(env_ref) -> {
+ let env_var = resolve_env_var(env_ref)
+ case envoy.get(env_var) {
+ Ok(key) -> Ok(key)
+ Error(_) -> Error("Missing environment variable: " <> env_var)
+ }
+ }
+ None -> Error("No API key configured for provider: " <> provider.id)
+ }
+}
+
+fn resolve_env_var(value: String) -> String {
+ case value {
+ "$" <> rest -> rest
+ other -> other
+ }
+}
diff --git a/src/openai_compat.gleam b/src/openai_compat.gleam
new file mode 100644
--- /dev/null
+++ b/src/openai_compat.gleam
@@ -0,0 +1,82 @@
+import gleam/dynamic/decode
+import gleam/http
+import gleam/http/request
+import gleam/httpc
+import gleam/json
+import gleam/list
+
+pub fn send(
+ endpoint: String,
+ api_key: String,
+ model: String,
+ system_prompt: String,
+ user_prompt: String,
+) -> Result(String, String) {
+ let messages = build_messages(system_prompt, user_prompt)
+ let body =
+ json.object([
+ #("model", json.string(model)),
+ #("messages", json.array(messages, fn(m) { m })),
+ ])
+ |> json.to_string
+
+ let url = endpoint <> "/chat/completions"
+
+ case request.to(url) {
+ Error(_) -> Error("Invalid endpoint URL: " <> endpoint)
+ Ok(req) -> {
+ let req =
+ req
+ |> request.set_method(http.Post)
+ |> request.set_header("content-type", "application/json")
+ |> request.set_header("authorization", "Bearer " <> api_key)
+ |> request.set_body(body)
+
+ case httpc.send(req) {
+ Error(_) -> Error("Network error")
+ Ok(resp) if resp.status >= 200 && resp.status < 300 ->
+ parse_response(resp.body)
+        Ok(resp) -> Error("HTTP error: " <> resp.body)
+ }
+ }
+ }
+}
+
+fn build_messages(system_prompt: String, user_prompt: String) -> List(json.Json) {
+ case system_prompt {
+ "" -> [user_message(user_prompt)]
+ sys -> [system_message(sys), user_message(user_prompt)]
+ }
+}
+
+fn system_message(content: String) -> json.Json {
+ json.object([
+ #("role", json.string("system")),
+ #("content", json.string(content)),
+ ])
+}
+
+fn user_message(content: String) -> json.Json {
+ json.object([
+ #("role", json.string("user")),
+ #("content", json.string(content)),
+ ])
+}
+
+fn parse_response(body: String) -> Result(String, String) {
+ let choice_decoder = decode.at(["message", "content"], decode.string)
+
+ let response_decoder = {
+ use choices <- decode.field("choices", decode.list(choice_decoder))
+ decode.success(choices)
+ }
+
+ case json.parse(body, response_decoder) {
+ Error(_) -> Error("Failed to parse response")
+ Ok(choices) ->
+ case list.first(choices) {
+ Ok(content) -> Ok(content)
+ Error(_) -> Error("No response content")
+ }
+ }
+}
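
For reference, the request this module sends and the response shape `parse_response` expects look like this (values illustrative; only the fields the code reads are shown):

```
POST {endpoint}/chat/completions
{
  "model": "gpt-4o",
  "messages": [
    {"role": "system", "content": "..."},
    {"role": "user", "content": "..."}
  ]
}

Response (only choices[].message.content is decoded):
{
  "choices": [
    {"message": {"content": "..."}}
  ]
}
```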
diff --git a/src/prompts.gleam b/src/prompts.gleam
new file mode 100644
--- /dev/null
+++ b/src/prompts.gleam
@@ -0,0 +1,58 @@
+import gleam/option.{type Option, None, Some}
+import gleam/string
+
+/// The system prompt that establishes garble's role as a text transformer.
+const system_prompt = "You are a text transformation tool. Transform the text in <raw> according to the user's directions.
+
+<rules>
+- Follow the user's transformation directions precisely
+- If no directions are provided, return the input unchanged
+- Output ONLY the transformed textβno explanations, commentary, or metadata
+- Never reference the original text or the fact that a transformation occurred
+- Wrap your entire output in a fenced code block (```)
+</rules>"
+
+/// Returns the system prompt.
+pub fn system() -> String {
+ system_prompt
+}
+
+/// Build the user message with raw input sandwiched between direction references.
+pub fn build_user_message(raw_input: String, directions: String) -> String {
+ let directions_block = case string.trim(directions) {
+ "" -> ""
+ trimmed -> "\n\n<directions>\n" <> trimmed <> "\n</directions>"
+ }
+
+ "<raw>\n"
+ <> raw_input
+ <> "\n</raw>"
+ <> directions_block
+ <> "\n\nTransform the text in <raw> according to the directions above."
+}
+
+/// Extract content from within a fenced code block.
+/// Returns the content inside the first code block found, or the full text if none.
+pub fn extract_code_block(text: String) -> String {
+ case find_code_block(text) {
+ Some(content) -> content
+ None -> text
+ }
+}
+
+fn find_code_block(text: String) -> Option(String) {
+ case string.split_once(text, "```") {
+ Error(_) -> None
+ Ok(#(_before, after_open)) -> {
+ // Skip the language tag (everything until first newline)
+ let content_start = case string.split_once(after_open, "\n") {
+ Ok(#(_lang, rest)) -> rest
+ Error(_) -> after_open
+ }
+ case string.split_once(content_start, "```") {
+ Ok(#(content, _after_close)) -> Some(string.trim(content))
+ Error(_) -> None
+ }
+ }
+ }
+}
diff --git a/src/providers.gleam b/src/providers.gleam
new file mode 100644
--- /dev/null
+++ b/src/providers.gleam
@@ -0,0 +1,114 @@
+import gleam/dynamic/decode
+import gleam/http/request
+import gleam/httpc
+import gleam/int
+import gleam/json
+import gleam/list
+import gleam/option.{type Option, None, Some}
+import gleam/result
+
+const providers_url = "https://catwalk.charm.sh/v2/providers"
+
+pub type Provider {
+ Provider(
+ id: String,
+ provider_type: String,
+ api_key_env: Option(String),
+ api_endpoint: Option(String),
+ models: List(Model),
+ )
+}
+
+pub type Model {
+ Model(id: String)
+}
+
+pub type ValidationError {
+ FetchError(String)
+ ProviderNotFound(String)
+ ModelNotFound(provider: String, model: String)
+}
+
+pub fn validate(
+ provider_id: String,
+ model_id: String,
+) -> Result(Nil, ValidationError) {
+ use providers <- result.try(fetch_providers())
+ use provider <- result.try(find_provider(providers, provider_id))
+ find_model(provider, model_id)
+}
+
+pub fn get_provider(provider_id: String) -> Result(Provider, ValidationError) {
+ use providers <- result.try(fetch_providers())
+ find_provider(providers, provider_id)
+}
+
+fn fetch_providers() -> Result(List(Provider), ValidationError) {
+ let assert Ok(req) = request.to(providers_url)
+
+ case httpc.send(req) {
+ Ok(resp) if resp.status == 200 -> parse_providers(resp.body)
+ Ok(resp) -> Error(FetchError("HTTP " <> int.to_string(resp.status)))
+ Error(_) -> Error(FetchError("Network error"))
+ }
+}
+
+fn parse_providers(body: String) -> Result(List(Provider), ValidationError) {
+ let model_decoder = {
+ use id <- decode.field("id", decode.string)
+ decode.success(Model(id:))
+ }
+
+ let provider_decoder = {
+ use id <- decode.field("id", decode.string)
+ use provider_type <- decode.field("type", decode.string)
+ use api_key_env <- decode.optional_field(
+ "api_key",
+ None,
+ decode.string |> decode.map(Some),
+ )
+ use api_endpoint <- decode.optional_field(
+ "api_endpoint",
+ None,
+ decode.string |> decode.map(Some),
+ )
+ use models <- decode.field("models", decode.list(model_decoder))
+ decode.success(Provider(
+ id:,
+ provider_type:,
+ api_key_env:,
+ api_endpoint:,
+ models:,
+ ))
+ }
+
+ json.parse(body, decode.list(provider_decoder))
+ |> result.map_error(fn(_) { FetchError("Invalid JSON") })
+}
+
+fn find_provider(
+ providers: List(Provider),
+ provider_id: String,
+) -> Result(Provider, ValidationError) {
+ providers
+ |> list.find(fn(p) { p.id == provider_id })
+ |> result.map_error(fn(_) { ProviderNotFound(provider_id) })
+}
+
+fn find_model(
+ provider: Provider,
+ model_id: String,
+) -> Result(Nil, ValidationError) {
+ provider.models
+ |> list.find(fn(m) { m.id == model_id })
+ |> result.map(fn(_) { Nil })
+ |> result.map_error(fn(_) { ModelNotFound(provider.id, model_id) })
+}
+
+/// Resolve an environment variable reference like "$OPENAI_API_KEY" to just "OPENAI_API_KEY"
+pub fn resolve_env_var_name(value: String) -> Option(String) {
+ case value {
+ "$" <> rest -> Some(rest)
+ _ -> None
+ }
+}
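
The decoder above implies each entry returned by the providers endpoint carries at least these fields (values illustrative; `api_key` and `api_endpoint` are optional, and `api_key` uses the `$NAME` environment-variable reference form):

```
[
  {
    "id": "anthropic",
    "type": "anthropic",
    "api_key": "$ANTHROPIC_API_KEY",
    "api_endpoint": "...",
    "models": [{ "id": "claude-sonnet-4-20250514" }]
  }
]
```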
diff --git a/test/garble_test.gleam b/test/garble_test.gleam
new file mode 100644
--- /dev/null
+++ b/test/garble_test.gleam
@@ -0,0 +1,13 @@
+import gleeunit
+
+pub fn main() -> Nil {
+ gleeunit.main()
+}
+
+// gleeunit test functions end in `_test`
+pub fn hello_world_test() {
+ let name = "Joe"
+ let greeting = "Hello, " <> name <> "!"
+
+ assert greeting == "Hello, Joe!"
+}
diff --git a/test/prompts_test.gleam b/test/prompts_test.gleam
new file mode 100644
--- /dev/null
+++ b/test/prompts_test.gleam
@@ -0,0 +1,82 @@
+import gleam/string
+import gleeunit/should
+import prompts
+
+// --- system tests ---
+
+pub fn system_returns_prompt_test() {
+ let result = prompts.system()
+ should.be_true(result |> contains("text transformation tool"))
+ should.be_true(result |> contains("<raw>"))
+}
+
+// --- build_user_message tests ---
+
+pub fn build_user_message_with_directions_test() {
+ let result = prompts.build_user_message("hello", "make it loud")
+ should.be_true(result |> contains("<raw>\nhello\n</raw>"))
+ should.be_true(
+ result |> contains("<directions>\nmake it loud\n</directions>"),
+ )
+ should.be_true(result |> contains("Transform the text in <raw>"))
+}
+
+pub fn build_user_message_without_directions_test() {
+ let result = prompts.build_user_message("hello", "")
+ should.be_true(result |> contains("<raw>\nhello\n</raw>"))
+ should.be_false(result |> contains("<directions>"))
+ should.be_true(result |> contains("Transform the text in <raw>"))
+}
+
+pub fn build_user_message_trims_directions_test() {
+ let result = prompts.build_user_message("hello", " ")
+ should.be_false(result |> contains("<directions>"))
+}
+
+// --- extract_code_block tests ---
+
+pub fn extract_simple_code_block_test() {
+ let input = "```\nhello world\n```"
+ prompts.extract_code_block(input)
+ |> should.equal("hello world")
+}
+
+pub fn extract_code_block_with_language_test() {
+ let input = "```json\n{\"key\": \"value\"}\n```"
+ prompts.extract_code_block(input)
+ |> should.equal("{\"key\": \"value\"}")
+}
+
+pub fn extract_code_block_with_surrounding_text_test() {
+ let input = "Here's the result:\n```\ntransformed\n```\nHope that helps!"
+ prompts.extract_code_block(input)
+ |> should.equal("transformed")
+}
+
+pub fn extract_returns_full_text_when_no_code_block_test() {
+ let input = "Just plain text without any code blocks."
+ prompts.extract_code_block(input)
+ |> should.equal(input)
+}
+
+pub fn extract_uses_first_code_block_test() {
+ let input = "```\nfirst\n```\n\n```\nsecond\n```"
+ prompts.extract_code_block(input)
+ |> should.equal("first")
+}
+
+pub fn extract_handles_empty_code_block_test() {
+ let input = "```\n```"
+ prompts.extract_code_block(input)
+ |> should.equal("")
+}
+
+pub fn extract_handles_unclosed_code_block_test() {
+ let input = "```\nunclosed content here"
+ prompts.extract_code_block(input)
+ |> should.equal(input)
+}
+
+fn contains(haystack: String, needle: String) -> Bool {
+ string.contains(haystack, needle)
+}