# ask - AI CLI tool

A lightweight bash script for querying AI models via the OpenRouter API, optimized for direct, executable output.

## Features

- **Direct Output** - Returns executable commands and answers without markdown formatting
- **Multiple Models** - Quick access to Mercury Coder, Gemini, Claude Sonnet, Kimi, and Qwen models
- **Streaming Support** - Real-time response streaming for long outputs
- **Provider Routing** - Automatic fallback between providers for reliability
- **Performance Metrics** - Shows response time and tokens/second
- **Pipe Support** - Works seamlessly with Unix pipes and stdin

## Quick start

```bash
# Clone and setup
git clone https://github.com/yourusername/ask.git
cd ask
chmod +x ask

# Set your API key
export OPENROUTER_API_KEY="your-api-key-here"

# Test it
./ask "What is 2+2?"
```

## Installation

### Option 1: Using install.sh (recommended)
```bash
sudo ./install.sh
```

### Option 2: Manual installation
```bash
chmod +x ask
sudo cp ask /usr/local/bin/
```

### Persistent API key setup

Add to your shell profile (`~/.bashrc`, `~/.zshrc`, etc.):
```bash
export OPENROUTER_API_KEY="your-api-key-here"
```

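To confirm the key is actually visible in a new shell session, a quick check:

```bash
# Print whether OPENROUTER_API_KEY is set in the current shell.
if [ -n "$OPENROUTER_API_KEY" ]; then
  echo "OPENROUTER_API_KEY is set"
else
  echo "OPENROUTER_API_KEY is NOT set"
fi
```
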
## Usage

### Basic usage

```bash
ask "What is 2+2?"
ask "Write a Python hello world"
```

### Model selection

```bash
# Default model (Mercury Coder - optimized for code)
ask "Write a Python function"

# Shorthand flags for quick model switching
ask -c "prompt"  # Mercury Coder (default, best for code)
ask -g "prompt"  # Gemini 2.5 Flash (fast, general purpose)
ask -s "prompt"  # Claude Sonnet 4 (complex reasoning)
ask -k "prompt"  # Kimi K2 (long context)
ask -q "prompt"  # Qwen3 235B (large model)

# Custom model by full name
ask -m "openai/gpt-4o" "Explain this concept"
```

### Provider routing

Specify provider order for fallback support:

```bash
ask --provider "openai,together" "Generate code"
```

This will try OpenAI first, then fall back to Together if needed.

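Under the hood, a provider list like this likely maps to OpenRouter's `provider.order` request field. A minimal sketch of building such a request body with `jq` (the field layout follows OpenRouter's documented provider-routing format; the model name and variable names are illustrative, not ask's actual code):

```bash
providers="openai,together"

# Turn the comma-separated list into a JSON array inside the request body.
body=$(jq -n --arg p "$providers" '{
  model: "inception/mercury-coder",
  provider: { order: ($p | split(",")) },
  messages: [ { role: "user", content: "Generate code" } ]
}')

echo "$body" | jq -c '.provider.order'
# Prints: ["openai","together"]
```
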
### System prompts

```bash
# Custom system prompt
ask --system "You are a pirate" "Tell me about sailing"

# Disable system prompt for raw model behavior
ask -r "What is 2+2?"
```

### Streaming mode

Get responses as they're generated:

```bash
ask --stream "Tell me a long story"
```

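Internally, streaming over OpenRouter uses the OpenAI-style server-sent events format. A hedged sketch of the parsing loop such a script might use; the `parse_stream` helper is illustrative, not ask's actual code, and the input is simulated with a here-doc where a live call would pipe in `curl -N` output:

```bash
# Extract the text deltas from an OpenAI-style SSE stream.
parse_stream() {
  sed -n 's/^data: //p' | while IFS= read -r line; do
    [ "$line" = "[DONE]" ] && break
    printf '%s' "$(echo "$line" | jq -r '.choices[0].delta.content // empty')"
  done
  echo
}

# Simulated stream instead of a live curl call.
parse_stream <<'EOF'
data: {"choices":[{"delta":{"content":"Once upon"}}]}
data: {"choices":[{"delta":{"content":" a time..."}}]}
data: [DONE]
EOF
# Prints: Once upon a time...
```
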
### Pipe input

```bash
echo "Fix this code: print('hello world)" | ask
cat script.py | ask "Review this code"
```

## Options

| Option | Description |
|--------|-------------|
| `-c` | Use Mercury Coder (default) |
| `-g` | Use Google Gemini 2.5 Flash |
| `-s` | Use Claude Sonnet 4 |
| `-k` | Use Moonshot AI Kimi K2 |
| `-q` | Use Qwen3 235B |
| `-m MODEL` | Use custom model |
| `-r` | Disable system prompt |
| `--stream` | Enable streaming output |
| `--system PROMPT` | Set custom system prompt |
| `--provider LIST` | Set provider order (comma-separated) |
| `-h, --help` | Show help message |

## Common use cases

### Command generation
```bash
# Get executable commands directly
ask "Command to find files larger than 100MB"
# Output: find . -type f -size +100M

ask "ffmpeg command to convert mp4 to gif"
# Output: ffmpeg -i input.mp4 -vf "fps=10,scale=320:-1:flags=lanczos" output.gif
```

### Code generation
```bash
# Generate code snippets
ask "Python function to calculate factorial"

# Code review
cat script.py | ask "Find potential bugs in this code"
```

### Quick answers
```bash
# Calculations
ask "What is 18% of 2450?"
# Output: 441

# Technical questions
ask "What port does PostgreSQL use?"
# Output: 5432
```

### Advanced usage
```bash
# Chain commands
ask "List all Python files" | ask "Generate a script to check syntax of these files"

# Use with other tools
docker ps -a | ask "Which containers are using the most memory?"

# Provider fallback for reliability
ask --provider "anthropic,openai" "Complex analysis task"
```

## Requirements

### Dependencies
- `bash` - Shell interpreter
- `curl` - HTTP requests to OpenRouter API
- `jq` - JSON parsing for API responses
- `bc` - Performance metrics calculation

### API access
- OpenRouter API key (get one at [openrouter.ai](https://openrouter.ai))
- Set as environment variable: `OPENROUTER_API_KEY`

## Performance

The tool displays performance metrics after each query:
- **Model** - Which AI model processed the request
- **Provider** - The infrastructure provider that served it
- **Response Time** - Total time in seconds
- **Token Speed** - Generation speed in tokens/second

Example output:
```
$ ask "What is 2+2?"

4

[inception/mercury-coder via Inception - 0.82s - 11.0 tok/s]
```

## Troubleshooting

### API key not set
```bash
Error: OPENROUTER_API_KEY environment variable is not set
# Solution: export OPENROUTER_API_KEY="your-key-here"
```

### Missing dependencies
```bash
# Check for required tools
which curl jq bc

# Install on macOS
brew install jq bc

# Install on Ubuntu/Debian
sudo apt-get install jq bc
```

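The manual `which` check above can also be scripted so nothing is missed; a small sketch that reports any missing tool:

```bash
# List any required tools that are not on PATH.
missing=""
for cmd in curl jq bc; do
  command -v "$cmd" >/dev/null 2>&1 || missing="$missing $cmd"
done

if [ -n "$missing" ]; then
  echo "Missing dependencies:$missing"
else
  echo "All dependencies found"
fi
```
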
### No response or errors
```bash
# Test with verbose curl output
curl -v https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"google/gemini-2.5-flash","messages":[{"role":"user","content":"test"}]}'
```

## License

MIT