---
title: AI Models and Pricing - Zed
description: AI models available via Zed Pro including Claude, GPT-5.2, Gemini 3.1 Pro, and Grok. Pricing, context windows, and tool call support.
---

# Models

Zed's plans offer hosted versions of major LLMs with higher rate limits than direct API access. Model availability is updated regularly. To use your own API keys instead, see [LLM Providers](./llm-providers.md). For general setup, see [Configuration](./configuration.md).

| Model                  | Provider  | Token Type          | Provider Price per 1M tokens | Zed Price per 1M tokens |
| ---------------------- | --------- | ------------------- | ---------------------------- | ----------------------- |
| Claude Opus 4.5        | Anthropic | Input               | $5.00                        | $5.50                   |
|                        | Anthropic | Output              | $25.00                       | $27.50                  |
|                        | Anthropic | Input - Cache Write | $6.25                        | $6.875                  |
|                        | Anthropic | Input - Cache Read  | $0.50                        | $0.55                   |
| Claude Opus 4.6        | Anthropic | Input               | $5.00                        | $5.50                   |
|                        | Anthropic | Output              | $25.00                       | $27.50                  |
|                        | Anthropic | Input - Cache Write | $6.25                        | $6.875                  |
|                        | Anthropic | Input - Cache Read  | $0.50                        | $0.55                   |
| Claude Sonnet 4.5      | Anthropic | Input               | $3.00                        | $3.30                   |
|                        | Anthropic | Output              | $15.00                       | $16.50                  |
|                        | Anthropic | Input - Cache Write | $3.75                        | $4.125                  |
|                        | Anthropic | Input - Cache Read  | $0.30                        | $0.33                   |
| Claude Sonnet 4.6      | Anthropic | Input               | $3.00                        | $3.30                   |
|                        | Anthropic | Output              | $15.00                       | $16.50                  |
|                        | Anthropic | Input - Cache Write | $3.75                        | $4.125                  |
|                        | Anthropic | Input - Cache Read  | $0.30                        | $0.33                   |
| Claude Haiku 4.5       | Anthropic | Input               | $1.00                        | $1.10                   |
|                        | Anthropic | Output              | $5.00                        | $5.50                   |
|                        | Anthropic | Input - Cache Write | $1.25                        | $1.375                  |
|                        | Anthropic | Input - Cache Read  | $0.10                        | $0.11                   |
| GPT-5.2                | OpenAI    | Input               | $1.25                        | $1.375                  |
|                        | OpenAI    | Output              | $10.00                       | $11.00                  |
|                        | OpenAI    | Cached Input        | $0.125                       | $0.1375                 |
| GPT-5.2 Codex          | OpenAI    | Input               | $1.25                        | $1.375                  |
|                        | OpenAI    | Output              | $10.00                       | $11.00                  |
|                        | OpenAI    | Cached Input        | $0.125                       | $0.1375                 |
| GPT-5 mini             | OpenAI    | Input               | $0.25                        | $0.275                  |
|                        | OpenAI    | Output              | $2.00                        | $2.20                   |
|                        | OpenAI    | Cached Input        | $0.025                       | $0.0275                 |
| GPT-5 nano             | OpenAI    | Input               | $0.05                        | $0.055                  |
|                        | OpenAI    | Output              | $0.40                        | $0.44                   |
|                        | OpenAI    | Cached Input        | $0.005                       | $0.0055                 |
| Gemini 3.1 Pro         | Google    | Input               | $2.00                        | $2.20                   |
|                        | Google    | Output              | $12.00                       | $13.20                  |
| Gemini 3 Pro           | Google    | Input               | $2.00                        | $2.20                   |
|                        | Google    | Output              | $12.00                       | $13.20                  |
| Gemini 3 Flash         | Google    | Input               | $0.30                        | $0.33                   |
|                        | Google    | Output              | $2.50                        | $2.75                   |
| Grok 4                 | X.ai      | Input               | $3.00                        | $3.30                   |
|                        | X.ai      | Output              | $15.00                       | $16.50                  |
|                        | X.ai      | Cached Input        | $0.75                        | $0.825                  |
| Grok 4 Fast            | X.ai      | Input               | $0.20                        | $0.22                   |
|                        | X.ai      | Output              | $0.50                        | $0.55                   |
|                        | X.ai      | Cached Input        | $0.05                        | $0.055                  |
| Grok 4 (Non-Reasoning) | X.ai      | Input               | $0.20                        | $0.22                   |
|                        | X.ai      | Output              | $0.50                        | $0.55                   |
|                        | X.ai      | Cached Input        | $0.05                        | $0.055                  |
| Grok Code Fast 1       | X.ai      | Input               | $0.20                        | $0.22                   |
|                        | X.ai      | Output              | $1.50                        | $1.65                   |
|                        | X.ai      | Cached Input        | $0.02                        | $0.022                  |

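The cache rows above are what make repeated prompts cheaper: cached input is billed at a fraction of the standard input rate. A back-of-the-envelope comparison under assumed figures (Claude Sonnet 4.5 at Zed prices, a 100k-token prompt sent 10 times, the first send a cache write and every later send a cache hit):

```python
# Illustrative comparison only; prices are USD per 1M tokens from the table above.
INPUT, CACHE_WRITE, CACHE_READ = 3.30, 4.125, 0.33  # Claude Sonnet 4.5, Zed prices
tokens, sends = 100_000, 10

# Without caching: every send pays the full input rate.
uncached = sends * tokens * INPUT / 1_000_000                                    # $3.30
# With caching: one cache write, then nine cache reads.
cached = (tokens * CACHE_WRITE + (sends - 1) * tokens * CACHE_READ) / 1_000_000  # $0.7095
```

Under these assumptions caching cuts the input bill by roughly 78%; actual savings depend on cache hit rates and prompt structure.
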
## Recent Model Retirements

As of February 19, 2026, Zed Pro serves newer model versions in place of the retired models below:

- Claude Opus 4.1 → Claude Opus 4.5 or Claude Opus 4.6
- Claude Sonnet 4 → Claude Sonnet 4.5 or Claude Sonnet 4.6
- Claude Sonnet 3.7 (retired February 19) → Claude Sonnet 4.5 or Claude Sonnet 4.6
- GPT-5.1 and GPT-5 → GPT-5.2 or GPT-5.2 Codex
- Gemini 2.5 Pro → Gemini 3 Pro or Gemini 3.1 Pro
- Gemini 2.5 Flash → Gemini 3 Flash

## Usage {#usage}

Any usage of a Zed-hosted model is billed at the Zed Price (the rightmost column above). See [Plans and Usage](./plans-and-usage.md) for details on Zed's plans and limits for hosted models.
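
As a rough sketch of the arithmetic (the model keys and `request_cost` helper below are illustrative, not part of any Zed API), a request's cost is the sum over token types of count × price per 1M tokens ÷ 1,000,000:

```python
# Estimate a request's cost at Zed prices (USD per 1M tokens, from the table above).
# Model keys and this helper are hypothetical, for illustration only.
ZED_PRICES = {
    "claude-sonnet-4.5": {"input": 3.30, "output": 16.50},
    "gpt-5.2": {"input": 1.375, "output": 11.00},
}

def request_cost(model: str, **tokens: int) -> float:
    """Sum over token types: count * (price per 1M tokens) / 1_000_000."""
    prices = ZED_PRICES[model]
    return sum(count * prices[kind] / 1_000_000 for kind, count in tokens.items())

# A 50k-input / 2k-output Claude Sonnet 4.5 request:
# 50_000 * 3.30 / 1M + 2_000 * 16.50 / 1M = $0.165 + $0.033 = $0.198
cost = request_cost("claude-sonnet-4.5", input=50_000, output=2_000)
```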

> LLMs can enter unproductive loops that require user intervention. Monitor longer-running tasks and interrupt if needed.

## Context Windows {#context-windows}

A context window is the maximum span of text and code an LLM can consider at once, including both the input prompt and the output the model generates.

| Model             | Provider  | Zed-Hosted Context Window |
| ----------------- | --------- | ------------------------- |
| Claude Opus 4.5   | Anthropic | 200k                      |
| Claude Opus 4.6   | Anthropic | 200k                      |
| Claude Sonnet 4.5 | Anthropic | 200k                      |
| Claude Sonnet 4.6 | Anthropic | 200k                      |
| Claude Haiku 4.5  | Anthropic | 200k                      |
| GPT-5.2           | OpenAI    | 400k                      |
| GPT-5.2 Codex     | OpenAI    | 400k                      |
| GPT-5 mini        | OpenAI    | 400k                      |
| GPT-5 nano        | OpenAI    | 400k                      |
| Gemini 3.1 Pro    | Google    | 200k                      |
| Gemini 3 Pro      | Google    | 200k                      |
| Gemini 3 Flash    | Google    | 200k                      |

> Context window limits for hosted Sonnet 4.5/4.6 and Gemini 3.1 Pro/3 Pro/Flash may increase in future releases.

Each Agent thread and text thread in Zed maintains its own context window. The more prompts, attached files, and responses a session includes, the more of that window it consumes.

Start a new thread for each distinct task to keep context focused.

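One way to reason about this before a long session: a common rule of thumb is roughly 4 characters per token (real tokenizers vary by model and content, so treat this as a loose estimate). A minimal sketch, with window sizes from the table above and a hypothetical `fits` helper:

```python
# Rough check that a thread still fits a model's context window.
# Assumes ~4 characters per token: a heuristic, not a real tokenizer.
CONTEXT_WINDOWS = {"claude-sonnet-4.5": 200_000, "gpt-5.2": 400_000}

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def fits(model: str, thread_text: str, reserved_output: int = 8_000) -> bool:
    """True if the thread plus a reserved output budget fits the window."""
    return estimate_tokens(thread_text) + reserved_output <= CONTEXT_WINDOWS[model]
```
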
## Tool Calls {#tool-calls}

Models can use [tools](./tools.md) to interface with your code, search the web, and perform other useful functions.