---
title: AI Models and Pricing - Zed
description: AI models available via Zed Pro including Claude, GPT-5.2, Gemini 3.1 Pro, and Grok. Pricing, context windows, and tool call support.
---

# Models

Zed's plans offer hosted versions of major LLMs with higher rate limits than direct API access. Model availability is updated regularly. To use your own API keys instead, see [LLM Providers](./llm-providers.md). For general setup, see [Configuration](./configuration.md).

> **Note:** Claude Opus models are not available on the [Student plan](./plans-and-usage.md#student).

| Model                  | Provider  | Token Type          | Provider Price per 1M tokens | Zed Price per 1M tokens |
| ---------------------- | --------- | ------------------- | ---------------------------- | ----------------------- |
| Claude Opus 4.5        | Anthropic | Input               | $5.00                        | $5.50                   |
|                        | Anthropic | Output              | $25.00                       | $27.50                  |
|                        | Anthropic | Input - Cache Write | $6.25                        | $6.875                  |
|                        | Anthropic | Input - Cache Read  | $0.50                        | $0.55                   |
| Claude Opus 4.6        | Anthropic | Input               | $5.00                        | $5.50                   |
|                        | Anthropic | Output              | $25.00                       | $27.50                  |
|                        | Anthropic | Input - Cache Write | $6.25                        | $6.875                  |
|                        | Anthropic | Input - Cache Read  | $0.50                        | $0.55                   |
| Claude Sonnet 4.5      | Anthropic | Input               | $3.00                        | $3.30                   |
|                        | Anthropic | Output              | $15.00                       | $16.50                  |
|                        | Anthropic | Input - Cache Write | $3.75                        | $4.125                  |
|                        | Anthropic | Input - Cache Read  | $0.30                        | $0.33                   |
| Claude Sonnet 4.6      | Anthropic | Input               | $3.00                        | $3.30                   |
|                        | Anthropic | Output              | $15.00                       | $16.50                  |
|                        | Anthropic | Input - Cache Write | $3.75                        | $4.125                  |
|                        | Anthropic | Input - Cache Read  | $0.30                        | $0.33                   |
| Claude Haiku 4.5       | Anthropic | Input               | $1.00                        | $1.10                   |
|                        | Anthropic | Output              | $5.00                        | $5.50                   |
|                        | Anthropic | Input - Cache Write | $1.25                        | $1.375                  |
|                        | Anthropic | Input - Cache Read  | $0.10                        | $0.11                   |
| GPT-5.2                | OpenAI    | Input               | $1.25                        | $1.375                  |
|                        | OpenAI    | Output              | $10.00                       | $11.00                  |
|                        | OpenAI    | Cached Input        | $0.125                       | $0.1375                 |
| GPT-5.2 Codex          | OpenAI    | Input               | $1.25                        | $1.375                  |
|                        | OpenAI    | Output              | $10.00                       | $11.00                  |
|                        | OpenAI    | Cached Input        | $0.125                       | $0.1375                 |
| GPT-5 mini             | OpenAI    | Input               | $0.25                        | $0.275                  |
|                        | OpenAI    | Output              | $2.00                        | $2.20                   |
|                        | OpenAI    | Cached Input        | $0.025                       | $0.0275                 |
| GPT-5 nano             | OpenAI    | Input               | $0.05                        | $0.055                  |
|                        | OpenAI    | Output              | $0.40                        | $0.44                   |
|                        | OpenAI    | Cached Input        | $0.005                       | $0.0055                 |
| Gemini 3.1 Pro         | Google    | Input               | $2.00                        | $2.20                   |
|                        | Google    | Output              | $12.00                       | $13.20                  |
| Gemini 3 Flash         | Google    | Input               | $0.30                        | $0.33                   |
|                        | Google    | Output              | $2.50                        | $2.75                   |
| Grok 4                 | X.ai      | Input               | $3.00                        | $3.30                   |
|                        | X.ai      | Output              | $15.00                       | $16.50                  |
|                        | X.ai      | Cached Input        | $0.75                        | $0.825                  |
| Grok 4 Fast            | X.ai      | Input               | $0.20                        | $0.22                   |
|                        | X.ai      | Output              | $0.50                        | $0.55                   |
|                        | X.ai      | Cached Input        | $0.05                        | $0.055                  |
| Grok 4 (Non-Reasoning) | X.ai      | Input               | $0.20                        | $0.22                   |
|                        | X.ai      | Output              | $0.50                        | $0.55                   |
|                        | X.ai      | Cached Input        | $0.05                        | $0.055                  |
| Grok Code Fast 1       | X.ai      | Input               | $0.20                        | $0.22                   |
|                        | X.ai      | Output              | $1.50                        | $1.65                   |
|                        | X.ai      | Cached Input        | $0.02                        | $0.022                  |
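
Every row in the table follows the same pattern: the Zed price is the provider price plus a 10% margin. A minimal sketch of that relationship (the markup rate is inferred from the figures above, not an officially documented formula):

```python
def zed_price(provider_price_per_million: float) -> float:
    """Zed-hosted price per 1M tokens: provider price plus a 10% margin.

    The 10% rate is inferred from the pricing table, not an official formula.
    """
    return round(provider_price_per_million * 1.10, 4)

# Claude Sonnet 4.5 input: $3.00 from Anthropic -> $3.30 via Zed
print(zed_price(3.00))  # 3.3
```

The same calculation reproduces every Zed price in the table, including fractional ones such as the $6.875 cache-write rate.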

## Recent Model Retirements

As of February 19, 2026, Zed Pro serves newer model versions in place of the retired models below:

- Claude Opus 4.1 → Claude Opus 4.5 or Claude Opus 4.6
- Claude Sonnet 4 → Claude Sonnet 4.5 or Claude Sonnet 4.6
- Claude Sonnet 3.7 (retired Feb 19) → Claude Sonnet 4.5 or Claude Sonnet 4.6
- GPT-5.1 and GPT-5 → GPT-5.2 or GPT-5.2 Codex
- Gemini 2.5 Pro → Gemini 3.1 Pro
- Gemini 3 Pro → Gemini 3.1 Pro
- Gemini 2.5 Flash → Gemini 3 Flash

## Usage {#usage}

Any usage of a Zed-hosted model is billed at the Zed Price (rightmost column above). See [Plans and Usage](./plans-and-usage.md) for details on plan limits for hosted models.
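
To estimate what a single request costs at these rates, multiply each token count by the matching Zed price per 1M tokens. A quick sketch (prices taken from the table above; the token counts are hypothetical):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price: float, output_price: float) -> float:
    """Dollar cost of one request, given Zed prices in dollars per 1M tokens."""
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# Example: 10,000 input + 1,500 output tokens on Claude Sonnet 4.5,
# at Zed prices of $3.30 (input) and $16.50 (output) per 1M tokens.
cost = request_cost(10_000, 1_500, 3.30, 16.50)  # about $0.058
```

Cached input tokens are billed at the separate (much lower) cached rates in the table, so effective costs for long conversations are often below this simple estimate.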

> LLMs can enter unproductive loops that require user intervention. Monitor longer-running tasks and interrupt if needed.

## Context Windows {#context-windows}

A context window is the maximum span of text and code an LLM can consider at once, including both the input prompt and the output generated by the model.

| Model             | Provider  | Zed-Hosted Context Window |
| ----------------- | --------- | ------------------------- |
| Claude Opus 4.5   | Anthropic | 200k                      |
| Claude Opus 4.6   | Anthropic | 1M                        |
| Claude Sonnet 4.5 | Anthropic | 200k                      |
| Claude Sonnet 4.6 | Anthropic | 1M                        |
| Claude Haiku 4.5  | Anthropic | 200k                      |
| GPT-5.2           | OpenAI    | 400k                      |
| GPT-5.2 Codex     | OpenAI    | 400k                      |
| GPT-5 mini        | OpenAI    | 400k                      |
| GPT-5 nano        | OpenAI    | 400k                      |
| Gemini 3.1 Pro    | Google    | 200k                      |
| Gemini 3 Flash    | Google    | 200k                      |

> Context window limits for hosted Gemini 3.1 Pro and Gemini 3 Flash may increase in future releases.

Each Agent thread and text thread in Zed maintains its own context window. The more prompts, attached files, and responses a session includes, the more of that window it consumes.

Start a new thread for each distinct task to keep context focused.
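
To get a feel for how quickly a session approaches a 200k-token window, a rough character-based estimate can help. This sketch assumes the common ~4-characters-per-token rule of thumb, which only approximates what real tokenizers produce:

```python
def rough_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token (real tokenizers vary)."""
    return max(1, len(text) // 4)

def fits_window(chunks: list[str], window_tokens: int = 200_000) -> bool:
    """Whether prompts, attached files, and responses together fit the window."""
    return sum(rough_tokens(c) for c in chunks) <= window_tokens
```

By this heuristic, a 200k-token window holds on the order of 800,000 characters of combined prompts, files, and responses; actual capacity depends on the model's tokenizer.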

## Tool Calls {#tool-calls}

Models can use [tools](./tools.md) to interface with your code, search the web, and perform other useful functions.