# Configuring the Assistant

## Providers {#providers}

The following providers are supported:

- [Zed AI (Configured by default when signed in)](#zed-ai)
- [Anthropic](#anthropic)
- [GitHub Copilot Chat](#github-copilot-chat)
- [Google AI](#google-ai)
- [Ollama](#ollama)
- [OpenAI](#openai)
- [DeepSeek](#deepseek)
- [LM Studio](#lmstudio)

To configure different providers, run `assistant: show configuration` in the command palette, or click on the hamburger menu at the top-right of the assistant panel and select "Configure".

You can further customize providers via `settings.json`:

- [Configuring endpoints](#custom-endpoint)
- [Configuring timeouts](#provider-timeout)
- [Configuring default model](#default-model)
- [Configuring alternative models for inline assists](#alternative-assists)

### Zed AI {#zed-ai}

A hosted service providing convenient and performant support for AI-enabled coding in Zed, powered by Anthropic's Claude 3.5 Sonnet and accessible just by signing in.

### Anthropic {#anthropic}

You can use Claude 3.5 Sonnet via [Zed AI](#zed-ai) for free. To use other Anthropic models, you will need to configure the Anthropic provider with your own API key.

1. Sign up for Anthropic and [create an API key](https://console.anthropic.com/settings/keys)
2. Make sure that your Anthropic account has credits
3. Open the configuration view (`assistant: show configuration`) and navigate to the Anthropic section
4. Enter your Anthropic API key

Even if you pay for Claude Pro, you will still have to [pay for additional credits](https://console.anthropic.com/settings/plans) to use it via the API.

Zed will also use the `ANTHROPIC_API_KEY` environment variable if it's defined.
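
For example, if you launch Zed from a terminal, you can export the key beforehand so Zed inherits it. This is a minimal sketch; the profile file and key value are placeholders to adapt to your setup. The same pattern applies to the other providers' environment variables (e.g. `OPENAI_API_KEY`):

```sh
# Example only — add to your shell profile (e.g. ~/.zshrc; use whichever file your shell sources)
export ANTHROPIC_API_KEY="sk-ant-your-key-here"  # placeholder value
```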

#### Anthropic Custom Models {#anthropic-custom-models}

You can add custom models to the Anthropic provider by adding the following to your Zed `settings.json`:

```json
{
  "language_models": {
    "anthropic": {
      "available_models": [
        {
          "name": "claude-3-5-sonnet-20240620",
          "display_name": "Sonnet 2024-June",
          "max_tokens": 128000,
          "max_output_tokens": 2560,
          "cache_configuration": {
            "max_cache_anchors": 10,
            "min_total_token": 10000,
            "should_speculate": false
          },
          "tool_override": "some-model-that-supports-toolcalling"
        }
      ]
    }
  }
}
```

Custom models will be listed in the model dropdown in the assistant panel.

You can configure a model to use [extended thinking](https://docs.anthropic.com/en/docs/about-claude/models/extended-thinking-models) (if it supports it)
by changing the mode in your model's configuration to `thinking`, for example:

```json
{
  "name": "claude-3-7-sonnet-latest",
  "display_name": "claude-3-7-sonnet-thinking",
  "max_tokens": 200000,
  "mode": {
    "type": "thinking",
    "budget_tokens": 4096
  }
}
```

### GitHub Copilot Chat {#github-copilot-chat}

You can use GitHub Copilot Chat with the Zed assistant by choosing it via the model dropdown in the assistant panel.

### Google AI {#google-ai}

You can use Gemini 1.5 Pro/Flash with the Zed assistant by choosing it via the model dropdown in the assistant panel.

1. Go to the Google AI Studio site and [create an API key](https://aistudio.google.com/app/apikey).
2. Open the configuration view (`assistant: show configuration`) and navigate to the Google AI section.
3. Enter your Google AI API key and press enter.

The Google AI API key will be saved in your keychain.

Zed will also use the `GOOGLE_AI_API_KEY` environment variable if it's defined.

#### Google AI custom models {#google-ai-custom-models}

By default, Zed will use `stable` versions of models, but you can use specific versions of models, including [experimental models](https://ai.google.dev/gemini-api/docs/models/experimental-models), with the Google AI provider by adding the following to your Zed `settings.json`:

```json
{
  "language_models": {
    "google": {
      "available_models": [
        {
          "name": "gemini-1.5-flash-latest",
          "display_name": "Gemini 1.5 Flash (Latest)",
          "max_tokens": 1000000
        }
      ]
    }
  }
}
```

Custom models will be listed in the model dropdown in the assistant panel.

### Ollama {#ollama}

Download and install Ollama from [ollama.com/download](https://ollama.com/download) (Linux or macOS) and ensure it's running with `ollama --version`.

1. Download one of the [available models](https://ollama.com/models), for example, for `mistral`:

   ```sh
   ollama pull mistral
   ```

2. Make sure that the Ollama server is running. You can start it either by running Ollama.app (macOS) or by launching:

   ```sh
   ollama serve
   ```

3. In the assistant panel, select one of the Ollama models using the model dropdown.

#### Ollama Context Length {#ollama-context}

Zed has pre-configured maximum context lengths (`max_tokens`) to match the capabilities of common models. Zed API requests to Ollama include this as the `num_ctx` parameter, but the default values do not exceed `16384`, so users with ~16GB of RAM are able to use most models out of the box. See [get_max_tokens in ollama.rs](https://github.com/zed-industries/zed/blob/main/crates/ollama/src/ollama.rs) for a complete set of defaults.

**Note**: Token counts displayed in the assistant panel are only estimates and will differ from the model's native tokenizer.

Depending on your hardware or use-case, you may wish to limit or increase the context length for a specific model via `settings.json`:

```json
{
  "language_models": {
    "ollama": {
      "api_url": "http://localhost:11434",
      "available_models": [
        {
          "name": "qwen2.5-coder",
          "display_name": "qwen 2.5 coder 32K",
          "max_tokens": 32768
        }
      ]
    }
  }
}
```

If you specify a context length that is too large for your hardware, Ollama will log an error. You can watch these logs by running `tail -f ~/.ollama/logs/ollama.log` (macOS) or `journalctl -u ollama -f` (Linux). Depending on the memory available on your machine, you may need to adjust the context length to a smaller value.

You may also optionally specify a value for `keep_alive` for each available model. This can be an integer (seconds) or a string duration like "5m", "10m", "1h", "1d", etc. For example, `"keep_alive": "120s"` will allow the remote server to unload the model (freeing up GPU VRAM) after 120 seconds.
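
For example, extending the Ollama model entry above with a `keep_alive` value (a minimal sketch; the duration is illustrative):

```json
{
  "language_models": {
    "ollama": {
      "available_models": [
        {
          "name": "qwen2.5-coder",
          "display_name": "qwen 2.5 coder 32K",
          "max_tokens": 32768,
          "keep_alive": "120s"
        }
      ]
    }
  }
}
```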

### OpenAI {#openai}

1. Visit the OpenAI platform and [create an API key](https://platform.openai.com/account/api-keys)
2. Make sure that your OpenAI account has credits
3. Open the configuration view (`assistant: show configuration`) and navigate to the OpenAI section
4. Enter your OpenAI API key

The OpenAI API key will be saved in your keychain.

Zed will also use the `OPENAI_API_KEY` environment variable if it's defined.

#### OpenAI Custom Models {#openai-custom-models}

The Zed Assistant comes pre-configured to use the latest version for common models (GPT-3.5 Turbo, GPT-4, GPT-4 Turbo, GPT-4o, GPT-4o mini). If you wish to use alternate models, perhaps a preview release or a dated model release, or if you wish to control the request parameters, you can do so by adding the following to your Zed `settings.json`:

```json
{
  "language_models": {
    "openai": {
      "available_models": [
        {
          "name": "gpt-4o-2024-08-06",
          "display_name": "GPT 4o Summer 2024",
          "max_tokens": 128000
        },
        {
          "name": "o1-mini",
          "display_name": "o1-mini",
          "max_tokens": 128000,
          "max_completion_tokens": 20000
        }
      ],
      "version": "1"
    }
  }
}
```

You must provide the model's context window in the `max_tokens` parameter; this can be found in the [OpenAI Model Docs](https://platform.openai.com/docs/models). OpenAI `o1` models should set `max_completion_tokens` as well to avoid incurring high reasoning token costs. Custom models will be listed in the model dropdown in the assistant panel.

### DeepSeek {#deepseek}

1. Visit the DeepSeek platform and [create an API key](https://platform.deepseek.com/api_keys)
2. Open the configuration view (`assistant: show configuration`) and navigate to the DeepSeek section
3. Enter your DeepSeek API key

The DeepSeek API key will be saved in your keychain.

Zed will also use the `DEEPSEEK_API_KEY` environment variable if it's defined.

#### DeepSeek Custom Models {#deepseek-custom-models}

The Zed Assistant comes pre-configured to use the latest version for common models (DeepSeek Chat, DeepSeek Reasoner). If you wish to use alternate models or customize the API endpoint, you can do so by adding the following to your Zed `settings.json`:

```json
{
  "language_models": {
    "deepseek": {
      "api_url": "https://api.deepseek.com",
      "available_models": [
        {
          "name": "deepseek-chat",
          "display_name": "DeepSeek Chat",
          "max_tokens": 64000
        },
        {
          "name": "deepseek-reasoner",
          "display_name": "DeepSeek Reasoner",
          "max_tokens": 64000,
          "max_output_tokens": 4096
        }
      ]
    }
  }
}
```

Custom models will be listed in the model dropdown in the assistant panel. You can also modify the `api_url` to use a custom endpoint if needed.

### OpenAI API Compatible

Zed supports using OpenAI-compatible APIs by specifying a custom `api_url` and `available_models` for the OpenAI provider.

#### X.ai Grok

Example configuration for using X.ai Grok with Zed:

```json
{
  "language_models": {
    "openai": {
      "api_url": "https://api.x.ai/v1",
      "available_models": [
        {
          "name": "grok-beta",
          "display_name": "X.ai Grok (Beta)",
          "max_tokens": 131072
        }
      ],
      "version": "1"
    }
  }
}
```

### LM Studio {#lmstudio}

1. Download and install the latest version of LM Studio from [lmstudio.ai/download](https://lmstudio.ai/download)
2. In the app press ⌘/Ctrl + Shift + M and download at least one model, e.g. qwen2.5-coder-7b

   You can also get models via the LM Studio CLI:

   ```sh
   lms get qwen2.5-coder-7b
   ```

3. Make sure the LM Studio API server is running by launching:

   ```sh
   lms server start
   ```

Tip: Set [LM Studio as a login item](https://lmstudio.ai/docs/advanced/headless#run-the-llm-service-on-machine-login) to automate running the LM Studio server.

### Advanced configuration {#advanced-configuration}

#### Example Configuration

```json
{
  "assistant": {
    "enabled": true,
    "default_model": {
      "provider": "zed.dev",
      "model": "claude-3-5-sonnet"
    },
    "version": "2",
    "button": true,
    "default_width": 480,
    "dock": "right"
  }
}
```

#### Custom endpoints {#custom-endpoint}

You can use a custom API endpoint for different providers, as long as it's compatible with the provider's API structure.

To do so, add the following to your Zed `settings.json`:

```json
{
  "language_models": {
    "some-provider": {
      "api_url": "http://localhost:11434"
    }
  }
}
```

Where `some-provider` can be any of the following values: `anthropic`, `google`, `ollama`, `openai`.

#### Configuring the default model {#default-model}

The default model can be set via the model dropdown in the assistant panel's top-right corner. Selecting a model saves it as the default.
You can also manually edit the `default_model` object in your settings:

```json
{
  "assistant": {
    "version": "2",
    "default_model": {
      "provider": "zed.dev",
      "model": "claude-3-5-sonnet"
    }
  }
}
```

#### Configuring alternative models for inline assists {#alternative-assists}

You can configure additional models that will be used to perform inline assists in parallel. When you do this,
the inline assist UI will surface controls to cycle between the alternatives generated by each model. The models
you specify here are always used in _addition_ to your default model. For example, the following configuration
will generate two outputs for every assist: one with Claude 3.5 Sonnet and one with GPT-4o.

```json
{
  "assistant": {
    "default_model": {
      "provider": "zed.dev",
      "model": "claude-3-5-sonnet"
    },
    "inline_alternatives": [
      {
        "provider": "zed.dev",
        "model": "gpt-4o"
      }
    ],
    "version": "2"
  }
}
```

#### Common Panel Settings

| key            | type    | default | description                                                                            |
| -------------- | ------- | ------- | -------------------------------------------------------------------------------------- |
| enabled        | boolean | true    | Setting this to `false` will completely disable the assistant                          |
| button         | boolean | true    | Show the assistant icon in the status bar                                              |
| dock           | string  | "right" | The default dock position for the assistant panel. Can be ["left", "right", "bottom"]  |
| default_height | number  | null    | The pixel height of the assistant panel when docked to the bottom                      |
| default_width  | number  | null    | The pixel width of the assistant panel when docked to the left or right                |
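
For instance, a minimal sketch that docks the panel to the bottom with a fixed height (the height value here is illustrative, not a recommended default):

```json
{
  "assistant": {
    "version": "2",
    "dock": "bottom",
    "default_height": 320
  }
}
```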