# Configuration

There are various aspects of the Agent Panel that you can customize.
All of them can be seen by either visiting [the Configuring Zed page](../configuring-zed.md#agent) or by running the `zed: open default settings` action and searching for `"agent"`.

Alternatively, you can also visit the panel's Settings view by running the `agent: open configuration` action or going to the top-right menu and hitting "Settings".

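
For reference, agent-related options live under the `agent` key in your `settings.json`. Here's a minimal sketch using two options covered later on this page (`default_model` and `default_view`):

```json
{
  "agent": {
    "default_model": {
      "provider": "zed.dev",
      "model": "claude-sonnet-4"
    },
    "default_view": "thread"
  }
}
```
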
## LLM Providers

Zed supports multiple large language model providers.
Here's an overview of the supported providers and tool call support:

| Provider                                        | Tool Use Supported                                                                                                                                                          |
| ----------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [Amazon Bedrock](#amazon-bedrock)               | Depends on the model                                                                                                                                                        |
| [Anthropic](#anthropic)                         | ✅                                                                                                                                                                          |
| [DeepSeek](#deepseek)                           | ✅                                                                                                                                                                          |
| [GitHub Copilot Chat](#github-copilot-chat)     | For some models ([link](https://github.com/zed-industries/zed/blob/9e0330ba7d848755c9734bf456c716bddf0973f3/crates/language_models/src/provider/copilot_chat.rs#L189-L198)) |
| [Google AI](#google-ai)                         | ✅                                                                                                                                                                          |
| [LM Studio](#lmstudio)                          | ✅                                                                                                                                                                          |
| [Mistral](#mistral)                             | ✅                                                                                                                                                                          |
| [Ollama](#ollama)                               | ✅                                                                                                                                                                          |
| [OpenAI](#openai)                               | ✅                                                                                                                                                                          |
| [OpenAI API Compatible](#openai-api-compatible) | 🚫                                                                                                                                                                          |
| [OpenRouter](#openrouter)                       | ✅                                                                                                                                                                          |
| [Vercel](#vercel-v0)                            | ✅                                                                                                                                                                          |
| [xAI](#xai)                                     | ✅                                                                                                                                                                          |

## Use Your Own Keys {#use-your-own-keys}

While Zed offers hosted versions of models through [our various plans](./plans-and-usage.md), we're always happy to support users wanting to supply their own API keys.
Below, you can learn how to do that for each provider.

> Using your own API keys is _free_—you do not need to subscribe to a Zed plan to use our AI features with your own keys.

### Amazon Bedrock {#amazon-bedrock}

> ✅ Supports tool use with models that support streaming tool use.
> More details can be found in [Amazon Bedrock's Tool Use documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference-supported-models-features.html).

To use Amazon Bedrock's models, AWS authentication is required.
Ensure your credentials have the following permissions set up:

- `bedrock:InvokeModelWithResponseStream`
- `bedrock:InvokeModel`
- `bedrock:ConverseStream`

Your IAM policy should look similar to:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream",
        "bedrock:ConverseStream"
      ],
      "Resource": "*"
    }
  ]
}
```

With that done, choose one of the two authentication methods:

#### Authentication via Named Profile (Recommended)

1. Ensure you have the AWS CLI installed and configured with a named profile
2. Open your `settings.json` (`zed: open settings`) and include the `bedrock` key under `language_models` with the following settings:
   ```json
   {
     "language_models": {
       "bedrock": {
         "authentication_method": "named_profile",
         "region": "your-aws-region",
         "profile": "your-profile-name"
       }
     }
   }
   ```

#### Authentication via Static Credentials

While it's possible to configure static credentials through the Agent Panel settings UI by entering your AWS access key and secret directly, we recommend using named profiles instead for better security practices.
To do this:

1. Create an IAM User that you can assume in the [IAM Console](https://us-east-1.console.aws.amazon.com/iam/home?region=us-east-1#/users).
2. Create security credentials for that User, save them, and keep them secure.
3. Open the Agent Configuration (`agent: open configuration`) and go to the Amazon Bedrock section.
4. Copy the credentials from Step 2 into the respective **Access Key ID**, **Secret Access Key**, and **Region** fields.

#### Cross-Region Inference

The Zed implementation of Amazon Bedrock uses [Cross-Region inference](https://docs.aws.amazon.com/bedrock/latest/userguide/cross-region-inference.html) for all the models and region combinations that support it.
With Cross-Region inference, you can distribute traffic across multiple AWS Regions, enabling higher throughput.

For example, if you use `Claude 3.7 Sonnet Thinking` from `us-east-1`, it may be processed across the US regions, namely: `us-east-1`, `us-east-2`, or `us-west-2`.
Cross-Region inference requests are kept within the AWS Regions that are part of the geography where the data originally resides.
For example, a request made within the US is kept within the AWS Regions in the US.

Although the data remains stored only in the source Region, your input prompts and output results might move outside of your source Region during cross-Region inference.
All data will be transmitted encrypted across Amazon's secure network.

We support Cross-Region inference for each of the models on a best-effort basis; please refer to the [Cross-Region inference method code](https://github.com/zed-industries/zed/blob/main/crates/bedrock/src/models.rs#L297).

For the most up-to-date supported regions and models, refer to the [Supported Models and Regions for Cross-Region inference](https://docs.aws.amazon.com/bedrock/latest/userguide/inference-profiles-support.html).

### Anthropic {#anthropic}

> ✅ Supports tool use

You can use Anthropic models by choosing one via the model dropdown in the Agent Panel.

1. Sign up for Anthropic and [create an API key](https://console.anthropic.com/settings/keys)
2. Make sure that your Anthropic account has credits
3. Open the settings view (`agent: open configuration`) and go to the Anthropic section
4. Enter your Anthropic API key

Even if you pay for Claude Pro, you will still have to [pay for additional credits](https://console.anthropic.com/settings/plans) to use these models via the API.

Zed will also use the `ANTHROPIC_API_KEY` environment variable if it's defined.

#### Custom Models {#anthropic-custom-models}

You can add custom models to the Anthropic provider by adding the following to your Zed `settings.json`:

```json
{
  "language_models": {
    "anthropic": {
      "available_models": [
        {
          "name": "claude-3-5-sonnet-20240620",
          "display_name": "Sonnet 2024-June",
          "max_tokens": 128000,
          "max_output_tokens": 2560,
          "cache_configuration": {
            "max_cache_anchors": 10,
            "min_total_token": 10000,
            "should_speculate": false
          },
          "tool_override": "some-model-that-supports-toolcalling"
        }
      ]
    }
  }
}
```

Custom models will be listed in the model dropdown in the Agent Panel.

You can configure a model to use [extended thinking](https://docs.anthropic.com/en/docs/about-claude/models/extended-thinking-models) (if it supports it) by changing the mode in your model's configuration to `thinking`, for example:

```json
{
  "name": "claude-sonnet-4-latest",
  "display_name": "claude-sonnet-4-thinking",
  "max_tokens": 200000,
  "mode": {
    "type": "thinking",
    "budget_tokens": 4096
  }
}
```

### DeepSeek {#deepseek}

> ✅ Supports tool use

1. Visit the DeepSeek platform and [create an API key](https://platform.deepseek.com/api_keys)
2. Open the settings view (`agent: open configuration`) and go to the DeepSeek section
3. Enter your DeepSeek API key

The DeepSeek API key will be saved in your keychain.

Zed will also use the `DEEPSEEK_API_KEY` environment variable if it's defined.

#### Custom Models {#deepseek-custom-models}

The Zed agent comes pre-configured to use the latest version for common models (DeepSeek Chat, DeepSeek Reasoner).
If you wish to use alternate models or customize the API endpoint, you can do so by adding the following to your Zed `settings.json`:

```json
{
  "language_models": {
    "deepseek": {
      "api_url": "https://api.deepseek.com",
      "available_models": [
        {
          "name": "deepseek-chat",
          "display_name": "DeepSeek Chat",
          "max_tokens": 64000
        },
        {
          "name": "deepseek-reasoner",
          "display_name": "DeepSeek Reasoner",
          "max_tokens": 64000,
          "max_output_tokens": 4096
        }
      ]
    }
  }
}
```

Custom models will be listed in the model dropdown in the Agent Panel.
You can also modify the `api_url` to use a custom endpoint if needed.

### GitHub Copilot Chat {#github-copilot-chat}

> ✅ Supports tool use in some cases.
> Visit [the Copilot Chat code](https://github.com/zed-industries/zed/blob/9e0330ba7d848755c9734bf456c716bddf0973f3/crates/language_models/src/provider/copilot_chat.rs#L189-L198) for the supported subset.

You can use GitHub Copilot Chat with the Zed agent by choosing it via the model dropdown in the Agent Panel.

1. Open the settings view (`agent: open configuration`) and go to the GitHub Copilot Chat section
2. Click on `Sign in to use GitHub Copilot` and follow the steps shown in the modal.

Alternatively, you can provide an OAuth token via the `GH_COPILOT_TOKEN` environment variable.

> **Note**: If you don't see specific models in the dropdown, you may need to enable them in your [GitHub Copilot settings](https://github.com/settings/copilot/features).

To use Copilot Enterprise with Zed (for both agent and inline completions), you must configure your enterprise endpoint as described in [Configuring GitHub Copilot Enterprise](./edit-prediction.md#github-copilot-enterprise).

### Google AI {#google-ai}

> ✅ Supports tool use

You can use Gemini models with the Zed agent by choosing one via the model dropdown in the Agent Panel.

1. Go to the Google AI Studio site and [create an API key](https://aistudio.google.com/app/apikey).
2. Open the settings view (`agent: open configuration`) and go to the Google AI section
3. Enter your Google AI API key and press enter.

The Google AI API key will be saved in your keychain.

Zed will also use the `GEMINI_API_KEY` environment variable if it's defined. See [Using Gemini API keys](https://ai.google.dev/gemini-api/docs/api-key) in the Gemini docs for more.

#### Custom Models {#google-ai-custom-models}

By default, Zed will use `stable` versions of models, but you can use specific versions of models, including [experimental models](https://ai.google.dev/gemini-api/docs/models/experimental-models). You can configure a model to use [thinking mode](https://ai.google.dev/gemini-api/docs/thinking) (if it supports it) by adding a `mode` configuration to your model. This is useful for controlling reasoning token usage and response speed. If not specified, Gemini will automatically choose the thinking budget.

Here is an example of a custom Google AI model you could add to your Zed `settings.json`:

```json
{
  "language_models": {
    "google": {
      "available_models": [
        {
          "name": "gemini-2.5-flash-preview-05-20",
          "display_name": "Gemini 2.5 Flash (Thinking)",
          "max_tokens": 1000000,
          "mode": {
            "type": "thinking",
            "budget_tokens": 24000
          }
        }
      ]
    }
  }
}
```

Custom models will be listed in the model dropdown in the Agent Panel.

### LM Studio {#lmstudio}

> ✅ Supports tool use

1. Download and install [the latest version of LM Studio](https://lmstudio.ai/download)
2. In the app press `cmd/ctrl-shift-m` and download at least one model (e.g., qwen2.5-coder-7b). Alternatively, you can get models via the LM Studio CLI:

   ```sh
   lms get qwen2.5-coder-7b
   ```

3. Make sure the LM Studio API server is running by executing:

   ```sh
   lms server start
   ```

Tip: Set [LM Studio as a login item](https://lmstudio.ai/docs/advanced/headless#run-the-llm-service-on-machine-login) to automate running the LM Studio server.
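
If your LM Studio server is running on a non-default address or port, you can likely point Zed at it via `settings.json`. A minimal sketch, assuming the provider key is `lmstudio` and LM Studio's default local server on port 1234 (verify both against your setup):

```json
{
  "language_models": {
    "lmstudio": {
      // Assumed provider key and default LM Studio address; adjust to match your setup.
      "api_url": "http://localhost:1234/api/v0"
    }
  }
}
```
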

### Mistral {#mistral}

> ✅ Supports tool use

1. Visit the Mistral platform and [create an API key](https://console.mistral.ai/api-keys/)
2. Open the configuration view (`agent: open configuration`) and navigate to the Mistral section
3. Enter your Mistral API key

The Mistral API key will be saved in your keychain.

Zed will also use the `MISTRAL_API_KEY` environment variable if it's defined.

#### Custom Models {#mistral-custom-models}

The Zed agent comes pre-configured with several Mistral models (codestral-latest, mistral-large-latest, mistral-medium-latest, mistral-small-latest, open-mistral-nemo, and open-codestral-mamba).
All the default models support tool use.
If you wish to use alternate models or customize their parameters, you can do so by adding the following to your Zed `settings.json`:

```json
{
  "language_models": {
    "mistral": {
      "api_url": "https://api.mistral.ai/v1",
      "available_models": [
        {
          "name": "mistral-tiny-latest",
          "display_name": "Mistral Tiny",
          "max_tokens": 32000,
          "max_output_tokens": 4096,
          "max_completion_tokens": 1024,
          "supports_tools": true,
          "supports_images": false
        }
      ]
    }
  }
}
```

Custom models will be listed in the model dropdown in the Agent Panel.

### Ollama {#ollama}

> ✅ Supports tool use

Download and install Ollama from [ollama.com/download](https://ollama.com/download) (Linux or macOS) and verify the installation by running `ollama --version`.

1. Download one of the [available models](https://ollama.com/models), for example, `mistral`:

   ```sh
   ollama pull mistral
   ```

2. Make sure that the Ollama server is running. You can start it either by running Ollama.app (macOS) or by launching:

   ```sh
   ollama serve
   ```

3. In the Agent Panel, select one of the Ollama models using the model dropdown.

#### Ollama Context Length {#ollama-context}

Zed has pre-configured maximum context lengths (`max_tokens`) to match the capabilities of common models.
Zed API requests to Ollama include this as the `num_ctx` parameter, but the default values do not exceed `16384`, so users with ~16GB of RAM are able to use most models out of the box.

See [get_max_tokens in ollama.rs](https://github.com/zed-industries/zed/blob/main/crates/ollama/src/ollama.rs) for a complete set of defaults.

> **Note**: Token counts displayed in the Agent Panel are only estimates and will differ from the model's native tokenizer.

Depending on your hardware or use-case, you may wish to limit or increase the context length for a specific model via `settings.json`:

```json
{
  "language_models": {
    "ollama": {
      "api_url": "http://localhost:11434",
      "available_models": [
        {
          "name": "qwen2.5-coder",
          "display_name": "qwen 2.5 coder 32K",
          "max_tokens": 32768,
          "supports_tools": true,
          "supports_thinking": true,
          "supports_images": true
        }
      ]
    }
  }
}
```

If you specify a context length that is too large for your hardware, Ollama will log an error.
You can watch these logs by running: `tail -f ~/.ollama/logs/ollama.log` (macOS) or `journalctl -u ollama -f` (Linux).
Depending on the memory available on your machine, you may need to adjust the context length to a smaller value.

You may also optionally specify a value for `keep_alive` for each available model.
This can be an integer (seconds) or alternatively a string duration like "5m", "10m", "1h", "1d", etc.
For example, `"keep_alive": "120s"` will allow the remote server to unload the model (freeing up GPU VRAM) after 120 seconds.
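
For example, here's a sketch of an Ollama model entry that combines a context length with a five-minute `keep_alive` (the model name and token count are just illustrative):

```json
{
  "language_models": {
    "ollama": {
      "available_models": [
        {
          "name": "mistral",
          "display_name": "Mistral (5m keep-alive)",
          "max_tokens": 8192,
          "keep_alive": "5m"
        }
      ]
    }
  }
}
```
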

The `supports_tools` option controls whether the model will use additional tools.
If the model is tagged with `tools` in the Ollama catalog, this option should be supplied, and the built-in profiles `Ask` and `Write` can be used.
If the model is not tagged with `tools` in the Ollama catalog, this option can still be supplied with the value `true`; however, be aware that only the `Minimal` built-in profile will work.

The `supports_thinking` option controls whether the model will perform an explicit "thinking" (reasoning) pass before producing its final answer.
If the model is tagged with `thinking` in the Ollama catalog, set this option and you can use it in Zed.

The `supports_images` option enables the model's vision capabilities, allowing it to process images included in the conversation context.
If the model is tagged with `vision` in the Ollama catalog, set this option and you can use it in Zed.

### OpenAI {#openai}

> ✅ Supports tool use

1. Visit the OpenAI platform and [create an API key](https://platform.openai.com/account/api-keys)
2. Make sure that your OpenAI account has credits
3. Open the settings view (`agent: open configuration`) and go to the OpenAI section
4. Enter your OpenAI API key

The OpenAI API key will be saved in your keychain.

Zed will also use the `OPENAI_API_KEY` environment variable if it's defined.

#### Custom Models {#openai-custom-models}

The Zed agent comes pre-configured to use the latest version for common models (GPT-3.5 Turbo, GPT-4, GPT-4 Turbo, GPT-4o, GPT-4o mini).
To use alternate models, perhaps a preview release or a dated model release, or if you wish to control the request parameters, you can do so by adding the following to your Zed `settings.json`:

```json
{
  "language_models": {
    "openai": {
      "available_models": [
        {
          "name": "gpt-4o-2024-08-06",
          "display_name": "GPT 4o Summer 2024",
          "max_tokens": 128000
        },
        {
          "name": "o1-mini",
          "display_name": "o1-mini",
          "max_tokens": 128000,
          "max_completion_tokens": 20000
        }
      ],
      "version": "1"
    }
  }
}
```

You must provide the model's context window in the `max_tokens` parameter; this can be found in the [OpenAI model documentation](https://platform.openai.com/docs/models).

OpenAI `o1` models should set `max_completion_tokens` as well to avoid incurring high reasoning token costs.
Custom models will be listed in the model dropdown in the Agent Panel.
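
Once defined, a custom model can also be set as your default through the `agent.default_model` setting described under [Default Model](#default-model). For example, a sketch that points it at the dated GPT-4o release above:

```json
{
  "agent": {
    "version": "2",
    "default_model": {
      "provider": "openai",
      "model": "gpt-4o-2024-08-06"
    }
  }
}
```
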

### OpenAI API Compatible {#openai-api-compatible}

Zed supports using [OpenAI compatible APIs](https://platform.openai.com/docs/api-reference/chat) by specifying a custom `api_url` and `available_models` for the OpenAI provider. This is useful for connecting to other hosted services (like Together AI, Anyscale, etc.) or local models.

To configure a compatible API, you can add a custom API URL for OpenAI either via the UI (currently available only in Preview) or by editing your `settings.json`.

For example, to connect to [Together AI](https://www.together.ai/) via the UI:

1. Get an API key from your [Together AI account](https://api.together.ai/settings/api-keys).
2. Go to the Agent Panel's settings view, click on the "Add Provider" button, and then on the "OpenAI" menu item
3. Add the requested fields, such as `api_url`, `api_key`, available models, and others

Alternatively, you can also add it via the `settings.json`:

```json
{
  "language_models": {
    "openai": {
      "api_url": "https://api.together.xyz/v1",
      "api_key": "YOUR_TOGETHER_AI_API_KEY",
      "available_models": [
        {
          "name": "mistralai/Mixtral-8x7B-Instruct-v0.1",
          "display_name": "Together Mixtral 8x7B",
          "max_tokens": 32768,
          "supports_tools": true
        }
      ]
    }
  }
}
```

### OpenRouter {#openrouter}

> ✅ Supports tool use

OpenRouter provides access to multiple AI models through a single API. It supports tool use for compatible models.

1. Visit [OpenRouter](https://openrouter.ai) and create an account
2. Generate an API key from your [OpenRouter keys page](https://openrouter.ai/keys)
3. Open the settings view (`agent: open configuration`) and go to the OpenRouter section
4. Enter your OpenRouter API key

The OpenRouter API key will be saved in your keychain.

Zed will also use the `OPENROUTER_API_KEY` environment variable if it's defined.

#### Custom Models {#openrouter-custom-models}

You can add custom models to the OpenRouter provider by adding the following to your Zed `settings.json`:

```json
{
  "language_models": {
    "open_router": {
      "api_url": "https://openrouter.ai/api/v1",
      "available_models": [
        {
          "name": "google/gemini-2.0-flash-thinking-exp",
          "display_name": "Gemini 2.0 Flash (Thinking)",
          "max_tokens": 200000,
          "max_output_tokens": 8192,
          "supports_tools": true,
          "supports_images": true,
          "mode": {
            "type": "thinking",
            "budget_tokens": 8000
          }
        }
      ]
    }
  }
}
```

The available configuration options for each model are:

- `name` (required): The model identifier used by OpenRouter
- `display_name` (optional): A human-readable name shown in the UI
- `max_tokens` (required): The model's context window size
- `max_output_tokens` (optional): Maximum tokens the model can generate
- `max_completion_tokens` (optional): Maximum completion tokens
- `supports_tools` (optional): Whether the model supports tool/function calling
- `supports_images` (optional): Whether the model supports image inputs
- `mode` (optional): Special mode configuration for thinking models

You can find available models and their specifications on the [OpenRouter models page](https://openrouter.ai/models).

Custom models will be listed in the model dropdown in the Agent Panel.

### Vercel v0 {#vercel-v0}

> ✅ Supports tool use

[Vercel v0](https://vercel.com/docs/v0/api) is an expert model for generating full-stack apps, with framework-aware completions optimized for modern stacks like Next.js and Vercel.
It supports text and image inputs and provides fast streaming responses.

The v0 models are [OpenAI-compatible models](#openai-api-compatible), but Vercel is listed as a first-class provider in the panel's settings view.

To start using it with Zed, ensure you have first created a [v0 API key](https://v0.dev/chat/settings/keys).
Once you have it, paste it directly into the Vercel provider section in the panel's settings view.

You should then find it as `v0-1.5-md` in the model dropdown in the Agent Panel.
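
If you want it as your default model, you could then either pick it from that dropdown or sketch it in `settings.json`; note that the `provider` id below is an assumption and may differ in your Zed version:

```json
{
  "agent": {
    "default_model": {
      // "vercel" is an assumed provider id; check the provider list in your settings view.
      "provider": "vercel",
      "model": "v0-1.5-md"
    }
  }
}
```
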

### xAI {#xai}

> ✅ Supports tool use

Zed has first-class support for [xAI](https://x.ai/) models. You can use your own API key to access Grok models.

1. [Create an API key in the xAI Console](https://console.x.ai/team/default/api-keys)
2. Open the settings view (`agent: open configuration`) and go to the **xAI** section
3. Enter your xAI API key

The xAI API key will be saved in your keychain. Zed will also use the `XAI_API_KEY` environment variable if it's defined.

> **Note:** While the xAI API is OpenAI-compatible, Zed has first-class support for it as a dedicated provider. For the best experience, we recommend using the dedicated `x_ai` provider configuration instead of the [OpenAI API Compatible](#openai-api-compatible) method.

#### Custom Models {#xai-custom-models}

The Zed agent comes pre-configured with common Grok models. If you wish to use alternate models or customize their parameters, you can do so by adding the following to your Zed `settings.json`:

```json
{
  "language_models": {
    "x_ai": {
      "api_url": "https://api.x.ai/v1",
      "available_models": [
        {
          "name": "grok-1.5",
          "display_name": "Grok 1.5",
          "max_tokens": 131072,
          "max_output_tokens": 8192
        },
        {
          "name": "grok-1.5v",
          "display_name": "Grok 1.5V (Vision)",
          "max_tokens": 131072,
          "max_output_tokens": 8192,
          "supports_images": true
        }
      ]
    }
  }
}
```

## Advanced Configuration {#advanced-configuration}

### Custom Provider Endpoints {#custom-provider-endpoint}

You can use a custom API endpoint for different providers, as long as it's compatible with the provider's API structure.
To do so, add the following to your `settings.json`:

```json
{
  "language_models": {
    "some-provider": {
      "api_url": "http://localhost:11434"
    }
  }
}
```

Where `some-provider` can be any of the following values: `anthropic`, `google`, `ollama`, `openai`.
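
For instance, here's a sketch that routes Anthropic requests through a self-hosted, Anthropic-compatible proxy (the URL below is a placeholder):

```json
{
  "language_models": {
    "anthropic": {
      // Placeholder URL; substitute the address of your own proxy or gateway.
      "api_url": "https://my-anthropic-proxy.example.com"
    }
  }
}
```
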

### Default Model {#default-model}

Zed's hosted LLM service sets `claude-sonnet-4` as the default model.
However, you can change it either via the model dropdown in the Agent Panel's bottom-right corner or by manually editing the `default_model` object in your settings:

```json
{
  "agent": {
    "version": "2",
    "default_model": {
      "provider": "zed.dev",
      "model": "gpt-4o"
    }
  }
}
```

### Feature-specific Models {#feature-specific-models}

If a feature-specific model is not set, it will fall back to using the default model, which is the one you set in the Agent Panel.

You can configure the following feature-specific models:

- Thread summary model: Used for generating thread summaries
- Inline assistant model: Used for the inline assistant feature
- Commit message model: Used for generating Git commit messages

Example configuration:

```json
{
  "agent": {
    "version": "2",
    "default_model": {
      "provider": "zed.dev",
      "model": "claude-sonnet-4"
    },
    "inline_assistant_model": {
      "provider": "anthropic",
      "model": "claude-3-5-sonnet"
    },
    "commit_message_model": {
      "provider": "openai",
      "model": "gpt-4o-mini"
    },
    "thread_summary_model": {
      "provider": "google",
      "model": "gemini-2.0-flash"
    }
  }
}
```

### Alternative Models for Inline Assists {#alternative-assists}

You can configure additional models that will be used to perform inline assists in parallel.
When you do this, the inline assist UI will surface controls to cycle between the alternatives generated by each model.

The models you specify here are always used in _addition_ to your [default model](#default-model).
For example, the following configuration will generate two outputs for every assist: one with Claude Sonnet 4 (your default model) and one with GPT-4o.

```json
{
  "agent": {
    "default_model": {
      "provider": "zed.dev",
      "model": "claude-sonnet-4"
    },
    "inline_alternatives": [
      {
        "provider": "zed.dev",
        "model": "gpt-4o"
      }
    ],
    "version": "2"
  }
}
```

### Default View

Use the `default_view` setting to change the default view of the Agent Panel.
You can choose between `thread` (the default) and `text_thread`:

```json
{
  "agent": {
    "default_view": "text_thread"
  }
}
```

### Edit Card

Use the `expand_edit_card` setting to control whether edit cards show the full diff in the Agent Panel.
It is set to `true` by default, but if set to `false`, the card's height is capped to a certain number of lines, requiring a click to expand it.

```json
{
  "agent": {
    "expand_edit_card": false
  }
}
```

This setting is currently only available in Preview.
It should land in Stable in the next release.

### Terminal Card

Use the `expand_terminal_card` setting to control whether terminal cards show the command output in the Agent Panel.
It is set to `true` by default, but if set to `false`, the card will be fully collapsed even while the command is running, requiring a click to expand it.

```json
{
  "agent": {
    "expand_terminal_card": false
  }
}
```

This setting is currently only available in Preview.
It should land in Stable in the next release.