Detailed changes
@@ -1,10 +1,10 @@
# Agent Panel
-The Agent Panel allows you to interact with many LLMs and coding agents that can help with in various types of tasks, such as generating code, codebase understanding, and other general inquiries like writing emails, documentation, and more.
+The Agent Panel allows you to interact with many LLMs and coding agents that can help with various types of tasks, such as generating code, codebase understanding, and other general inquiries like writing emails, documentation, and more.
To open it, use the `agent: new thread` action in [the Command Palette](../getting-started.md#command-palette) or click the ✨ (sparkles) icon in the status bar.
-## Getting Started
+## Getting Started {#getting-started}
If you're using the Agent Panel for the first time, you need to have at least one LLM provider or external agent configured.
You can do that by:
@@ -28,7 +28,7 @@ From this point on, you can interact with the many supported features outlined b
By default, the Agent Panel uses Zed's first-party agent.
To change that, go to the plus button in the top-right of the Agent Panel and choose another option.
-You choose to create a new [Text Thread](./text-threads.md) or, if you have [external agents](./external-agents.md) connected, you can create new threads with them.
+You can choose to create a new [Text Thread](./text-threads.md) or, if you have [external agents](./external-agents.md) connected, you can create new threads with them.
### Editing Messages {#editing-messages}
@@ -37,7 +37,7 @@ You can click on the card that contains your message and re-submit it with an ad
### Checkpoints {#checkpoints}
-Every time the AI performs an edit, you should see a "Restore Checkpoint" button to the top of your message, allowing you to return your code base to the state it was in prior to that message.
+Every time the AI performs an edit, you should see a "Restore Checkpoint" button at the top of your message, allowing you to return your code base to the state it was in prior to that message.
The checkpoint button appears even if you interrupt the thread midway through an edit attempt, as this is likely a moment when you've identified that the agent is not heading in the right direction and you want to revert back.
@@ -78,7 +78,7 @@ Edit diffs also appear in individual buffers. If your active tab had edits made
## Adding Context {#adding-context}
-Although Zed's agent is very efficient at reading through your code base to autonomously pick up relevant context, manually adding whatever would be useful to fulfill your prompt is still very encouraged as a way to not only improve the AI's response quality but also to speed its response time up.
+Although Zed's agent is very efficient at reading through your code base to autonomously pick up relevant context, manually adding whatever would be useful to fulfill your prompt is still very encouraged as a way to not only improve the AI's response quality but also to speed up its response time.
In Zed's Agent Panel, all pieces of context are added as mentions in the panel's message editor.
You can type `@` to mention files, directories, symbols, previous threads, and rules files.
@@ -89,7 +89,7 @@ Copying images and pasting them in the panel's message editor is also supported.
### Token Usage {#token-usage}
-Zed surfaces how many tokens you are consuming for your currently active thread nearby the profile selector in the panel's message editor. Depending on how many pieces of context you add, your token consumption can grow rapidly.
+Zed surfaces how many tokens you are consuming for your currently active thread near the profile selector in the panel's message editor. Depending on how many pieces of context you add, your token consumption can grow rapidly.
Once you approach the model's context window, a banner appears below the message editor suggesting to start a new thread with the current one summarized and added as context.
You can also do this at any time with an ongoing thread via the "Agent Options" menu on the top right.
@@ -147,7 +147,7 @@ All [Zed's hosted models](./models.md) support tool calling out-of-the-box.
### MCP Servers {#mcp-servers}
-Similarly to the built-in tools, some models may not support all tools included in a given MCP Server. Zed's UI will inform about this via a warning icon that appears close to the model selector.
+As with the built-in tools, some models may not support all tools included in a given MCP Server. Zed's UI will inform you about this via a warning icon that appears close to the model selector.
## Text Threads {#text-threads}
@@ -54,15 +54,33 @@ You can assign distinct and specific models for the following AI-powered feature
### Alternative Models for Inline Assists {#alternative-assists}
-The Inline Assist feature in particular has the capacity to perform multiple generations in parallel using different models.
-That is possible by assigning more than one model to it, taking the configuration shown above one step further.
+With the Inline Assistant in particular, you can send the same prompt to multiple models at once.
-When configured, the inline assist UI will surface controls to cycle between the outputs generated by each model.
+Here's how you can customize your `settings.json` to add this functionality:
+
+```json [settings]
+{
+ "agent": {
+ "default_model": {
+ "provider": "zed.dev",
+ "model": "claude-sonnet-4"
+ },
+ "inline_alternatives": [
+ {
+ "provider": "zed.dev",
+ "model": "gpt-4-mini"
+ }
+ ]
+ }
+}
+```
+
+When multiple models are configured, the Inline Assistant UI will show buttons that allow you to cycle between the outputs generated by each model.
The models you specify here are always used in _addition_ to your [default model](#default-model).
-For example, the following configuration will generate two outputs for every assist.
-One with Claude Sonnet 4 (the default model), and one with GPT-5-mini.
+For example, the following configuration will generate three outputs for every assist.
+One with Claude Sonnet 4 (the default model), another with GPT-5-mini, and a third with Gemini 2.5 Flash.
```json [settings]
{
@@ -75,6 +93,10 @@ One with Claude Sonnet 4 (the default model), and one with GPT-5-mini.
{
"provider": "zed.dev",
"model": "gpt-4-mini"
+ },
+ {
+ "provider": "zed.dev",
+ "model": "gemini-2.5-flash"
}
]
}
@@ -179,7 +201,7 @@ The default value is `false`.
### Message Editor Size
-Use the `message_editor_min_lines` setting to control minimum number of lines of height the agent message editor should have.
+Use the `message_editor_min_lines` setting to control the minimum height, in lines, of the agent message editor.
It is set to `4` by default, and the max number of lines is always double of the minimum.
```json [settings]
@@ -232,7 +254,7 @@ It is set to `true` by default, but if set to false, the card will be fully coll
### Feedback Controls
-Control whether to display the thumbs up/down buttons at the bottom of each agent response, allowing to give Zed feedback about the agent's performance.
+Control whether to display the thumbs up/down buttons at the bottom of each agent response, allowing you to give Zed feedback about the agent's performance.
The default value is `true`.
```json [settings]
@@ -2,17 +2,104 @@
## Usage Overview
-Use `ctrl-enter` to open the Inline Assistant nearly anywhere you can enter text: editors, text threads, the rules library, channel notes, and even within the terminal panel.
+Use {#kb assistant::InlineAssist} to open the Inline Assistant nearly anywhere you can enter text: editors, text threads, the rules library, channel notes, and even within the terminal panel.
The Inline Assistant allows you to send the current selection (or the current line) to a language model and modify the selection with the language model's response.
-You can also perform multiple generation requests in parallel by pressing `ctrl-enter` with multiple cursors, or by pressing the same binding with a selection that spans multiple excerpts in a multibuffer.
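+
+The keybinding that opens the Inline Assistant can be rebound like any other Zed action. Here's a minimal sketch of a `keymap.json` entry that does so; the `"context": "Editor"` scope and the shortcut itself are just illustrative:
+
+```json
+[
+  {
+    // Only apply this binding while an editor has focus.
+    "context": "Editor",
+    "bindings": {
+      // Swap in whatever shortcut you prefer for triggering the Inline Assistant.
+      "ctrl-enter": "assistant::InlineAssist"
+    }
+  }
+]
+```
+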
+## Getting Started
-## Context
+If you're using the Inline Assistant for the first time, you need to have at least one LLM provider or external agent configured.
+You can do that by:
-Give the Inline Assistant context the same way you can in [the Agent Panel](./agent-panel.md), allowing you to provide additional instructions or rules for code transformations with @-mentions.
+1. [subscribing to our Pro plan](https://zed.dev/pricing), so you have access to our hosted models
+2. [using your own API keys](./llm-providers.md#use-your-own-keys), either from model providers like Anthropic or model gateways like OpenRouter.
-A useful pattern here is to create a thread in the Agent Panel, and then mention that thread with `@thread` in the Inline Assistant to include it as context.
+If you have already set up an LLM provider to interact with [the Agent Panel](./agent-panel.md#getting-started), then that will also work for the Inline Assistant.
+
+> The one exception at the moment is [external agents](./external-agents.md).
+> Unlike in the Agent Panel, they currently can't be used to generate changes with the Inline Assistant.
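+
+Once a provider is set up, you can also pin which model is used by default via the `agent.default_model` setting in your `settings.json`. Here's a minimal sketch, reusing the hosted-models example that appears elsewhere in these docs:
+
+```json [settings]
+{
+  "agent": {
+    // Default model used across the agent features, including the Inline Assistant.
+    "default_model": {
+      "provider": "zed.dev",
+      "model": "claude-sonnet-4"
+    }
+  }
+}
+```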
+
+## Adding Context
+
+You can add context in the Inline Assistant the same way you can in [the Agent Panel](./agent-panel.md#adding-context):
+
+- @-mention files, directories, past threads, rules, and symbols
+- paste images that you've copied to your clipboard
+
+Additionally, a useful pattern is to create a thread in the Agent Panel, and then mention it with `@thread` in the Inline Assistant to include it as context.
+This is often a quicker way to iterate on a specific part of a change that happened in the context of a larger thread.
+
+## Parallel Generations
+
+There are two ways in which you can generate multiple changes at once with the Inline Assistant:
+
+### Multiple Cursors
+
+If you have multiple cursors and hit {#kb assistant::InlineAssist}, you can send the same prompt at every cursor position and get a change at each of them.
+
+This is particularly useful when working on excerpts in [a multibuffer context](../multibuffers.md).
+
+### Multiple Models
+
+You can use the Inline Assistant to send the same prompt to multiple models at once.
+
+Here's how you can customize your `settings.json` to add this functionality:
+
+```json [settings]
+{
+ "agent": {
+ "default_model": {
+ "provider": "zed.dev",
+ "model": "claude-sonnet-4"
+ },
+ "inline_alternatives": [
+ {
+ "provider": "zed.dev",
+ "model": "gpt-4-mini"
+ }
+ ]
+ }
+}
+```
+
+When multiple models are configured, the Inline Assistant UI will show buttons that allow you to cycle between the outputs generated by each model.
+
+The models you specify here are always used in _addition_ to your [default model](#default-model).
+
+For example, the following configuration will generate three outputs for every assist.
+One with Claude Sonnet 4 (the default model), another with GPT-5-mini, and a third with Gemini 2.5 Flash.
+
+```json [settings]
+{
+ "agent": {
+ "default_model": {
+ "provider": "zed.dev",
+ "model": "claude-sonnet-4"
+ },
+ "inline_alternatives": [
+ {
+ "provider": "zed.dev",
+ "model": "gpt-4-mini"
+ },
+ {
+ "provider": "zed.dev",
+ "model": "gemini-2.5-flash"
+ }
+ ]
+ }
+}
+```
+
+## Inline Assistant vs. Edit Prediction
+
+Users often ask what the difference is between these two AI-powered features in Zed, particularly because both involve getting inline LLM code completions.
+
+Here's how they are different:
+
+- The Inline Assistant is more similar to the Agent Panel, in that you're still writing a prompt yourself and crafting context. It works from within the buffer and is mostly centered around your selections.
+- [Edit Prediction](./edit-prediction.md) is an AI-powered completion mechanism that intelligently suggests what you likely want to add next, based on context automatically gathered from your previous edits, recently visited files, and more.
+
+In summary, the key difference is that in the Inline Assistant, you're still manually prompting, whereas Edit Prediction will _automatically suggest_ edits to you.
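+
+If you want to try both features side by side, Edit Prediction is configured separately from the agent settings. A minimal sketch, assuming the `features.edit_prediction_provider` setting with Zed as the provider; see the [Edit Prediction](./edit-prediction.md) docs for the authoritative options:
+
+```json [settings]
+{
+  "features": {
+    // Assumed setting name and value; adjust per the Edit Prediction docs.
+    "edit_prediction_provider": "zed"
+  }
+}
+```
+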
## Prefilling Prompts