Cherry-pick fixes for v0.232.x preview patch (#53658)

Richard Feldman, Danilo Leal, Ben Brandt, Bennet Bo Fenner, Nathan Sobo, Anthony Eid, Mikayla Maki, Eric Holk, Max Brunsfeld, Cameron Mcloughlin, Cole Miller, Katie Geer, and ojpro created

Cherry-picked PRs (in order applied):

1. https://github.com/zed-industries/zed/pull/53386
2. https://github.com/zed-industries/zed/pull/53400
3. https://github.com/zed-industries/zed/pull/53396
4. https://github.com/zed-industries/zed/pull/53428
5. https://github.com/zed-industries/zed/pull/53356
6. https://github.com/zed-industries/zed/pull/53215
7. https://github.com/zed-industries/zed/pull/53429
8. https://github.com/zed-industries/zed/pull/53458
9. https://github.com/zed-industries/zed/pull/53436
10. https://github.com/zed-industries/zed/pull/53451
11. https://github.com/zed-industries/zed/pull/53454
12. https://github.com/zed-industries/zed/pull/53419
13. https://github.com/zed-industries/zed/pull/53287
14. https://github.com/zed-industries/zed/pull/53521
15. https://github.com/zed-industries/zed/pull/53463
16. https://github.com/zed-industries/zed/pull/52848
17. https://github.com/zed-industries/zed/pull/53544
18. https://github.com/zed-industries/zed/pull/53556
19. https://github.com/zed-industries/zed/pull/53566
20. https://github.com/zed-industries/zed/pull/53579
21. https://github.com/zed-industries/zed/pull/53575
22. https://github.com/zed-industries/zed/pull/53550
23. https://github.com/zed-industries/zed/pull/53585
24. https://github.com/zed-industries/zed/pull/53510
25. https://github.com/zed-industries/zed/pull/53599
26. https://github.com/zed-industries/zed/pull/53099
27. #53662
28. #53660
29. #53657
30. #53654


Release Notes:

- N/A

---------

Co-authored-by: Danilo Leal <67129314+danilo-leal@users.noreply.github.com>
Co-authored-by: Ben Brandt <benjamin.j.brandt@gmail.com>
Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
Co-authored-by: Bennet Bo Fenner <bennet@zed.dev>
Co-authored-by: Nathan Sobo <nathan@zed.dev>
Co-authored-by: Anthony Eid <anthony@zed.dev>
Co-authored-by: Mikayla Maki <mikayla.c.maki@gmail.com>
Co-authored-by: Eric Holk <eric@zed.dev>
Co-authored-by: Anthony Eid <hello@anthonyeid.me>
Co-authored-by: Max Brunsfeld <maxbrunsfeld@gmail.com>
Co-authored-by: Cameron Mcloughlin <cameron.studdstreet@gmail.com>
Co-authored-by: Cole Miller <cole@zed.dev>
Co-authored-by: Mikayla Maki <mikayla@zed.dev>
Co-authored-by: Anthony Eid <56899983+Anthony-Eid@users.noreply.github.com>
Co-authored-by: Katie Geer <katie@zed.dev>
Co-authored-by: ojpro <contact@ojpro.me>

Change summary

Cargo.lock                                                       |  18 
assets/keymaps/vim.json                                          |   5 
crates/acp_thread/src/acp_thread.rs                              |  20 
crates/acp_thread/src/connection.rs                              |   2 
crates/agent/src/agent.rs                                        |   2 
crates/agent/src/native_agent_server.rs                          |   1 
crates/agent/src/thread.rs                                       |  31 
crates/agent/src/tools/streaming_edit_file_tool.rs               |   4 
crates/agent_servers/Cargo.toml                                  |   1 
crates/agent_servers/src/acp.rs                                  | 183 
crates/agent_servers/src/custom.rs                               |  10 
crates/agent_ui/Cargo.toml                                       |  10 
crates/agent_ui/src/agent_configuration/manage_profiles_modal.rs |   1 
crates/agent_ui/src/agent_panel.rs                               | 764 +
crates/agent_ui/src/agent_ui.rs                                  |   1 
crates/agent_ui/src/conversation_view.rs                         | 164 
crates/agent_ui/src/conversation_view/thread_view.rs             | 319 
crates/agent_ui/src/favorite_models.rs                           |   1 
crates/agent_ui/src/inline_assistant.rs                          | 178 
crates/agent_ui/src/thread_import.rs                             | 124 
crates/agent_ui/src/thread_metadata_store.rs                     | 656 +
crates/agent_ui/src/thread_worktree_archive.rs                   | 728 +
crates/agent_ui/src/threads_archive_view.rs                      |  62 
crates/collab/src/db.rs                                          |   1 
crates/collab/src/rpc.rs                                         |   1 
crates/fs/src/fake_git_repo.rs                                   |  33 
crates/git/src/repository.rs                                     |  98 
crates/git_ui/src/worktree_picker.rs                             |   1 
crates/language_model_core/src/request.rs                        |   4 
crates/project/src/agent_server_store.rs                         |  73 
crates/project/src/git_store.rs                                  |  42 
crates/project/src/lsp_store.rs                                  |   3 
crates/project/src/project.rs                                    |  77 
crates/project/src/worktree_store.rs                             |  11 
crates/project/tests/integration/ext_agent_tests.rs              |   4 
crates/project/tests/integration/extension_agent_tests.rs        |   4 
crates/proto/proto/worktree.proto                                |   2 
crates/recent_projects/src/recent_projects.rs                    |  57 
crates/recent_projects/src/remote_connections.rs                 |   8 
crates/recent_projects/src/remote_servers.rs                     |   2 
crates/remote/src/remote.rs                                      |   2 
crates/remote/src/remote_client.rs                               |  14 
crates/remote_connection/src/remote_connection.rs                | 155 
crates/remote_server/src/headless_project.rs                     |   3 
crates/remote_server/src/remote_editing_tests.rs                 |   6 
crates/settings_content/src/agent.rs                             |   2 
crates/settings_content/src/merge_from.rs                        |   1 
crates/sidebar/Cargo.toml                                        |  14 
crates/sidebar/src/sidebar.rs                                    | 721 +
crates/sidebar/src/sidebar_tests.rs                              | 480 +
crates/ui/src/components/ai/thread_item.rs                       |  16 
crates/util/src/disambiguate.rs                                  | 202 
crates/util/src/util.rs                                          |   1 
crates/workspace/src/multi_workspace.rs                          | 349 
crates/workspace/src/multi_workspace_tests.rs                    | 154 
crates/workspace/src/pane.rs                                     |  33 
crates/workspace/src/persistence.rs                              |  54 
crates/workspace/src/workspace.rs                                |  82 
crates/worktree/src/worktree.rs                                  |  16 
crates/zed/src/main.rs                                           | 104 
crates/zed/src/visual_test_runner.rs                             | 294 
crates/zed/src/zed.rs                                            |   1 
62 files changed, 5,097 insertions(+), 1,313 deletions(-)

Detailed changes

Cargo.lock

@@ -275,6 +275,7 @@ dependencies = [
  "nix 0.29.0",
  "project",
  "release_channel",
+ "remote",
  "reqwest_client",
  "serde",
  "serde_json",
@@ -331,6 +332,7 @@ dependencies = [
  "buffer_diff",
  "chrono",
  "client",
+ "clock",
  "cloud_api_types",
  "collections",
  "command_palette_hooks",
@@ -365,6 +367,7 @@ dependencies = [
  "markdown",
  "menu",
  "multi_buffer",
+ "node_runtime",
  "notifications",
  "ordered-float 2.10.1",
  "parking_lot",
@@ -377,6 +380,9 @@ dependencies = [
  "proto",
  "rand 0.9.2",
  "release_channel",
+ "remote",
+ "remote_connection",
+ "remote_server",
  "reqwest_client",
  "rope",
  "rules_library",
@@ -16077,21 +16083,33 @@ dependencies = [
  "agent_ui",
  "anyhow",
  "chrono",
+ "client",
+ "clock",
  "editor",
+ "extension",
  "fs",
  "git",
  "gpui",
+ "http_client",
+ "language",
  "language_model",
+ "log",
  "menu",
+ "node_runtime",
  "platform_title_bar",
  "pretty_assertions",
  "project",
  "prompt_store",
  "recent_projects",
+ "release_channel",
  "remote",
+ "remote_connection",
+ "remote_server",
+ "semver",
  "serde",
  "serde_json",
  "settings",
+ "smol",
  "theme",
  "theme_settings",
  "ui",

assets/keymaps/vim.json

@@ -1140,6 +1140,11 @@
       "g g": "menu::SelectFirst",
       "shift-g": "menu::SelectLast",
       "/": "agents_sidebar::FocusSidebarFilter",
+      "d d": "agent::RemoveSelectedThread",
+      "o": "agents_sidebar::NewThreadInGroup",
+      "shift-o": "agents_sidebar::NewThreadInGroup",
+      "] p": "multi_workspace::NextProject",
+      "[ p": "multi_workspace::PreviousProject",
       "z a": "editor::ToggleFold",
       "z c": "menu::SelectParent",
       "z o": "menu::SelectChild",

crates/acp_thread/src/acp_thread.rs

@@ -36,6 +36,18 @@ use util::path_list::PathList;
 use util::{ResultExt, get_default_system_shell_preferring_bash, paths::PathStyle};
 use uuid::Uuid;
 
+/// Returned when the model stops because it exhausted its output token budget.
+#[derive(Debug)]
+pub struct MaxOutputTokensError;
+
+impl std::fmt::Display for MaxOutputTokensError {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        write!(f, "output token limit reached")
+    }
+}
+
+impl std::error::Error for MaxOutputTokensError {}
+
 /// Key used in ACP ToolCall meta to store the tool's programmatic name.
 /// This is a workaround since ACP's ToolCall doesn't have a dedicated name field.
 pub const TOOL_NAME_META_KEY: &str = "tool_name";
@@ -2262,17 +2274,15 @@ impl AcpThread {
                                         .is_some_and(|max| u.output_tokens >= max)
                                 });
 
-                            let message = if exceeded_max_output_tokens {
+                            if exceeded_max_output_tokens {
                                 log::error!(
                                     "Max output tokens reached. Usage: {:?}",
                                     this.token_usage
                                 );
-                                "Maximum output tokens reached"
                             } else {
                                 log::error!("Max tokens reached. Usage: {:?}", this.token_usage);
-                                "Maximum tokens reached"
-                            };
-                            return Err(anyhow!(message));
+                            }
+                            return Err(anyhow!(MaxOutputTokensError));
                         }
 
                         let canceled = matches!(r.stop_reason, acp::StopReason::Cancelled);
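
The acp_thread.rs hunk above replaces a string error message with a typed `MaxOutputTokensError`, so callers can branch on the error's type instead of comparing message text. A minimal std-only sketch of that pattern (the real code wraps the error in `anyhow!`; `run_turn` here is a hypothetical stand-in for the turn loop):

```rust
use std::error::Error;
use std::fmt;

// Mirrors the MaxOutputTokensError added in acp_thread.rs: a unit struct
// implementing Display + Error, so callers can downcast on type instead of
// matching message strings.
#[derive(Debug)]
pub struct MaxOutputTokensError;

impl fmt::Display for MaxOutputTokensError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "output token limit reached")
    }
}

impl Error for MaxOutputTokensError {}

// Hypothetical stand-in for the turn loop; the real code returns anyhow::Error.
fn run_turn() -> Result<(), Box<dyn Error>> {
    Err(Box::new(MaxOutputTokensError))
}

fn main() {
    let err = run_turn().unwrap_err();
    // Branch on the error's type rather than its Display text.
    assert!(err.downcast_ref::<MaxOutputTokensError>().is_some());
    println!("{err}"); // prints "output token limit reached"
}
```

The payoff is that UI-side handlers can special-case the token-limit condition without fragile string matching, while `Display` still provides a human-readable message.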

crates/acp_thread/src/connection.rs

@@ -117,7 +117,7 @@ pub trait AgentConnection {
         &self,
         _method: &acp::AuthMethodId,
         _cx: &App,
-    ) -> Option<SpawnInTerminal> {
+    ) -> Option<Task<Result<SpawnInTerminal>>> {
         None
     }
 

crates/agent/src/agent.rs

@@ -1355,6 +1355,7 @@ impl acp_thread::AgentModelSelector for NativeAgentModelSelector {
                 let provider = model.provider_id().0.to_string();
                 let model = model.id().0.to_string();
                 let enable_thinking = thread.read(cx).thinking_enabled();
+                let speed = thread.read(cx).speed();
                 settings
                     .agent
                     .get_or_insert_default()
@@ -1363,6 +1364,7 @@ impl acp_thread::AgentModelSelector for NativeAgentModelSelector {
                         model,
                         enable_thinking,
                         effort,
+                        speed,
                     });
             },
         );

crates/agent/src/native_agent_server.rs

@@ -97,6 +97,7 @@ fn model_id_to_selection(model_id: &acp::ModelId) -> LanguageModelSelection {
         model: model.to_owned(),
         enable_thinking: false,
         effort: None,
+        speed: None,
     }
 }
 

crates/agent/src/thread.rs

@@ -64,6 +64,18 @@ const TOOL_CANCELED_MESSAGE: &str = "Tool canceled by user";
 pub const MAX_TOOL_NAME_LENGTH: usize = 64;
 pub const MAX_SUBAGENT_DEPTH: u8 = 1;
 
+/// Returned when a turn is attempted but no language model has been selected.
+#[derive(Debug)]
+pub struct NoModelConfiguredError;
+
+impl std::fmt::Display for NoModelConfiguredError {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        write!(f, "no language model configured")
+    }
+}
+
+impl std::error::Error for NoModelConfiguredError {}
+
 /// Context passed to a subagent thread for lifecycle management
 #[derive(Clone, Debug, Serialize, Deserialize)]
 pub struct SubagentContext {
@@ -1041,6 +1053,10 @@ impl Thread {
             .default_model
             .as_ref()
             .and_then(|model| model.effort.clone());
+        let speed = settings
+            .default_model
+            .as_ref()
+            .and_then(|model| model.speed);
         let (prompt_capabilities_tx, prompt_capabilities_rx) =
             watch::channel(Self::prompt_capabilities(model.as_deref()));
         Self {
@@ -1072,7 +1088,7 @@ impl Thread {
             model,
             summarization_model: None,
             thinking_enabled: enable_thinking,
-            speed: None,
+            speed,
             thinking_effort,
             prompt_capabilities_tx,
             prompt_capabilities_rx,
@@ -1768,7 +1784,9 @@ impl Thread {
         &mut self,
         cx: &mut Context<Self>,
     ) -> Result<mpsc::UnboundedReceiver<Result<ThreadEvent>>> {
-        let model = self.model().context("No language model configured")?;
+        let model = self
+            .model()
+            .ok_or_else(|| anyhow!(NoModelConfiguredError))?;
 
         log::info!("Thread::send called with model: {}", model.name().0);
         self.advance_prompt_id();
@@ -1892,7 +1910,10 @@ impl Thread {
             // mid-turn changes (e.g. the user switches model, toggles tools,
             // or changes profile) take effect between tool-call rounds.
             let (model, request) = this.update(cx, |this, cx| {
-                let model = this.model.clone().context("No language model configured")?;
+                let model = this
+                    .model
+                    .clone()
+                    .ok_or_else(|| anyhow!(NoModelConfiguredError))?;
                 this.refresh_turn_tools(cx);
                 let request = this.build_completion_request(intent, cx)?;
                 anyhow::Ok((model, request))
@@ -2738,7 +2759,9 @@ impl Thread {
                 completion_intent
             };
 
-        let model = self.model().context("No language model configured")?;
+        let model = self
+            .model()
+            .ok_or_else(|| anyhow!(NoModelConfiguredError))?;
         let tools = if let Some(turn) = self.running_turn.as_ref() {
             turn.tools
                 .iter()
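
The thread.rs hunks swap `.context("No language model configured")` for `ok_or_else(|| anyhow!(NoModelConfiguredError))`, again trading a message string for a typed error. A std-only sketch of that `Option` → `Result` conversion (`selected_model` and the `"model-a"` name are hypothetical stand-ins, not code from this PR):

```rust
use std::error::Error;
use std::fmt;

// Mirrors the NoModelConfiguredError added in thread.rs.
#[derive(Debug)]
pub struct NoModelConfiguredError;

impl fmt::Display for NoModelConfiguredError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "no language model configured")
    }
}

impl Error for NoModelConfiguredError {}

// Hypothetical helper: `ok_or_else` converts the Option into a Result carrying
// the typed error, replacing the previous string-based `.context(...)` call.
fn selected_model(model: Option<&str>) -> Result<&str, NoModelConfiguredError> {
    model.ok_or_else(|| NoModelConfiguredError)
}

fn main() {
    assert_eq!(selected_model(Some("model-a")).unwrap(), "model-a");
    let err = selected_model(None).unwrap_err();
    // A human-readable message is still available via Display.
    assert_eq!(err.to_string(), "no language model configured");
}
```

Since the same check appears at three call sites in the file, the shared type keeps the "no model" condition distinguishable wherever the turn fails.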

crates/agent/src/tools/streaming_edit_file_tool.rs

@@ -189,9 +189,9 @@ pub enum StreamingEditFileToolOutput {
     },
     Error {
         error: String,
-        #[serde(default)]
+        #[serde(default, skip_serializing_if = "Option::is_none")]
         input_path: Option<PathBuf>,
-        #[serde(default)]
+        #[serde(default, skip_serializing_if = "String::is_empty")]
         diff: String,
     },
 }

crates/agent_servers/Cargo.toml

@@ -39,6 +39,7 @@ language_model.workspace = true
 log.workspace = true
 project.workspace = true
 release_channel.workspace = true
+remote.workspace = true
 reqwest_client = { workspace = true, optional = true }
 serde.workspace = true
 serde_json.workspace = true

crates/agent_servers/src/acp.rs

@@ -10,20 +10,20 @@ use collections::HashMap;
 use feature_flags::{AcpBetaFeatureFlag, FeatureFlagAppExt as _};
 use futures::AsyncBufReadExt as _;
 use futures::io::BufReader;
-use project::agent_server_store::AgentServerCommand;
+use project::agent_server_store::{AgentServerCommand, AgentServerStore};
 use project::{AgentId, Project};
+use remote::remote_client::Interactive;
 use serde::Deserialize;
 use settings::Settings as _;
-use task::{ShellBuilder, SpawnInTerminal};
-use util::ResultExt as _;
-use util::path_list::PathList;
-use util::process::Child;
-
 use std::path::PathBuf;
 use std::process::Stdio;
 use std::rc::Rc;
 use std::{any::Any, cell::RefCell};
+use task::{ShellBuilder, SpawnInTerminal};
 use thiserror::Error;
+use util::ResultExt as _;
+use util::path_list::PathList;
+use util::process::Child;
 
 use anyhow::{Context as _, Result};
 use gpui::{App, AppContext as _, AsyncApp, Entity, SharedString, Task, WeakEntity};
@@ -46,7 +46,7 @@ pub struct AcpConnection {
     connection: Rc<acp::ClientSideConnection>,
     sessions: Rc<RefCell<HashMap<acp::SessionId, AcpSession>>>,
     auth_methods: Vec<acp::AuthMethod>,
-    command: AgentServerCommand,
+    agent_server_store: WeakEntity<AgentServerStore>,
     agent_capabilities: acp::AgentCapabilities,
     default_mode: Option<acp::SessionModeId>,
     default_model: Option<acp::ModelId>,
@@ -167,6 +167,7 @@ pub async fn connect(
     agent_id: AgentId,
     project: Entity<Project>,
     command: AgentServerCommand,
+    agent_server_store: WeakEntity<AgentServerStore>,
     default_mode: Option<acp::SessionModeId>,
     default_model: Option<acp::ModelId>,
     default_config_options: HashMap<String, String>,
@@ -176,6 +177,7 @@ pub async fn connect(
         agent_id,
         project,
         command.clone(),
+        agent_server_store,
         default_mode,
         default_model,
         default_config_options,
@@ -192,23 +194,52 @@ impl AcpConnection {
         agent_id: AgentId,
         project: Entity<Project>,
         command: AgentServerCommand,
+        agent_server_store: WeakEntity<AgentServerStore>,
         default_mode: Option<acp::SessionModeId>,
         default_model: Option<acp::ModelId>,
         default_config_options: HashMap<String, String>,
         cx: &mut AsyncApp,
     ) -> Result<Self> {
+        let root_dir = project.read_with(cx, |project, cx| {
+            project
+                .default_path_list(cx)
+                .ordered_paths()
+                .next()
+                .cloned()
+        });
+        let original_command = command.clone();
+        let (path, args, env) = project
+            .read_with(cx, |project, cx| {
+                project.remote_client().and_then(|client| {
+                    let template = client
+                        .read(cx)
+                        .build_command_with_options(
+                            Some(command.path.display().to_string()),
+                            &command.args,
+                            &command.env.clone().into_iter().flatten().collect(),
+                            root_dir.as_ref().map(|path| path.display().to_string()),
+                            None,
+                            Interactive::No,
+                        )
+                        .log_err()?;
+                    Some((template.program, template.args, template.env))
+                })
+            })
+            .unwrap_or_else(|| {
+                (
+                    command.path.display().to_string(),
+                    command.args,
+                    command.env.unwrap_or_default(),
+                )
+            });
+
         let shell = cx.update(|cx| TerminalSettings::get(None, cx).shell.clone());
         let builder = ShellBuilder::new(&shell, cfg!(windows)).non_interactive();
-        let mut child =
-            builder.build_std_command(Some(command.path.display().to_string()), &command.args);
-        child.envs(command.env.iter().flatten());
-        if let Some(cwd) = project.update(cx, |project, cx| {
+        let mut child = builder.build_std_command(Some(path.clone()), &args);
+        child.envs(env.clone());
+        if let Some(cwd) = project.read_with(cx, |project, _cx| {
             if project.is_local() {
-                project
-                    .default_path_list(cx)
-                    .ordered_paths()
-                    .next()
-                    .cloned()
+                root_dir.as_ref()
             } else {
                 None
             }
@@ -220,11 +251,7 @@ impl AcpConnection {
         let stdout = child.stdout.take().context("Failed to take stdout")?;
         let stdin = child.stdin.take().context("Failed to take stdin")?;
         let stderr = child.stderr.take().context("Failed to take stderr")?;
-        log::debug!(
-            "Spawning external agent server: {:?}, {:?}",
-            command.path,
-            command.args
-        );
+        log::debug!("Spawning external agent server: {:?}, {:?}", path, args);
         log::trace!("Spawned (pid: {})", child.id());
 
         let sessions = Rc::new(RefCell::new(HashMap::default()));
@@ -342,13 +369,13 @@ impl AcpConnection {
 
         // TODO: Remove this override once Google team releases their official auth methods
         let auth_methods = if agent_id.0.as_ref() == GEMINI_ID {
-            let mut args = command.args.clone();
-            args.retain(|a| a != "--experimental-acp" && a != "--acp");
+            let mut gemini_args = original_command.args.clone();
+            gemini_args.retain(|a| a != "--experimental-acp" && a != "--acp");
             let value = serde_json::json!({
                 "label": "gemini /auth",
-                "command": command.path.to_string_lossy().into_owned(),
-                "args": args,
-                "env": command.env.clone().unwrap_or_default(),
+                "command": original_command.path.to_string_lossy(),
+                "args": gemini_args,
+                "env": original_command.env.unwrap_or_default(),
             });
             let meta = acp::Meta::from_iter([("terminal-auth".to_string(), value)]);
             vec![acp::AuthMethod::Agent(
@@ -362,7 +389,7 @@ impl AcpConnection {
         Ok(Self {
             id: agent_id,
             auth_methods,
-            command,
+            agent_server_store,
             connection,
             telemetry_id,
             sessions,
@@ -494,18 +521,12 @@ fn terminal_auth_task(
     agent_id: &AgentId,
     method: &acp::AuthMethodTerminal,
 ) -> SpawnInTerminal {
-    let mut args = command.args.clone();
-    args.extend(method.args.clone());
-
-    let mut env = command.env.clone().unwrap_or_default();
-    env.extend(method.env.clone());
-
     acp_thread::build_terminal_auth_task(
         terminal_auth_task_id(agent_id, &method.id),
         method.name.clone(),
         command.path.to_string_lossy().into_owned(),
-        args,
-        env,
+        command.args.clone(),
+        command.env.clone().unwrap_or_default(),
     )
 }
 
@@ -890,7 +911,7 @@ impl AgentConnection for AcpConnection {
         &self,
         method_id: &acp::AuthMethodId,
         cx: &App,
-    ) -> Option<SpawnInTerminal> {
+    ) -> Option<Task<Result<SpawnInTerminal>>> {
         let method = self
             .auth_methods
             .iter()
@@ -898,9 +919,28 @@ impl AgentConnection for AcpConnection {
 
         match method {
             acp::AuthMethod::Terminal(terminal) if cx.has_flag::<AcpBetaFeatureFlag>() => {
-                Some(terminal_auth_task(&self.command, &self.id, terminal))
+                let agent_id = self.id.clone();
+                let terminal = terminal.clone();
+                let store = self.agent_server_store.clone();
+                Some(cx.spawn(async move |cx| {
+                    let command = store
+                        .update(cx, |store, cx| {
+                            let agent = store
+                                .get_external_agent(&agent_id)
+                                .context("Agent server not found")?;
+                            anyhow::Ok(agent.get_command(
+                                terminal.args.clone(),
+                                HashMap::from_iter(terminal.env.clone()),
+                                &mut cx.to_async(),
+                            ))
+                        })?
+                        .context("Failed to get agent command")?
+                        .await?;
+                    Ok(terminal_auth_task(&command, &agent_id, &terminal))
+                }))
             }
-            _ => meta_terminal_auth_task(&self.id, method_id, method),
+            _ => meta_terminal_auth_task(&self.id, method_id, method)
+                .map(|task| Task::ready(Ok(task))),
         }
     }
 
@@ -1075,39 +1115,32 @@ mod tests {
     use super::*;
 
     #[test]
-    fn terminal_auth_task_reuses_command_and_merges_args_and_env() {
+    fn terminal_auth_task_builds_spawn_from_prebuilt_command() {
         let command = AgentServerCommand {
             path: "/path/to/agent".into(),
-            args: vec!["--acp".into(), "--verbose".into()],
+            args: vec!["--acp".into(), "--verbose".into(), "/auth".into()],
             env: Some(HashMap::from_iter([
                 ("BASE".into(), "1".into()),
-                ("SHARED".into(), "base".into()),
+                ("SHARED".into(), "override".into()),
+                ("EXTRA".into(), "2".into()),
             ])),
         };
-        let method = acp::AuthMethodTerminal::new("login", "Login")
-            .args(vec!["/auth".into()])
-            .env(std::collections::HashMap::from_iter([
-                ("EXTRA".into(), "2".into()),
-                ("SHARED".into(), "override".into()),
-            ]));
+        let method = acp::AuthMethodTerminal::new("login", "Login");
 
-        let terminal_auth_task = terminal_auth_task(&command, &AgentId::new("test-agent"), &method);
+        let task = terminal_auth_task(&command, &AgentId::new("test-agent"), &method);
 
+        assert_eq!(task.command.as_deref(), Some("/path/to/agent"));
+        assert_eq!(task.args, vec!["--acp", "--verbose", "/auth"]);
         assert_eq!(
-            terminal_auth_task.command.as_deref(),
-            Some("/path/to/agent")
-        );
-        assert_eq!(terminal_auth_task.args, vec!["--acp", "--verbose", "/auth"]);
-        assert_eq!(
-            terminal_auth_task.env,
+            task.env,
             HashMap::from_iter([
                 ("BASE".into(), "1".into()),
                 ("SHARED".into(), "override".into()),
                 ("EXTRA".into(), "2".into()),
             ])
         );
-        assert_eq!(terminal_auth_task.label, "Login");
-        assert_eq!(terminal_auth_task.command_label, "Login");
+        assert_eq!(task.label, "Login");
+        assert_eq!(task.command_label, "Login");
     }
 
     #[test]
@@ -1127,21 +1160,17 @@ mod tests {
             )])),
         );
 
-        let terminal_auth_task =
-            meta_terminal_auth_task(&AgentId::new("test-agent"), &method_id, &method)
-                .expect("expected legacy terminal auth task");
+        let task = meta_terminal_auth_task(&AgentId::new("test-agent"), &method_id, &method)
+            .expect("expected legacy terminal auth task");
 
+        assert_eq!(task.id.0, "external-agent-test-agent-legacy-login-login");
+        assert_eq!(task.command.as_deref(), Some("legacy-agent"));
+        assert_eq!(task.args, vec!["auth", "--interactive"]);
         assert_eq!(
-            terminal_auth_task.id.0,
-            "external-agent-test-agent-legacy-login-login"
-        );
-        assert_eq!(terminal_auth_task.command.as_deref(), Some("legacy-agent"));
-        assert_eq!(terminal_auth_task.args, vec!["auth", "--interactive"]);
-        assert_eq!(
-            terminal_auth_task.env,
+            task.env,
             HashMap::from_iter([("AUTH_MODE".into(), "interactive".into())])
         );
-        assert_eq!(terminal_auth_task.label, "legacy /auth");
+        assert_eq!(task.label, "legacy /auth");
     }
 
     #[test]
@@ -1186,30 +1215,30 @@ mod tests {
 
         let command = AgentServerCommand {
             path: "/path/to/agent".into(),
-            args: vec!["--acp".into()],
-            env: Some(HashMap::from_iter([("BASE".into(), "1".into())])),
+            args: vec!["--acp".into(), "/auth".into()],
+            env: Some(HashMap::from_iter([
+                ("BASE".into(), "1".into()),
+                ("AUTH_MODE".into(), "first-class".into()),
+            ])),
         };
 
-        let terminal_auth_task = match &method {
+        let task = match &method {
             acp::AuthMethod::Terminal(terminal) => {
                 terminal_auth_task(&command, &AgentId::new("test-agent"), terminal)
             }
             _ => unreachable!(),
         };
 
+        assert_eq!(task.command.as_deref(), Some("/path/to/agent"));
+        assert_eq!(task.args, vec!["--acp", "/auth"]);
         assert_eq!(
-            terminal_auth_task.command.as_deref(),
-            Some("/path/to/agent")
-        );
-        assert_eq!(terminal_auth_task.args, vec!["--acp", "/auth"]);
-        assert_eq!(
-            terminal_auth_task.env,
+            task.env,
             HashMap::from_iter([
                 ("BASE".into(), "1".into()),
                 ("AUTH_MODE".into(), "first-class".into()),
             ])
         );
-        assert_eq!(terminal_auth_task.label, "Login");
+        assert_eq!(task.label, "Login");
     }
 }
 

crates/agent_servers/src/custom.rs

@@ -360,17 +360,17 @@ impl AgentServer for CustomAgentServer {
                     let agent = store.get_external_agent(&agent_id).with_context(|| {
                         format!("Custom agent server `{}` is not registered", agent_id)
                     })?;
-                    anyhow::Ok(agent.get_command(
-                        extra_env,
-                        delegate.new_version_available,
-                        &mut cx.to_async(),
-                    ))
+                    if let Some(new_version_available_tx) = delegate.new_version_available {
+                        agent.set_new_version_available_tx(new_version_available_tx);
+                    }
+                    anyhow::Ok(agent.get_command(vec![], extra_env, &mut cx.to_async()))
                 })??
                 .await?;
             let connection = crate::acp::connect(
                 agent_id,
                 project,
                 command,
+                store.clone(),
                 default_mode,
                 default_model,
                 default_config_options,

crates/agent_ui/Cargo.toml

@@ -82,6 +82,8 @@ prompt_store.workspace = true
 proto.workspace = true
 rand.workspace = true
 release_channel.workspace = true
+remote.workspace = true
+remote_connection.workspace = true
 rope.workspace = true
 rules_library.workspace = true
 schemars.workspace = true
@@ -115,17 +117,23 @@ reqwest_client = { workspace = true, optional = true }
 acp_thread = { workspace = true, features = ["test-support"] }
 agent = { workspace = true, features = ["test-support"] }
 buffer_diff = { workspace = true, features = ["test-support"] }
-
+client = { workspace = true, features = ["test-support"] }
+clock = { workspace = true, features = ["test-support"] }
 db = { workspace = true, features = ["test-support"] }
 editor = { workspace = true, features = ["test-support"] }
 eval_utils.workspace = true
 gpui = { workspace = true, "features" = ["test-support"] }
+http_client = { workspace = true, features = ["test-support"] }
 indoc.workspace = true
 language = { workspace = true, "features" = ["test-support"] }
 languages = { workspace = true, features = ["test-support"] }
 language_model = { workspace = true, "features" = ["test-support"] }
+node_runtime = { workspace = true, features = ["test-support"] }
 pretty_assertions.workspace = true
 project = { workspace = true, features = ["test-support"] }
+remote = { workspace = true, features = ["test-support"] }
+remote_connection = { workspace = true, features = ["test-support"] }
+remote_server = { workspace = true, features = ["test-support"] }
 
 semver.workspace = true
 reqwest_client.workspace = true

crates/agent_ui/src/agent_panel.rs

@@ -56,8 +56,9 @@ use extension_host::ExtensionStore;
 use fs::Fs;
 use gpui::{
     Action, Animation, AnimationExt, AnyElement, App, AsyncWindowContext, ClipboardItem, Corner,
-    DismissEvent, Entity, EventEmitter, ExternalPaths, FocusHandle, Focusable, KeyContext, Pixels,
-    Subscription, Task, UpdateGlobal, WeakEntity, prelude::*, pulsating_between,
+    DismissEvent, Entity, EntityId, EventEmitter, ExternalPaths, FocusHandle, Focusable,
+    KeyContext, Pixels, Subscription, Task, UpdateGlobal, WeakEntity, prelude::*,
+    pulsating_between,
 };
 use language::LanguageRegistry;
 use language_model::LanguageModelRegistry;
@@ -65,6 +66,7 @@ use project::git_store::{GitStoreEvent, RepositoryEvent};
 use project::project_settings::ProjectSettings;
 use project::{Project, ProjectPath, Worktree, linked_worktree_short_name};
 use prompt_store::{PromptStore, UserPromptId};
+use remote::RemoteConnectionOptions;
 use rules_library::{RulesLibrary, open_rules_library};
 use settings::TerminalDockPosition;
 use settings::{Settings, update_settings_file};
@@ -77,8 +79,8 @@ use ui::{
 };
 use util::{ResultExt as _, debug_panic};
 use workspace::{
-    CollaboratorId, DraggedSelection, DraggedTab, OpenMode, OpenResult, PathList,
-    SerializedPathList, ToggleWorkspaceSidebar, ToggleZoom, Workspace, WorkspaceId,
+    CollaboratorId, DraggedSelection, DraggedTab, PathList, SerializedPathList,
+    ToggleWorkspaceSidebar, ToggleZoom, Workspace, WorkspaceId,
     dock::{DockPosition, Panel, PanelEvent},
 };
 use zed_actions::{
@@ -818,7 +820,7 @@ pub struct AgentPanel {
     agent_layout_onboarding_dismissed: AtomicBool,
     selected_agent: Agent,
     start_thread_in: StartThreadIn,
-    worktree_creation_status: Option<WorktreeCreationStatus>,
+    worktree_creation_status: Option<(EntityId, WorktreeCreationStatus)>,
     _thread_view_subscription: Option<Subscription>,
     _active_thread_focus_subscription: Option<Subscription>,
     _worktree_creation_task: Option<Task<()>>,
@@ -1861,6 +1863,7 @@ impl AgentPanel {
                                 model,
                                 enable_thinking,
                                 effort,
+                                speed: None,
                             })
                     });
                 }
@@ -1893,6 +1896,14 @@ impl AgentPanel {
         }
     }
 
+    pub fn conversation_views(&self) -> Vec<Entity<ConversationView>> {
+        self.active_conversation_view()
+            .into_iter()
+            .cloned()
+            .chain(self.background_threads.values().cloned())
+            .collect()
+    }
+
     pub fn active_thread_view(&self, cx: &App) -> Option<Entity<ThreadView>> {
         let server_view = self.active_conversation_view()?;
         server_view.read(cx).active_thread().cloned()
@@ -2785,6 +2796,7 @@ impl AgentPanel {
             PathBuf,
             futures::channel::oneshot::Receiver<Result<()>>,
         )>,
+        fs: Arc<dyn Fs>,
         cx: &mut AsyncWindowContext,
     ) -> Result<Vec<PathBuf>> {
         let mut created_paths: Vec<PathBuf> = Vec::new();
@@ -2793,10 +2805,10 @@ impl AgentPanel {
         let mut first_error: Option<anyhow::Error> = None;
 
         for (repo, new_path, receiver) in creation_infos {
+            repos_and_paths.push((repo.clone(), new_path.clone()));
             match receiver.await {
                 Ok(Ok(())) => {
-                    created_paths.push(new_path.clone());
-                    repos_and_paths.push((repo, new_path));
+                    created_paths.push(new_path);
                 }
                 Ok(Err(err)) => {
                     if first_error.is_none() {
@@ -2815,34 +2827,66 @@ impl AgentPanel {
             return Ok(created_paths);
         };
 
-        // Rollback all successfully created worktrees
-        let mut rollback_receivers = Vec::new();
+        // Rollback all attempted worktrees (both successful and failed)
+        let mut rollback_futures = Vec::new();
         for (rollback_repo, rollback_path) in &repos_and_paths {
-            if let Ok(receiver) = cx.update(|_, cx| {
-                rollback_repo.update(cx, |repo, _cx| {
-                    repo.remove_worktree(rollback_path.clone(), true)
+            let receiver = cx
+                .update(|_, cx| {
+                    rollback_repo.update(cx, |repo, _cx| {
+                        repo.remove_worktree(rollback_path.clone(), true)
+                    })
                 })
-            }) {
-                rollback_receivers.push((rollback_path.clone(), receiver));
-            }
+                .ok();
+
+            rollback_futures.push((rollback_path.clone(), receiver));
         }
+
         let mut rollback_failures: Vec<String> = Vec::new();
-        for (path, receiver) in rollback_receivers {
-            match receiver.await {
-                Ok(Ok(())) => {}
-                Ok(Err(rollback_err)) => {
-                    log::error!(
-                        "failed to rollback worktree at {}: {rollback_err}",
-                        path.display()
-                    );
-                    rollback_failures.push(format!("{}: {rollback_err}", path.display()));
+        for (path, receiver_opt) in rollback_futures {
+            let mut git_remove_failed = false;
+
+            if let Some(receiver) = receiver_opt {
+                match receiver.await {
+                    Ok(Ok(())) => {}
+                    Ok(Err(rollback_err)) => {
+                        log::error!(
+                            "git worktree remove failed for {}: {rollback_err}",
+                            path.display()
+                        );
+                        git_remove_failed = true;
+                    }
+                    Err(canceled) => {
+                        log::error!(
+                            "git worktree remove failed for {}: {canceled}",
+                            path.display()
+                        );
+                        git_remove_failed = true;
+                    }
                 }
-                Err(rollback_err) => {
-                    log::error!(
-                        "failed to rollback worktree at {}: {rollback_err}",
-                        path.display()
-                    );
-                    rollback_failures.push(format!("{}: {rollback_err}", path.display()));
+            } else {
+                log::error!(
+                    "failed to dispatch git worktree remove for {}",
+                    path.display()
+                );
+                git_remove_failed = true;
+            }
+
+            // `git worktree remove` normally deletes this directory; since it
+            // failed (or wasn't dispatched), remove the directory manually.
+            if git_remove_failed {
+                if let Err(fs_err) = fs
+                    .remove_dir(
+                        &path,
+                        fs::RemoveOptions {
+                            recursive: true,
+                            ignore_if_not_exists: true,
+                        },
+                    )
+                    .await
+                {
+                    let msg = format!("{}: failed to remove directory: {fs_err}", path.display());
+                    log::error!("{}", msg);
+                    rollback_failures.push(msg);
                 }
             }
         }
@@ -2860,7 +2904,9 @@ impl AgentPanel {
         window: &mut Window,
         cx: &mut Context<Self>,
     ) {
-        self.worktree_creation_status = Some(WorktreeCreationStatus::Error(message));
+        if let Some((_, status)) = &mut self.worktree_creation_status {
+            *status = WorktreeCreationStatus::Error(message);
+        }
         if matches!(self.active_view, ActiveView::Uninitialized) {
             let selected_agent = self.selected_agent.clone();
             self.new_agent_thread(selected_agent, window, cx);
@@ -2877,12 +2923,17 @@ impl AgentPanel {
     ) {
         if matches!(
             self.worktree_creation_status,
-            Some(WorktreeCreationStatus::Creating)
+            Some((_, WorktreeCreationStatus::Creating))
         ) {
             return;
         }
 
-        self.worktree_creation_status = Some(WorktreeCreationStatus::Creating);
+        let conversation_view_id = self
+            .active_conversation_view()
+            .map(|v| v.entity_id())
+            .unwrap_or_else(|| EntityId::from(0u64));
+        self.worktree_creation_status =
+            Some((conversation_view_id, WorktreeCreationStatus::Creating));
         cx.notify();
 
         let (git_repos, non_git_paths) = self.classify_worktrees(cx);
@@ -2932,6 +2983,24 @@ impl AgentPanel {
                 .absolute_path(&project_path, cx)
         });
 
+        let remote_connection_options = self.project.read(cx).remote_connection_options(cx);
+
+        if remote_connection_options.is_some() {
+            let is_disconnected = self
+                .project
+                .read(cx)
+                .remote_client()
+                .is_some_and(|client| client.read(cx).is_disconnected());
+            if is_disconnected {
+                self.set_worktree_creation_error(
+                    "Cannot create worktree: remote connection is not active".into(),
+                    window,
+                    cx,
+                );
+                return;
+            }
+        }
+
         let workspace = self.workspace.clone();
         let window_handle = window
             .window_handle()
@@ -3030,8 +3099,10 @@ impl AgentPanel {
                             }
                         };
 
+                    let fs = cx.update(|_, cx| <dyn Fs>::global(cx))?;
+
                     let created_paths =
-                        match Self::await_and_rollback_on_failure(creation_infos, cx).await {
+                        match Self::await_and_rollback_on_failure(creation_infos, fs, cx).await {
                             Ok(paths) => paths,
                             Err(err) => {
                                 this.update_in(cx, |this, window, cx| {
@@ -3058,25 +3129,21 @@ impl AgentPanel {
                 }
             };
 
-            let app_state = match workspace.upgrade() {
-                Some(workspace) => cx.update(|_, cx| workspace.read(cx).app_state().clone())?,
-                None => {
-                    this.update_in(cx, |this, window, cx| {
-                        this.set_worktree_creation_error(
-                            "Workspace no longer available".into(),
-                            window,
-                            cx,
-                        );
-                    })?;
-                    return anyhow::Ok(());
-                }
-            };
+            if workspace.upgrade().is_none() {
+                this.update_in(cx, |this, window, cx| {
+                    this.set_worktree_creation_error(
+                        "Workspace no longer available".into(),
+                        window,
+                        cx,
+                    );
+                })?;
+                return anyhow::Ok(());
+            }
 
             let this_for_error = this.clone();
             if let Err(err) = Self::open_worktree_workspace_and_start_thread(
                 this,
                 all_paths,
-                app_state,
                 window_handle,
                 active_file_path,
                 path_remapping,
@@ -3084,6 +3151,7 @@ impl AgentPanel {
                 has_non_git,
                 content,
                 selected_agent,
+                remote_connection_options,
                 cx,
             )
             .await
@@ -3109,7 +3177,6 @@ impl AgentPanel {
     async fn open_worktree_workspace_and_start_thread(
         this: WeakEntity<Self>,
         all_paths: Vec<PathBuf>,
-        app_state: Arc<workspace::AppState>,
         window_handle: Option<gpui::WindowHandle<workspace::MultiWorkspace>>,
         active_file_path: Option<PathBuf>,
         path_remapping: Vec<(PathBuf, PathBuf)>,
@@ -3117,25 +3184,39 @@ impl AgentPanel {
         has_non_git: bool,
         content: Vec<acp::ContentBlock>,
         selected_agent: Option<Agent>,
+        remote_connection_options: Option<RemoteConnectionOptions>,
         cx: &mut AsyncWindowContext,
     ) -> Result<()> {
-        let OpenResult {
-            window: new_window_handle,
-            workspace: new_workspace,
-            ..
-        } = cx
-            .update(|_window, cx| {
-                Workspace::new_local(
-                    all_paths,
-                    app_state,
-                    window_handle,
-                    None,
+        let window_handle = window_handle
+            .ok_or_else(|| anyhow!("No window handle available for workspace creation"))?;
+
+        let (workspace_task, modal_workspace) =
+            window_handle.update(cx, |multi_workspace, window, cx| {
+                let path_list = PathList::new(&all_paths);
+                let active_workspace = multi_workspace.workspace().clone();
+                let modal_workspace = active_workspace.clone();
+
+                let task = multi_workspace.find_or_create_workspace(
+                    path_list,
+                    remote_connection_options,
                     None,
-                    OpenMode::Add,
+                    move |connection_options, window, cx| {
+                        remote_connection::connect_with_modal(
+                            &active_workspace,
+                            connection_options,
+                            window,
+                            cx,
+                        )
+                    },
+                    window,
                     cx,
-                )
-            })?
-            .await?;
+                );
+                (task, modal_workspace)
+            })?;
+
+        let result = workspace_task.await;
+        remote_connection::dismiss_connection_modal(&modal_workspace, cx);
+        let new_workspace = result?;
 
         let panels_task = new_workspace.update(cx, |workspace, _cx| workspace.take_panels_task());
 
@@ -3171,7 +3252,7 @@ impl AgentPanel {
             auto_submit: true,
         };
 
-        new_window_handle.update(cx, |_multi_workspace, window, cx| {
+        window_handle.update(cx, |_multi_workspace, window, cx| {
             new_workspace.update(cx, |workspace, cx| {
                 if has_non_git {
                     let toast_id = workspace::notifications::NotificationId::unique::<AgentPanel>();
@@ -3256,7 +3337,7 @@ impl AgentPanel {
             });
         })?;
 
-        new_window_handle.update(cx, |multi_workspace, window, cx| {
+        window_handle.update(cx, |multi_workspace, window, cx| {
             multi_workspace.activate(new_workspace.clone(), window, cx);
 
             new_workspace.update(cx, |workspace, cx| {
@@ -3373,7 +3454,7 @@ impl Panel for AgentPanel {
             && matches!(self.active_view, ActiveView::Uninitialized)
             && !matches!(
                 self.worktree_creation_status,
-                Some(WorktreeCreationStatus::Creating)
+                Some((_, WorktreeCreationStatus::Creating))
             )
         {
             let selected_agent = self.selected_agent.clone();
@@ -3613,13 +3694,19 @@ impl AgentPanel {
         !self.project.read(cx).repositories(cx).is_empty()
     }
 
+    fn is_active_view_creating_worktree(&self, _cx: &App) -> bool {
+        match &self.worktree_creation_status {
+            Some((view_id, WorktreeCreationStatus::Creating)) => {
+                self.active_conversation_view().map(|v| v.entity_id()) == Some(*view_id)
+            }
+            _ => false,
+        }
+    }
+
     fn render_start_thread_in_selector(&self, cx: &mut Context<Self>) -> impl IntoElement {
         let focus_handle = self.focus_handle(cx);
 
-        let is_creating = matches!(
-            self.worktree_creation_status,
-            Some(WorktreeCreationStatus::Creating)
-        );
+        let is_creating = self.is_active_view_creating_worktree(cx);
 
         let trigger_parts = self
             .start_thread_in
@@ -3672,10 +3759,7 @@ impl AgentPanel {
     }
 
     fn render_new_worktree_branch_selector(&self, cx: &mut Context<Self>) -> impl IntoElement {
-        let is_creating = matches!(
-            self.worktree_creation_status,
-            Some(WorktreeCreationStatus::Creating)
-        );
+        let is_creating = self.is_active_view_creating_worktree(cx);
 
         let project_ref = self.project.read(cx);
         let trigger_parts = self
@@ -4143,7 +4227,11 @@ impl AgentPanel {
     }
 
     fn render_worktree_creation_status(&self, cx: &mut Context<Self>) -> Option<AnyElement> {
-        let status = self.worktree_creation_status.as_ref()?;
+        let (view_id, status) = self.worktree_creation_status.as_ref()?;
+        let active_view_id = self.active_conversation_view().map(|v| v.entity_id());
+        if active_view_id != Some(*view_id) {
+            return None;
+        }
         match status {
             WorktreeCreationStatus::Creating => Some(
                 h_flex()
@@ -4683,10 +4771,11 @@ impl AgentPanel {
     ///
     /// This is a test-only helper for visual tests.
     pub fn worktree_creation_status_for_tests(&self) -> Option<&WorktreeCreationStatus> {
-        self.worktree_creation_status.as_ref()
+        self.worktree_creation_status.as_ref().map(|(_, s)| s)
     }
 
-    /// Sets the worktree creation status directly.
+    /// Sets the worktree creation status directly, associating it with the
+    /// currently active conversation view.
     ///
     /// This is a test-only helper for visual tests that need to show the
     /// "Creating worktree…" spinner or error banners.
@@ -4695,7 +4784,13 @@ impl AgentPanel {
         status: Option<WorktreeCreationStatus>,
         cx: &mut Context<Self>,
     ) {
-        self.worktree_creation_status = status;
+        self.worktree_creation_status = status.map(|s| {
+            let view_id = self
+                .active_conversation_view()
+                .map(|v| v.entity_id())
+                .unwrap_or_else(|| EntityId::from(0u64));
+            (view_id, s)
+        });
         cx.notify();
     }
 
@@ -4736,6 +4831,7 @@ mod tests {
     };
     use acp_thread::{StubAgentConnection, ThreadStatus};
     use agent_servers::CODEX_ID;
+    use feature_flags::FeatureFlagAppExt;
     use fs::FakeFs;
     use gpui::{TestAppContext, VisualTestContext};
     use project::Project;
@@ -4752,7 +4848,7 @@ mod tests {
             language_model::LanguageModelRegistry::test(cx);
         });
 
-        // --- Create a MultiWorkspace window with two workspaces ---
+        // Create a MultiWorkspace window with two workspaces.
         let fs = FakeFs::new(cx.executor());
         let project_a = Project::test(fs.clone(), [], cx).await;
         let project_b = Project::test(fs, [], cx).await;
@@ -4781,7 +4877,7 @@ mod tests {
 
         let cx = &mut VisualTestContext::from_window(multi_workspace.into(), cx);
 
-        // --- Set up workspace A: with an active thread ---
+        // Set up workspace A: with an active thread.
         let panel_a = workspace_a.update_in(cx, |workspace, window, cx| {
             cx.new(|cx| AgentPanel::new(workspace, None, window, cx))
         });
@@ -4807,7 +4903,7 @@ mod tests {
 
         let agent_type_a = panel_a.read_with(cx, |panel, _cx| panel.selected_agent.clone());
 
-        // --- Set up workspace B: ClaudeCode, no active thread ---
+        // Set up workspace B: ClaudeCode, no active thread.
         let panel_b = workspace_b.update_in(cx, |workspace, window, cx| {
             cx.new(|cx| AgentPanel::new(workspace, None, window, cx))
         });
@@ -4818,12 +4914,12 @@ mod tests {
             };
         });
 
-        // --- Serialize both panels ---
+        // Serialize both panels.
         panel_a.update(cx, |panel, cx| panel.serialize(cx));
         panel_b.update(cx, |panel, cx| panel.serialize(cx));
         cx.run_until_parked();
 
-        // --- Load fresh panels for each workspace and verify independent state ---
+        // Load fresh panels for each workspace and verify independent state.
         let async_cx = cx.update(|window, cx| window.to_async(cx));
         let loaded_a = AgentPanel::load(workspace_a.downgrade(), async_cx)
             .await
@@ -5942,7 +6038,8 @@ mod tests {
 
         // Simulate worktree creation in progress and reset to Uninitialized
         panel.update_in(cx, |panel, window, cx| {
-            panel.worktree_creation_status = Some(WorktreeCreationStatus::Creating);
+            panel.worktree_creation_status =
+                Some((EntityId::from(0u64), WorktreeCreationStatus::Creating));
             panel.active_view = ActiveView::Uninitialized;
             Panel::set_active(panel, true, window, cx);
             assert!(
@@ -6388,7 +6485,7 @@ mod tests {
                 let metadata = store
                     .entry(session_id)
                     .unwrap_or_else(|| panic!("{label} thread metadata should exist"));
-                metadata.folder_paths.clone()
+                metadata.folder_paths().clone()
             });
             let mut sorted = metadata_paths.ordered_paths().cloned().collect::<Vec<_>>();
             sorted.sort();
@@ -6637,4 +6734,499 @@ mod tests {
             );
         });
     }
+
+    #[gpui::test]
+    async fn test_rollback_all_succeed_returns_ok(cx: &mut TestAppContext) {
+        init_test(cx);
+        let fs = FakeFs::new(cx.executor());
+        cx.update(|cx| {
+            cx.update_flags(true, vec!["agent-v2".to_string()]);
+            agent::ThreadStore::init_global(cx);
+            language_model::LanguageModelRegistry::test(cx);
+            <dyn fs::Fs>::set_global(fs.clone(), cx);
+        });
+
+        fs.insert_tree(
+            "/project",
+            json!({
+                ".git": {},
+                "src": { "main.rs": "fn main() {}" }
+            }),
+        )
+        .await;
+
+        let project = Project::test(fs.clone(), [Path::new("/project")], cx).await;
+        cx.executor().run_until_parked();
+
+        let repository = project.read_with(cx, |project, cx| {
+            project.repositories(cx).values().next().unwrap().clone()
+        });
+
+        let multi_workspace =
+            cx.add_window(|window, cx| MultiWorkspace::test_new(project.clone(), window, cx));
+
+        let path_a = PathBuf::from("/worktrees/branch/project_a");
+        let path_b = PathBuf::from("/worktrees/branch/project_b");
+
+        let (sender_a, receiver_a) = futures::channel::oneshot::channel::<Result<()>>();
+        let (sender_b, receiver_b) = futures::channel::oneshot::channel::<Result<()>>();
+        sender_a.send(Ok(())).unwrap();
+        sender_b.send(Ok(())).unwrap();
+
+        let creation_infos = vec![
+            (repository.clone(), path_a.clone(), receiver_a),
+            (repository.clone(), path_b.clone(), receiver_b),
+        ];
+
+        let fs_clone = fs.clone();
+        let result = multi_workspace
+            .update(cx, |_, window, cx| {
+                window.spawn(cx, async move |cx| {
+                    AgentPanel::await_and_rollback_on_failure(creation_infos, fs_clone, cx).await
+                })
+            })
+            .unwrap()
+            .await;
+
+        let paths = result.expect("all succeed should return Ok");
+        assert_eq!(paths, vec![path_a, path_b]);
+    }
+
+    #[gpui::test]
+    async fn test_rollback_on_failure_attempts_all_worktrees(cx: &mut TestAppContext) {
+        init_test(cx);
+        let fs = FakeFs::new(cx.executor());
+        cx.update(|cx| {
+            cx.update_flags(true, vec!["agent-v2".to_string()]);
+            agent::ThreadStore::init_global(cx);
+            language_model::LanguageModelRegistry::test(cx);
+            <dyn fs::Fs>::set_global(fs.clone(), cx);
+        });
+
+        fs.insert_tree(
+            "/project",
+            json!({
+                ".git": {},
+                "src": { "main.rs": "fn main() {}" }
+            }),
+        )
+        .await;
+
+        let project = Project::test(fs.clone(), [Path::new("/project")], cx).await;
+        cx.executor().run_until_parked();
+
+        let repository = project.read_with(cx, |project, cx| {
+            project.repositories(cx).values().next().unwrap().clone()
+        });
+
+        // Actually create a worktree so it exists in FakeFs for rollback to find.
+        let success_path = PathBuf::from("/worktrees/branch/project");
+        cx.update(|cx| {
+            repository.update(cx, |repo, _| {
+                repo.create_worktree(
+                    git::repository::CreateWorktreeTarget::NewBranch {
+                        branch_name: "branch".to_string(),
+                        base_sha: None,
+                    },
+                    success_path.clone(),
+                )
+            })
+        })
+        .await
+        .unwrap()
+        .unwrap();
+        cx.executor().run_until_parked();
+
+        // Verify the worktree directory exists before rollback.
+        assert!(
+            fs.is_dir(&success_path).await,
+            "worktree directory should exist before rollback"
+        );
+
+        let multi_workspace =
+            cx.add_window(|window, cx| MultiWorkspace::test_new(project.clone(), window, cx));
+
+        // Build creation_infos: one success, one failure.
+        let failed_path = PathBuf::from("/worktrees/branch/failed_project");
+
+        let (sender_ok, receiver_ok) = futures::channel::oneshot::channel::<Result<()>>();
+        let (sender_err, receiver_err) = futures::channel::oneshot::channel::<Result<()>>();
+        sender_ok.send(Ok(())).unwrap();
+        sender_err
+            .send(Err(anyhow!("branch already exists")))
+            .unwrap();
+
+        let creation_infos = vec![
+            (repository.clone(), success_path.clone(), receiver_ok),
+            (repository.clone(), failed_path.clone(), receiver_err),
+        ];
+
+        let fs_clone = fs.clone();
+        let result = multi_workspace
+            .update(cx, |_, window, cx| {
+                window.spawn(cx, async move |cx| {
+                    AgentPanel::await_and_rollback_on_failure(creation_infos, fs_clone, cx).await
+                })
+            })
+            .unwrap()
+            .await;
+
+        assert!(
+            result.is_err(),
+            "should return error when any creation fails"
+        );
+        let err_msg = result.unwrap_err().to_string();
+        assert!(
+            err_msg.contains("branch already exists"),
+            "error should mention the original failure: {err_msg}"
+        );
+
+        // The successful worktree should have been rolled back by git.
+        cx.executor().run_until_parked();
+        assert!(
+            !fs.is_dir(&success_path).await,
+            "successful worktree directory should be removed by rollback"
+        );
+    }
+
+    #[gpui::test]
+    async fn test_rollback_on_canceled_receiver(cx: &mut TestAppContext) {
+        init_test(cx);
+        let fs = FakeFs::new(cx.executor());
+        cx.update(|cx| {
+            cx.update_flags(true, vec!["agent-v2".to_string()]);
+            agent::ThreadStore::init_global(cx);
+            language_model::LanguageModelRegistry::test(cx);
+            <dyn fs::Fs>::set_global(fs.clone(), cx);
+        });
+
+        fs.insert_tree(
+            "/project",
+            json!({
+                ".git": {},
+                "src": { "main.rs": "fn main() {}" }
+            }),
+        )
+        .await;
+
+        let project = Project::test(fs.clone(), [Path::new("/project")], cx).await;
+        cx.executor().run_until_parked();
+
+        let repository = project.read_with(cx, |project, cx| {
+            project.repositories(cx).values().next().unwrap().clone()
+        });
+
+        let multi_workspace =
+            cx.add_window(|window, cx| MultiWorkspace::test_new(project.clone(), window, cx));
+
+        let path = PathBuf::from("/worktrees/branch/project");
+
+        // Drop the sender to simulate a canceled receiver.
+        let (_sender, receiver) = futures::channel::oneshot::channel::<Result<()>>();
+        drop(_sender);
+
+        let creation_infos = vec![(repository.clone(), path.clone(), receiver)];
+
+        let fs_clone = fs.clone();
+        let result = multi_workspace
+            .update(cx, |_, window, cx| {
+                window.spawn(cx, async move |cx| {
+                    AgentPanel::await_and_rollback_on_failure(creation_infos, fs_clone, cx).await
+                })
+            })
+            .unwrap()
+            .await;
+
+        assert!(
+            result.is_err(),
+            "should return error when receiver is canceled"
+        );
+        let err_msg = result.unwrap_err().to_string();
+        assert!(
+            err_msg.contains("canceled"),
+            "error should mention cancellation: {err_msg}"
+        );
+    }
+
+    #[gpui::test]
+    async fn test_rollback_cleans_up_orphan_directories(cx: &mut TestAppContext) {
+        init_test(cx);
+        let fs = FakeFs::new(cx.executor());
+        cx.update(|cx| {
+            cx.update_flags(true, vec!["agent-v2".to_string()]);
+            agent::ThreadStore::init_global(cx);
+            language_model::LanguageModelRegistry::test(cx);
+            <dyn fs::Fs>::set_global(fs.clone(), cx);
+        });
+
+        fs.insert_tree(
+            "/project",
+            json!({
+                ".git": {},
+                "src": { "main.rs": "fn main() {}" }
+            }),
+        )
+        .await;
+
+        let project = Project::test(fs.clone(), [Path::new("/project")], cx).await;
+        cx.executor().run_until_parked();
+
+        let repository = project.read_with(cx, |project, cx| {
+            project.repositories(cx).values().next().unwrap().clone()
+        });
+
+        let multi_workspace =
+            cx.add_window(|window, cx| MultiWorkspace::test_new(project.clone(), window, cx));
+
+        // Simulate the orphan state: create_dir_all was called but git
+        // worktree add failed, leaving a directory with leftover files.
+        let orphan_path = PathBuf::from("/worktrees/branch/orphan_project");
+        fs.insert_tree(
+            "/worktrees/branch/orphan_project",
+            json!({ "leftover.txt": "junk" }),
+        )
+        .await;
+
+        assert!(
+            fs.is_dir(&orphan_path).await,
+            "orphan dir should exist before rollback"
+        );
+
+        let (sender, receiver) = futures::channel::oneshot::channel::<Result<()>>();
+        sender.send(Err(anyhow!("hook failed"))).unwrap();
+
+        let creation_infos = vec![(repository.clone(), orphan_path.clone(), receiver)];
+
+        let fs_clone = fs.clone();
+        let result = multi_workspace
+            .update(cx, |_, window, cx| {
+                window.spawn(cx, async move |cx| {
+                    AgentPanel::await_and_rollback_on_failure(creation_infos, fs_clone, cx).await
+                })
+            })
+            .unwrap()
+            .await;
+
+        cx.executor().run_until_parked();
+
+        assert!(result.is_err());
+        assert!(
+            !fs.is_dir(&orphan_path).await,
+            "orphan worktree directory should be removed by filesystem cleanup"
+        );
+    }
+
+    #[gpui::test]
+    async fn test_worktree_creation_for_remote_project(
+        cx: &mut TestAppContext,
+        server_cx: &mut TestAppContext,
+    ) {
+        init_test(cx);
+
+        let app_state = cx.update(|cx| {
+            agent::ThreadStore::init_global(cx);
+            language_model::LanguageModelRegistry::test(cx);
+
+            let app_state = workspace::AppState::test(cx);
+            workspace::init(app_state.clone(), cx);
+            app_state
+        });
+
+        server_cx.update(|cx| {
+            release_channel::init(semver::Version::new(0, 0, 0), cx);
+        });
+
+        // Set up the remote server side with a git repo.
+        let server_fs = FakeFs::new(server_cx.executor());
+        server_fs
+            .insert_tree(
+                "/project",
+                json!({
+                    ".git": {},
+                    "src": {
+                        "main.rs": "fn main() {}"
+                    }
+                }),
+            )
+            .await;
+        server_fs.set_branch_name(Path::new("/project/.git"), Some("main"));
+
+        // Create a mock remote connection.
+        let (opts, server_session, _) = remote::RemoteClient::fake_server(cx, server_cx);
+
+        server_cx.update(remote_server::HeadlessProject::init);
+        let server_executor = server_cx.executor();
+        let _headless = server_cx.new(|cx| {
+            remote_server::HeadlessProject::new(
+                remote_server::HeadlessAppState {
+                    session: server_session,
+                    fs: server_fs.clone(),
+                    http_client: Arc::new(http_client::BlockedHttpClient),
+                    node_runtime: node_runtime::NodeRuntime::unavailable(),
+                    languages: Arc::new(language::LanguageRegistry::new(server_executor.clone())),
+                    extension_host_proxy: Arc::new(extension::ExtensionHostProxy::new()),
+                    startup_time: Instant::now(),
+                },
+                false,
+                cx,
+            )
+        });
+
+        // Connect the client side and build a remote project.
+        // Use a separate Client to avoid double-registering proto handlers
+        // (Workspace::test_new creates its own WorkspaceStore from the
+        // project's client).
+        let remote_client = remote::RemoteClient::connect_mock(opts, cx).await;
+        let project = cx.update(|cx| {
+            let project_client = client::Client::new(
+                Arc::new(clock::FakeSystemClock::new()),
+                http_client::FakeHttpClient::with_404_response(),
+                cx,
+            );
+            let user_store = cx.new(|cx| client::UserStore::new(project_client.clone(), cx));
+            project::Project::remote(
+                remote_client,
+                project_client,
+                node_runtime::NodeRuntime::unavailable(),
+                user_store,
+                app_state.languages.clone(),
+                app_state.fs.clone(),
+                false,
+                cx,
+            )
+        });
+
+        // Open the remote path as a worktree in the project.
+        let worktree_path = Path::new("/project");
+        project
+            .update(cx, |project, cx| {
+                project.find_or_create_worktree(worktree_path, true, cx)
+            })
+            .await
+            .expect("should be able to open remote worktree");
+        cx.run_until_parked();
+
+        // Verify the project is indeed remote.
+        project.read_with(cx, |project, cx| {
+            assert!(!project.is_local(), "project should be remote, not local");
+            assert!(
+                project.remote_connection_options(cx).is_some(),
+                "project should have remote connection options"
+            );
+        });
+
+        // Create the workspace and agent panel.
+        let multi_workspace =
+            cx.add_window(|window, cx| MultiWorkspace::test_new(project.clone(), window, cx));
+        multi_workspace
+            .update(cx, |multi_workspace, _, cx| {
+                multi_workspace.open_sidebar(cx);
+            })
+            .unwrap();
+
+        let workspace = multi_workspace
+            .read_with(cx, |mw, _cx| mw.workspace().clone())
+            .unwrap();
+
+        workspace.update(cx, |workspace, _cx| {
+            workspace.set_random_database_id();
+        });
+
+        // Register a callback so new workspaces also get an AgentPanel.
+        cx.update(|cx| {
+            cx.observe_new(
+                |workspace: &mut Workspace,
+                 window: Option<&mut Window>,
+                 cx: &mut Context<Workspace>| {
+                    if let Some(window) = window {
+                        let panel = cx.new(|cx| AgentPanel::new(workspace, None, window, cx));
+                        workspace.add_panel(panel, window, cx);
+                    }
+                },
+            )
+            .detach();
+        });
+
+        let cx = &mut VisualTestContext::from_window(multi_workspace.into(), cx);
+        cx.run_until_parked();
+
+        let panel = workspace.update_in(cx, |workspace, window, cx| {
+            let panel = cx.new(|cx| AgentPanel::new(workspace, None, window, cx));
+            workspace.add_panel(panel.clone(), window, cx);
+            panel
+        });
+
+        cx.run_until_parked();
+
+        // Open a thread.
+        panel.update_in(cx, |panel, window, cx| {
+            panel.open_external_thread_with_server(
+                Rc::new(StubAgentServer::default_response()),
+                window,
+                cx,
+            );
+        });
+        cx.run_until_parked();
+
+        // Set start_thread_in to LinkedWorktree to bypass git worktree
+        // creation and directly test workspace opening for a known path.
+        let linked_path = PathBuf::from("/project");
+        panel.update_in(cx, |panel, window, cx| {
+            panel.set_start_thread_in(
+                &StartThreadIn::LinkedWorktree {
+                    path: linked_path.clone(),
+                    display_name: "project".to_string(),
+                },
+                window,
+                cx,
+            );
+        });
+
+        // Trigger worktree creation.
+        let content = vec![acp::ContentBlock::Text(acp::TextContent::new(
+            "Hello from remote test",
+        ))];
+        panel.update_in(cx, |panel, window, cx| {
+            panel.handle_worktree_requested(
+                content,
+                WorktreeCreationArgs::Linked {
+                    worktree_path: linked_path,
+                },
+                window,
+                cx,
+            );
+        });
+
+        // The refactored code uses `find_or_create_workspace`, which
+        // finds the existing remote workspace (matching paths + host)
+        // and reuses it instead of creating a new connection.
+        cx.run_until_parked();
+
+        // The task should have completed: the existing workspace was
+        // found and reused.
+        panel.read_with(cx, |panel, _cx| {
+            assert!(
+                panel.worktree_creation_status.is_none(),
+                "worktree creation should have completed, but status is: {:?}",
+                panel.worktree_creation_status
+            );
+        });
+
+        // The existing remote workspace was reused — no new workspace
+        // should have been created.
+        multi_workspace
+            .read_with(cx, |multi_workspace, cx| {
+                let project = workspace.read(cx).project().clone();
+                assert!(
+                    !project.read(cx).is_local(),
+                    "workspace project should still be remote, not local"
+                );
+                assert_eq!(
+                    multi_workspace.workspaces().count(),
+                    1,
+                    "existing remote workspace should be reused, not a new one created"
+                );
+            })
+            .unwrap();
+    }
 }

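The rollback test above exercises a recurring pattern: perform a filesystem side effect, await a fallible hook, and remove the directory again if the hook reports failure so no orphan is left behind. A minimal synchronous sketch of that pattern using only the standard library (the function name and error type are illustrative, not Zed's API):

```rust
use std::fs;
use std::path::Path;

// Create a directory, run a fallible "hook", and roll the directory back
// if the hook fails -- mirroring the orphan-worktree cleanup the test
// above asserts on.
fn create_with_rollback(
    dir: &Path,
    hook: impl FnOnce() -> Result<(), String>,
) -> Result<(), String> {
    fs::create_dir_all(dir).map_err(|e| e.to_string())?;
    match hook() {
        Ok(()) => Ok(()),
        Err(err) => {
            // Best-effort rollback: the orphan directory must not linger.
            let _ = fs::remove_dir_all(dir);
            Err(err)
        }
    }
}

fn main() {
    let dir = std::env::temp_dir().join("orphan_worktree_demo");
    let result = create_with_rollback(&dir, || Err("hook failed".to_string()));
    assert!(result.is_err());
    assert!(!dir.exists(), "orphan dir should be removed on failure");
}
```

The real code does this asynchronously over a oneshot receiver per created worktree, but the invariant is the same: a failed creation must leave no directory behind.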
crates/agent_ui/src/agent_ui.rs

@@ -33,6 +33,7 @@ mod thread_history;
 mod thread_history_view;
 mod thread_import;
 pub mod thread_metadata_store;
+pub mod thread_worktree_archive;
 mod thread_worktree_picker;
 pub mod threads_archive_view;
 mod ui;

crates/agent_ui/src/conversation_view.rs

@@ -1,12 +1,15 @@
 use acp_thread::{
     AcpThread, AcpThreadEvent, AgentSessionInfo, AgentThreadEntry, AssistantMessage,
-    AssistantMessageChunk, AuthRequired, LoadError, MentionUri, PermissionOptionChoice,
-    PermissionOptions, PermissionPattern, RetryStatus, SelectedPermissionOutcome, ThreadStatus,
-    ToolCall, ToolCallContent, ToolCallStatus, UserMessageId,
+    AssistantMessageChunk, AuthRequired, LoadError, MaxOutputTokensError, MentionUri,
+    PermissionOptionChoice, PermissionOptions, PermissionPattern, RetryStatus,
+    SelectedPermissionOutcome, ThreadStatus, ToolCall, ToolCallContent, ToolCallStatus,
+    UserMessageId,
 };
 use acp_thread::{AgentConnection, Plan};
 use action_log::{ActionLog, ActionLogTelemetry, DiffStats};
-use agent::{NativeAgentServer, NativeAgentSessionList, SharedThread, ThreadStore};
+use agent::{
+    NativeAgentServer, NativeAgentSessionList, NoModelConfiguredError, SharedThread, ThreadStore,
+};
 use agent_client_protocol as acp;
 #[cfg(test)]
 use agent_servers::AgentServerDelegate;
@@ -34,7 +37,7 @@ use gpui::{
     list, point, pulsating_between,
 };
 use language::Buffer;
-use language_model::LanguageModelRegistry;
+use language_model::{LanguageModelCompletionError, LanguageModelRegistry};
 use markdown::{Markdown, MarkdownElement, MarkdownFont, MarkdownStyle};
 use parking_lot::RwLock;
 use project::{AgentId, AgentServerStore, Project, ProjectEntryId};
@@ -78,7 +81,7 @@ use crate::agent_diff::AgentDiff;
 use crate::entry_view_state::{EntryViewEvent, ViewEvent};
 use crate::message_editor::{MessageEditor, MessageEditorEvent};
 use crate::profile_selector::{ProfileProvider, ProfileSelector};
-use crate::thread_metadata_store::ThreadMetadataStore;
+
 use crate::ui::{AgentNotification, AgentNotificationEvent};
 use crate::{
     Agent, AgentDiffPane, AgentInitialContent, AgentPanel, AllowAlways, AllowOnce,
@@ -113,6 +116,31 @@ pub(crate) enum ThreadError {
     PaymentRequired,
     Refusal,
     AuthenticationRequired(SharedString),
+    RateLimitExceeded {
+        provider: SharedString,
+    },
+    ServerOverloaded {
+        provider: SharedString,
+    },
+    PromptTooLarge,
+    NoApiKey {
+        provider: SharedString,
+    },
+    StreamError {
+        provider: SharedString,
+    },
+    InvalidApiKey {
+        provider: SharedString,
+    },
+    PermissionDenied {
+        provider: SharedString,
+    },
+    RequestFailed,
+    MaxOutputTokens,
+    NoModelSelected,
+    ApiError {
+        provider: SharedString,
+    },
     Other {
         message: SharedString,
         acp_error_code: Option<SharedString>,
@@ -121,12 +149,57 @@ pub(crate) enum ThreadError {
 
 impl From<anyhow::Error> for ThreadError {
     fn from(error: anyhow::Error) -> Self {
-        if error.is::<language_model::PaymentRequiredError>() {
+        if error.is::<MaxOutputTokensError>() {
+            Self::MaxOutputTokens
+        } else if error.is::<NoModelConfiguredError>() {
+            Self::NoModelSelected
+        } else if error.is::<language_model::PaymentRequiredError>() {
             Self::PaymentRequired
         } else if let Some(acp_error) = error.downcast_ref::<acp::Error>()
             && acp_error.code == acp::ErrorCode::AuthRequired
         {
             Self::AuthenticationRequired(acp_error.message.clone().into())
+        } else if let Some(lm_error) = error.downcast_ref::<LanguageModelCompletionError>() {
+            use LanguageModelCompletionError::*;
+            match lm_error {
+                RateLimitExceeded { provider, .. } => Self::RateLimitExceeded {
+                    provider: provider.to_string().into(),
+                },
+                ServerOverloaded { provider, .. } | ApiInternalServerError { provider, .. } => {
+                    Self::ServerOverloaded {
+                        provider: provider.to_string().into(),
+                    }
+                }
+                PromptTooLarge { .. } => Self::PromptTooLarge,
+                NoApiKey { provider } => Self::NoApiKey {
+                    provider: provider.to_string().into(),
+                },
+                StreamEndedUnexpectedly { provider }
+                | ApiReadResponseError { provider, .. }
+                | DeserializeResponse { provider, .. }
+                | HttpSend { provider, .. } => Self::StreamError {
+                    provider: provider.to_string().into(),
+                },
+                AuthenticationError { provider, .. } => Self::InvalidApiKey {
+                    provider: provider.to_string().into(),
+                },
+                PermissionError { provider, .. } => Self::PermissionDenied {
+                    provider: provider.to_string().into(),
+                },
+                UpstreamProviderError { .. } => Self::RequestFailed,
+                BadRequestFormat { provider, .. }
+                | HttpResponseError { provider, .. }
+                | ApiEndpointNotFound { provider } => Self::ApiError {
+                    provider: provider.to_string().into(),
+                },
+                _ => {
+                    let message: SharedString = format!("{:#}", error).into();
+                    Self::Other {
+                        message,
+                        acp_error_code: None,
+                    }
+                }
+            }
         } else {
             let message: SharedString = format!("{:#}", error).into();
 
@@ -354,6 +427,20 @@ impl ConversationView {
             .pending_tool_call(id, cx)
     }
 
+    pub fn root_thread_has_pending_tool_call(&self, cx: &App) -> bool {
+        let Some(root_thread) = self.root_thread(cx) else {
+            return false;
+        };
+        let root_id = root_thread.read(cx).id.clone();
+        self.as_connected().is_some_and(|connected| {
+            connected
+                .conversation
+                .read(cx)
+                .pending_tool_call(&root_id, cx)
+                .is_some()
+        })
+    }
+
     pub fn root_thread(&self, cx: &App) -> Option<Entity<ThreadView>> {
         match &self.server_state {
             ServerState::Connected(connected) => {
@@ -1510,24 +1597,30 @@ impl ConversationView {
 
         let agent_telemetry_id = connection.telemetry_id();
 
-        if let Some(login) = connection.terminal_auth_task(&method, cx) {
+        if let Some(login_task) = connection.terminal_auth_task(&method, cx) {
             configuration_view.take();
             pending_auth_method.replace(method.clone());
 
             let project = self.project.clone();
-            let authenticate = Self::spawn_external_agent_login(
-                login,
-                workspace,
-                project,
-                method.clone(),
-                false,
-                window,
-                cx,
-            );
             cx.notify();
             self.auth_task = Some(cx.spawn_in(window, {
                 async move |this, cx| {
-                    let result = authenticate.await;
+                    let result = async {
+                        let login = login_task.await?;
+                        this.update_in(cx, |_this, window, cx| {
+                            Self::spawn_external_agent_login(
+                                login,
+                                workspace,
+                                project,
+                                method.clone(),
+                                false,
+                                window,
+                                cx,
+                            )
+                        })?
+                        .await
+                    }
+                    .await;
 
                     match &result {
                         Ok(_) => telemetry::event!(
@@ -2628,22 +2721,6 @@ impl ConversationView {
     pub fn history(&self) -> Option<&Entity<ThreadHistory>> {
         self.as_connected().and_then(|c| c.history.as_ref())
     }
-
-    pub fn delete_history_entry(&mut self, session_id: &acp::SessionId, cx: &mut Context<Self>) {
-        let Some(connected) = self.as_connected() else {
-            return;
-        };
-
-        let Some(history) = &connected.history else {
-            return;
-        };
-        let task = history.update(cx, |history, cx| history.delete_session(&session_id, cx));
-        task.detach_and_log_err(cx);
-
-        if let Some(store) = ThreadMetadataStore::try_global(cx) {
-            store.update(cx, |store, cx| store.delete(session_id.clone(), cx));
-        }
-    }
 }
 
 fn loading_contents_spinner(size: IconSize) -> AnyElement {
@@ -2813,6 +2890,7 @@ pub(crate) mod tests {
     use workspace::{Item, MultiWorkspace};
 
     use crate::agent_panel;
+    use crate::thread_metadata_store::ThreadMetadataStore;
 
     use super::*;
 
@@ -6620,19 +6698,11 @@ pub(crate) mod tests {
         conversation_view.read_with(cx, |conversation_view, cx| {
             let state = conversation_view.active_thread().unwrap();
             let error = &state.read(cx).thread_error;
-            match error {
-                Some(ThreadError::Other { message, .. }) => {
-                    assert!(
-                        message.contains("Maximum tokens reached"),
-                        "Expected 'Maximum tokens reached' error, got: {}",
-                        message
-                    );
-                }
-                other => panic!(
-                    "Expected ThreadError::Other with 'Maximum tokens reached', got: {:?}",
-                    other.is_some()
-                ),
-            }
+            assert!(
+                matches!(error, Some(ThreadError::MaxOutputTokens)),
+                "Expected ThreadError::MaxOutputTokens, got: {:?}",
+                error.is_some()
+            );
         });
     }
 

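The new `From<anyhow::Error> for ThreadError` impl in this file relies on downcasting a type-erased error back to a concrete type and matching on it, falling through to a formatted `Other` message when nothing matches. A std-only sketch of that dispatch (the error and enum names here are illustrative stand-ins, not Zed's types):

```rust
use std::error::Error;
use std::fmt;

// Stand-in for a concrete error type carried inside a boxed dyn Error.
#[derive(Debug)]
struct RateLimited {
    provider: String,
}

impl fmt::Display for RateLimited {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{} rate limit exceeded", self.provider)
    }
}

impl Error for RateLimited {}

#[derive(Debug, PartialEq)]
enum ThreadError {
    RateLimitExceeded { provider: String },
    Other { message: String },
}

// Downcast-then-match dispatch: recover the concrete type if it matches,
// otherwise fall back to the error's display message.
fn classify(error: &(dyn Error + 'static)) -> ThreadError {
    if let Some(rl) = error.downcast_ref::<RateLimited>() {
        ThreadError::RateLimitExceeded {
            provider: rl.provider.clone(),
        }
    } else {
        ThreadError::Other {
            message: error.to_string(),
        }
    }
}

fn main() {
    let err: Box<dyn Error> = Box::new(RateLimited {
        provider: "SomeProvider".into(),
    });
    let classified = classify(err.as_ref());
    assert_eq!(
        classified,
        ThreadError::RateLimitExceeded {
            provider: "SomeProvider".into()
        }
    );
}
```

The patch does the same thing with `anyhow::Error::downcast_ref::<LanguageModelCompletionError>()`, then matches the concrete variant to pick a user-facing category instead of showing one generic message.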
crates/agent_ui/src/conversation_view/thread_view.rs

@@ -330,6 +330,7 @@ pub struct ThreadView {
     pub hovered_recent_history_item: Option<usize>,
     pub show_external_source_prompt_warning: bool,
     pub show_codex_windows_warning: bool,
+    pub multi_root_callout_dismissed: bool,
     pub generating_indicator_in_list: bool,
     pub history: Option<Entity<ThreadHistory>>,
     pub _history_subscription: Option<Subscription>,
@@ -573,6 +574,7 @@ impl ThreadView {
             history,
             _history_subscription: history_subscription,
             show_codex_windows_warning,
+            multi_root_callout_dismissed: false,
             generating_indicator_in_list: false,
         };
 
@@ -1259,6 +1261,62 @@ impl ThreadView {
                 ThreadError::AuthenticationRequired(message) => {
                     ("authentication_required", None, message.clone())
                 }
+                ThreadError::RateLimitExceeded { provider } => (
+                    "rate_limit_exceeded",
+                    None,
+                    format!("{provider}'s rate limit was reached.").into(),
+                ),
+                ThreadError::ServerOverloaded { provider } => (
+                    "server_overloaded",
+                    None,
+                    format!("{provider}'s servers are temporarily unavailable.").into(),
+                ),
+                ThreadError::PromptTooLarge => (
+                    "prompt_too_large",
+                    None,
+                    "Context too large for the model's context window.".into(),
+                ),
+                ThreadError::NoApiKey { provider } => (
+                    "no_api_key",
+                    None,
+                    format!("No API key configured for {provider}.").into(),
+                ),
+                ThreadError::StreamError { provider } => (
+                    "stream_error",
+                    None,
+                    format!("Connection to {provider}'s API was interrupted.").into(),
+                ),
+                ThreadError::InvalidApiKey { provider } => (
+                    "invalid_api_key",
+                    None,
+                    format!("Invalid or expired API key for {provider}.").into(),
+                ),
+                ThreadError::PermissionDenied { provider } => (
+                    "permission_denied",
+                    None,
+                    format!(
+                        "{provider}'s API rejected the request due to insufficient permissions."
+                    )
+                    .into(),
+                ),
+                ThreadError::RequestFailed => (
+                    "request_failed",
+                    None,
+                    "Request could not be completed after multiple attempts.".into(),
+                ),
+                ThreadError::MaxOutputTokens => (
+                    "max_output_tokens",
+                    None,
+                    "Model reached its maximum output length.".into(),
+                ),
+                ThreadError::NoModelSelected => {
+                    ("no_model_selected", None, "No model selected.".into())
+                }
+                ThreadError::ApiError { provider } => (
+                    "api_error",
+                    None,
+                    format!("{provider}'s API returned an unexpected error.").into(),
+                ),
                 ThreadError::Other {
                     acp_error_code,
                     message,
@@ -4331,17 +4389,27 @@ impl Render for TokenUsageTooltip {
 
 impl ThreadView {
     fn render_entries(&mut self, cx: &mut Context<Self>) -> List {
+        let max_content_width = AgentSettings::get_global(cx).max_content_width;
+        let centered_container = move |content: AnyElement| {
+            h_flex()
+                .w_full()
+                .justify_center()
+                .child(div().max_w(max_content_width).w_full().child(content))
+        };
+
         list(
             self.list_state.clone(),
             cx.processor(move |this, index: usize, window, cx| {
                 let entries = this.thread.read(cx).entries();
                 if let Some(entry) = entries.get(index) {
-                    this.render_entry(index, entries.len(), entry, window, cx)
+                    let rendered = this.render_entry(index, entries.len(), entry, window, cx);
+                    centered_container(rendered.into_any_element()).into_any_element()
                 } else if this.generating_indicator_in_list {
                     let confirmation = entries
                         .last()
                         .is_some_and(|entry| Self::is_waiting_for_confirmation(entry));
-                    this.render_generating(confirmation, cx).into_any_element()
+                    let rendered = this.render_generating(confirmation, cx);
+                    centered_container(rendered.into_any_element()).into_any_element()
                 } else {
                     Empty.into_any()
                 }
@@ -8076,6 +8144,109 @@ impl ThreadView {
                 self.render_authentication_required_error(error.clone(), cx)
             }
             ThreadError::PaymentRequired => self.render_payment_required_error(cx),
+            ThreadError::RateLimitExceeded { provider } => self.render_error_callout(
+                "Rate Limit Reached",
+                format!(
+                    "{provider}'s rate limit was reached. Zed will retry automatically. \
+                    You can also wait a moment and try again."
+                )
+                .into(),
+                true,
+                true,
+                cx,
+            ),
+            ThreadError::ServerOverloaded { provider } => self.render_error_callout(
+                "Provider Unavailable",
+                format!(
+                    "{provider}'s servers are temporarily unavailable. Zed will retry \
+                    automatically. If the problem persists, check the provider's status page."
+                )
+                .into(),
+                true,
+                true,
+                cx,
+            ),
+            ThreadError::PromptTooLarge => self.render_prompt_too_large_error(cx),
+            ThreadError::NoApiKey { provider } => self.render_error_callout(
+                "API Key Missing",
+                format!(
+                    "No API key is configured for {provider}. \
+                    Add your key via the Agent Panel settings to continue."
+                )
+                .into(),
+                false,
+                true,
+                cx,
+            ),
+            ThreadError::StreamError { provider } => self.render_error_callout(
+                "Connection Interrupted",
+                format!(
+                    "The connection to {provider}'s API was interrupted. Zed will retry \
+                    automatically. If the problem persists, check your network connection."
+                )
+                .into(),
+                true,
+                true,
+                cx,
+            ),
+            ThreadError::InvalidApiKey { provider } => self.render_error_callout(
+                "Invalid API Key",
+                format!(
+                    "The API key for {provider} is invalid or has expired. \
+                    Update your key via the Agent Panel settings to continue."
+                )
+                .into(),
+                false,
+                false,
+                cx,
+            ),
+            ThreadError::PermissionDenied { provider } => self.render_error_callout(
+                "Permission Denied",
+                format!(
+                    "{provider}'s API rejected the request due to insufficient permissions. \
+                    Check that your API key has access to this model."
+                )
+                .into(),
+                false,
+                false,
+                cx,
+            ),
+            ThreadError::RequestFailed => self.render_error_callout(
+                "Request Failed",
+                "The request could not be completed after multiple attempts. \
+                Try again in a moment."
+                    .into(),
+                true,
+                false,
+                cx,
+            ),
+            ThreadError::MaxOutputTokens => self.render_error_callout(
+                "Output Limit Reached",
+                "The model stopped because it reached its maximum output length. \
+                You can ask it to continue where it left off."
+                    .into(),
+                false,
+                false,
+                cx,
+            ),
+            ThreadError::NoModelSelected => self.render_error_callout(
+                "No Model Selected",
+                "Select a model from the model picker below to get started.".into(),
+                false,
+                false,
+                cx,
+            ),
+            ThreadError::ApiError { provider } => self.render_error_callout(
+                "API Error",
+                format!(
+                    "{provider}'s API returned an unexpected error. \
+                    If the problem persists, try switching models or restarting Zed."
+                )
+                .into(),
+                true,
+                true,
+                cx,
+            ),
         };
 
         Some(div().child(content))
@@ -8136,6 +8307,72 @@ impl ThreadView {
             .dismiss_action(self.dismiss_error_button(cx))
     }
 
+    fn render_error_callout(
+        &self,
+        title: &'static str,
+        message: SharedString,
+        show_retry: bool,
+        show_copy: bool,
+        cx: &mut Context<Self>,
+    ) -> Callout {
+        let can_resume = show_retry && self.thread.read(cx).can_retry(cx);
+        let show_actions = can_resume || show_copy;
+
+        Callout::new()
+            .severity(Severity::Error)
+            .icon(IconName::XCircle)
+            .title(title)
+            .description(message.clone())
+            .when(show_actions, |callout| {
+                callout.actions_slot(
+                    h_flex()
+                        .gap_0p5()
+                        .when(can_resume, |this| this.child(self.retry_button(cx)))
+                        .when(show_copy, |this| {
+                            this.child(self.create_copy_button(message.clone()))
+                        }),
+                )
+            })
+            .dismiss_action(self.dismiss_error_button(cx))
+    }
+
+    fn render_prompt_too_large_error(&self, cx: &mut Context<Self>) -> Callout {
+        const MESSAGE: &str = "This conversation is too long for the model's context window. \
+            Start a new thread or remove some attached files to continue.";
+
+        Callout::new()
+            .severity(Severity::Error)
+            .icon(IconName::XCircle)
+            .title("Context Too Large")
+            .description(MESSAGE)
+            .actions_slot(
+                h_flex()
+                    .gap_0p5()
+                    .child(self.new_thread_button(cx))
+                    .child(self.create_copy_button(MESSAGE)),
+            )
+            .dismiss_action(self.dismiss_error_button(cx))
+    }
+
+    fn retry_button(&self, cx: &mut Context<Self>) -> impl IntoElement {
+        Button::new("retry", "Retry")
+            .label_size(LabelSize::Small)
+            .style(ButtonStyle::Filled)
+            .on_click(cx.listener(|this, _, _, cx| {
+                this.retry_generation(cx);
+            }))
+    }
+
+    fn new_thread_button(&self, cx: &mut Context<Self>) -> impl IntoElement {
+        Button::new("new_thread", "New Thread")
+            .label_size(LabelSize::Small)
+            .style(ButtonStyle::Filled)
+            .on_click(cx.listener(|this, _, window, cx| {
+                this.clear_thread_error(cx);
+                window.dispatch_action(NewThread.boxed_clone(), cx);
+            }))
+    }
+
     fn upgrade_button(&self, cx: &mut Context<Self>) -> impl IntoElement {
         Button::new("upgrade", "Upgrade")
             .label_size(LabelSize::Small)
@@ -8338,6 +8575,53 @@ impl ThreadView {
             )
     }
 
+    fn render_multi_root_callout(&self, cx: &mut Context<Self>) -> Option<Callout> {
+        if self.multi_root_callout_dismissed {
+            return None;
+        }
+
+        if self.as_native_connection(cx).is_some() {
+            return None;
+        }
+
+        let project = self.project.upgrade()?;
+        let worktree_count = project.read(cx).visible_worktrees(cx).count();
+        if worktree_count <= 1 {
+            return None;
+        }
+
+        let work_dirs = self.thread.read(cx).work_dirs()?;
+        let active_dir = work_dirs
+            .ordered_paths()
+            .next()
+            .and_then(|p| p.file_name())
+            .map(|name| name.to_string_lossy().to_string())
+            .unwrap_or_else(|| "one folder".to_string());
+
+        let description = format!(
+            "This agent only operates on \"{}\". Other folders in this workspace are not accessible to it.",
+            active_dir
+        );
+
+        Some(
+            Callout::new()
+                .severity(Severity::Warning)
+                .icon(IconName::Warning)
+                .title("External Agents currently don't support multi-root workspaces")
+                .description(description)
+                .border_position(ui::BorderPosition::Bottom)
+                .dismiss_action(
+                    IconButton::new("dismiss-multi-root-callout", IconName::Close)
+                        .icon_size(IconSize::Small)
+                        .tooltip(Tooltip::text("Dismiss"))
+                        .on_click(cx.listener(|this, _, _, cx| {
+                            this.multi_root_callout_dismissed = true;
+                            cx.notify();
+                        })),
+                ),
+        )
+    }
+
     fn render_new_version_callout(&self, version: &SharedString, cx: &mut Context<Self>) -> Div {
         let server_view = self.server_view.clone();
         let has_version = !version.is_empty();
@@ -8467,13 +8751,20 @@ impl ThreadView {
             return;
         };
         thread.update(cx, |thread, cx| {
-            thread.set_speed(
-                thread
-                    .speed()
-                    .map(|speed| speed.toggle())
-                    .unwrap_or(Speed::Fast),
-                cx,
-            );
+            let new_speed = thread
+                .speed()
+                .map(|speed| speed.toggle())
+                .unwrap_or(Speed::Fast);
+            thread.set_speed(new_speed, cx);
+
+            let fs = thread.project().read(cx).fs().clone();
+            update_settings_file(fs, cx, move |settings, _| {
+                if let Some(agent) = settings.agent.as_mut()
+                    && let Some(default_model) = agent.default_model.as_mut()
+                {
+                    default_model.speed = Some(new_speed);
+                }
+            });
         });
     }
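The change above computes the toggled speed once, applies it to the live thread, and then writes the same value through the settings file so the two cannot drift apart. A standalone sketch of that compute-once/apply-twice shape, with a stand-in `Speed` enum (the diff only names the `Fast` variant; the `Slow` variant here is an assumption):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum Speed {
    Fast,
    Slow, // assumed second variant; the diff only names `Fast`
}

impl Speed {
    fn toggle(self) -> Speed {
        match self {
            Speed::Fast => Speed::Slow,
            Speed::Slow => Speed::Fast,
        }
    }
}

struct Thread {
    speed: Option<Speed>,
}

/// Compute the new speed once, then apply it to both the live thread and
/// the persisted settings so in-memory and on-disk state stay in sync.
fn toggle_speed(thread: &mut Thread, persisted: &mut Option<Speed>) -> Speed {
    let new_speed = thread.speed.map(Speed::toggle).unwrap_or(Speed::Fast);
    thread.speed = Some(new_speed);
    *persisted = Some(new_speed);
    new_speed
}

fn main() {
    let mut thread = Thread { speed: None };
    let mut persisted = None;
    // An unset speed defaults to Fast on the first toggle.
    assert_eq!(toggle_speed(&mut thread, &mut persisted), Speed::Fast);
    // A second toggle flips it, in memory and in settings alike.
    assert_eq!(toggle_speed(&mut thread, &mut persisted), Speed::Slow);
    assert_eq!(persisted, Some(Speed::Slow));
}
```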
 
@@ -8539,7 +8830,6 @@ impl ThreadView {
 impl Render for ThreadView {
     fn render(&mut self, window: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
         let has_messages = self.list_state.item_count() > 0;
-        let max_content_width = AgentSettings::get_global(cx).max_content_width;
         let list_state = self.list_state.clone();
 
         let conversation = v_flex()
@@ -8550,13 +8840,7 @@ impl Render for ThreadView {
                 if has_messages {
                     this.flex_1()
                         .size_full()
-                        .child(
-                            v_flex()
-                                .mx_auto()
-                                .max_w(max_content_width)
-                                .size_full()
-                                .child(self.render_entries(cx)),
-                        )
+                        .child(self.render_entries(cx))
                         .vertical_scrollbar_for(&list_state, window, cx)
                         .into_any()
                 } else {
@@ -8741,6 +9025,7 @@ impl Render for ThreadView {
             .size_full()
             .children(self.render_subagent_titlebar(cx))
             .child(conversation)
+            .children(self.render_multi_root_callout(cx))
             .children(self.render_activity_bar(window, cx))
             .when(self.show_external_source_prompt_warning, |this| {
                 this.child(self.render_external_source_prompt_warning(cx))

crates/agent_ui/src/favorite_models.rs

@@ -11,6 +11,7 @@ fn language_model_to_selection(model: &Arc<dyn LanguageModel>) -> LanguageModelS
         model: model.id().0.to_string(),
         enable_thinking: false,
         effort: None,
+        speed: None,
     }
 }
 

crates/agent_ui/src/inline_assistant.rs

@@ -1,10 +1,8 @@
 use language_models::provider::anthropic::telemetry::{
     AnthropicCompletionType, AnthropicEventData, AnthropicEventType, report_anthropic_event,
 };
-use std::cmp;
 use std::mem;
 use std::ops::Range;
-use std::rc::Rc;
 use std::sync::Arc;
 use uuid::Uuid;
 
@@ -27,8 +25,8 @@ use editor::RowExt;
 use editor::SelectionEffects;
 use editor::scroll::ScrollOffset;
 use editor::{
-    Anchor, AnchorRangeExt, CodeActionProvider, Editor, EditorEvent, HighlightKey, MultiBuffer,
-    MultiBufferSnapshot, ToOffset as _, ToPoint,
+    Anchor, AnchorRangeExt, Editor, EditorEvent, HighlightKey, MultiBuffer, MultiBufferSnapshot,
+    ToOffset as _, ToPoint,
     actions::SelectAll,
     display_map::{
         BlockContext, BlockPlacement, BlockProperties, BlockStyle, CustomBlockId, EditorMargins,
@@ -45,15 +43,14 @@ use language::{Buffer, Point, Selection, TransactionId};
 use language_model::{ConfigurationError, ConfiguredModel, LanguageModelRegistry};
 use multi_buffer::MultiBufferRow;
 use parking_lot::Mutex;
-use project::{CodeAction, DisableAiSettings, LspAction, Project, ProjectTransaction};
+use project::{DisableAiSettings, Project};
 use prompt_store::{PromptBuilder, PromptStore};
 use settings::{Settings, SettingsStore};
 
 use terminal_view::{TerminalView, terminal_panel::TerminalPanel};
-use text::{OffsetRangeExt, ToPoint as _};
 use ui::prelude::*;
 use util::{RangeExt, ResultExt, maybe};
-use workspace::{ItemHandle, Toast, Workspace, dock::Panel, notifications::NotificationId};
+use workspace::{Toast, Workspace, dock::Panel, notifications::NotificationId};
 use zed_actions::agent::OpenSettings;
 
 pub fn init(fs: Arc<dyn Fs>, prompt_builder: Arc<PromptBuilder>, cx: &mut App) {
@@ -184,7 +181,7 @@ impl InlineAssistant {
 
     fn handle_workspace_event(
         &mut self,
-        workspace: Entity<Workspace>,
+        _workspace: Entity<Workspace>,
         event: &workspace::Event,
         window: &mut Window,
         cx: &mut App,
@@ -203,51 +200,10 @@ impl InlineAssistant {
                     }
                 }
             }
-            workspace::Event::ItemAdded { item } => {
-                self.register_workspace_item(&workspace, item.as_ref(), window, cx);
-            }
             _ => (),
         }
     }
 
-    fn register_workspace_item(
-        &mut self,
-        workspace: &Entity<Workspace>,
-        item: &dyn ItemHandle,
-        window: &mut Window,
-        cx: &mut App,
-    ) {
-        let is_ai_enabled = !DisableAiSettings::get_global(cx).disable_ai;
-
-        if let Some(editor) = item.act_as::<Editor>(cx) {
-            editor.update(cx, |editor, cx| {
-                if is_ai_enabled {
-                    editor.add_code_action_provider(
-                        Rc::new(AssistantCodeActionProvider {
-                            editor: cx.entity().downgrade(),
-                            workspace: workspace.downgrade(),
-                        }),
-                        window,
-                        cx,
-                    );
-
-                    if DisableAiSettings::get_global(cx).disable_ai {
-                        // Cancel any active edit predictions
-                        if editor.has_active_edit_prediction() {
-                            editor.cancel(&Default::default(), window, cx);
-                        }
-                    }
-                } else {
-                    editor.remove_code_action_provider(
-                        ASSISTANT_CODE_ACTION_PROVIDER_ID.into(),
-                        window,
-                        cx,
-                    );
-                }
-            });
-        }
-    }
-
     pub fn inline_assist(
         workspace: &mut Workspace,
         action: &zed_actions::assistant::InlineAssist,
@@ -1875,130 +1831,6 @@ struct InlineAssistDecorations {
     end_block_id: CustomBlockId,
 }
 
-struct AssistantCodeActionProvider {
-    editor: WeakEntity<Editor>,
-    workspace: WeakEntity<Workspace>,
-}
-
-const ASSISTANT_CODE_ACTION_PROVIDER_ID: &str = "assistant";
-
-impl CodeActionProvider for AssistantCodeActionProvider {
-    fn id(&self) -> Arc<str> {
-        ASSISTANT_CODE_ACTION_PROVIDER_ID.into()
-    }
-
-    fn code_actions(
-        &self,
-        buffer: &Entity<Buffer>,
-        range: Range<text::Anchor>,
-        _: &mut Window,
-        cx: &mut App,
-    ) -> Task<Result<Vec<CodeAction>>> {
-        if !AgentSettings::get_global(cx).enabled(cx) {
-            return Task::ready(Ok(Vec::new()));
-        }
-
-        let snapshot = buffer.read(cx).snapshot();
-        let mut range = range.to_point(&snapshot);
-
-        // Expand the range to line boundaries.
-        range.start.column = 0;
-        range.end.column = snapshot.line_len(range.end.row);
-
-        let mut has_diagnostics = false;
-        for diagnostic in snapshot.diagnostics_in_range::<_, Point>(range.clone(), false) {
-            range.start = cmp::min(range.start, diagnostic.range.start);
-            range.end = cmp::max(range.end, diagnostic.range.end);
-            has_diagnostics = true;
-        }
-        if has_diagnostics {
-            let symbols_containing_start = snapshot.symbols_containing(range.start, None);
-            if let Some(symbol) = symbols_containing_start.last() {
-                range.start = cmp::min(range.start, symbol.range.start.to_point(&snapshot));
-                range.end = cmp::max(range.end, symbol.range.end.to_point(&snapshot));
-            }
-            let symbols_containing_end = snapshot.symbols_containing(range.end, None);
-            if let Some(symbol) = symbols_containing_end.last() {
-                range.start = cmp::min(range.start, symbol.range.start.to_point(&snapshot));
-                range.end = cmp::max(range.end, symbol.range.end.to_point(&snapshot));
-            }
-
-            Task::ready(Ok(vec![CodeAction {
-                server_id: language::LanguageServerId(0),
-                range: snapshot.anchor_before(range.start)..snapshot.anchor_after(range.end),
-                lsp_action: LspAction::Action(Box::new(lsp::CodeAction {
-                    title: "Fix with Assistant".into(),
-                    ..Default::default()
-                })),
-                resolved: true,
-            }]))
-        } else {
-            Task::ready(Ok(Vec::new()))
-        }
-    }
-
-    fn apply_code_action(
-        &self,
-        _buffer: Entity<Buffer>,
-        action: CodeAction,
-        _push_to_history: bool,
-        window: &mut Window,
-        cx: &mut App,
-    ) -> Task<Result<ProjectTransaction>> {
-        let editor = self.editor.clone();
-        let workspace = self.workspace.clone();
-        let prompt_store = PromptStore::global(cx);
-        window.spawn(cx, async move |cx| {
-            let workspace = workspace.upgrade().context("workspace was released")?;
-            let (thread_store, history) = cx.update(|_window, cx| {
-                let panel = workspace
-                    .read(cx)
-                    .panel::<AgentPanel>(cx)
-                    .context("missing agent panel")?
-                    .read(cx);
-
-                let history = panel
-                    .connection_store()
-                    .read(cx)
-                    .entry(&crate::Agent::NativeAgent)
-                    .and_then(|e| e.read(cx).history())
-                    .map(|h| h.downgrade());
-
-                anyhow::Ok((panel.thread_store().clone(), history))
-            })??;
-            let editor = editor.upgrade().context("editor was released")?;
-            let range = editor
-                .update(cx, |editor, cx| {
-                    editor.buffer().update(cx, |multibuffer, cx| {
-                        let multibuffer_snapshot = multibuffer.read(cx);
-                        multibuffer_snapshot.buffer_anchor_range_to_anchor_range(action.range)
-                    })
-                })
-                .context("invalid range")?;
-
-            let prompt_store = prompt_store.await.ok();
-            cx.update_global(|assistant: &mut InlineAssistant, window, cx| {
-                let assist_id = assistant.suggest_assist(
-                    &editor,
-                    range,
-                    "Fix Diagnostics".into(),
-                    None,
-                    true,
-                    workspace,
-                    thread_store,
-                    prompt_store,
-                    history,
-                    window,
-                    cx,
-                );
-                assistant.start_assist(assist_id, window, cx);
-            })?;
-
-            Ok(ProjectTransaction::default())
-        })
-    }
-}
-
 fn merge_ranges(ranges: &mut Vec<Range<Anchor>>, buffer: &MultiBufferSnapshot) {
     ranges.sort_unstable_by(|a, b| {
         a.start

crates/agent_ui/src/thread_import.rs

@@ -12,17 +12,18 @@ use gpui::{
 };
 use notifications::status_toast::{StatusToast, ToastIcon};
 use project::{AgentId, AgentRegistryStore, AgentServerStore};
+use remote::RemoteConnectionOptions;
 use ui::{
     Checkbox, KeyBinding, ListItem, ListItemSpacing, Modal, ModalFooter, ModalHeader, Section,
     prelude::*,
 };
 use util::ResultExt;
-use workspace::{ModalView, MultiWorkspace, PathList, Workspace};
+use workspace::{ModalView, MultiWorkspace, Workspace};
 
 use crate::{
     Agent, AgentPanel,
     agent_connection_store::AgentConnectionStore,
-    thread_metadata_store::{ThreadMetadata, ThreadMetadataStore},
+    thread_metadata_store::{ThreadMetadata, ThreadMetadataStore, ThreadWorktreePaths},
 };
 
 pub struct AcpThreadImportOnboarding;
@@ -436,19 +437,28 @@ fn find_threads_to_import(
     let mut wait_for_connection_tasks = Vec::new();
 
     for store in stores {
+        let remote_connection = store
+            .read(cx)
+            .project()
+            .read(cx)
+            .remote_connection_options(cx);
+
         for agent_id in agent_ids.clone() {
             let agent = Agent::from(agent_id.clone());
             let server = agent.server(<dyn Fs>::global(cx), ThreadStore::global(cx));
             let entry = store.update(cx, |store, cx| store.request_connection(agent, server, cx));
-            wait_for_connection_tasks
-                .push(entry.read(cx).wait_for_connection().map(|s| (agent_id, s)));
+
+            wait_for_connection_tasks.push(entry.read(cx).wait_for_connection().map({
+                let remote_connection = remote_connection.clone();
+                move |state| (agent_id, remote_connection, state)
+            }));
         }
     }
 
     let mut session_list_tasks = Vec::new();
     cx.spawn(async move |cx| {
         let results = futures::future::join_all(wait_for_connection_tasks).await;
-        for (agent, result) in results {
+        for (agent_id, remote_connection, result) in results {
             let Some(state) = result.log_err() else {
                 continue;
             };
@@ -457,18 +467,25 @@ fn find_threads_to_import(
             };
             let task = cx.update(|cx| {
                 list.list_sessions(AgentSessionListRequest::default(), cx)
-                    .map(|r| (agent, r))
+                    .map({
+                        let remote_connection = remote_connection.clone();
+                        move |response| (agent_id, remote_connection, response)
+                    })
             });
             session_list_tasks.push(task);
         }
 
         let mut sessions_by_agent = Vec::new();
         let results = futures::future::join_all(session_list_tasks).await;
-        for (agent_id, result) in results {
+        for (agent_id, remote_connection, result) in results {
             let Some(response) = result.log_err() else {
                 continue;
             };
-            sessions_by_agent.push((agent_id, response.sessions));
+            sessions_by_agent.push(SessionByAgent {
+                agent_id,
+                remote_connection,
+                sessions: response.sessions,
+            });
         }
 
         Ok(collect_importable_threads(
@@ -478,12 +495,23 @@ fn find_threads_to_import(
     })
 }
 
+struct SessionByAgent {
+    agent_id: AgentId,
+    remote_connection: Option<RemoteConnectionOptions>,
+    sessions: Vec<acp_thread::AgentSessionInfo>,
+}
+
 fn collect_importable_threads(
-    sessions_by_agent: Vec<(AgentId, Vec<acp_thread::AgentSessionInfo>)>,
+    sessions_by_agent: Vec<SessionByAgent>,
     mut existing_sessions: HashSet<acp::SessionId>,
 ) -> Vec<ThreadMetadata> {
     let mut to_insert = Vec::new();
-    for (agent_id, sessions) in sessions_by_agent {
+    for SessionByAgent {
+        agent_id,
+        remote_connection,
+        sessions,
+    } in sessions_by_agent
+    {
         for session in sessions {
             if !existing_sessions.insert(session.session_id.clone()) {
                 continue;
@@ -499,8 +527,8 @@ fn collect_importable_threads(
                     .unwrap_or_else(|| crate::DEFAULT_THREAD_TITLE.into()),
                 updated_at: session.updated_at.unwrap_or_else(|| Utc::now()),
                 created_at: session.created_at,
-                folder_paths,
-                main_worktree_paths: PathList::default(),
+                worktree_paths: ThreadWorktreePaths::from_folder_paths(&folder_paths),
+                remote_connection: remote_connection.clone(),
                 archived: true,
             });
         }
@@ -538,9 +566,10 @@ mod tests {
         let existing = HashSet::from_iter(vec![acp::SessionId::new("existing-1")]);
         let paths = PathList::new(&[Path::new("/project")]);
 
-        let sessions_by_agent = vec![(
-            AgentId::new("agent-a"),
-            vec![
+        let sessions_by_agent = vec![SessionByAgent {
+            agent_id: AgentId::new("agent-a"),
+            remote_connection: None,
+            sessions: vec![
                 make_session(
                     "existing-1",
                     Some("Already There"),
@@ -550,7 +579,7 @@ mod tests {
                 ),
                 make_session("new-1", Some("Brand New"), Some(paths), None, None),
             ],
-        )];
+        }];
 
         let result = collect_importable_threads(sessions_by_agent, existing);
 
@@ -564,13 +593,14 @@ mod tests {
         let existing = HashSet::default();
         let paths = PathList::new(&[Path::new("/project")]);
 
-        let sessions_by_agent = vec![(
-            AgentId::new("agent-a"),
-            vec![
+        let sessions_by_agent = vec![SessionByAgent {
+            agent_id: AgentId::new("agent-a"),
+            remote_connection: None,
+            sessions: vec![
                 make_session("has-dirs", Some("With Dirs"), Some(paths), None, None),
                 make_session("no-dirs", Some("No Dirs"), None, None, None),
             ],
-        )];
+        }];
 
         let result = collect_importable_threads(sessions_by_agent, existing);
 
@@ -583,13 +613,14 @@ mod tests {
         let existing = HashSet::default();
         let paths = PathList::new(&[Path::new("/project")]);
 
-        let sessions_by_agent = vec![(
-            AgentId::new("agent-a"),
-            vec![
+        let sessions_by_agent = vec![SessionByAgent {
+            agent_id: AgentId::new("agent-a"),
+            remote_connection: None,
+            sessions: vec![
                 make_session("s1", Some("Thread 1"), Some(paths.clone()), None, None),
                 make_session("s2", Some("Thread 2"), Some(paths), None, None),
             ],
-        )];
+        }];
 
         let result = collect_importable_threads(sessions_by_agent, existing);
 
@@ -603,20 +634,22 @@ mod tests {
         let paths = PathList::new(&[Path::new("/project")]);
 
         let sessions_by_agent = vec![
-            (
-                AgentId::new("agent-a"),
-                vec![make_session(
+            SessionByAgent {
+                agent_id: AgentId::new("agent-a"),
+                remote_connection: None,
+                sessions: vec![make_session(
                     "s1",
                     Some("From A"),
                     Some(paths.clone()),
                     None,
                     None,
                 )],
-            ),
-            (
-                AgentId::new("agent-b"),
-                vec![make_session("s2", Some("From B"), Some(paths), None, None)],
-            ),
+            },
+            SessionByAgent {
+                agent_id: AgentId::new("agent-b"),
+                remote_connection: None,
+                sessions: vec![make_session("s2", Some("From B"), Some(paths), None, None)],
+            },
         ];
 
         let result = collect_importable_threads(sessions_by_agent, existing);
@@ -640,26 +673,28 @@ mod tests {
         let paths = PathList::new(&[Path::new("/project")]);
 
         let sessions_by_agent = vec![
-            (
-                AgentId::new("agent-a"),
-                vec![make_session(
+            SessionByAgent {
+                agent_id: AgentId::new("agent-a"),
+                remote_connection: None,
+                sessions: vec![make_session(
                     "shared-session",
                     Some("From A"),
                     Some(paths.clone()),
                     None,
                     None,
                 )],
-            ),
-            (
-                AgentId::new("agent-b"),
-                vec![make_session(
+            },
+            SessionByAgent {
+                agent_id: AgentId::new("agent-b"),
+                remote_connection: None,
+                sessions: vec![make_session(
                     "shared-session",
                     Some("From B"),
                     Some(paths),
                     None,
                     None,
                 )],
-            ),
+            },
         ];
 
         let result = collect_importable_threads(sessions_by_agent, existing);
@@ -679,13 +714,14 @@ mod tests {
         let existing =
             HashSet::from_iter(vec![acp::SessionId::new("s1"), acp::SessionId::new("s2")]);
 
-        let sessions_by_agent = vec![(
-            AgentId::new("agent-a"),
-            vec![
+        let sessions_by_agent = vec![SessionByAgent {
+            agent_id: AgentId::new("agent-a"),
+            remote_connection: None,
+            sessions: vec![
                 make_session("s1", Some("T1"), Some(paths.clone()), None, None),
                 make_session("s2", Some("T2"), Some(paths), None, None),
             ],
-        )];
+        }];
 
         let result = collect_importable_threads(sessions_by_agent, existing);
         assert!(result.is_empty());

crates/agent_ui/src/thread_metadata_store.rs

@@ -10,31 +10,37 @@ use anyhow::Context as _;
 use chrono::{DateTime, Utc};
 use collections::{HashMap, HashSet};
 use db::{
+    kvp::KeyValueStore,
     sqlez::{
         bindable::Column, domain::Domain, statement::Statement,
         thread_safe_connection::ThreadSafeConnection,
     },
     sqlez_macros::sql,
 };
-use futures::{FutureExt as _, future::Shared};
+use fs::Fs;
+use futures::{FutureExt, future::Shared};
 use gpui::{AppContext as _, Entity, Global, Subscription, Task};
 use project::AgentId;
+use remote::RemoteConnectionOptions;
 use ui::{App, Context, SharedString};
 use util::ResultExt as _;
-use workspace::PathList;
+use workspace::{PathList, SerializedWorkspaceLocation, WorkspaceDb};
 
 use crate::DEFAULT_THREAD_TITLE;
 
+const THREAD_REMOTE_CONNECTION_MIGRATION_KEY: &str = "thread-metadata-remote-connection-backfill";
+
 pub fn init(cx: &mut App) {
     ThreadMetadataStore::init_global(cx);
-    migrate_thread_metadata(cx);
+    let migration_task = migrate_thread_metadata(cx);
+    migrate_thread_remote_connections(cx, migration_task);
 }
 
 /// Migrate existing thread metadata from native agent thread store to the new metadata storage.
 /// We skip migrating threads that do not have a project.
 ///
 /// TODO: Remove this after N weeks of shipping the sidebar
-fn migrate_thread_metadata(cx: &mut App) {
+fn migrate_thread_metadata(cx: &mut App) -> Task<anyhow::Result<()>> {
     let store = ThreadMetadataStore::global(cx);
     let db = store.read(cx).db.clone();
 
@@ -58,8 +64,8 @@ fn migrate_thread_metadata(cx: &mut App) {
                         title: entry.title,
                         updated_at: entry.updated_at,
                         created_at: entry.created_at,
-                        folder_paths: entry.folder_paths,
-                        main_worktree_paths: PathList::default(),
+                        worktree_paths: ThreadWorktreePaths::from_folder_paths(&entry.folder_paths),
+                        remote_connection: None,
                         archived: true,
                     })
                 })
@@ -75,11 +81,11 @@ fn migrate_thread_metadata(cx: &mut App) {
         if is_first_migration {
             let mut per_project: HashMap<PathList, Vec<&mut ThreadMetadata>> = HashMap::default();
             for entry in &mut to_migrate {
-                if entry.folder_paths.is_empty() {
+                if entry.worktree_paths.is_empty() {
                     continue;
                 }
                 per_project
-                    .entry(entry.folder_paths.clone())
+                    .entry(entry.worktree_paths.folder_path_list().clone())
                     .or_default()
                     .push(entry);
             }
@@ -104,12 +110,219 @@ fn migrate_thread_metadata(cx: &mut App) {
         let _ = store.update(cx, |store, cx| store.reload(cx));
         anyhow::Ok(())
     })
+}
+
+fn migrate_thread_remote_connections(cx: &mut App, migration_task: Task<anyhow::Result<()>>) {
+    let store = ThreadMetadataStore::global(cx);
+    let db = store.read(cx).db.clone();
+    let kvp = KeyValueStore::global(cx);
+    let workspace_db = WorkspaceDb::global(cx);
+    let fs = <dyn Fs>::global(cx);
+
+    cx.spawn(async move |cx| -> anyhow::Result<()> {
+        migration_task.await?;
+
+        if kvp
+            .read_kvp(THREAD_REMOTE_CONNECTION_MIGRATION_KEY)?
+            .is_some()
+        {
+            return Ok(());
+        }
+
+        let recent_workspaces = workspace_db.recent_workspaces_on_disk(fs.as_ref()).await?;
+
+        let mut local_path_lists = HashSet::<PathList>::default();
+        let mut remote_path_lists = HashMap::<PathList, RemoteConnectionOptions>::default();
+
+        recent_workspaces
+            .iter()
+            .filter(|(_, location, path_list, _)| {
+                !path_list.is_empty() && matches!(location, &SerializedWorkspaceLocation::Local)
+            })
+            .for_each(|(_, _, path_list, _)| {
+                local_path_lists.insert(path_list.clone());
+            });
+
+        for (_, location, path_list, _) in recent_workspaces {
+            match location {
+                SerializedWorkspaceLocation::Remote(remote_connection)
+                    if !local_path_lists.contains(&path_list) =>
+                {
+                    remote_path_lists
+                        .entry(path_list)
+                        .or_insert(remote_connection);
+                }
+                _ => {}
+            }
+        }
+
+        let mut reloaded = false;
+        for metadata in db.list()? {
+            if metadata.remote_connection.is_some() {
+                continue;
+            }
+
+            if let Some(remote_connection) = remote_path_lists
+                .get(metadata.folder_paths())
+                .or_else(|| remote_path_lists.get(metadata.main_worktree_paths()))
+            {
+                db.save(ThreadMetadata {
+                    remote_connection: Some(remote_connection.clone()),
+                    ..metadata
+                })
+                .await?;
+                reloaded = true;
+            }
+        }
+
+        let reloaded_task = reloaded
+            .then_some(store.update(cx, |store, cx| store.reload(cx)))
+            .unwrap_or(Task::ready(()).shared());
+
+        kvp.write_kvp(
+            THREAD_REMOTE_CONNECTION_MIGRATION_KEY.to_string(),
+            "1".to_string(),
+        )
+        .await?;
+        reloaded_task.await;
+
+        Ok(())
+    })
     .detach_and_log_err(cx);
 }
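The backfill above follows a common run-once pattern: check a flag in a key-value store, do the work, and write the flag only after the work succeeds, so a crash mid-migration retries on the next launch. A minimal sketch under the assumption of a simple synchronous KV store (Zed's real `KeyValueStore` is async and database-backed):

```rust
use std::collections::HashMap;

const MIGRATION_KEY: &str = "thread-metadata-remote-connection-backfill";

/// Run `work` at most once across launches, keyed by `key` in the KV store.
/// The flag is written only after `work` succeeds, so a failed run retries.
/// Returns Ok(true) if the work ran, Ok(false) if it was already done.
fn run_once(
    kvp: &mut HashMap<String, String>,
    key: &str,
    work: impl FnOnce() -> Result<(), String>,
) -> Result<bool, String> {
    if kvp.contains_key(key) {
        return Ok(false); // already migrated on a previous launch
    }
    work()?;
    kvp.insert(key.to_string(), "1".to_string());
    Ok(true)
}

fn main() {
    let mut kvp = HashMap::new();
    // First launch: the migration runs and the flag is recorded.
    assert!(run_once(&mut kvp, MIGRATION_KEY, || Ok(())).unwrap());
    // Second launch: the flag short-circuits the work.
    assert!(!run_once(&mut kvp, MIGRATION_KEY, || Ok(())).unwrap());
}
```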
 
 struct GlobalThreadMetadataStore(Entity<ThreadMetadataStore>);
 impl Global for GlobalThreadMetadataStore {}
 
+/// Paired worktree paths for a thread. Each folder path has a corresponding
+/// main worktree path at the same position. The two lists are always the
+/// same length and are modified together via `add_path` / `remove_main_path`.
+///
+/// For non-linked worktrees, the main path and folder path are identical.
+/// For linked worktrees, the main path is the original repo and the folder
+/// path is the linked worktree location.
+///
+/// Internally stores two `PathList`s with matching insertion order so that
+/// `ordered_paths()` on both yields positionally-paired results.
+#[derive(Default, Debug, Clone)]
+pub struct ThreadWorktreePaths {
+    folder_paths: PathList,
+    main_worktree_paths: PathList,
+}
+
+impl PartialEq for ThreadWorktreePaths {
+    fn eq(&self, other: &Self) -> bool {
+        self.folder_paths == other.folder_paths
+            && self.main_worktree_paths == other.main_worktree_paths
+    }
+}
+
+impl ThreadWorktreePaths {
+    /// Build from a project's current state. Each visible worktree is paired
+    /// with its main repo path (resolved via git), falling back to the
+    /// worktree's own path if no git repo is found.
+    pub fn from_project(project: &project::Project, cx: &App) -> Self {
+        let (mains, folders): (Vec<PathBuf>, Vec<PathBuf>) = project
+            .visible_worktrees(cx)
+            .map(|worktree| {
+                let snapshot = worktree.read(cx).snapshot();
+                let folder_path = snapshot.abs_path().to_path_buf();
+                let main_path = snapshot
+                    .root_repo_common_dir()
+                    .and_then(|dir| Some(dir.parent()?.to_path_buf()))
+                    .unwrap_or_else(|| folder_path.clone());
+                (main_path, folder_path)
+            })
+            .unzip();
+        Self {
+            folder_paths: PathList::new(&folders),
+            main_worktree_paths: PathList::new(&mains),
+        }
+    }
+
+    /// Build from two parallel `PathList`s that already share the same
+    /// insertion order. Used for deserialization from DB.
+    ///
+    /// Returns an error if the two lists have different lengths, which
+    /// indicates corrupted data from a prior migration bug.
+    pub fn from_path_lists(
+        main_worktree_paths: PathList,
+        folder_paths: PathList,
+    ) -> anyhow::Result<Self> {
+        anyhow::ensure!(
+            main_worktree_paths.paths().len() == folder_paths.paths().len(),
+            "main_worktree_paths has {} entries but folder_paths has {}",
+            main_worktree_paths.paths().len(),
+            folder_paths.paths().len(),
+        );
+        Ok(Self {
+            folder_paths,
+            main_worktree_paths,
+        })
+    }
+
+    /// Build for non-linked worktrees where main == folder for every path.
+    pub fn from_folder_paths(folder_paths: &PathList) -> Self {
+        Self {
+            folder_paths: folder_paths.clone(),
+            main_worktree_paths: folder_paths.clone(),
+        }
+    }
+
+    pub fn is_empty(&self) -> bool {
+        self.folder_paths.is_empty()
+    }
+
+    /// The folder paths (for workspace matching / `threads_by_paths` index).
+    pub fn folder_path_list(&self) -> &PathList {
+        &self.folder_paths
+    }
+
+    /// The main worktree paths (for group key / `threads_by_main_paths` index).
+    pub fn main_worktree_path_list(&self) -> &PathList {
+        &self.main_worktree_paths
+    }
+
+    /// Iterate the (main_worktree_path, folder_path) pairs in insertion order.
+    pub fn ordered_pairs(&self) -> impl Iterator<Item = (&PathBuf, &PathBuf)> {
+        self.main_worktree_paths
+            .ordered_paths()
+            .zip(self.folder_paths.ordered_paths())
+    }
+
+    /// Add a new path pair. If the exact (main, folder) pair already exists,
+    /// this is a no-op. Rebuilds both internal `PathList`s to maintain
+    /// consistent ordering.
+    pub fn add_path(&mut self, main_path: &Path, folder_path: &Path) {
+        let already_exists = self
+            .ordered_pairs()
+            .any(|(m, f)| m.as_path() == main_path && f.as_path() == folder_path);
+        if already_exists {
+            return;
+        }
+        let (mut mains, mut folders): (Vec<PathBuf>, Vec<PathBuf>) = self
+            .ordered_pairs()
+            .map(|(m, f)| (m.clone(), f.clone()))
+            .unzip();
+        mains.push(main_path.to_path_buf());
+        folders.push(folder_path.to_path_buf());
+        self.main_worktree_paths = PathList::new(&mains);
+        self.folder_paths = PathList::new(&folders);
+    }
+
+    /// Removes every pair whose main worktree path matches the given path,
+    /// dropping the corresponding entries from both lists.
+    pub fn remove_main_path(&mut self, main_path: &Path) {
+        let (mains, folders): (Vec<PathBuf>, Vec<PathBuf>) = self
+            .ordered_pairs()
+            .filter(|(m, _)| m.as_path() != main_path)
+            .map(|(m, f)| (m.clone(), f.clone()))
+            .unzip();
+        self.main_worktree_paths = PathList::new(&mains);
+        self.folder_paths = PathList::new(&folders);
+    }
+}
+
 /// Lightweight metadata for any thread (native or ACP), enough to populate
 /// the sidebar list and route to the correct load path when clicked.
 #[derive(Debug, Clone, PartialEq)]
@@ -119,16 +332,25 @@ pub struct ThreadMetadata {
     pub title: SharedString,
     pub updated_at: DateTime<Utc>,
     pub created_at: Option<DateTime<Utc>>,
-    pub folder_paths: PathList,
-    pub main_worktree_paths: PathList,
+    pub worktree_paths: ThreadWorktreePaths,
+    pub remote_connection: Option<RemoteConnectionOptions>,
     pub archived: bool,
 }
 
+impl ThreadMetadata {
+    pub fn folder_paths(&self) -> &PathList {
+        self.worktree_paths.folder_path_list()
+    }
+    pub fn main_worktree_paths(&self) -> &PathList {
+        self.worktree_paths.main_worktree_path_list()
+    }
+}
+
 impl From<&ThreadMetadata> for acp_thread::AgentSessionInfo {
     fn from(meta: &ThreadMetadata) -> Self {
         Self {
             session_id: meta.session_id.clone(),
-            work_dirs: Some(meta.folder_paths.clone()),
+            work_dirs: Some(meta.folder_paths().clone()),
             title: Some(meta.title.clone()),
             updated_at: Some(meta.updated_at),
             created_at: meta.created_at,
@@ -190,6 +412,7 @@ pub struct ThreadMetadataStore {
     reload_task: Option<Shared<Task<()>>>,
     session_subscriptions: HashMap<acp::SessionId, Subscription>,
     pending_thread_ops_tx: smol::channel::Sender<DbOperation>,
+    in_flight_archives: HashMap<acp::SessionId, (Task<()>, smol::channel::Sender<()>)>,
     _db_operations_task: Task<()>,
 }
 
@@ -311,12 +534,12 @@ impl ThreadMetadataStore {
 
                     for row in rows {
                         this.threads_by_paths
-                            .entry(row.folder_paths.clone())
+                            .entry(row.folder_paths().clone())
                             .or_default()
                             .insert(row.session_id.clone());
-                        if !row.main_worktree_paths.is_empty() {
+                        if !row.main_worktree_paths().is_empty() {
                             this.threads_by_main_paths
-                                .entry(row.main_worktree_paths.clone())
+                                .entry(row.main_worktree_paths().clone())
                                 .or_default()
                                 .insert(row.session_id.clone());
                         }
@@ -351,17 +574,17 @@ impl ThreadMetadataStore {
 
     fn save_internal(&mut self, metadata: ThreadMetadata) {
         if let Some(thread) = self.threads.get(&metadata.session_id) {
-            if thread.folder_paths != metadata.folder_paths {
-                if let Some(session_ids) = self.threads_by_paths.get_mut(&thread.folder_paths) {
+            if thread.folder_paths() != metadata.folder_paths() {
+                if let Some(session_ids) = self.threads_by_paths.get_mut(thread.folder_paths()) {
                     session_ids.remove(&metadata.session_id);
                 }
             }
-            if thread.main_worktree_paths != metadata.main_worktree_paths
-                && !thread.main_worktree_paths.is_empty()
+            if thread.main_worktree_paths() != metadata.main_worktree_paths()
+                && !thread.main_worktree_paths().is_empty()
             {
                 if let Some(session_ids) = self
                     .threads_by_main_paths
-                    .get_mut(&thread.main_worktree_paths)
+                    .get_mut(thread.main_worktree_paths())
                 {
                     session_ids.remove(&metadata.session_id);
                 }
@@ -372,13 +595,13 @@ impl ThreadMetadataStore {
             .insert(metadata.session_id.clone(), metadata.clone());
 
         self.threads_by_paths
-            .entry(metadata.folder_paths.clone())
+            .entry(metadata.folder_paths().clone())
             .or_default()
             .insert(metadata.session_id.clone());
 
-        if !metadata.main_worktree_paths.is_empty() {
+        if !metadata.main_worktree_paths().is_empty() {
             self.threads_by_main_paths
-                .entry(metadata.main_worktree_paths.clone())
+                .entry(metadata.main_worktree_paths().clone())
                 .or_default()
                 .insert(metadata.session_id.clone());
         }
@@ -396,19 +619,148 @@ impl ThreadMetadataStore {
     ) {
         if let Some(thread) = self.threads.get(session_id) {
             self.save_internal(ThreadMetadata {
-                folder_paths: work_dirs,
+                worktree_paths: ThreadWorktreePaths::from_path_lists(
+                    thread.main_worktree_paths().clone(),
+                    work_dirs.clone(),
+                )
+                .unwrap_or_else(|_| ThreadWorktreePaths::from_folder_paths(&work_dirs)),
                 ..thread.clone()
             });
             cx.notify();
         }
     }
 
-    pub fn archive(&mut self, session_id: &acp::SessionId, cx: &mut Context<Self>) {
+    pub fn archive(
+        &mut self,
+        session_id: &acp::SessionId,
+        archive_job: Option<(Task<()>, smol::channel::Sender<()>)>,
+        cx: &mut Context<Self>,
+    ) {
         self.update_archived(session_id, true, cx);
+
+        if let Some(job) = archive_job {
+            self.in_flight_archives.insert(session_id.clone(), job);
+        }
     }
 
     pub fn unarchive(&mut self, session_id: &acp::SessionId, cx: &mut Context<Self>) {
         self.update_archived(session_id, false, cx);
+        // Dropping the sender half closes the channel, which signals the background archive task to cancel.
+        self.in_flight_archives.remove(session_id);
+    }
+
+    pub fn cleanup_completed_archive(&mut self, session_id: &acp::SessionId) {
+        self.in_flight_archives.remove(session_id);
+    }
+
+    /// Updates a thread's `folder_paths` after an archived worktree has been
+    /// restored to disk. The restored worktree may land at a different path
+    /// than it had before archival, so each `(old_path, new_path)` pair in
+    /// `path_replacements` is applied to the thread's stored folder paths.
+    pub fn update_restored_worktree_paths(
+        &mut self,
+        session_id: &acp::SessionId,
+        path_replacements: &[(PathBuf, PathBuf)],
+        cx: &mut Context<Self>,
+    ) {
+        if let Some(thread) = self.threads.get(session_id).cloned() {
+            let mut paths: Vec<PathBuf> = thread.folder_paths().paths().to_vec();
+            for (old_path, new_path) in path_replacements {
+                if let Some(pos) = paths.iter().position(|p| p == old_path) {
+                    paths[pos] = new_path.clone();
+                }
+            }
+            let new_folder_paths = PathList::new(&paths);
+            self.save_internal(ThreadMetadata {
+                worktree_paths: ThreadWorktreePaths::from_path_lists(
+                    thread.main_worktree_paths().clone(),
+                    new_folder_paths.clone(),
+                )
+                .unwrap_or_else(|_| ThreadWorktreePaths::from_folder_paths(&new_folder_paths)),
+                ..thread
+            });
+            cx.notify();
+        }
+    }
+
+    pub fn complete_worktree_restore(
+        &mut self,
+        session_id: &acp::SessionId,
+        path_replacements: &[(PathBuf, PathBuf)],
+        cx: &mut Context<Self>,
+    ) {
+        if let Some(thread) = self.threads.get(session_id).cloned() {
+            let mut paths: Vec<PathBuf> = thread.folder_paths().paths().to_vec();
+            for (old_path, new_path) in path_replacements {
+                for path in &mut paths {
+                    if path == old_path {
+                        *path = new_path.clone();
+                    }
+                }
+            }
+            let new_folder_paths = PathList::new(&paths);
+            self.save_internal(ThreadMetadata {
+                worktree_paths: ThreadWorktreePaths::from_path_lists(
+                    thread.main_worktree_paths().clone(),
+                    new_folder_paths.clone(),
+                )
+                .unwrap_or_else(|_| ThreadWorktreePaths::from_folder_paths(&new_folder_paths)),
+                ..thread
+            });
+            cx.notify();
+        }
+    }
+
+    /// Applies `mutate` to the worktree paths of every thread whose current
+    /// `main_worktree_paths` matches `current_main_paths`, then re-indexes and persists them.
+    pub fn change_worktree_paths(
+        &mut self,
+        current_main_paths: &PathList,
+        mutate: impl Fn(&mut ThreadWorktreePaths),
+        cx: &mut Context<Self>,
+    ) {
+        let session_ids: Vec<_> = self
+            .threads_by_main_paths
+            .get(current_main_paths)
+            .into_iter()
+            .flatten()
+            .cloned()
+            .collect();
+
+        if session_ids.is_empty() {
+            return;
+        }
+
+        for session_id in &session_ids {
+            if let Some(thread) = self.threads.get_mut(session_id) {
+                if let Some(ids) = self
+                    .threads_by_main_paths
+                    .get_mut(thread.main_worktree_paths())
+                {
+                    ids.remove(session_id);
+                }
+                if let Some(ids) = self.threads_by_paths.get_mut(thread.folder_paths()) {
+                    ids.remove(session_id);
+                }
+
+                mutate(&mut thread.worktree_paths);
+
+                self.threads_by_main_paths
+                    .entry(thread.main_worktree_paths().clone())
+                    .or_default()
+                    .insert(session_id.clone());
+                self.threads_by_paths
+                    .entry(thread.folder_paths().clone())
+                    .or_default()
+                    .insert(session_id.clone());
+
+                self.pending_thread_ops_tx
+                    .try_send(DbOperation::Upsert(thread.clone()))
+                    .log_err();
+            }
+        }
+
+        cx.notify();
     }
 
     pub fn create_archived_worktree(
@@ -462,6 +814,30 @@ impl ThreadMetadataStore {
         cx.background_spawn(async move { db.delete_archived_worktree(id).await })
     }
 
+    pub fn unlink_thread_from_all_archived_worktrees(
+        &self,
+        session_id: String,
+        cx: &App,
+    ) -> Task<anyhow::Result<()>> {
+        let db = self.db.clone();
+        cx.background_spawn(async move {
+            db.unlink_thread_from_all_archived_worktrees(session_id)
+                .await
+        })
+    }
+
+    pub fn is_archived_worktree_referenced(
+        &self,
+        archived_worktree_id: i64,
+        cx: &App,
+    ) -> Task<anyhow::Result<bool>> {
+        let db = self.db.clone();
+        cx.background_spawn(async move {
+            db.is_archived_worktree_referenced(archived_worktree_id)
+                .await
+        })
+    }
+
     fn update_archived(
         &mut self,
         session_id: &acp::SessionId,
@@ -479,13 +855,13 @@ impl ThreadMetadataStore {
 
     pub fn delete(&mut self, session_id: acp::SessionId, cx: &mut Context<Self>) {
         if let Some(thread) = self.threads.get(&session_id) {
-            if let Some(session_ids) = self.threads_by_paths.get_mut(&thread.folder_paths) {
+            if let Some(session_ids) = self.threads_by_paths.get_mut(thread.folder_paths()) {
                 session_ids.remove(&session_id);
             }
-            if !thread.main_worktree_paths.is_empty() {
+            if !thread.main_worktree_paths().is_empty() {
                 if let Some(session_ids) = self
                     .threads_by_main_paths
-                    .get_mut(&thread.main_worktree_paths)
+                    .get_mut(thread.main_worktree_paths())
                 {
                     session_ids.remove(&session_id);
                 }
@@ -564,6 +940,7 @@ impl ThreadMetadataStore {
             reload_task: None,
             session_subscriptions: HashMap::default(),
             pending_thread_ops_tx: tx,
+            in_flight_archives: HashMap::default(),
             _db_operations_task,
         };
         let _ = this.reload(cx);
@@ -624,21 +1001,11 @@ impl ThreadMetadataStore {
 
                 let agent_id = thread_ref.connection().agent_id();
 
-                let folder_paths = {
-                    let project = thread_ref.project().read(cx);
-                    let paths: Vec<Arc<Path>> = project
-                        .visible_worktrees(cx)
-                        .map(|worktree| worktree.read(cx).abs_path())
-                        .collect();
-                    PathList::new(&paths)
-                };
+                let project = thread_ref.project().read(cx);
+                let worktree_paths = ThreadWorktreePaths::from_project(project, cx);
 
-                let main_worktree_paths = thread_ref
-                    .project()
-                    .read(cx)
-                    .project_group_key(cx)
-                    .path_list()
-                    .clone();
+                let project_group_key = project.project_group_key(cx);
+                let remote_connection = project_group_key.host();
 
                 // Threads without a folder path (e.g. started in an empty
                 // window) are archived by default so they don't get lost,
@@ -646,7 +1013,7 @@ impl ThreadMetadataStore {
                 // them from the archive.
                 let archived = existing_thread
                     .map(|t| t.archived)
-                    .unwrap_or(folder_paths.is_empty());
+                    .unwrap_or(worktree_paths.is_empty());
 
                 let metadata = ThreadMetadata {
                     session_id,
@@ -654,8 +1021,8 @@ impl ThreadMetadataStore {
                     title,
                     created_at: Some(created_at),
                     updated_at,
-                    folder_paths,
-                    main_worktree_paths,
+                    worktree_paths,
+                    remote_connection,
                     archived,
                 };
 
@@ -710,6 +1077,7 @@ impl Domain for ThreadMetadataDb {
                 PRIMARY KEY (session_id, archived_worktree_id)
             ) STRICT;
         ),
+        sql!(ALTER TABLE sidebar_threads ADD COLUMN remote_connection TEXT),
     ];
 }
 
@@ -726,7 +1094,7 @@ impl ThreadMetadataDb {
     /// List all sidebar thread metadata, ordered by updated_at descending.
     pub fn list(&self) -> anyhow::Result<Vec<ThreadMetadata>> {
         self.select::<ThreadMetadata>(
-            "SELECT session_id, agent_id, title, updated_at, created_at, folder_paths, folder_paths_order, archived, main_worktree_paths, main_worktree_paths_order \
+            "SELECT session_id, agent_id, title, updated_at, created_at, folder_paths, folder_paths_order, archived, main_worktree_paths, main_worktree_paths_order, remote_connection \
              FROM sidebar_threads \
              ORDER BY updated_at DESC"
         )?()
@@ -743,24 +1111,30 @@ impl ThreadMetadataDb {
         let title = row.title.to_string();
         let updated_at = row.updated_at.to_rfc3339();
         let created_at = row.created_at.map(|dt| dt.to_rfc3339());
-        let serialized = row.folder_paths.serialize();
-        let (folder_paths, folder_paths_order) = if row.folder_paths.is_empty() {
+        let serialized = row.folder_paths().serialize();
+        let (folder_paths, folder_paths_order) = if row.folder_paths().is_empty() {
             (None, None)
         } else {
             (Some(serialized.paths), Some(serialized.order))
         };
-        let main_serialized = row.main_worktree_paths.serialize();
-        let (main_worktree_paths, main_worktree_paths_order) = if row.main_worktree_paths.is_empty()
-        {
-            (None, None)
-        } else {
-            (Some(main_serialized.paths), Some(main_serialized.order))
-        };
+        let main_serialized = row.main_worktree_paths().serialize();
+        let (main_worktree_paths, main_worktree_paths_order) =
+            if row.main_worktree_paths().is_empty() {
+                (None, None)
+            } else {
+                (Some(main_serialized.paths), Some(main_serialized.order))
+            };
+        let remote_connection = row
+            .remote_connection
+            .as_ref()
+            .map(serde_json::to_string)
+            .transpose()
+            .context("serialize thread metadata remote connection")?;
         let archived = row.archived;
 
         self.write(move |conn| {
-            let sql = "INSERT INTO sidebar_threads(session_id, agent_id, title, updated_at, created_at, folder_paths, folder_paths_order, archived, main_worktree_paths, main_worktree_paths_order) \
-                       VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10) \
+            let sql = "INSERT INTO sidebar_threads(session_id, agent_id, title, updated_at, created_at, folder_paths, folder_paths_order, archived, main_worktree_paths, main_worktree_paths_order, remote_connection) \
+                       VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11) \
                        ON CONFLICT(session_id) DO UPDATE SET \
                            agent_id = excluded.agent_id, \
                            title = excluded.title, \
@@ -770,7 +1144,8 @@ impl ThreadMetadataDb {
                            folder_paths_order = excluded.folder_paths_order, \
                            archived = excluded.archived, \
                            main_worktree_paths = excluded.main_worktree_paths, \
-                           main_worktree_paths_order = excluded.main_worktree_paths_order";
+                           main_worktree_paths_order = excluded.main_worktree_paths_order, \
+                           remote_connection = excluded.remote_connection";
             let mut stmt = Statement::prepare(conn, sql)?;
             let mut i = stmt.bind(&id, 1)?;
             i = stmt.bind(&agent_id, i)?;
@@ -781,7 +1156,8 @@ impl ThreadMetadataDb {
             i = stmt.bind(&folder_paths_order, i)?;
             i = stmt.bind(&archived, i)?;
             i = stmt.bind(&main_worktree_paths, i)?;
-            stmt.bind(&main_worktree_paths_order, i)?;
+            i = stmt.bind(&main_worktree_paths_order, i)?;
+            stmt.bind(&remote_connection, i)?;
             stmt.exec()
         })
         .await
@@ -872,6 +1248,31 @@ impl ThreadMetadataDb {
         })
         .await
     }
+
+    pub async fn unlink_thread_from_all_archived_worktrees(
+        &self,
+        session_id: String,
+    ) -> anyhow::Result<()> {
+        self.write(move |conn| {
+            let mut stmt = Statement::prepare(
+                conn,
+                "DELETE FROM thread_archived_worktrees WHERE session_id = ?",
+            )?;
+            stmt.bind(&session_id, 1)?;
+            stmt.exec()
+        })
+        .await
+    }
+
+    pub async fn is_archived_worktree_referenced(
+        &self,
+        archived_worktree_id: i64,
+    ) -> anyhow::Result<bool> {
+        self.select_row_bound::<i64, i64>(
+            "SELECT COUNT(*) FROM thread_archived_worktrees WHERE archived_worktree_id = ?1",
+        )?(archived_worktree_id)
+        .map(|count| count.unwrap_or(0) > 0)
+    }
 }
 
 impl Column for ThreadMetadata {
@@ -889,6 +1290,8 @@ impl Column for ThreadMetadata {
             Column::column(statement, next)?;
         let (main_worktree_paths_order_str, next): (Option<String>, i32) =
             Column::column(statement, next)?;
+        let (remote_connection_json, next): (Option<String>, i32) =
+            Column::column(statement, next)?;
 
         let agent_id = agent_id
             .map(|id| AgentId::new(id))
@@ -919,6 +1322,16 @@ impl Column for ThreadMetadata {
             })
             .unwrap_or_default();
 
+        let remote_connection = remote_connection_json
+            .as_deref()
+            .map(serde_json::from_str::<RemoteConnectionOptions>)
+            .transpose()
+            .context("deserialize thread metadata remote connection")?;
+
+        let worktree_paths =
+            ThreadWorktreePaths::from_path_lists(main_worktree_paths, folder_paths)
+                .unwrap_or_else(|_| ThreadWorktreePaths::default());
+
         Ok((
             ThreadMetadata {
                 session_id: acp::SessionId::new(id),
@@ -926,8 +1339,8 @@ impl Column for ThreadMetadata {
                 title: title.into(),
                 updated_at,
                 created_at,
-                folder_paths,
-                main_worktree_paths,
+                worktree_paths,
+                remote_connection,
                 archived,
             },
             next,
@@ -971,6 +1384,7 @@ mod tests {
     use gpui::TestAppContext;
     use project::FakeFs;
     use project::Project;
+    use remote::WslConnectionOptions;
     use std::path::Path;
     use std::rc::Rc;
 
@@ -1008,21 +1422,38 @@ mod tests {
             title: title.to_string().into(),
             updated_at,
             created_at: Some(updated_at),
-            folder_paths,
-            main_worktree_paths: PathList::default(),
+            worktree_paths: ThreadWorktreePaths::from_folder_paths(&folder_paths),
+            remote_connection: None,
         }
     }
 
     fn init_test(cx: &mut TestAppContext) {
+        let fs = FakeFs::new(cx.executor());
         cx.update(|cx| {
             let settings_store = settings::SettingsStore::test(cx);
             cx.set_global(settings_store);
+            <dyn Fs>::set_global(fs, cx);
             ThreadMetadataStore::init_global(cx);
             ThreadStore::init_global(cx);
         });
         cx.run_until_parked();
     }
 
+    fn clear_thread_metadata_remote_connection_backfill(cx: &mut TestAppContext) {
+        let kvp = cx.update(|cx| KeyValueStore::global(cx));
+        smol::block_on(kvp.delete_kvp("thread-metadata-remote-connection-backfill".to_string()))
+            .unwrap();
+    }
+
+    fn run_thread_metadata_migrations(cx: &mut TestAppContext) {
+        clear_thread_metadata_remote_connection_backfill(cx);
+        cx.update(|cx| {
+            let migration_task = migrate_thread_metadata(cx);
+            migrate_thread_remote_connections(cx, migration_task);
+        });
+        cx.run_until_parked();
+    }
+
     #[gpui::test]
     async fn test_store_initializes_cache_from_database(cx: &mut TestAppContext) {
         let first_paths = PathList::new(&[Path::new("/project-a")]);
@@ -1222,8 +1653,8 @@ mod tests {
             title: "Existing Metadata".into(),
             updated_at: now - chrono::Duration::seconds(10),
             created_at: Some(now - chrono::Duration::seconds(10)),
-            folder_paths: project_a_paths.clone(),
-            main_worktree_paths: PathList::default(),
+            worktree_paths: ThreadWorktreePaths::from_folder_paths(&project_a_paths),
+            remote_connection: None,
             archived: false,
         };
 
@@ -1281,8 +1712,7 @@ mod tests {
             cx.run_until_parked();
         }
 
-        cx.update(|cx| migrate_thread_metadata(cx));
-        cx.run_until_parked();
+        run_thread_metadata_migrations(cx);
 
         let list = cx.update(|cx| {
             let store = ThreadMetadataStore::global(cx);
@@ -1332,8 +1762,8 @@ mod tests {
             title: "Existing Metadata".into(),
             updated_at: existing_updated_at,
             created_at: Some(existing_updated_at),
-            folder_paths: project_paths.clone(),
-            main_worktree_paths: PathList::default(),
+            worktree_paths: ThreadWorktreePaths::from_folder_paths(&project_paths),
+            remote_connection: None,
             archived: false,
         };
 
@@ -1362,8 +1792,7 @@ mod tests {
         save_task.await.unwrap();
         cx.run_until_parked();
 
-        cx.update(|cx| migrate_thread_metadata(cx));
-        cx.run_until_parked();
+        run_thread_metadata_migrations(cx);
 
         let list = cx.update(|cx| {
             let store = ThreadMetadataStore::global(cx);
@@ -1374,6 +1803,82 @@ mod tests {
         assert_eq!(list[0].session_id.0.as_ref(), "existing-session");
     }
 
+    #[gpui::test]
+    async fn test_migrate_thread_remote_connections_backfills_from_workspace_db(
+        cx: &mut TestAppContext,
+    ) {
+        init_test(cx);
+
+        let folder_paths = PathList::new(&[Path::new("/remote-project")]);
+        let updated_at = Utc::now();
+        let metadata = make_metadata(
+            "remote-session",
+            "Remote Thread",
+            updated_at,
+            folder_paths.clone(),
+        );
+
+        cx.update(|cx| {
+            let store = ThreadMetadataStore::global(cx);
+            store.update(cx, |store, cx| {
+                store.save(metadata, cx);
+            });
+        });
+        cx.run_until_parked();
+
+        let workspace_db = cx.update(|cx| WorkspaceDb::global(cx));
+        let workspace_id = workspace_db.next_id().await.unwrap();
+        let serialized_paths = folder_paths.serialize();
+        let remote_connection_id = 1_i64;
+        workspace_db
+            .write(move |conn| {
+                let mut stmt = Statement::prepare(
+                    conn,
+                    "INSERT INTO remote_connections(id, kind, user, distro) VALUES (?1, ?2, ?3, ?4)",
+                )?;
+                let mut next_index = stmt.bind(&remote_connection_id, 1)?;
+                next_index = stmt.bind(&"wsl", next_index)?;
+                next_index = stmt.bind(&Some("anth".to_string()), next_index)?;
+                stmt.bind(&Some("Ubuntu".to_string()), next_index)?;
+                stmt.exec()?;
+
+                let mut stmt = Statement::prepare(
+                    conn,
+                    "UPDATE workspaces SET paths = ?2, paths_order = ?3, remote_connection_id = ?4, timestamp = CURRENT_TIMESTAMP WHERE workspace_id = ?1",
+                )?;
+                let mut next_index = stmt.bind(&workspace_id, 1)?;
+                next_index = stmt.bind(&serialized_paths.paths, next_index)?;
+                next_index = stmt.bind(&serialized_paths.order, next_index)?;
+                stmt.bind(&Some(remote_connection_id as i32), next_index)?;
+                stmt.exec()
+            })
+            .await
+            .unwrap();
+
+        clear_thread_metadata_remote_connection_backfill(cx);
+        cx.update(|cx| {
+            migrate_thread_remote_connections(cx, Task::ready(Ok(())));
+        });
+        cx.run_until_parked();
+
+        let metadata = cx.update(|cx| {
+            let store = ThreadMetadataStore::global(cx);
+            store
+                .read(cx)
+                .entry(&acp::SessionId::new("remote-session"))
+                .cloned()
+                .expect("expected migrated metadata row")
+        });
+
+        assert_eq!(
+            metadata.remote_connection,
+            Some(RemoteConnectionOptions::Wsl(WslConnectionOptions {
+                distro_name: "Ubuntu".to_string(),
+                user: Some("anth".to_string()),
+            }))
+        );
+    }
+
     #[gpui::test]
     async fn test_migrate_thread_metadata_archives_beyond_five_most_recent_per_project(
         cx: &mut TestAppContext,
@@ -1422,8 +1927,7 @@ mod tests {
             cx.run_until_parked();
         }
 
-        cx.update(|cx| migrate_thread_metadata(cx));
-        cx.run_until_parked();
+        run_thread_metadata_migrations(cx);
 
         let list = cx.update(|cx| {
             let store = ThreadMetadataStore::global(cx);
@@ -1435,7 +1939,7 @@ mod tests {
         // Project A: 5 most recent should be unarchived, 2 oldest should be archived
         let mut project_a_entries: Vec<_> = list
             .iter()
-            .filter(|m| m.folder_paths == project_a_paths)
+            .filter(|m| *m.folder_paths() == project_a_paths)
             .collect();
         assert_eq!(project_a_entries.len(), 7);
         project_a_entries.sort_by(|a, b| b.updated_at.cmp(&a.updated_at));
@@ -1458,7 +1962,7 @@ mod tests {
         // Project B: all 3 should be unarchived (under the limit)
         let project_b_entries: Vec<_> = list
             .iter()
-            .filter(|m| m.folder_paths == project_b_paths)
+            .filter(|m| *m.folder_paths() == project_b_paths)
             .collect();
         assert_eq!(project_b_entries.len(), 3);
         assert!(project_b_entries.iter().all(|m| !m.archived));
@@ -1622,7 +2126,7 @@ mod tests {
             let without_worktree = store
                 .entry(&session_without_worktree)
                 .expect("missing metadata for thread without project association");
-            assert!(without_worktree.folder_paths.is_empty());
+            assert!(without_worktree.folder_paths().is_empty());
             assert!(
                 without_worktree.archived,
                 "expected thread without project association to be archived"
@@ -1632,7 +2136,7 @@ mod tests {
                 .entry(&session_with_worktree)
                 .expect("missing metadata for thread with project association");
             assert_eq!(
-                with_worktree.folder_paths,
+                *with_worktree.folder_paths(),
                 PathList::new(&[Path::new("/project-a")])
             );
             assert!(

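The invariant that the new `ThreadWorktreePaths` type maintains above — two `PathList`s kept in lockstep as `(main_worktree_path, folder_path)` pairs, with `add_path` deduplicating exact pairs and `remove_main_path` dropping entries from both lists — can be sketched in isolation. This is a minimal, hypothetical stand-in: the `PathList` and pair types below are simplified illustrations, not Zed's real types.

```rust
use std::path::{Path, PathBuf};

/// Hypothetical, simplified stand-in for Zed's `PathList`:
/// just a sorted, deduplicated list of paths.
#[derive(Clone, Debug, Default, PartialEq, Eq)]
struct PathList(Vec<PathBuf>);

impl PathList {
    fn new(paths: &[PathBuf]) -> Self {
        let mut v = paths.to_vec();
        v.sort();
        v.dedup();
        PathList(v)
    }
}

/// Sketch of the `ThreadWorktreePaths` invariant: pairs are stored in
/// insertion order, and both derived `PathList`s are rebuilt from the
/// same pair list, so they always describe the same set of pairs.
#[derive(Default)]
struct WorktreePairs {
    pairs: Vec<(PathBuf, PathBuf)>, // (main worktree path, folder path)
}

impl WorktreePairs {
    /// Adding an exact duplicate pair is a no-op, as in `add_path`.
    fn add_path(&mut self, main: &Path, folder: &Path) {
        let pair = (main.to_path_buf(), folder.to_path_buf());
        if !self.pairs.contains(&pair) {
            self.pairs.push(pair);
        }
    }

    /// Removes every pair whose main path matches, as in `remove_main_path`.
    fn remove_main_path(&mut self, main: &Path) {
        self.pairs.retain(|(m, _)| m.as_path() != main);
    }

    fn main_worktree_path_list(&self) -> PathList {
        PathList::new(&self.pairs.iter().map(|(m, _)| m.clone()).collect::<Vec<_>>())
    }

    fn folder_path_list(&self) -> PathList {
        PathList::new(&self.pairs.iter().map(|(_, f)| f.clone()).collect::<Vec<_>>())
    }
}
```

In the real type, `PathList` also records the original ordering so `ordered_pairs` can zip the two lists back into insertion order; this sketch stores the pairs directly for brevity.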
crates/agent_ui/src/thread_worktree_archive.rs

@@ -0,0 +1,728 @@
+use std::{
+    path::{Path, PathBuf},
+    sync::Arc,
+};
+
+use agent_client_protocol as acp;
+use anyhow::{Context as _, Result, anyhow};
+use gpui::{App, AsyncApp, Entity, Task};
+use project::{
+    LocalProjectFlags, Project, WorktreeId,
+    git_store::{Repository, resolve_git_worktree_to_main_repo},
+};
+use util::ResultExt;
+use workspace::{AppState, MultiWorkspace, Workspace};
+
+use crate::thread_metadata_store::{ArchivedGitWorktree, ThreadMetadataStore};
+
+/// The plan for archiving a single git worktree root.
+///
+/// A thread can have multiple folder paths open, so there may be multiple
+/// `RootPlan`s per archival operation. Each one captures everything needed to
+/// persist the worktree's git state and then remove it from disk.
+///
+/// All fields are gathered synchronously by [`build_root_plan`] while the
+/// worktree is still loaded in open projects. This is important because
+/// workspace removal tears down project and repository entities, making
+/// them unavailable for the later async persist/remove steps.
+#[derive(Clone)]
+pub struct RootPlan {
+    /// Absolute path of the git worktree on disk.
+    pub root_path: PathBuf,
+    /// Absolute path to the main git repository this worktree is linked to.
+    /// Used both for creating a git ref to prevent GC of WIP commits during
+    /// [`persist_worktree_state`], and for `git worktree remove` during
+    /// [`remove_root`].
+    pub main_repo_path: PathBuf,
+    /// Every open `Project` that has this worktree loaded, so they can all
+    /// call `remove_worktree` and release it during [`remove_root`].
+    /// Multiple projects can reference the same path when the user has the
+    /// worktree open in more than one workspace.
+    pub affected_projects: Vec<AffectedProject>,
+    /// The `Repository` entity for this worktree, used to run git commands
+    /// (create WIP commits, stage files, reset) during
+    /// [`persist_worktree_state`]. `None` when the `GitStore` hasn't created
+    /// a `Repository` for this worktree yet — in that case,
+    /// `persist_worktree_state` falls back to creating a temporary headless
+    /// project to obtain one.
+    pub worktree_repo: Option<Entity<Repository>>,
+    /// The branch the worktree was on, so it can be restored later.
+    /// `None` if the worktree was in detached HEAD state or if no
+    /// `Repository` entity was available at planning time (in which case
+    /// `persist_worktree_state` reads it from the repo snapshot instead).
+    pub branch_name: Option<String>,
+}
+
+/// A `Project` that references a worktree being archived, paired with the
+/// `WorktreeId` it uses for that worktree.
+///
+/// The same worktree path can appear in multiple open workspaces/projects
+/// (e.g. when the user has two windows open that both include the same
+/// linked worktree). Each one needs to call `remove_worktree` and wait for
+/// the release during [`remove_root`], otherwise the project would still
+/// hold a reference to the directory and `git worktree remove` would fail.
+#[derive(Clone)]
+pub struct AffectedProject {
+    pub project: Entity<Project>,
+    pub worktree_id: WorktreeId,
+}
+
+fn archived_worktree_ref_name(id: i64) -> String {
+    format!("refs/archived-worktrees/{}", id)
+}
+
+/// Builds a [`RootPlan`] for archiving the git worktree at `path`.
+///
+/// This is a synchronous planning step that must run *before* any workspace
+/// removal, because it needs live project and repository entities that are
+/// torn down when a workspace is removed. It does three things:
+///
+/// 1. Finds every `Project` across all open workspaces that has this
+///    worktree loaded (`affected_projects`).
+/// 2. Looks for a `Repository` entity whose snapshot identifies this path
+///    as a linked worktree (`worktree_repo`), which is needed for the git
+///    operations in [`persist_worktree_state`].
+/// 3. Determines the `main_repo_path` — the parent repo that owns this
+///    linked worktree — needed for both git ref creation and
+///    `git worktree remove`.
+///
+/// When no `Repository` entity is available (e.g. the `GitStore` hasn't
+/// finished scanning), the function falls back to deriving `main_repo_path`
+/// from the worktree snapshot's `root_repo_common_dir`. In that case
+/// `worktree_repo` is `None` and [`persist_worktree_state`] will create a
+/// temporary headless project to obtain one.
+///
+/// Returns `None` if no open project has this path as a visible worktree.
+pub fn build_root_plan(
+    path: &Path,
+    workspaces: &[Entity<Workspace>],
+    cx: &App,
+) -> Option<RootPlan> {
+    let path = path.to_path_buf();
+
+    let affected_projects = workspaces
+        .iter()
+        .filter_map(|workspace| {
+            let project = workspace.read(cx).project().clone();
+            let worktree = project
+                .read(cx)
+                .visible_worktrees(cx)
+                .find(|worktree| worktree.read(cx).abs_path().as_ref() == path.as_path())?;
+            let worktree_id = worktree.read(cx).id();
+            Some(AffectedProject {
+                project,
+                worktree_id,
+            })
+        })
+        .collect::<Vec<_>>();
+
+    if affected_projects.is_empty() {
+        return None;
+    }
+
+    let linked_repo = workspaces
+        .iter()
+        .flat_map(|workspace| {
+            workspace
+                .read(cx)
+                .project()
+                .read(cx)
+                .repositories(cx)
+                .values()
+                .cloned()
+                .collect::<Vec<_>>()
+        })
+        .find_map(|repo| {
+            let snapshot = repo.read(cx).snapshot();
+            (snapshot.is_linked_worktree()
+                && snapshot.work_directory_abs_path.as_ref() == path.as_path())
+            .then_some((snapshot, repo))
+        });
+
+    let matching_worktree_snapshot = workspaces.iter().find_map(|workspace| {
+        workspace
+            .read(cx)
+            .project()
+            .read(cx)
+            .visible_worktrees(cx)
+            .find(|worktree| worktree.read(cx).abs_path().as_ref() == path.as_path())
+            .map(|worktree| worktree.read(cx).snapshot())
+    });
+
+    let (main_repo_path, worktree_repo, branch_name) =
+        if let Some((linked_snapshot, repo)) = linked_repo {
+            (
+                linked_snapshot.original_repo_abs_path.to_path_buf(),
+                Some(repo),
+                linked_snapshot
+                    .branch
+                    .as_ref()
+                    .map(|branch| branch.name().to_string()),
+            )
+        } else {
+            let main_repo_path = matching_worktree_snapshot
+                .as_ref()?
+                .root_repo_common_dir()
+                .and_then(|dir| dir.parent())?
+                .to_path_buf();
+            (main_repo_path, None, None)
+        };
+
+    Some(RootPlan {
+        root_path: path,
+        main_repo_path,
+        affected_projects,
+        worktree_repo,
+        branch_name,
+    })
+}
+
+/// Returns `true` if any unarchived thread other than `current_session_id`
+/// references `path` in its folder paths. Used to determine whether a
+/// worktree can safely be removed from disk.
+pub fn path_is_referenced_by_other_unarchived_threads(
+    current_session_id: &acp::SessionId,
+    path: &Path,
+    cx: &App,
+) -> bool {
+    ThreadMetadataStore::global(cx)
+        .read(cx)
+        .entries()
+        .filter(|thread| thread.session_id != *current_session_id)
+        .filter(|thread| !thread.archived)
+        .any(|thread| {
+            thread
+                .folder_paths()
+                .paths()
+                .iter()
+                .any(|other_path| other_path.as_path() == path)
+        })
+}
+
+/// Removes a worktree from all affected projects and deletes it from disk
+/// via `git worktree remove`.
+///
+/// This is the destructive counterpart to [`persist_worktree_state`]. It
+/// first detaches the worktree from every [`AffectedProject`], waits for
+/// each project to fully release it, then asks the main repository to
+/// delete the worktree directory. If the git removal fails, the worktree
+/// is re-added to each project via [`rollback_root`].
+pub async fn remove_root(root: RootPlan, cx: &mut AsyncApp) -> Result<()> {
+    let release_tasks: Vec<_> = root
+        .affected_projects
+        .iter()
+        .map(|affected| {
+            let project = affected.project.clone();
+            let worktree_id = affected.worktree_id;
+            project.update(cx, |project, cx| {
+                let wait = project.wait_for_worktree_release(worktree_id, cx);
+                project.remove_worktree(worktree_id, cx);
+                wait
+            })
+        })
+        .collect();
+
+    if let Err(error) = remove_root_after_worktree_removal(&root, release_tasks, cx).await {
+        rollback_root(&root, cx).await;
+        return Err(error);
+    }
+
+    Ok(())
+}
+
+async fn remove_root_after_worktree_removal(
+    root: &RootPlan,
+    release_tasks: Vec<Task<Result<()>>>,
+    cx: &mut AsyncApp,
+) -> Result<()> {
+    for task in release_tasks {
+        if let Err(error) = task.await {
+            log::error!("Failed waiting for worktree release: {error:#}");
+        }
+    }
+
+    let (repo, _temp_project) = find_or_create_repository(&root.main_repo_path, cx).await?;
+    // force=true is required because the working directory is still dirty
+    // — persist_worktree_state captures state into detached commits without
+    // modifying the real index or working tree, so git refuses to delete
+    // the worktree without --force.
+    let receiver = repo.update(cx, |repo: &mut Repository, _cx| {
+        repo.remove_worktree(root.root_path.clone(), true)
+    });
+    let result = receiver
+        .await
+        .map_err(|_| anyhow!("git worktree removal was canceled"))?;
+    // Keep _temp_project alive until after the await so the headless project isn't dropped mid-operation
+    drop(_temp_project);
+    result
+}
+
+/// Finds a live `Repository` entity for the given path, or creates a temporary
+/// `Project::local` to obtain one.
+///
+/// `Repository` entities can only be obtained through a `Project` because
+/// `GitStore` (which creates and manages `Repository` entities) is owned by
+/// `Project`. When no open workspace contains the repo we need, we spin up a
+/// headless `Project::local` just to get a `Repository` handle. The caller
+/// keeps the returned `Option<Entity<Project>>` alive for the duration of the
+/// git operations, then drops it.
+///
+/// Future improvement: decoupling `GitStore` from `Project` so that
+/// `Repository` entities can be created standalone would eliminate this
+/// temporary-project workaround.
+async fn find_or_create_repository(
+    repo_path: &Path,
+    cx: &mut AsyncApp,
+) -> Result<(Entity<Repository>, Option<Entity<Project>>)> {
+    let repo_path_owned = repo_path.to_path_buf();
+    let live_repo = cx.update(|cx| {
+        all_open_workspaces(cx)
+            .into_iter()
+            .flat_map(|workspace| {
+                workspace
+                    .read(cx)
+                    .project()
+                    .read(cx)
+                    .repositories(cx)
+                    .values()
+                    .cloned()
+                    .collect::<Vec<_>>()
+            })
+            .find(|repo| {
+                repo.read(cx).snapshot().work_directory_abs_path.as_ref()
+                    == repo_path_owned.as_path()
+            })
+    });
+
+    if let Some(repo) = live_repo {
+        return Ok((repo, None));
+    }
+
+    let app_state =
+        current_app_state(cx).context("no app state available for temporary project")?;
+    let temp_project = cx.update(|cx| {
+        Project::local(
+            app_state.client.clone(),
+            app_state.node_runtime.clone(),
+            app_state.user_store.clone(),
+            app_state.languages.clone(),
+            app_state.fs.clone(),
+            None,
+            LocalProjectFlags::default(),
+            cx,
+        )
+    });
+
+    let repo_path_for_worktree = repo_path.to_path_buf();
+    let create_worktree = temp_project.update(cx, |project, cx| {
+        project.create_worktree(repo_path_for_worktree, true, cx)
+    });
+    let _worktree = create_worktree.await?;
+    let initial_scan = temp_project.read_with(cx, |project, cx| project.wait_for_initial_scan(cx));
+    initial_scan.await;
+
+    let repo_path_for_find = repo_path.to_path_buf();
+    let repo = temp_project
+        .update(cx, |project, cx| {
+            project
+                .repositories(cx)
+                .values()
+                .find(|repo| {
+                    repo.read(cx).snapshot().work_directory_abs_path.as_ref()
+                        == repo_path_for_find.as_path()
+                })
+                .cloned()
+        })
+        .context("failed to resolve temporary repository handle")?;
+
+    let barrier = repo.update(cx, |repo: &mut Repository, _cx| repo.barrier());
+    barrier
+        .await
+        .map_err(|_| anyhow!("temporary repository barrier canceled"))?;
+    Ok((repo, Some(temp_project)))
+}
+
+/// Re-adds the worktree to every affected project after a failed
+/// [`remove_root`].
+async fn rollback_root(root: &RootPlan, cx: &mut AsyncApp) {
+    for affected in &root.affected_projects {
+        let task = affected.project.update(cx, |project, cx| {
+            project.create_worktree(root.root_path.clone(), true, cx)
+        });
+        task.await.log_err();
+    }
+}
+
+/// Saves the worktree's full git state so it can be restored later.
+///
+/// This creates two detached commits (via [`create_archive_checkpoint`] on
+/// the `GitRepository` trait) that capture the staged and unstaged state
+/// without moving any branch ref. The commits are:
+///   - "WIP staged": a tree matching the current index, parented on HEAD
+///   - "WIP unstaged": a tree with all files (including untracked),
+///     parented on the staged commit
+///
+/// After creating the commits, this function:
+///   1. Records the commit SHAs, branch name, and paths in a DB record.
+///   2. Links every thread referencing this worktree to that record.
+///   3. Creates a git ref on the main repo to prevent GC of the commits.
+///
+/// On success, returns the archived worktree DB row ID for rollback.
+pub async fn persist_worktree_state(root: &RootPlan, cx: &mut AsyncApp) -> Result<i64> {
+    let (worktree_repo, _temp_worktree_project) = match &root.worktree_repo {
+        Some(worktree_repo) => (worktree_repo.clone(), None),
+        None => find_or_create_repository(&root.root_path, cx).await?,
+    };
+
+    let original_commit_hash = worktree_repo
+        .update(cx, |repo, _cx| repo.head_sha())
+        .await
+        .map_err(|_| anyhow!("head_sha canceled"))?
+        .context("failed to read original HEAD SHA")?
+        .context("HEAD SHA is None")?;
+
+    // Create two detached WIP commits without moving the branch.
+    let checkpoint_rx = worktree_repo.update(cx, |repo, _cx| repo.create_archive_checkpoint());
+    let (staged_commit_hash, unstaged_commit_hash) = checkpoint_rx
+        .await
+        .map_err(|_| anyhow!("create_archive_checkpoint canceled"))?
+        .context("failed to create archive checkpoint")?;
+
+    // Create DB record
+    let store = cx.update(|cx| ThreadMetadataStore::global(cx));
+    let worktree_path_str = root.root_path.to_string_lossy().to_string();
+    let main_repo_path_str = root.main_repo_path.to_string_lossy().to_string();
+    let branch_name = root.branch_name.clone().or_else(|| {
+        worktree_repo.read_with(cx, |repo, _cx| {
+            repo.snapshot()
+                .branch
+                .as_ref()
+                .map(|branch| branch.name().to_string())
+        })
+    });
+
+    let db_result = store
+        .read_with(cx, |store, cx| {
+            store.create_archived_worktree(
+                worktree_path_str.clone(),
+                main_repo_path_str.clone(),
+                branch_name.clone(),
+                staged_commit_hash.clone(),
+                unstaged_commit_hash.clone(),
+                original_commit_hash.clone(),
+                cx,
+            )
+        })
+        .await
+        .context("failed to create archived worktree DB record");
+    let archived_worktree_id = db_result?;
+
+    // Link all threads on this worktree to the archived record
+    let session_ids: Vec<acp::SessionId> = store.read_with(cx, |store, _cx| {
+        store
+            .entries()
+            .filter(|thread| {
+                thread
+                    .folder_paths()
+                    .paths()
+                    .iter()
+                    .any(|p| p.as_path() == root.root_path)
+            })
+            .map(|thread| thread.session_id.clone())
+            .collect()
+    });
+
+    for session_id in &session_ids {
+        let link_result = store
+            .read_with(cx, |store, cx| {
+                store.link_thread_to_archived_worktree(
+                    session_id.0.to_string(),
+                    archived_worktree_id,
+                    cx,
+                )
+            })
+            .await;
+        if let Err(error) = link_result {
+            if let Err(delete_error) = store
+                .read_with(cx, |store, cx| {
+                    store.delete_archived_worktree(archived_worktree_id, cx)
+                })
+                .await
+            {
+                log::error!(
+                    "Failed to delete archived worktree DB record during link rollback: \
+                     {delete_error:#}"
+                );
+            }
+            return Err(error.context("failed to link thread to archived worktree"));
+        }
+    }
+
+    // Create git ref on main repo to prevent GC of the detached commits.
+    // Failure here is fatal: without the ref, git gc would eventually
+    // collect the WIP commits and a later restore would silently fail.
+    let ref_name = archived_worktree_ref_name(archived_worktree_id);
+    let (main_repo, _temp_project) = find_or_create_repository(&root.main_repo_path, cx)
+        .await
+        .context("could not open main repo to create archive ref")?;
+    let rx = main_repo.update(cx, |repo, _cx| {
+        repo.update_ref(ref_name.clone(), unstaged_commit_hash.clone())
+    });
+    rx.await
+        .map_err(|_| anyhow!("update_ref canceled"))
+        .and_then(|r| r)
+        .with_context(|| format!("failed to create ref {ref_name} on main repo"))?;
+    drop(_temp_project);
+
+    Ok(archived_worktree_id)
+}
+
+/// Undoes a successful [`persist_worktree_state`] by deleting the git ref
+/// on the main repo and removing the DB record. Since the WIP commits are
+/// detached (they don't move any branch), no git reset is needed — the
+/// commits will be garbage-collected once the ref is removed.
+pub async fn rollback_persist(archived_worktree_id: i64, root: &RootPlan, cx: &mut AsyncApp) {
+    // Delete the git ref on main repo
+    if let Ok((main_repo, _temp_project)) =
+        find_or_create_repository(&root.main_repo_path, cx).await
+    {
+        let ref_name = archived_worktree_ref_name(archived_worktree_id);
+        let rx = main_repo.update(cx, |repo, _cx| repo.delete_ref(ref_name));
+        rx.await.ok().and_then(|r| r.log_err());
+        drop(_temp_project);
+    }
+
+    // Delete the DB record
+    let store = cx.update(|cx| ThreadMetadataStore::global(cx));
+    if let Err(error) = store
+        .read_with(cx, |store, cx| {
+            store.delete_archived_worktree(archived_worktree_id, cx)
+        })
+        .await
+    {
+        log::error!("Failed to delete archived worktree DB record during rollback: {error:#}");
+    }
+}
+
+/// Restores a previously archived worktree back to disk from its DB record.
+///
+/// Creates the git worktree at the original commit (the branch never moved
+/// during archival since WIP commits are detached), switches to the branch,
+/// then uses [`restore_archive_checkpoint`] to reconstruct the staged/
+/// unstaged state from the WIP commit trees.
+pub async fn restore_worktree_via_git(
+    row: &ArchivedGitWorktree,
+    cx: &mut AsyncApp,
+) -> Result<PathBuf> {
+    let (main_repo, _temp_project) = find_or_create_repository(&row.main_repo_path, cx).await?;
+
+    let worktree_path = &row.worktree_path;
+    let app_state = current_app_state(cx).context("no app state available")?;
+    let already_exists = app_state.fs.metadata(worktree_path).await?.is_some();
+
+    let created_new_worktree = if already_exists {
+        let is_git_worktree =
+            resolve_git_worktree_to_main_repo(app_state.fs.as_ref(), worktree_path)
+                .await
+                .is_some();
+
+        if !is_git_worktree {
+            let rx = main_repo.update(cx, |repo, _cx| repo.repair_worktrees());
+            rx.await
+                .map_err(|_| anyhow!("worktree repair was canceled"))?
+                .context("failed to repair worktrees")?;
+        }
+        false
+    } else {
+        // Create worktree at the original commit — the branch still points
+        // here because archival used detached commits.
+        let rx = main_repo.update(cx, |repo, _cx| {
+            repo.create_worktree_detached(worktree_path.clone(), row.original_commit_hash.clone())
+        });
+        rx.await
+            .map_err(|_| anyhow!("worktree creation was canceled"))?
+            .context("failed to create worktree")?;
+        true
+    };
+
+    let (wt_repo, _temp_wt_project) = match find_or_create_repository(worktree_path, cx).await {
+        Ok(result) => result,
+        Err(error) => {
+            remove_new_worktree_on_error(created_new_worktree, &main_repo, worktree_path, cx).await;
+            return Err(error);
+        }
+    };
+
+    // Switch to the branch. Since the branch was never moved during
+    // archival (WIP commits are detached), it still points at
+    // original_commit_hash, so attaching HEAD to it leaves the working
+    // tree contents unchanged.
+    if let Some(branch_name) = &row.branch_name {
+        let rx = wt_repo.update(cx, |repo, _cx| repo.change_branch(branch_name.clone()));
+        if let Err(checkout_error) = rx.await.map_err(|e| anyhow!("{e}")).and_then(|r| r) {
+            log::debug!(
+                "change_branch('{}') failed: {checkout_error:#}, trying create_branch",
+                branch_name
+            );
+            let rx = wt_repo.update(cx, |repo, _cx| {
+                repo.create_branch(branch_name.clone(), None)
+            });
+            if let Ok(Err(error)) | Err(error) = rx.await.map_err(|e| anyhow!("{e}")) {
+                log::warn!(
+                    "Could not create branch '{}': {error} — \
+                     restored worktree will be in detached HEAD state.",
+                    branch_name
+                );
+            }
+        }
+    }
+
+    // Restore the staged/unstaged state from the WIP commit trees.
+    // read-tree --reset -u applies the unstaged tree (including deletions)
+    // to the working directory, then a bare read-tree sets the index to
+    // the staged tree without touching the working directory.
+    let restore_rx = wt_repo.update(cx, |repo, _cx| {
+        repo.restore_archive_checkpoint(
+            row.staged_commit_hash.clone(),
+            row.unstaged_commit_hash.clone(),
+        )
+    });
+    if let Err(error) = restore_rx
+        .await
+        .map_err(|_| anyhow!("restore_archive_checkpoint canceled"))
+        .and_then(|r| r)
+    {
+        remove_new_worktree_on_error(created_new_worktree, &main_repo, worktree_path, cx).await;
+        return Err(error.context("failed to restore archive checkpoint"));
+    }
+
+    Ok(worktree_path.clone())
+}
+
+async fn remove_new_worktree_on_error(
+    created_new_worktree: bool,
+    main_repo: &Entity<Repository>,
+    worktree_path: &PathBuf,
+    cx: &mut AsyncApp,
+) {
+    if created_new_worktree {
+        let rx = main_repo.update(cx, |repo, _cx| {
+            repo.remove_worktree(worktree_path.clone(), true)
+        });
+        rx.await.ok().and_then(|r| r.log_err());
+    }
+}
+
+/// Deletes the git ref and DB records for a single archived worktree.
+/// Used when an archived worktree is no longer referenced by any thread.
+pub async fn cleanup_archived_worktree_record(row: &ArchivedGitWorktree, cx: &mut AsyncApp) {
+    // Delete the git ref from the main repo
+    if let Ok((main_repo, _temp_project)) = find_or_create_repository(&row.main_repo_path, cx).await
+    {
+        let ref_name = archived_worktree_ref_name(row.id);
+        let rx = main_repo.update(cx, |repo, _cx| repo.delete_ref(ref_name));
+        match rx.await {
+            Ok(Ok(())) => {}
+            Ok(Err(error)) => log::warn!("Failed to delete archive ref: {error}"),
+            Err(_) => log::warn!("Archive ref deletion was canceled"),
+        }
+        // Keep _temp_project alive until after the await so the headless project isn't dropped mid-operation
+        drop(_temp_project);
+    }
+
+    // Delete the DB records
+    let store = cx.update(|cx| ThreadMetadataStore::global(cx));
+    store
+        .read_with(cx, |store, cx| store.delete_archived_worktree(row.id, cx))
+        .await
+        .log_err();
+}
+
+/// Cleans up all archived worktree data associated with a thread being deleted.
+///
+/// This unlinks the thread from all its archived worktrees and, for any
+/// archived worktree that is no longer referenced by any other thread,
+/// deletes the git ref and DB records.
+pub async fn cleanup_thread_archived_worktrees(session_id: &acp::SessionId, cx: &mut AsyncApp) {
+    let store = cx.update(|cx| ThreadMetadataStore::global(cx));
+
+    let archived_worktrees = store
+        .read_with(cx, |store, cx| {
+            store.get_archived_worktrees_for_thread(session_id.0.to_string(), cx)
+        })
+        .await;
+    let archived_worktrees = match archived_worktrees {
+        Ok(rows) => rows,
+        Err(error) => {
+            log::error!(
+                "Failed to fetch archived worktrees for thread {}: {error:#}",
+                session_id.0
+            );
+            return;
+        }
+    };
+
+    if archived_worktrees.is_empty() {
+        return;
+    }
+
+    if let Err(error) = store
+        .read_with(cx, |store, cx| {
+            store.unlink_thread_from_all_archived_worktrees(session_id.0.to_string(), cx)
+        })
+        .await
+    {
+        log::error!(
+            "Failed to unlink thread {} from archived worktrees: {error:#}",
+            session_id.0
+        );
+        return;
+    }
+
+    for row in &archived_worktrees {
+        let still_referenced = store
+            .read_with(cx, |store, cx| {
+                store.is_archived_worktree_referenced(row.id, cx)
+            })
+            .await;
+        match still_referenced {
+            Ok(true) => {}
+            Ok(false) => {
+                cleanup_archived_worktree_record(row, cx).await;
+            }
+            Err(error) => {
+                log::error!(
+                    "Failed to check if archived worktree {} is still referenced: {error:#}",
+                    row.id
+                );
+            }
+        }
+    }
+}
+
+/// Collects every `Workspace` entity across all open `MultiWorkspace` windows.
+pub fn all_open_workspaces(cx: &App) -> Vec<Entity<Workspace>> {
+    cx.windows()
+        .into_iter()
+        .filter_map(|window| window.downcast::<MultiWorkspace>())
+        .flat_map(|multi_workspace| {
+            multi_workspace
+                .read(cx)
+                .map(|multi_workspace| multi_workspace.workspaces().cloned().collect::<Vec<_>>())
+                .unwrap_or_default()
+        })
+        .collect()
+}
+
+fn current_app_state(cx: &mut AsyncApp) -> Option<Arc<AppState>> {
+    cx.update(|cx| {
+        all_open_workspaces(cx)
+            .into_iter()
+            .next()
+            .map(|workspace| workspace.read(cx).app_state().clone())
+    })
+}
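The `refs/archived-worktrees/<id>` naming above ties each archived worktree's GC-protection ref to its DB row id. A minimal standalone sketch of that scheme, with a hypothetical inverse parser (not part of this PR) of the kind that could be used when scanning refs during cleanup:

```rust
fn archived_worktree_ref_name(id: i64) -> String {
    format!("refs/archived-worktrees/{}", id)
}

// Hypothetical inverse: recover the DB row id from a ref name,
// returning None for refs outside the archived-worktrees namespace.
fn parse_archived_worktree_ref(ref_name: &str) -> Option<i64> {
    ref_name
        .strip_prefix("refs/archived-worktrees/")?
        .parse()
        .ok()
}

fn main() {
    let name = archived_worktree_ref_name(42);
    assert_eq!(name, "refs/archived-worktrees/42");
    assert_eq!(parse_archived_worktree_ref(&name), Some(42));
    assert_eq!(parse_archived_worktree_ref("refs/heads/main"), None);
}
```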

crates/agent_ui/src/threads_archive_view.rs 🔗

@@ -26,7 +26,7 @@ use picker::{
 use project::{AgentId, AgentServerStore};
 use settings::Settings as _;
 use theme::ActiveTheme;
-use ui::ThreadItem;
+use ui::{AgentThreadStatus, ThreadItem};
 use ui::{
     Divider, KeyBinding, ListItem, ListItemSpacing, ListSubHeader, Tooltip, WithScrollbar,
     prelude::*, utils::platform_title_bar_height,
@@ -113,6 +113,7 @@ fn fuzzy_match_positions(query: &str, text: &str) -> Option<Vec<usize>> {
 pub enum ThreadsArchiveViewEvent {
     Close,
     Unarchive { thread: ThreadMetadata },
+    CancelRestore { session_id: acp::SessionId },
 }
 
 impl EventEmitter<ThreadsArchiveViewEvent> for ThreadsArchiveView {}
@@ -131,6 +132,7 @@ pub struct ThreadsArchiveView {
     workspace: WeakEntity<Workspace>,
     agent_connection_store: WeakEntity<AgentConnectionStore>,
     agent_server_store: WeakEntity<AgentServerStore>,
+    restoring: HashSet<acp::SessionId>,
 }
 
 impl ThreadsArchiveView {
@@ -199,6 +201,7 @@ impl ThreadsArchiveView {
             workspace,
             agent_connection_store,
             agent_server_store,
+            restoring: HashSet::default(),
         };
 
         this.update_items(cx);
@@ -213,6 +216,16 @@ impl ThreadsArchiveView {
         self.selection = None;
     }
 
+    pub fn mark_restoring(&mut self, session_id: &acp::SessionId, cx: &mut Context<Self>) {
+        self.restoring.insert(session_id.clone());
+        cx.notify();
+    }
+
+    pub fn clear_restoring(&mut self, session_id: &acp::SessionId, cx: &mut Context<Self>) {
+        self.restoring.remove(session_id);
+        cx.notify();
+    }
+
     pub fn focus_filter_editor(&self, window: &mut Window, cx: &mut App) {
         let handle = self.filter_editor.read(cx).focus_handle(cx);
         handle.focus(window, cx);
@@ -323,11 +336,16 @@ impl ThreadsArchiveView {
         window: &mut Window,
         cx: &mut Context<Self>,
     ) {
-        if thread.folder_paths.is_empty() {
+        if self.restoring.contains(&thread.session_id) {
+            return;
+        }
+
+        if thread.folder_paths().is_empty() {
             self.show_project_picker_for_thread(thread, window, cx);
             return;
         }
 
+        self.mark_restoring(&thread.session_id, cx);
         self.selection = None;
         self.reset_filter_editor_text(window, cx);
         cx.emit(ThreadsArchiveViewEvent::Unarchive { thread });
@@ -510,14 +528,16 @@ impl ThreadsArchiveView {
                     IconName::Sparkle
                 };
 
-                ThreadItem::new(id, thread.title.clone())
+                let is_restoring = self.restoring.contains(&thread.session_id);
+
+                let base = ThreadItem::new(id, thread.title.clone())
                     .icon(icon)
                     .when_some(icon_from_external_svg, |this, svg| {
                         this.custom_icon_from_external_svg(svg)
                     })
                     .timestamp(timestamp)
                     .highlight_positions(highlight_positions.clone())
-                    .project_paths(thread.folder_paths.paths_owned())
+                    .project_paths(thread.folder_paths().paths_owned())
                     .focused(is_focused)
                     .hovered(is_hovered)
                     .on_hover(cx.listener(move |this, is_hovered, _window, cx| {
@@ -527,8 +547,31 @@ impl ThreadsArchiveView {
                             this.hovered_index = None;
                         }
                         cx.notify();
-                    }))
-                    .action_slot(
+                    }));
+
+                if is_restoring {
+                    base.status(AgentThreadStatus::Running)
+                        .action_slot(
+                            IconButton::new("cancel-restore", IconName::Close)
+                                .style(ButtonStyle::Filled)
+                                .icon_size(IconSize::Small)
+                                .icon_color(Color::Muted)
+                                .tooltip(Tooltip::text("Cancel Restore"))
+                                .on_click({
+                                    let session_id = thread.session_id.clone();
+                                    cx.listener(move |this, _, _, cx| {
+                                        this.clear_restoring(&session_id, cx);
+                                        cx.emit(ThreadsArchiveViewEvent::CancelRestore {
+                                            session_id: session_id.clone(),
+                                        });
+                                        cx.stop_propagation();
+                                    })
+                                }),
+                        )
+                        .tooltip(Tooltip::text("Restoring\u{2026}"))
+                        .into_any_element()
+                } else {
+                    base.action_slot(
                         IconButton::new("delete-thread", IconName::Trash)
                             .style(ButtonStyle::Filled)
                             .icon_size(IconSize::Small)
@@ -561,6 +604,7 @@ impl ThreadsArchiveView {
                         })
                     })
                     .into_any_element()
+                }
             }
         }
     }
@@ -603,6 +647,9 @@ impl ThreadsArchiveView {
                 .wait_for_connection()
         });
         cx.spawn(async move |_this, cx| {
+            crate::thread_worktree_archive::cleanup_thread_archived_worktrees(&session_id, cx)
+                .await;
+
             let state = task.await?;
             let task = cx.update(|cx| {
                 if let Some(list) = state.connection.session_list(cx) {
@@ -883,7 +930,8 @@ impl ProjectPickerDelegate {
         window: &mut Window,
         cx: &mut Context<Picker<Self>>,
     ) {
-        self.thread.folder_paths = paths.clone();
+        self.thread.worktree_paths =
+            super::thread_metadata_store::ThreadWorktreePaths::from_folder_paths(&paths);
         ThreadMetadataStore::global(cx).update(cx, |store, cx| {
             store.update_working_directories(&self.thread.session_id, paths, cx);
         });

crates/collab/src/db.rs 🔗

@@ -532,6 +532,7 @@ impl RejoinedProject {
                     root_name: worktree.root_name.clone(),
                     visible: worktree.visible,
                     abs_path: worktree.abs_path.clone(),
+                    root_repo_common_dir: None,
                 })
                 .collect(),
             collaborators: self

crates/collab/src/rpc.rs 🔗

@@ -1894,6 +1894,7 @@ async fn join_project(
             root_name: worktree.root_name.clone(),
             visible: worktree.visible,
             abs_path: worktree.abs_path.clone(),
+            root_repo_common_dir: None,
         })
         .collect::<Vec<_>>();
 

crates/fs/src/fake_git_repo.rs 🔗

@@ -1179,6 +1179,39 @@ impl GitRepository for FakeGitRepository {
         .boxed()
     }
 
+    fn create_archive_checkpoint(&self) -> BoxFuture<'_, Result<(String, String)>> {
+        let executor = self.executor.clone();
+        let fs = self.fs.clone();
+        let checkpoints = self.checkpoints.clone();
+        let repository_dir_path = self.repository_dir_path.parent().unwrap().to_path_buf();
+        async move {
+            executor.simulate_random_delay().await;
+            let staged_oid = git::Oid::random(&mut *executor.rng().lock());
+            let unstaged_oid = git::Oid::random(&mut *executor.rng().lock());
+            let entry = fs.entry(&repository_dir_path)?;
+            checkpoints.lock().insert(staged_oid, entry.clone());
+            checkpoints.lock().insert(unstaged_oid, entry);
+            Ok((staged_oid.to_string(), unstaged_oid.to_string()))
+        }
+        .boxed()
+    }
+
+    fn restore_archive_checkpoint(
+        &self,
+        // The fake filesystem doesn't model a separate index, so only the
+        // unstaged (full working directory) snapshot is restored.
+        _staged_sha: String,
+        unstaged_sha: String,
+    ) -> BoxFuture<'_, Result<()>> {
+        match unstaged_sha.parse() {
+            Ok(commit_sha) => self.restore_checkpoint(GitRepositoryCheckpoint { commit_sha }),
+            Err(error) => async move {
+                Err(anyhow::anyhow!(error).context("failed to parse unstaged SHA as Oid"))
+            }
+            .boxed(),
+        }
+    }
+
     fn compare_checkpoints(
         &self,
         left: GitRepositoryCheckpoint,

crates/git/src/repository.rs 🔗

@@ -916,6 +916,20 @@ pub trait GitRepository: Send + Sync {
     /// Resets to a previously-created checkpoint.
     fn restore_checkpoint(&self, checkpoint: GitRepositoryCheckpoint) -> BoxFuture<'_, Result<()>>;
 
+    /// Creates two detached commits capturing the current staged and unstaged
+    /// state without moving any branch. Returns (staged_sha, unstaged_sha).
+    fn create_archive_checkpoint(&self) -> BoxFuture<'_, Result<(String, String)>>;
+
+    /// Restores the working directory and index from archive checkpoint SHAs.
+    /// Assumes HEAD is already at the correct commit (original_commit_hash).
+    /// Restores the index to match staged_sha's tree, and the working
+    /// directory to match unstaged_sha's tree.
+    fn restore_archive_checkpoint(
+        &self,
+        staged_sha: String,
+        unstaged_sha: String,
+    ) -> BoxFuture<'_, Result<()>>;
+
     /// Compares two checkpoints, returning true if they are equal
     fn compare_checkpoints(
         &self,
@@ -2607,6 +2621,90 @@ impl GitRepository for RealGitRepository {
             .boxed()
     }
 
+    fn create_archive_checkpoint(&self) -> BoxFuture<'_, Result<(String, String)>> {
+        let git_binary = self.git_binary();
+        self.executor
+            .spawn(async move {
+                let mut git = git_binary?.envs(checkpoint_author_envs());
+
+                let head_sha = git
+                    .run(&["rev-parse", "HEAD"])
+                    .await
+                    .context("failed to read HEAD")?;
+
+                // Capture the staged state: write-tree reads the current index.
+                let staged_tree = git
+                    .run(&["write-tree"])
+                    .await
+                    .context("failed to write staged tree")?;
+                let staged_sha = git
+                    .run(&[
+                        "commit-tree",
+                        &staged_tree,
+                        "-p",
+                        &head_sha,
+                        "-m",
+                        "WIP staged",
+                    ])
+                    .await
+                    .context("failed to create staged commit")?;
+
+                // Capture the full state (staged + unstaged + untracked) using
+                // a temporary index so we don't disturb the real one.
+                let unstaged_sha = git
+                    .with_temp_index(async |git| {
+                        git.run(&["add", "--all"]).await?;
+                        let full_tree = git.run(&["write-tree"]).await?;
+                        let sha = git
+                            .run(&[
+                                "commit-tree",
+                                &full_tree,
+                                "-p",
+                                &staged_sha,
+                                "-m",
+                                "WIP unstaged",
+                            ])
+                            .await?;
+                        Ok(sha)
+                    })
+                    .await
+                    .context("failed to create unstaged commit")?;
+
+                Ok((staged_sha, unstaged_sha))
+            })
+            .boxed()
+    }
+
+    fn restore_archive_checkpoint(
+        &self,
+        staged_sha: String,
+        unstaged_sha: String,
+    ) -> BoxFuture<'_, Result<()>> {
+        let git_binary = self.git_binary();
+        self.executor
+            .spawn(async move {
+                let git = git_binary?;
+
+                // First, set the index AND working tree to match the unstaged
+                // tree. --reset -u computes a tree-level diff between the
+                // current index and unstaged_sha's tree and applies additions,
+                // modifications, and deletions to the working directory.
+                git.run(&["read-tree", "--reset", "-u", &unstaged_sha])
+                    .await
+                    .context("failed to restore working directory from unstaged commit")?;
+
+                // Then replace just the index with the staged tree. Without -u
+                // this doesn't touch the working directory, so the result is:
+                // working tree = unstaged state, index = staged state.
+                git.run(&["read-tree", &staged_sha])
+                    .await
+                    .context("failed to restore index from staged commit")?;
+
+                Ok(())
+            })
+            .boxed()
+    }
+
     fn compare_checkpoints(
         &self,
         left: GitRepositoryCheckpoint,
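
The plumbing sequence used by `create_archive_checkpoint` and `restore_archive_checkpoint` can be reproduced directly with the git CLI. The following is a rough sketch in a throwaway repository, assuming `git` is on `PATH`; the `GIT_INDEX_FILE` override stands in for `with_temp_index`:

```shell
#!/usr/bin/env bash
# Sketch: two detached commits capture the staged and full (staged +
# unstaged + untracked) state without moving any branch, then a pair of
# read-tree calls restores them.
set -eu
export GIT_AUTHOR_NAME=t GIT_AUTHOR_EMAIL=t@t \
       GIT_COMMITTER_NAME=t GIT_COMMITTER_EMAIL=t@t

repo="$(mktemp -d)"
cd "$repo"
git init -q
git commit -q --allow-empty -m init

echo staged > a.txt && git add a.txt   # staged change
echo untracked > b.txt                 # untracked, only in the full snapshot

head_sha=$(git rev-parse HEAD)

# Staged snapshot: write-tree serializes the current index.
staged_tree=$(git write-tree)
staged_sha=$(git commit-tree "$staged_tree" -p "$head_sha" -m "WIP staged")

# Full snapshot via a temporary index, so the real index is untouched.
export GIT_INDEX_FILE="$repo/.git/tmp-index"
git read-tree "$staged_tree"
git add --all
unstaged_sha=$(git commit-tree "$(git write-tree)" -p "$staged_sha" -m "WIP unstaged")
unset GIT_INDEX_FILE

# Restore: set index AND working tree to the full snapshot, then replace
# just the index with the staged tree (no -u, so b.txt stays on disk as
# an untracked file).
git read-tree --reset -u "$unstaged_sha"
git read-tree "$staged_sha"
```

After the restore, the working tree matches the full snapshot while the index matches the staged one, mirroring the ordering documented in `restore_archive_checkpoint`.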

crates/language_model_core/src/request.rs 🔗

@@ -333,7 +333,9 @@ pub struct LanguageModelRequest {
     pub speed: Option<Speed>,
 }
 
-#[derive(Clone, Copy, Default, Debug, Serialize, Deserialize, PartialEq, Eq)]
+#[derive(
+    Clone, Copy, Default, Debug, Serialize, Deserialize, PartialEq, Eq, schemars::JsonSchema,
+)]
 #[serde(rename_all = "snake_case")]
 pub enum Speed {
     #[default]

crates/project/src/agent_server_store.rs 🔗

@@ -1,4 +1,3 @@
-use remote::Interactive;
 use std::{
     any::Any,
     path::{Path, PathBuf},
@@ -116,9 +115,9 @@ pub enum ExternalAgentSource {
 
 pub trait ExternalAgentServer {
     fn get_command(
-        &mut self,
+        &self,
+        extra_args: Vec<String>,
         extra_env: HashMap<String, String>,
-        new_version_available_tx: Option<watch::Sender<Option<String>>>,
         cx: &mut AsyncApp,
     ) -> Task<Result<AgentServerCommand>>;
 
@@ -800,11 +799,10 @@ impl AgentServerStore {
                 if no_browser {
                     extra_env.insert("NO_BROWSER".to_owned(), "1".to_owned());
                 }
-                anyhow::Ok(agent.get_command(
-                    extra_env,
-                    new_version_available_tx,
-                    &mut cx.to_async(),
-                ))
+                if let Some(new_version_available_tx) = new_version_available_tx {
+                    agent.set_new_version_available_tx(new_version_available_tx);
+                }
+                anyhow::Ok(agent.get_command(vec![], extra_env, &mut cx.to_async()))
             })?
             .await?;
         Ok(proto::AgentServerCommand {
@@ -986,16 +984,15 @@ impl ExternalAgentServer for RemoteExternalAgentServer {
     }
 
     fn get_command(
-        &mut self,
+        &self,
+        extra_args: Vec<String>,
         extra_env: HashMap<String, String>,
-        new_version_available_tx: Option<watch::Sender<Option<String>>>,
         cx: &mut AsyncApp,
     ) -> Task<Result<AgentServerCommand>> {
         let project_id = self.project_id;
         let name = self.name.to_string();
         let upstream_client = self.upstream_client.downgrade();
         let worktree_store = self.worktree_store.clone();
-        self.new_version_available_tx = new_version_available_tx;
         cx.spawn(async move |cx| {
             let root_dir = worktree_store.read_with(cx, |worktree_store, cx| {
                 crate::Project::default_visible_worktree_paths(worktree_store, cx)
@@ -1015,22 +1012,13 @@ impl ExternalAgentServer for RemoteExternalAgentServer {
                         })
                 })?
                 .await?;
-            let root_dir = response.root_dir;
+            response.args.extend(extra_args);
             response.env.extend(extra_env);
-            let command = upstream_client.update(cx, |client, _| {
-                client.build_command_with_options(
-                    Some(response.path),
-                    &response.args,
-                    &response.env.into_iter().collect(),
-                    Some(root_dir.clone()),
-                    None,
-                    Interactive::No,
-                )
-            })??;
+
             Ok(AgentServerCommand {
-                path: command.program.into(),
-                args: command.args,
-                env: Some(command.env),
+                path: response.path.into(),
+                args: response.args,
+                env: Some(response.env.into_iter().collect()),
             })
         })
     }
@@ -1162,12 +1150,11 @@ impl ExternalAgentServer for LocalExtensionArchiveAgent {
     }
 
     fn get_command(
-        &mut self,
+        &self,
+        extra_args: Vec<String>,
         extra_env: HashMap<String, String>,
-        new_version_available_tx: Option<watch::Sender<Option<String>>>,
         cx: &mut AsyncApp,
     ) -> Task<Result<AgentServerCommand>> {
-        self.new_version_available_tx = new_version_available_tx;
         let fs = self.fs.clone();
         let http_client = self.http_client.clone();
         let node_runtime = self.node_runtime.clone();
@@ -1309,9 +1296,12 @@ impl ExternalAgentServer for LocalExtensionArchiveAgent {
                 }
             };
 
+            let mut args = target_config.args.clone();
+            args.extend(extra_args);
+
             let command = AgentServerCommand {
                 path: cmd_path,
-                args: target_config.args.clone(),
+                args,
                 env: Some(env),
             };
 
@@ -1354,12 +1344,11 @@ impl ExternalAgentServer for LocalRegistryArchiveAgent {
     }
 
     fn get_command(
-        &mut self,
+        &self,
+        extra_args: Vec<String>,
         extra_env: HashMap<String, String>,
-        new_version_available_tx: Option<watch::Sender<Option<String>>>,
         cx: &mut AsyncApp,
     ) -> Task<Result<AgentServerCommand>> {
-        self.new_version_available_tx = new_version_available_tx;
         let fs = self.fs.clone();
         let http_client = self.http_client.clone();
         let node_runtime = self.node_runtime.clone();
@@ -1486,9 +1475,12 @@ impl ExternalAgentServer for LocalRegistryArchiveAgent {
                 }
             };
 
+            let mut args = target_config.args.clone();
+            args.extend(extra_args);
+
             let command = AgentServerCommand {
                 path: cmd_path,
-                args: target_config.args.clone(),
+                args,
                 env: Some(env),
             };
 
@@ -1530,12 +1522,11 @@ impl ExternalAgentServer for LocalRegistryNpxAgent {
     }
 
     fn get_command(
-        &mut self,
+        &self,
+        extra_args: Vec<String>,
         extra_env: HashMap<String, String>,
-        new_version_available_tx: Option<watch::Sender<Option<String>>>,
         cx: &mut AsyncApp,
     ) -> Task<Result<AgentServerCommand>> {
-        self.new_version_available_tx = new_version_available_tx;
         let node_runtime = self.node_runtime.clone();
         let project_environment = self.project_environment.downgrade();
         let package = self.package.clone();
@@ -1566,9 +1557,12 @@ impl ExternalAgentServer for LocalRegistryNpxAgent {
             env.extend(extra_env);
             env.extend(settings_env);
 
+            let mut args = npm_command.args;
+            args.extend(extra_args);
+
             let command = AgentServerCommand {
                 path: npm_command.path,
-                args: npm_command.args,
+                args,
                 env: Some(env),
             };
 
@@ -1592,9 +1586,9 @@ struct LocalCustomAgent {
 
 impl ExternalAgentServer for LocalCustomAgent {
     fn get_command(
-        &mut self,
+        &self,
+        extra_args: Vec<String>,
         extra_env: HashMap<String, String>,
-        _new_version_available_tx: Option<watch::Sender<Option<String>>>,
         cx: &mut AsyncApp,
     ) -> Task<Result<AgentServerCommand>> {
         let mut command = self.command.clone();
@@ -1609,6 +1603,7 @@ impl ExternalAgentServer for LocalCustomAgent {
             env.extend(command.env.unwrap_or_default());
             env.extend(extra_env);
             command.env = Some(env);
+            command.args.extend(extra_args);
             Ok(command)
         })
     }

crates/project/src/git_store.rs 🔗

@@ -6054,22 +6054,20 @@ impl Repository {
                 RepositoryState::Remote(RemoteRepositoryState { project_id, client }) => {
                     let (name, commit, use_existing_branch) = match target {
                         CreateWorktreeTarget::ExistingBranch { branch_name } => {
-                            (branch_name, None, true)
+                            (Some(branch_name), None, true)
                         }
                         CreateWorktreeTarget::NewBranch {
                             branch_name,
-                            base_sha: start_point,
-                        } => (branch_name, start_point, false),
-                        CreateWorktreeTarget::Detached {
-                            base_sha: start_point,
-                        } => (String::new(), start_point, false),
+                            base_sha,
+                        } => (Some(branch_name), base_sha, false),
+                        CreateWorktreeTarget::Detached { base_sha } => (None, base_sha, false),
                     };
 
                     client
                         .request(proto::GitCreateWorktree {
                             project_id: project_id.0,
                             repository_id: id.to_proto(),
-                            name,
+                            name: name.unwrap_or_default(),
                             directory: path.to_string_lossy().to_string(),
                             commit,
                             use_existing_branch,
@@ -6159,15 +6157,37 @@ impl Repository {
         })
     }
 
-    pub fn commit_exists(&mut self, sha: String) -> oneshot::Receiver<Result<bool>> {
+    pub fn create_archive_checkpoint(&mut self) -> oneshot::Receiver<Result<(String, String)>> {
         self.send_job(None, move |repo, _cx| async move {
             match repo {
                 RepositoryState::Local(LocalRepositoryState { backend, .. }) => {
-                    let results = backend.revparse_batch(vec![sha]).await?;
-                    Ok(results.into_iter().next().flatten().is_some())
+                    backend.create_archive_checkpoint().await
                 }
                 RepositoryState::Remote(_) => {
-                    anyhow::bail!("commit_exists is not supported for remote repositories")
+                    anyhow::bail!(
+                        "create_archive_checkpoint is not supported for remote repositories"
+                    )
+                }
+            }
+        })
+    }
+
+    pub fn restore_archive_checkpoint(
+        &mut self,
+        staged_sha: String,
+        unstaged_sha: String,
+    ) -> oneshot::Receiver<Result<()>> {
+        self.send_job(None, move |repo, _cx| async move {
+            match repo {
+                RepositoryState::Local(LocalRepositoryState { backend, .. }) => {
+                    backend
+                        .restore_archive_checkpoint(staged_sha, unstaged_sha)
+                        .await
+                }
+                RepositoryState::Remote(_) => {
+                    anyhow::bail!(
+                        "restore_archive_checkpoint is not supported for remote repositories"
+                    )
                 }
             }
         })

crates/project/src/lsp_store.rs 🔗

@@ -4430,7 +4430,8 @@ impl LspStore {
             WorktreeStoreEvent::WorktreeReleased(..)
             | WorktreeStoreEvent::WorktreeOrderChanged
             | WorktreeStoreEvent::WorktreeUpdatedGitRepositories(..)
-            | WorktreeStoreEvent::WorktreeDeletedEntry(..) => {}
+            | WorktreeStoreEvent::WorktreeDeletedEntry(..)
+            | WorktreeStoreEvent::WorktreeUpdatedRootRepoCommonDir(..) => {}
         }
     }
 

crates/project/src/project.rs 🔗

@@ -360,6 +360,7 @@ pub enum Event {
     WorktreeOrderChanged,
     WorktreeRemoved(WorktreeId),
     WorktreeUpdatedEntries(WorktreeId, UpdatedEntriesSet),
+    WorktreeUpdatedRootRepoCommonDir(WorktreeId),
     DiskBasedDiagnosticsStarted {
         language_server_id: LanguageServerId,
     },
@@ -3681,6 +3682,9 @@ impl Project {
             }
             // Listen to the GitStore instead.
             WorktreeStoreEvent::WorktreeUpdatedGitRepositories(_, _) => {}
+            WorktreeStoreEvent::WorktreeUpdatedRootRepoCommonDir(worktree_id) => {
+                cx.emit(Event::WorktreeUpdatedRootRepoCommonDir(*worktree_id));
+            }
         }
     }
 
@@ -4758,6 +4762,44 @@ impl Project {
         })
     }
 
+    /// Returns a task that resolves when the given worktree's `Entity` is
+    /// fully dropped (all strong references released), not merely when
+    /// `remove_worktree` is called. `remove_worktree` drops the store's
+    /// reference and emits `WorktreeRemoved`, but other code may still
+    /// hold a strong handle — the worktree isn't safe to delete from
+    /// disk until every handle is gone.
+    ///
+    /// We use `observe_release` on the specific entity rather than
+    /// listening for `WorktreeReleased` events because it's simpler at
+    /// the call site (one awaitable task, no subscription / channel /
+    /// ID filtering).
+    pub fn wait_for_worktree_release(
+        &mut self,
+        worktree_id: WorktreeId,
+        cx: &mut Context<Self>,
+    ) -> Task<Result<()>> {
+        let Some(worktree) = self.worktree_for_id(worktree_id, cx) else {
+            return Task::ready(Ok(()));
+        };
+
+        let (released_tx, released_rx) = futures::channel::oneshot::channel();
+        let released_tx = std::sync::Arc::new(Mutex::new(Some(released_tx)));
+        let release_subscription =
+            cx.observe_release(&worktree, move |_project, _released_worktree, _cx| {
+                if let Some(released_tx) = released_tx.lock().take() {
+                    let _ = released_tx.send(());
+                }
+            });
+
+        cx.spawn(async move |_project, _cx| {
+            let _release_subscription = release_subscription;
+            released_rx
+                .await
+                .map_err(|_| anyhow!("worktree release observer dropped before release"))?;
+            Ok(())
+        })
+    }
+
     pub fn remove_worktree(&mut self, id_to_remove: WorktreeId, cx: &mut Context<Self>) {
         self.worktree_store.update(cx, |worktree_store, cx| {
             worktree_store.remove_worktree(id_to_remove, cx);
@@ -6055,6 +6097,7 @@ impl Project {
 /// workspaces by main repos.
 #[derive(PartialEq, Eq, Hash, Clone, Debug)]
 pub struct ProjectGroupKey {
+    /// The paths of the main worktrees for this project group.
     paths: PathList,
     host: Option<RemoteConnectionOptions>,
 }
@@ -6067,30 +6110,48 @@ impl ProjectGroupKey {
         Self { paths, host }
     }
 
-    pub fn display_name(&self) -> SharedString {
+    pub fn path_list(&self) -> &PathList {
+        &self.paths
+    }
+
+    pub fn display_name(
+        &self,
+        path_detail_map: &std::collections::HashMap<PathBuf, usize>,
+    ) -> SharedString {
         let mut names = Vec::with_capacity(self.paths.paths().len());
         for abs_path in self.paths.paths() {
-            if let Some(name) = abs_path.file_name() {
-                names.push(name.to_string_lossy().to_string());
+            let detail = path_detail_map.get(abs_path).copied().unwrap_or(0);
+            let suffix = path_suffix(abs_path, detail);
+            if !suffix.is_empty() {
+                names.push(suffix);
             }
         }
         if names.is_empty() {
-            // TODO: Can we do something better in this case?
             "Empty Workspace".into()
         } else {
             names.join(", ").into()
         }
     }
 
-    pub fn path_list(&self) -> &PathList {
-        &self.paths
-    }
-
     pub fn host(&self) -> Option<RemoteConnectionOptions> {
         self.host.clone()
     }
 }
 
+pub fn path_suffix(path: &Path, detail: usize) -> String {
+    let mut components: Vec<_> = path
+        .components()
+        .rev()
+        .filter_map(|component| match component {
+            std::path::Component::Normal(s) => Some(s.to_string_lossy()),
+            _ => None,
+        })
+        .take(detail + 1)
+        .collect();
+    components.reverse();
+    components.join("/")
+}
+
 pub struct PathMatchCandidateSet {
     pub snapshot: Snapshot,
     pub include_ignored: bool,
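
The `path_suffix` helper above keeps the last `detail + 1` normal components of a path, which is what the recent-projects disambiguation feeds it. A hypothetical shell analogue (the function name mirrors the Rust one; this is an illustration, not part of the change):

```shell
#!/usr/bin/env bash
# Hypothetical shell analogue of project::path_suffix(path, detail):
# keep the last detail+1 path components, joined with "/".
path_suffix() {
  printf '%s\n' "$1" | tr -s '/' '\n' | grep -v '^$' \
    | tail -n "$(($2 + 1))" | paste -sd/ -
}

path_suffix /home/a/projects/zed 0   # -> zed
path_suffix /home/a/projects/zed 1   # -> projects/zed
```

With `detail = 0` this degenerates to the old `file_name()` behavior; larger `detail` values pull in parent directories until two colliding names become distinguishable.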

crates/project/src/worktree_store.rs 🔗

@@ -91,6 +91,7 @@ pub enum WorktreeStoreEvent {
     WorktreeUpdatedEntries(WorktreeId, UpdatedEntriesSet),
     WorktreeUpdatedGitRepositories(WorktreeId, UpdatedGitRepositoriesSet),
     WorktreeDeletedEntry(WorktreeId, ProjectEntryId),
+    WorktreeUpdatedRootRepoCommonDir(WorktreeId),
 }
 
 impl EventEmitter<WorktreeStoreEvent> for WorktreeStore {}
@@ -712,6 +713,7 @@ impl WorktreeStore {
                         root_name,
                         visible,
                         abs_path: response.canonicalized_path,
+                        root_repo_common_dir: response.root_repo_common_dir,
                     },
                     client,
                     path_style,
@@ -812,7 +814,11 @@ impl WorktreeStore {
                     // The worktree root itself has been deleted (for single-file worktrees)
                     // The worktree will be removed via the observe_release callback
                 }
-                worktree::Event::UpdatedRootRepoCommonDir => {}
+                worktree::Event::UpdatedRootRepoCommonDir => {
+                    cx.emit(WorktreeStoreEvent::WorktreeUpdatedRootRepoCommonDir(
+                        worktree_id,
+                    ));
+                }
             }
         })
         .detach();
@@ -1049,6 +1055,9 @@ impl WorktreeStore {
                     root_name: worktree.root_name_str().to_owned(),
                     visible: worktree.is_visible(),
                     abs_path: worktree.abs_path().to_string_lossy().into_owned(),
+                    root_repo_common_dir: worktree
+                        .root_repo_common_dir()
+                        .map(|p| p.to_string_lossy().into_owned()),
                 }
             })
             .collect()

crates/project/tests/integration/ext_agent_tests.rs 🔗

@@ -8,9 +8,9 @@ struct NoopExternalAgent;
 
 impl ExternalAgentServer for NoopExternalAgent {
     fn get_command(
-        &mut self,
+        &self,
+        _extra_args: Vec<String>,
         _extra_env: HashMap<String, String>,
-        _new_version_available_tx: Option<watch::Sender<Option<String>>>,
         _cx: &mut AsyncApp,
     ) -> Task<Result<AgentServerCommand>> {
         Task::ready(Ok(AgentServerCommand {

crates/project/tests/integration/extension_agent_tests.rs 🔗

@@ -24,9 +24,9 @@ struct NoopExternalAgent;
 
 impl ExternalAgentServer for NoopExternalAgent {
     fn get_command(
-        &mut self,
+        &self,
+        _extra_args: Vec<String>,
         _extra_env: HashMap<String, String>,
-        _new_version_available_tx: Option<watch::Sender<Option<String>>>,
         _cx: &mut AsyncApp,
     ) -> Task<Result<AgentServerCommand>> {
         Task::ready(Ok(AgentServerCommand {

crates/proto/proto/worktree.proto 🔗

@@ -40,6 +40,7 @@ message AddWorktree {
 message AddWorktreeResponse {
   uint64 worktree_id = 1;
   string canonicalized_path = 2;
+  optional string root_repo_common_dir = 3;
 }
 
 message RemoveWorktree {
@@ -62,6 +63,7 @@ message WorktreeMetadata {
   string root_name = 2;
   bool visible = 3;
   string abs_path = 4;
+  optional string root_repo_common_dir = 5;
 }
 
 message ProjectPath {

crates/recent_projects/src/recent_projects.rs 🔗

@@ -99,27 +99,40 @@ pub async fn get_recent_projects(
         .await
         .unwrap_or_default();
 
-    let entries: Vec<RecentProjectEntry> = workspaces
+    let filtered: Vec<_> = workspaces
         .into_iter()
         .filter(|(id, _, _, _)| Some(*id) != current_workspace_id)
         .filter(|(_, location, _, _)| matches!(location, SerializedWorkspaceLocation::Local))
+        .collect();
+
+    let mut all_paths: Vec<PathBuf> = filtered
+        .iter()
+        .flat_map(|(_, _, path_list, _)| path_list.paths().iter().cloned())
+        .collect();
+    all_paths.sort();
+    all_paths.dedup();
+    let path_details =
+        util::disambiguate::compute_disambiguation_details(&all_paths, |path, detail| {
+            project::path_suffix(path, detail)
+        });
+    let path_detail_map: std::collections::HashMap<PathBuf, usize> =
+        all_paths.into_iter().zip(path_details).collect();
+
+    let entries: Vec<RecentProjectEntry> = filtered
+        .into_iter()
         .map(|(workspace_id, _, path_list, timestamp)| {
             let paths: Vec<PathBuf> = path_list.paths().to_vec();
             let ordered_paths: Vec<&PathBuf> = path_list.ordered_paths().collect();
 
-            let name = if ordered_paths.len() == 1 {
-                ordered_paths[0]
-                    .file_name()
-                    .map(|n| n.to_string_lossy().to_string())
-                    .unwrap_or_else(|| ordered_paths[0].to_string_lossy().to_string())
-            } else {
-                ordered_paths
-                    .iter()
-                    .filter_map(|p| p.file_name())
-                    .map(|n| n.to_string_lossy().to_string())
-                    .collect::<Vec<_>>()
-                    .join(", ")
-            };
+            let name = ordered_paths
+                .iter()
+                .map(|p| {
+                    let detail = path_detail_map.get(*p).copied().unwrap_or(0);
+                    project::path_suffix(p, detail)
+                })
+                .filter(|s| !s.is_empty())
+                .collect::<Vec<_>>()
+                .join(", ");
 
             let full_path = ordered_paths
                 .iter()
@@ -170,6 +183,19 @@ fn get_open_folders(workspace: &Workspace, cx: &App) -> Vec<OpenFolderEntry> {
             .map(|wt| wt.read(cx).id())
     };
 
+    let mut all_paths: Vec<PathBuf> = visible_worktrees
+        .iter()
+        .map(|wt| wt.read(cx).abs_path().to_path_buf())
+        .collect();
+    all_paths.sort();
+    all_paths.dedup();
+    let path_details =
+        util::disambiguate::compute_disambiguation_details(&all_paths, |path, detail| {
+            project::path_suffix(path, detail)
+        });
+    let path_detail_map: std::collections::HashMap<PathBuf, usize> =
+        all_paths.into_iter().zip(path_details).collect();
+
     let git_store = project.git_store().read(cx);
     let repositories: Vec<_> = git_store.repositories().values().cloned().collect();
 
@@ -178,8 +204,9 @@ fn get_open_folders(workspace: &Workspace, cx: &App) -> Vec<OpenFolderEntry> {
         .map(|worktree| {
             let worktree_ref = worktree.read(cx);
             let worktree_id = worktree_ref.id();
-            let name = SharedString::from(worktree_ref.root_name().as_unix_str().to_string());
             let path = worktree_ref.abs_path().to_path_buf();
+            let detail = path_detail_map.get(&path).copied().unwrap_or(0);
+            let name = SharedString::from(project::path_suffix(&path, detail));
             let branch = get_branch_for_worktree(worktree_ref, &repositories, cx);
             let is_active = active_worktree_id == Some(worktree_id);
             OpenFolderEntry {

crates/recent_projects/src/remote_connections.rs 🔗

@@ -132,7 +132,7 @@ pub async fn open_remote_project(
     app_state: Arc<AppState>,
     open_options: workspace::OpenOptions,
     cx: &mut AsyncApp,
-) -> Result<()> {
+) -> Result<WindowHandle<MultiWorkspace>> {
     let created_new_window = open_options.requesting_window.is_none();
 
     let (existing, open_visible) = find_existing_workspace(
@@ -193,7 +193,7 @@ pub async fn open_remote_project(
                 .collect::<Vec<_>>();
             navigate_to_positions(&existing_window, items, &paths_with_positions, cx);
 
-            return Ok(());
+            return Ok(existing_window);
         }
         // If the remote connection is dead (e.g. server not running after failed reconnect),
         // fall through to establish a fresh connection instead of showing an error.
@@ -341,7 +341,7 @@ pub async fn open_remote_project(
                         .update(cx, |_, window, _| window.remove_window())
                         .ok();
                 }
-                return Ok(());
+                return Ok(window);
             }
         };
 
@@ -436,7 +436,7 @@ pub async fn open_remote_project(
             });
         })
         .ok();
-    Ok(())
+    Ok(window)
 }
 
 pub fn navigate_to_positions(

crates/recent_projects/src/remote_servers.rs 🔗

@@ -505,7 +505,7 @@ impl ProjectPicker {
                     }?;
 
                     let items = open_remote_project_with_existing_connection(
-                        connection, project, paths, app_state, window, cx,
+                        connection, project, paths, app_state, window, None, cx,
                     )
                     .await
                     .log_err();

crates/remote/src/remote.rs

@@ -9,7 +9,7 @@ pub use remote_client::OpenWslPath;
 pub use remote_client::{
     CommandTemplate, ConnectionIdentifier, ConnectionState, Interactive, RemoteArch, RemoteClient,
     RemoteClientDelegate, RemoteClientEvent, RemoteConnection, RemoteConnectionOptions, RemoteOs,
-    RemotePlatform, connect,
+    RemotePlatform, connect, has_active_connection,
 };
 pub use transport::docker::DockerConnectionOptions;
 pub use transport::ssh::{SshConnectionOptions, SshPortForwardOption};

crates/remote/src/remote_client.rs

@@ -377,6 +377,20 @@ pub async fn connect(
     .map_err(|e| e.cloned())
 }
 
+/// Returns `true` if the global [`ConnectionPool`] already has a live
+/// connection for the given options. Callers can use this to decide
+/// whether to show interactive UI (e.g., a password modal) before
+/// connecting.
+pub fn has_active_connection(opts: &RemoteConnectionOptions, cx: &App) -> bool {
+    cx.try_global::<ConnectionPool>().is_some_and(|pool| {
+        matches!(
+            pool.connections.get(opts),
+            Some(ConnectionPoolEntry::Connected(remote))
+                if remote.upgrade().is_some_and(|r| !r.has_been_killed())
+        )
+    })
+}
+
 impl RemoteClient {
     pub fn new(
         unique_identifier: ConnectionIdentifier,

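The liveness check in `has_active_connection` can be modeled in isolation: a pool entry only counts if its weak handle still upgrades and the client has not been killed. A minimal self-contained sketch, where `Opts`, `Client`, and `Pool` are hypothetical stand-ins for `RemoteConnectionOptions`, `RemoteClient`, and `ConnectionPool`:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Weak};

// Hypothetical stand-in for RemoteConnectionOptions (the pool key).
#[derive(Hash, PartialEq, Eq, Clone)]
struct Opts(String);

// Hypothetical stand-in for the pooled client.
struct Client {
    killed: bool,
}

struct Pool {
    connections: HashMap<Opts, Weak<Client>>,
}

impl Pool {
    // Mirrors has_active_connection: an entry is live only if the weak
    // handle still upgrades and the client has not been killed.
    fn has_active_connection(&self, opts: &Opts) -> bool {
        self.connections
            .get(opts)
            .is_some_and(|weak| weak.upgrade().is_some_and(|c| !c.killed))
    }
}

fn main() {
    let live = Arc::new(Client { killed: false });
    let killed = Arc::new(Client { killed: true });
    // The temporary Arc is dropped immediately, so this weak handle is stale.
    let dropped = Arc::downgrade(&Arc::new(Client { killed: false }));

    let mut connections = HashMap::new();
    connections.insert(Opts("ssh://a".into()), Arc::downgrade(&live));
    connections.insert(Opts("ssh://b".into()), Arc::downgrade(&killed));
    connections.insert(Opts("ssh://c".into()), dropped);
    let pool = Pool { connections };

    assert!(pool.has_active_connection(&Opts("ssh://a".into())));
    assert!(!pool.has_active_connection(&Opts("ssh://b".into()))); // killed
    assert!(!pool.has_active_connection(&Opts("ssh://c".into()))); // dropped
    println!("pooled-connection liveness check ok");
}
```

Holding weak handles means the pool never keeps a dead connection alive; liveness is re-evaluated at each lookup.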
crates/remote_connection/src/remote_connection.rs

@@ -19,7 +19,7 @@ use ui::{
     prelude::*,
 };
 use ui_input::{ERASED_EDITOR_FACTORY, ErasedEditor};
-use workspace::{DismissDecision, ModalView};
+use workspace::{DismissDecision, ModalView, Workspace};
 
 pub struct RemoteConnectionPrompt {
     connection_string: SharedString,
@@ -536,6 +536,159 @@ impl RemoteClientDelegate {
     }
 }
 
+/// Shows a [`RemoteConnectionModal`] on the given workspace and establishes
+/// a remote connection. This is a convenience wrapper around
+/// [`RemoteConnectionModal`] and [`connect`] suitable for use as the
+/// `connect_remote` callback in [`MultiWorkspace::find_or_create_workspace`].
+///
+/// When the global connection pool already has a live connection for the
+/// given options, the modal is skipped entirely and the connection is
+/// reused silently.
+pub fn connect_with_modal(
+    workspace: &Entity<Workspace>,
+    connection_options: RemoteConnectionOptions,
+    window: &mut Window,
+    cx: &mut App,
+) -> Task<Result<Option<Entity<RemoteClient>>>> {
+    if remote::has_active_connection(&connection_options, cx) {
+        return connect_reusing_pool(connection_options, cx);
+    }
+
+    workspace.update(cx, |workspace, cx| {
+        workspace.toggle_modal(window, cx, |window, cx| {
+            RemoteConnectionModal::new(&connection_options, Vec::new(), window, cx)
+        });
+        let Some(modal) = workspace.active_modal::<RemoteConnectionModal>(cx) else {
+            return Task::ready(Err(anyhow::anyhow!(
+                "Failed to open remote connection dialog"
+            )));
+        };
+        let prompt = modal.read(cx).prompt.clone();
+        connect(
+            ConnectionIdentifier::setup(),
+            connection_options,
+            prompt,
+            window,
+            cx,
+        )
+    })
+}
+
+/// Dismisses any active [`RemoteConnectionModal`] on the given workspace.
+///
+/// This should be called after a remote connection attempt completes
+/// (success or failure) when the modal was shown on a workspace that may
+/// outlive the connection flow — for example, when the modal is shown
+/// on a local workspace before switching to a newly-created remote
+/// workspace.
+pub fn dismiss_connection_modal(workspace: &Entity<Workspace>, cx: &mut gpui::AsyncWindowContext) {
+    workspace
+        .update_in(cx, |workspace, _window, cx| {
+            if let Some(modal) = workspace.active_modal::<RemoteConnectionModal>(cx) {
+                modal.update(cx, |modal, cx| modal.finished(cx));
+            }
+        })
+        .ok();
+}
+
+/// Creates a [`RemoteClient`] by reusing an existing connection from the
+/// global pool. No interactive UI is shown. This should only be called
+/// when [`remote::has_active_connection`] returns `true`.
+fn connect_reusing_pool(
+    connection_options: RemoteConnectionOptions,
+    cx: &mut App,
+) -> Task<Result<Option<Entity<RemoteClient>>>> {
+    let delegate: Arc<dyn remote::RemoteClientDelegate> = Arc::new(BackgroundRemoteClientDelegate);
+
+    cx.spawn(async move |cx| {
+        let connection = remote::connect(connection_options, delegate.clone(), cx).await?;
+
+        let (_cancel_guard, cancel_rx) = oneshot::channel::<()>();
+        cx.update(|cx| {
+            RemoteClient::new(
+                ConnectionIdentifier::setup(),
+                connection,
+                cancel_rx,
+                delegate,
+                cx,
+            )
+        })
+        .await
+    })
+}
+
+/// Delegate for remote connections that reuse an existing pooled
+/// connection. Password prompts are not expected (the SSH transport
+/// is already established), but server binary downloads are supported
+/// via [`AutoUpdater`].
+struct BackgroundRemoteClientDelegate;
+
+impl remote::RemoteClientDelegate for BackgroundRemoteClientDelegate {
+    fn ask_password(
+        &self,
+        prompt: String,
+        _tx: oneshot::Sender<EncryptedPassword>,
+        _cx: &mut AsyncApp,
+    ) {
+        log::warn!(
+            "Pooled remote connection unexpectedly requires a password \
+             (prompt: {prompt})"
+        );
+    }
+
+    fn set_status(&self, _status: Option<&str>, _cx: &mut AsyncApp) {}
+
+    fn download_server_binary_locally(
+        &self,
+        platform: RemotePlatform,
+        release_channel: ReleaseChannel,
+        version: Option<Version>,
+        cx: &mut AsyncApp,
+    ) -> Task<anyhow::Result<PathBuf>> {
+        cx.spawn(async move |cx| {
+            AutoUpdater::download_remote_server_release(
+                release_channel,
+                version.clone(),
+                platform.os.as_str(),
+                platform.arch.as_str(),
+                |_status, _cx| {},
+                cx,
+            )
+            .await
+            .with_context(|| {
+                format!(
+                    "Downloading remote server binary (version: {}, os: {}, arch: {})",
+                    version
+                        .as_ref()
+                        .map(|v| format!("{v}"))
+                        .unwrap_or("unknown".to_string()),
+                    platform.os,
+                    platform.arch,
+                )
+            })
+        })
+    }
+
+    fn get_download_url(
+        &self,
+        platform: RemotePlatform,
+        release_channel: ReleaseChannel,
+        version: Option<Version>,
+        cx: &mut AsyncApp,
+    ) -> Task<Result<Option<String>>> {
+        cx.spawn(async move |cx| {
+            AutoUpdater::get_remote_server_release_url(
+                release_channel,
+                version,
+                platform.os.as_str(),
+                platform.arch.as_str(),
+                cx,
+            )
+            .await
+        })
+    }
+}
+
 pub fn connect(
     unique_identifier: ConnectionIdentifier,
     connection_options: RemoteConnectionOptions,

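Taken together, `connect_with_modal` and `dismiss_connection_modal` implement a simple policy: reuse a pooled connection with no UI when one is live, otherwise show the password modal and dismiss it once the attempt finishes, whether it succeeded or failed. A toy model of that decision (all names here are hypothetical, not from the patch):

```rust
#[derive(Debug, PartialEq)]
enum Ui {
    None,  // connect_reusing_pool: no interactive UI shown
    Modal, // connect_with_modal: prompt shown, dismissed afterwards
}

// Returns which UI path is taken and the (simulated) connection result.
fn connect_flow(has_active_connection: bool, connect_ok: bool) -> (Ui, Result<(), String>) {
    if has_active_connection {
        // A live pooled connection is reused silently.
        return (Ui::None, Ok(()));
    }
    let result = if connect_ok {
        Ok(())
    } else {
        Err("connection failed".to_string())
    };
    // dismiss_connection_modal runs here on both success and failure,
    // since the modal may live on a workspace that outlives this flow.
    (Ui::Modal, result)
}

fn main() {
    assert_eq!(connect_flow(true, false), (Ui::None, Ok(())));
    assert_eq!(connect_flow(false, true), (Ui::Modal, Ok(())));
    assert!(connect_flow(false, false).1.is_err());
    println!("connect flow ok");
}
```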
crates/remote_server/src/headless_project.rs

@@ -523,6 +523,9 @@ impl HeadlessProject {
             proto::AddWorktreeResponse {
                 worktree_id: worktree.id().to_proto(),
                 canonicalized_path: canonicalized.to_string_lossy().into_owned(),
+                root_repo_common_dir: worktree
+                    .root_repo_common_dir()
+                    .map(|p| p.to_string_lossy().into_owned()),
             }
         });
 

crates/remote_server/src/remote_editing_tests.rs

@@ -2256,8 +2256,8 @@ async fn test_remote_external_agent_server(
                     .get_external_agent(&"foo".into())
                     .unwrap()
                     .get_command(
+                        vec![],
                         HashMap::from_iter([("OTHER_VAR".into(), "other-val".into())]),
-                        None,
                         &mut cx.to_async(),
                     )
             })
@@ -2267,8 +2267,8 @@ async fn test_remote_external_agent_server(
     assert_eq!(
         command,
         AgentServerCommand {
-            path: "mock".into(),
-            args: vec!["foo-cli".into(), "--flag".into()],
+            path: "foo-cli".into(),
+            args: vec!["--flag".into()],
             env: Some(HashMap::from_iter([
                 ("NO_BROWSER".into(), "1".into()),
                 ("VAR".into(), "val".into()),

crates/settings_content/src/agent.rs

@@ -256,6 +256,7 @@ impl AgentSettingsContent {
             model,
             enable_thinking: false,
             effort: None,
+            speed: None,
         });
     }
 
@@ -397,6 +398,7 @@ pub struct LanguageModelSelection {
     #[serde(default)]
     pub enable_thinking: bool,
     pub effort: Option<String>,
+    pub speed: Option<language_model_core::Speed>,
 }
 
 #[with_fallible_options]

crates/settings_content/src/merge_from.rs

@@ -56,6 +56,7 @@ merge_from_overwrites!(
     std::sync::Arc<str>,
     std::path::PathBuf,
     std::sync::Arc<std::path::Path>,
+    language_model_core::Speed,
 );
 
 impl<T: Clone + MergeFrom> MergeFrom for Option<T> {

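Registering `language_model_core::Speed` in `merge_from_overwrites!` makes it a leaf type whose merge is a plain overwrite. A sketch of the presumed semantics, using `u32` as a stand-in leaf type; the `Option<T>` body is an assumption, since the diff shows only its signature:

```rust
// Assumed trait shape; the name mirrors merge_from.rs above.
trait MergeFrom {
    fn merge_from(&mut self, other: &Self);
}

// What merge_from_overwrites! presumably expands to for a leaf type
// such as language_model_core::Speed: merging simply overwrites.
impl MergeFrom for u32 {
    fn merge_from(&mut self, other: &Self) {
        *self = *other;
    }
}

// Plausible Option<T> behavior: a present `other` merges into `self`,
// while `None` leaves any existing value untouched.
impl<T: Clone + MergeFrom> MergeFrom for Option<T> {
    fn merge_from(&mut self, other: &Self) {
        if let Some(other) = other {
            match self {
                Some(value) => value.merge_from(other),
                None => *self = Some(other.clone()),
            }
        }
    }
}

fn main() {
    let mut speed: Option<u32> = None;
    speed.merge_from(&Some(2));
    assert_eq!(speed, Some(2));
    speed.merge_from(&None);
    assert_eq!(speed, Some(2)); // a None layer does not clear the value
    println!("merge semantics ok");
}
```

Under these semantics an unset `speed` in a higher-priority settings layer leaves the lower layer's value in place, which is why the new field can default to `None` safely.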
crates/sidebar/Cargo.toml

@@ -27,14 +27,17 @@ editor.workspace = true
 fs.workspace = true
 git.workspace = true
 gpui.workspace = true
+log.workspace = true
 menu.workspace = true
 platform_title_bar.workspace = true
 project.workspace = true
 recent_projects.workspace = true
 remote.workspace = true
+remote_connection.workspace = true
 serde.workspace = true
 serde_json.workspace = true
 settings.workspace = true
+smol.workspace = true
 theme.workspace = true
 theme_settings.workspace = true
 ui.workspace = true
@@ -48,7 +51,11 @@ acp_thread = { workspace = true, features = ["test-support"] }
 agent = { workspace = true, features = ["test-support"] }
 agent_ui = { workspace = true, features = ["test-support"] }
 editor.workspace = true
+extension.workspace = true
+language = { workspace = true, features = ["test-support"] }
 language_model = { workspace = true, features = ["test-support"] }
+release_channel.workspace = true
+semver.workspace = true
 pretty_assertions.workspace = true
 prompt_store.workspace = true
 recent_projects = { workspace = true, features = ["test-support"] }
@@ -56,6 +63,13 @@ serde_json.workspace = true
 fs = { workspace = true, features = ["test-support"] }
 git.workspace = true
 gpui = { workspace = true, features = ["test-support"] }
+client = { workspace = true, features = ["test-support"] }
+clock = { workspace = true, features = ["test-support"] }
+http_client = { workspace = true, features = ["test-support"] }
+node_runtime = { workspace = true, features = ["test-support"] }
 project = { workspace = true, features = ["test-support"] }
+remote = { workspace = true, features = ["test-support"] }
+remote_connection = { workspace = true, features = ["test-support"] }
+remote_server = { workspace = true, features = ["test-support"] }
 settings = { workspace = true, features = ["test-support"] }
 workspace = { workspace = true, features = ["test-support"] }

crates/sidebar/src/sidebar.rs

@@ -4,7 +4,8 @@ use acp_thread::ThreadStatus;
 use action_log::DiffStats;
 use agent_client_protocol::{self as acp};
 use agent_settings::AgentSettings;
-use agent_ui::thread_metadata_store::{ThreadMetadata, ThreadMetadataStore};
+use agent_ui::thread_metadata_store::{ThreadMetadata, ThreadMetadataStore, ThreadWorktreePaths};
+use agent_ui::thread_worktree_archive;
 use agent_ui::threads_archive_view::{
     ThreadsArchiveView, ThreadsArchiveViewEvent, format_history_entry_timestamp,
 };
@@ -15,9 +16,9 @@ use agent_ui::{
 use chrono::{DateTime, Utc};
 use editor::Editor;
 use gpui::{
-    Action as _, AnyElement, App, Context, Entity, FocusHandle, Focusable, KeyContext, ListState,
-    Pixels, Render, SharedString, WeakEntity, Window, WindowHandle, linear_color_stop,
-    linear_gradient, list, prelude::*, px,
+    Action as _, AnyElement, App, Context, DismissEvent, Entity, FocusHandle, Focusable,
+    KeyContext, ListState, Pixels, Render, SharedString, Task, WeakEntity, Window, WindowHandle,
+    linear_color_stop, linear_gradient, list, prelude::*, px,
 };
 use menu::{
     Cancel, Confirm, SelectChild, SelectFirst, SelectLast, SelectNext, SelectParent, SelectPrevious,
@@ -33,6 +34,7 @@ use serde::{Deserialize, Serialize};
 use settings::Settings as _;
 use std::collections::{HashMap, HashSet};
 use std::mem;
+use std::path::PathBuf;
 use std::rc::Rc;
 use theme::ActiveTheme;
 use ui::{
@@ -41,12 +43,12 @@ use ui::{
     WithScrollbar, prelude::*,
 };
 use util::ResultExt as _;
-use util::path_list::{PathList, SerializedPathList};
+use util::path_list::PathList;
 use workspace::{
     AddFolderToProject, CloseWindow, FocusWorkspaceSidebar, MultiWorkspace, MultiWorkspaceEvent,
-    NextProject, NextThread, Open, PreviousProject, PreviousThread, ShowFewerThreads,
-    ShowMoreThreads, Sidebar as WorkspaceSidebar, SidebarSide, ToggleWorkspaceSidebar, Workspace,
-    sidebar_side_context_menu,
+    NextProject, NextThread, Open, PreviousProject, PreviousThread, SerializedProjectGroupKey,
+    ShowFewerThreads, ShowMoreThreads, Sidebar as WorkspaceSidebar, SidebarSide, Toast,
+    ToggleWorkspaceSidebar, Workspace, notifications::NotificationId, sidebar_side_context_menu,
 };
 
 use zed_actions::OpenRecent;
@@ -94,9 +96,9 @@ struct SerializedSidebar {
     #[serde(default)]
     width: Option<f32>,
     #[serde(default)]
-    collapsed_groups: Vec<SerializedPathList>,
+    collapsed_groups: Vec<SerializedProjectGroupKey>,
     #[serde(default)]
-    expanded_groups: Vec<(SerializedPathList, usize)>,
+    expanded_groups: Vec<(SerializedProjectGroupKey, usize)>,
     #[serde(default)]
     active_view: SerializedSidebarView,
 }
@@ -108,6 +110,11 @@ enum SidebarView {
     Archive(Entity<ThreadsArchiveView>),
 }
 
+enum ArchiveWorktreeOutcome {
+    Success,
+    Cancelled,
+}
+
 #[derive(Clone, Debug)]
 enum ActiveEntry {
     Thread {
@@ -134,7 +141,12 @@ impl ActiveEntry {
             (ActiveEntry::Thread { session_id, .. }, ListEntry::Thread(thread)) => {
                 thread.metadata.session_id == *session_id
             }
-            (ActiveEntry::Draft(_workspace), ListEntry::DraftThread { .. }) => true,
+            (
+                ActiveEntry::Draft(_),
+                ListEntry::DraftThread {
+                    workspace: None, ..
+                },
+            ) => true,
             _ => false,
         }
     }
@@ -155,7 +167,25 @@ struct ActiveThreadInfo {
 #[derive(Clone)]
 enum ThreadEntryWorkspace {
     Open(Entity<Workspace>),
-    Closed(PathList),
+    Closed {
+        /// The paths this thread uses (may point to linked worktrees).
+        folder_paths: PathList,
+        /// The project group this thread belongs to.
+        project_group_key: ProjectGroupKey,
+    },
+}
+
+impl ThreadEntryWorkspace {
+    fn is_remote(&self, cx: &App) -> bool {
+        match self {
+            ThreadEntryWorkspace::Open(workspace) => {
+                !workspace.read(cx).project().read(cx).is_local()
+            }
+            ThreadEntryWorkspace::Closed {
+                project_group_key, ..
+            } => project_group_key.host().is_some(),
+        }
+    }
 }
 
 #[derive(Clone)]
@@ -208,6 +238,7 @@ enum ListEntry {
         has_running_threads: bool,
         waiting_thread_count: usize,
         is_active: bool,
+        has_threads: bool,
     },
     Thread(ThreadEntry),
     ViewMore {
@@ -217,16 +248,9 @@ enum ListEntry {
     /// The user's active draft thread. Shows a prefix of the currently-typed
     /// prompt, or "Untitled Thread" if the prompt is empty.
     DraftThread {
-        worktrees: Vec<WorktreeInfo>,
-    },
-    /// A convenience row for starting a new thread. Shown when a project group
-    /// has no threads, or when an open linked worktree workspace has no threads.
-    /// When `workspace` is `Some`, this entry is for a specific linked worktree
-    /// workspace and can be dismissed (removing that workspace).
-    NewThread {
         key: project::ProjectGroupKey,
-        worktrees: Vec<WorktreeInfo>,
         workspace: Option<Entity<Workspace>>,
+        worktrees: Vec<WorktreeInfo>,
     },
 }
 
@@ -247,37 +271,22 @@ impl ListEntry {
         match self {
             ListEntry::Thread(thread) => match &thread.workspace {
                 ThreadEntryWorkspace::Open(ws) => vec![ws.clone()],
-                ThreadEntryWorkspace::Closed(_) => Vec::new(),
+                ThreadEntryWorkspace::Closed { .. } => Vec::new(),
             },
-            ListEntry::DraftThread { .. } => {
-                vec![multi_workspace.workspace().clone()]
-            }
-            ListEntry::ProjectHeader { key, .. } => {
-                // The header only activates the main worktree workspace
-                // (the one whose root paths match the group key's path list).
-                multi_workspace
-                    .workspaces()
-                    .find(|ws| PathList::new(&ws.read(cx).root_paths(cx)) == *key.path_list())
-                    .cloned()
-                    .into_iter()
-                    .collect()
-            }
-            ListEntry::NewThread { key, workspace, .. } => {
-                // When the NewThread entry is for a specific linked worktree
-                // workspace, that workspace is reachable. Otherwise fall back
-                // to the main worktree workspace.
+            ListEntry::DraftThread { workspace, .. } => {
                 if let Some(ws) = workspace {
                     vec![ws.clone()]
                 } else {
-                    multi_workspace
-                        .workspaces()
-                        .find(|ws| PathList::new(&ws.read(cx).root_paths(cx)) == *key.path_list())
-                        .cloned()
-                        .into_iter()
-                        .collect()
+                    // workspace: None means this is the active draft,
+                    // which always lives on the current workspace.
+                    vec![multi_workspace.workspace().clone()]
                 }
             }
-            _ => Vec::new(),
+            ListEntry::ProjectHeader { key, .. } => multi_workspace
+                .workspaces_for_project_group(key, cx)
+                .cloned()
+                .collect(),
+            ListEntry::ViewMore { .. } => Vec::new(),
         }
     }
 }
@@ -354,35 +363,76 @@ fn workspace_path_list(workspace: &Entity<Workspace>, cx: &App) -> PathList {
 ///
 /// For each path in the thread's `folder_paths`, produces a
 /// [`WorktreeInfo`] with a short display name, full path, and whether
-/// the worktree is the main checkout or a linked git worktree.
-fn worktree_info_from_thread_paths(
-    folder_paths: &PathList,
-    group_key: &project::ProjectGroupKey,
-) -> impl Iterator<Item = WorktreeInfo> {
-    let main_paths = group_key.path_list().paths();
-    folder_paths.paths().iter().filter_map(|path| {
-        let is_main = main_paths.iter().any(|mp| mp.as_path() == path.as_path());
-        if is_main {
-            let name = path.file_name()?.to_string_lossy().to_string();
-            Some(WorktreeInfo {
-                name: SharedString::from(name),
-                full_path: SharedString::from(path.display().to_string()),
+/// the worktree is the main checkout or a linked git worktree. When
+/// multiple main paths exist and a linked worktree's short name alone
+/// wouldn't identify which main project it belongs to, the main project
+/// name is prefixed for disambiguation (e.g. `project:feature`).
+///
+fn worktree_info_from_thread_paths(worktree_paths: &ThreadWorktreePaths) -> Vec<WorktreeInfo> {
+    let mut infos: Vec<WorktreeInfo> = Vec::new();
+    let mut linked_short_names: Vec<(SharedString, SharedString)> = Vec::new();
+    let mut unique_main_count = HashSet::new();
+
+    for (main_path, folder_path) in worktree_paths.ordered_pairs() {
+        unique_main_count.insert(main_path.clone());
+        let is_linked = main_path != folder_path;
+
+        if is_linked {
+            let short_name = linked_worktree_short_name(main_path, folder_path).unwrap_or_default();
+            let project_name = main_path
+                .file_name()
+                .map(|n| SharedString::from(n.to_string_lossy().to_string()))
+                .unwrap_or_default();
+            linked_short_names.push((short_name.clone(), project_name));
+            infos.push(WorktreeInfo {
+                name: short_name,
+                full_path: SharedString::from(folder_path.display().to_string()),
                 highlight_positions: Vec::new(),
-                kind: ui::WorktreeKind::Main,
-            })
+                kind: ui::WorktreeKind::Linked,
+            });
         } else {
-            let main_path = main_paths
-                .iter()
-                .find(|mp| mp.file_name() == path.file_name())
-                .or(main_paths.first())?;
-            Some(WorktreeInfo {
-                name: linked_worktree_short_name(main_path, path).unwrap_or_default(),
-                full_path: SharedString::from(path.display().to_string()),
+            let Some(name) = folder_path.file_name() else {
+                continue;
+            };
+            infos.push(WorktreeInfo {
+                name: SharedString::from(name.to_string_lossy().to_string()),
+                full_path: SharedString::from(folder_path.display().to_string()),
                 highlight_positions: Vec::new(),
-                kind: ui::WorktreeKind::Linked,
-            })
+                kind: ui::WorktreeKind::Main,
+            });
         }
-    })
+    }
+
+    // When the group has multiple main worktree paths and the thread's
+    // folder paths don't all share the same short name, prefix each
+    // linked worktree chip with its main project name so the user knows
+    // which project it belongs to.
+    let all_same_name = infos.len() > 1 && infos.iter().all(|i| i.name == infos[0].name);
+
+    if unique_main_count.len() > 1 && !all_same_name {
+        for (info, (_short_name, project_name)) in infos
+            .iter_mut()
+            .filter(|i| i.kind == ui::WorktreeKind::Linked)
+            .zip(linked_short_names.iter())
+        {
+            info.name = SharedString::from(format!("{}:{}", project_name, info.name));
+        }
+    }
+
+    infos
+}
+
+/// Shows a [`RemoteConnectionModal`] on the given workspace and establishes
+/// a remote connection. Suitable for passing to
+/// [`MultiWorkspace::find_or_create_workspace`] as the `connect_remote`
+/// argument.
+fn connect_remote(
+    modal_workspace: Entity<Workspace>,
+    connection_options: RemoteConnectionOptions,
+    window: &mut Window,
+    cx: &mut Context<MultiWorkspace>,
+) -> gpui::Task<anyhow::Result<Option<Entity<remote::RemoteClient>>>> {
+    remote_connection::connect_with_modal(&modal_workspace, connection_options, window, cx)
 }
 
 /// The sidebar re-derives its entire entry list from scratch on every
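The disambiguation rule in `worktree_info_from_thread_paths` can be illustrated on plain paths. The sketch below is simplified (it derives short names from file names rather than `linked_worktree_short_name`, and omits the all-names-equal exception), but shows how linked worktree chips gain a `project:` prefix once more than one main path is involved:

```rust
use std::collections::HashSet;

// Hypothetical (main_path, folder_path) pairs, standing in for
// ThreadWorktreePaths::ordered_pairs in the patch above.
fn chip_names(pairs: &[(&str, &str)]) -> Vec<String> {
    let mains: HashSet<&str> = pairs.iter().map(|(m, _)| *m).collect();
    // Prefix linked chips only when multiple main projects are present.
    let linked_needs_prefix = mains.len() > 1;
    pairs
        .iter()
        .map(|(main, folder)| {
            let main_name = main.rsplit('/').next().unwrap_or(main);
            let folder_name = folder.rsplit('/').next().unwrap_or(folder);
            if *main == *folder {
                // Main checkout: plain directory name.
                folder_name.to_string()
            } else if linked_needs_prefix {
                format!("{main_name}:{folder_name}")
            } else {
                folder_name.to_string()
            }
        })
        .collect()
}

fn main() {
    // Single project: a linked worktree keeps its short name.
    assert_eq!(
        chip_names(&[("/src/zed", "/worktrees/feature")]),
        vec!["feature"]
    );
    // Two projects: the linked chip is prefixed with its project name.
    assert_eq!(
        chip_names(&[
            ("/src/zed", "/src/zed"),
            ("/src/docs", "/worktrees/feature"),
        ]),
        vec!["zed", "docs:feature"]
    );
    println!("chip naming ok");
}
```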
@@ -403,8 +453,8 @@ pub struct Sidebar {
     /// Tracks which sidebar entry is currently active (highlighted).
     active_entry: Option<ActiveEntry>,
     hovered_thread_index: Option<usize>,
-    collapsed_groups: HashSet<PathList>,
-    expanded_groups: HashMap<PathList, usize>,
+    collapsed_groups: HashSet<ProjectGroupKey>,
+    expanded_groups: HashMap<ProjectGroupKey, usize>,
     /// Updated only in response to explicit user actions (clicking a
     /// thread, confirming in the thread switcher, etc.) — never from
     /// background data changes. Used to sort the thread switcher popup.
@@ -415,7 +465,9 @@ pub struct Sidebar {
     thread_last_message_sent_or_queued: HashMap<acp::SessionId, DateTime<Utc>>,
     thread_switcher: Option<Entity<ThreadSwitcher>>,
     _thread_switcher_subscriptions: Vec<gpui::Subscription>,
+    pending_remote_thread_activation: Option<acp::SessionId>,
     view: SidebarView,
+    restoring_tasks: HashMap<acp::SessionId, Task<()>>,
     recent_projects_popover_handle: PopoverMenuHandle<SidebarRecentProjects>,
     project_header_menu_ix: Option<usize>,
     _subscriptions: Vec<gpui::Subscription>,
@@ -454,6 +506,34 @@ impl Sidebar {
                 MultiWorkspaceEvent::WorkspaceRemoved(_) => {
                     this.update_entries(cx);
                 }
+                MultiWorkspaceEvent::WorktreePathAdded {
+                    old_main_paths,
+                    added_path,
+                } => {
+                    let added_path = added_path.clone();
+                    ThreadMetadataStore::global(cx).update(cx, |store, cx| {
+                        store.change_worktree_paths(
+                            old_main_paths,
+                            |paths| paths.add_path(&added_path, &added_path),
+                            cx,
+                        );
+                    });
+                    this.update_entries(cx);
+                }
+                MultiWorkspaceEvent::WorktreePathRemoved {
+                    old_main_paths,
+                    removed_path,
+                } => {
+                    let removed_path = removed_path.clone();
+                    ThreadMetadataStore::global(cx).update(cx, |store, cx| {
+                        store.change_worktree_paths(
+                            old_main_paths,
+                            |paths| paths.remove_main_path(&removed_path),
+                            cx,
+                        );
+                    });
+                    this.update_entries(cx);
+                }
             },
         )
         .detach();
@@ -501,7 +581,9 @@ impl Sidebar {
             thread_last_message_sent_or_queued: HashMap::new(),
             thread_switcher: None,
             _thread_switcher_subscriptions: Vec::new(),
+            pending_remote_thread_activation: None,
             view: SidebarView::default(),
+            restoring_tasks: HashMap::new(),
             recent_projects_popover_handle: PopoverMenuHandle::default(),
             project_header_menu_ix: None,
             _subscriptions: Vec::new(),
@@ -711,28 +793,40 @@ impl Sidebar {
         result
     }
 
-    /// Finds the main worktree workspace for a project group.
-    fn workspace_for_group(&self, path_list: &PathList, cx: &App) -> Option<Entity<Workspace>> {
-        let mw = self.multi_workspace.upgrade()?;
-        mw.read(cx).workspace_for_paths(path_list, cx)
-    }
-
     /// Opens a new workspace for a group that has no open workspaces.
     fn open_workspace_for_group(
         &mut self,
-        path_list: &PathList,
+        project_group_key: &ProjectGroupKey,
         window: &mut Window,
         cx: &mut Context<Self>,
     ) {
         let Some(multi_workspace) = self.multi_workspace.upgrade() else {
             return;
         };
+        let path_list = project_group_key.path_list().clone();
+        let host = project_group_key.host();
+        let provisional_key = Some(project_group_key.clone());
+        let active_workspace = multi_workspace.read(cx).workspace().clone();
+        let modal_workspace = active_workspace.clone();
+
+        let task = multi_workspace.update(cx, |this, cx| {
+            this.find_or_create_workspace(
+                path_list,
+                host,
+                provisional_key,
+                |options, window, cx| connect_remote(active_workspace, options, window, cx),
+                window,
+                cx,
+            )
+        });
 
-        multi_workspace
-            .update(cx, |this, cx| {
-                this.find_or_create_local_workspace(path_list.clone(), window, cx)
-            })
-            .detach_and_log_err(cx);
+        cx.spawn_in(window, async move |_this, cx| {
+            let result = task.await;
+            remote_connection::dismiss_connection_modal(&modal_workspace, cx);
+            result?;
+            anyhow::Ok(())
+        })
+        .detach_and_log_err(cx);
     }
 
     /// Rebuilds the sidebar contents from current workspace and thread state.
@@ -770,15 +864,25 @@ impl Sidebar {
         // also appears as a "draft" (no messages yet).
         if let Some(active_ws) = &active_workspace {
             if let Some(panel) = active_ws.read(cx).panel::<AgentPanel>(cx) {
-                if panel.read(cx).active_thread_is_draft(cx)
-                    || panel.read(cx).active_conversation_view().is_none()
-                {
-                    let conversation_parent_id = panel
-                        .read(cx)
-                        .active_conversation_view()
-                        .and_then(|cv| cv.read(cx).parent_id(cx));
-                    let preserving_thread =
-                        if let Some(ActiveEntry::Thread { session_id, .. }) = &self.active_entry {
+                let active_thread_is_draft = panel.read(cx).active_thread_is_draft(cx);
+                let active_conversation_view = panel.read(cx).active_conversation_view();
+
+                if active_thread_is_draft || active_conversation_view.is_none() {
+                    if active_conversation_view.is_none()
+                        && let Some(session_id) = self.pending_remote_thread_activation.clone()
+                    {
+                        self.active_entry = Some(ActiveEntry::Thread {
+                            session_id,
+                            workspace: active_ws.clone(),
+                        });
+                    } else {
+                        let conversation_parent_id =
+                            active_conversation_view.and_then(|cv| cv.read(cx).parent_id(cx));
+                        let preserving_thread = if let Some(ActiveEntry::Thread {
+                            session_id,
+                            ..
+                        }) = &self.active_entry
+                        {
                             self.active_entry_workspace() == Some(active_ws)
                                 && conversation_parent_id
                                     .as_ref()
@@ -786,14 +890,16 @@ impl Sidebar {
                         } else {
                             false
                         };
-                    if !preserving_thread {
-                        self.active_entry = Some(ActiveEntry::Draft(active_ws.clone()));
+                        if !preserving_thread {
+                            self.active_entry = Some(ActiveEntry::Draft(active_ws.clone()));
+                        }
                     }
-                } else if let Some(session_id) = panel
-                    .read(cx)
-                    .active_conversation_view()
-                    .and_then(|cv| cv.read(cx).parent_id(cx))
+                } else if let Some(session_id) =
+                    active_conversation_view.and_then(|cv| cv.read(cx).parent_id(cx))
                 {
+                    if self.pending_remote_thread_activation.as_ref() == Some(&session_id) {
+                        self.pending_remote_thread_activation = None;
+                    }
                     self.active_entry = Some(ActiveEntry::Thread {
                         session_id,
                         workspace: active_ws.clone(),
@@ -838,15 +944,29 @@ impl Sidebar {
             (icon, icon_from_external_svg)
         };
 
-        for (group_key, group_workspaces) in mw.project_groups(cx) {
-            let path_list = group_key.path_list().clone();
-            if path_list.paths().is_empty() {
+        let groups: Vec<_> = mw.project_groups(cx).collect();
+
+        let mut all_paths: Vec<PathBuf> = groups
+            .iter()
+            .flat_map(|(key, _)| key.path_list().paths().iter().cloned())
+            .collect();
+        all_paths.sort();
+        all_paths.dedup();
+        let path_details =
+            util::disambiguate::compute_disambiguation_details(&all_paths, |path, detail| {
+                project::path_suffix(path, detail)
+            });
+        let path_detail_map: HashMap<PathBuf, usize> =
+            all_paths.into_iter().zip(path_details).collect();
+
+        for (group_key, group_workspaces) in &groups {
+            if group_key.path_list().paths().is_empty() {
                 continue;
             }
 
-            let label = group_key.display_name();
+            let label = group_key.display_name(&path_detail_map);
 
-            let is_collapsed = self.collapsed_groups.contains(&path_list);
+            let is_collapsed = self.collapsed_groups.contains(&group_key);
             let should_load_threads = !is_collapsed || !query.is_empty();
 
             let is_active = active_workspace
@@ -881,40 +1001,41 @@ impl Sidebar {
                 // Open; otherwise use Closed.
                 let resolve_workspace = |row: &ThreadMetadata| -> ThreadEntryWorkspace {
                     workspace_by_path_list
-                        .get(&row.folder_paths)
+                        .get(row.folder_paths())
                         .map(|ws| ThreadEntryWorkspace::Open((*ws).clone()))
-                        .unwrap_or_else(|| ThreadEntryWorkspace::Closed(row.folder_paths.clone()))
+                        .unwrap_or_else(|| ThreadEntryWorkspace::Closed {
+                            folder_paths: row.folder_paths().clone(),
+                            project_group_key: group_key.clone(),
+                        })
                 };
 
                 // Build a ThreadEntry from a metadata row.
-                let make_thread_entry = |row: ThreadMetadata,
-                                         workspace: ThreadEntryWorkspace|
-                 -> ThreadEntry {
-                    let (icon, icon_from_external_svg) = resolve_agent_icon(&row.agent_id);
-                    let worktrees: Vec<WorktreeInfo> =
-                        worktree_info_from_thread_paths(&row.folder_paths, &group_key).collect();
-                    ThreadEntry {
-                        metadata: row,
-                        icon,
-                        icon_from_external_svg,
-                        status: AgentThreadStatus::default(),
-                        workspace,
-                        is_live: false,
-                        is_background: false,
-                        is_title_generating: false,
-                        highlight_positions: Vec::new(),
-                        worktrees,
-                        diff_stats: DiffStats::default(),
-                    }
-                };
+                let make_thread_entry =
+                    |row: ThreadMetadata, workspace: ThreadEntryWorkspace| -> ThreadEntry {
+                        let (icon, icon_from_external_svg) = resolve_agent_icon(&row.agent_id);
+                        let worktrees = worktree_info_from_thread_paths(&row.worktree_paths);
+                        ThreadEntry {
+                            metadata: row,
+                            icon,
+                            icon_from_external_svg,
+                            status: AgentThreadStatus::default(),
+                            workspace,
+                            is_live: false,
+                            is_background: false,
+                            is_title_generating: false,
+                            highlight_positions: Vec::new(),
+                            worktrees,
+                            diff_stats: DiffStats::default(),
+                        }
+                    };
 
-                // === Main code path: one query per group via main_worktree_paths ===
+                // Main code path: one query per group via main_worktree_paths.
                 // The main_worktree_paths column is set on all new threads and
                 // points to the group's canonical paths regardless of which
                 // linked worktree the thread was opened in.
                 for row in thread_store
                     .read(cx)
-                    .entries_for_main_worktree_path(&path_list)
+                    .entries_for_main_worktree_path(group_key.path_list())
                     .cloned()
                 {
                     if !seen_session_ids.insert(row.session_id.clone()) {
@@ -928,7 +1049,11 @@ impl Sidebar {
                 // must be queried by their `folder_paths`.
 
                 // Load any legacy threads for the main worktrees of this project group.
-                for row in thread_store.read(cx).entries_for_path(&path_list).cloned() {
+                for row in thread_store
+                    .read(cx)
+                    .entries_for_path(group_key.path_list())
+                    .cloned()
+                {
                     if !seen_session_ids.insert(row.session_id.clone()) {
                         continue;
                     }
@@ -938,7 +1063,7 @@ impl Sidebar {
 
                // Load any legacy threads for any single linked worktree of this project group.
                 let mut linked_worktree_paths = HashSet::new();
-                for workspace in &group_workspaces {
+                for workspace in group_workspaces {
                     if workspace.read(cx).visible_worktrees(cx).count() != 1 {
                         continue;
                     }
@@ -960,7 +1085,10 @@ impl Sidebar {
                         }
                         threads.push(make_thread_entry(
                             row,
-                            ThreadEntryWorkspace::Closed(worktree_path_list.clone()),
+                            ThreadEntryWorkspace::Closed {
+                                folder_paths: worktree_path_list.clone(),
+                                project_group_key: group_key.clone(),
+                            },
                         ));
                     }
                 }
@@ -1033,6 +1161,20 @@ impl Sidebar {
                 }
             }
 
+            let has_threads = if !threads.is_empty() {
+                true
+            } else {
+                let store = ThreadMetadataStore::global(cx).read(cx);
+                store
+                    .entries_for_main_worktree_path(group_key.path_list())
+                    .next()
+                    .is_some()
+                    || store
+                        .entries_for_path(group_key.path_list())
+                        .next()
+                        .is_some()
+            };
+
             if !query.is_empty() {
                 let workspace_highlight_positions =
                     fuzzy_match_positions(&query, &label).unwrap_or_default();
@@ -1071,6 +1213,7 @@ impl Sidebar {
                     has_running_threads,
                     waiting_thread_count,
                     is_active,
+                    has_threads,
                 });
 
                 for thread in matched_threads {
@@ -1089,6 +1232,7 @@ impl Sidebar {
                     has_running_threads,
                     waiting_thread_count,
                     is_active,
+                    has_threads,
                 });
 
                 if is_collapsed {
@@ -1098,42 +1242,47 @@ impl Sidebar {
                 // Emit a DraftThread entry when the active draft belongs to this group.
                 if is_draft_for_group {
                     if let Some(ActiveEntry::Draft(draft_ws)) = &self.active_entry {
-                        let ws_path_list = workspace_path_list(draft_ws, cx);
-                        let worktrees = worktree_info_from_thread_paths(&ws_path_list, &group_key);
+                        let ws_worktree_paths = ThreadWorktreePaths::from_project(
+                            draft_ws.read(cx).project().read(cx),
+                            cx,
+                        );
+                        let worktrees = worktree_info_from_thread_paths(&ws_worktree_paths);
                         entries.push(ListEntry::DraftThread {
-                            worktrees: worktrees.collect(),
+                            key: group_key.clone(),
+                            workspace: None,
+                            worktrees,
                         });
                     }
                 }
 
-                // Emit NewThread entries:
-                // 1. When the group has zero threads (convenient affordance).
-                // 2. For each open linked worktree workspace in this group
-                //    that has no threads (makes the workspace reachable and
-                //    dismissable).
-                let group_has_no_threads = threads.is_empty() && !group_workspaces.is_empty();
-
-                if !is_draft_for_group && group_has_no_threads {
-                    entries.push(ListEntry::NewThread {
-                        key: group_key.clone(),
-                        worktrees: Vec::new(),
-                        workspace: None,
-                    });
-                }
-
-                // Emit a NewThread for each open linked worktree workspace
-                // that has no threads. Skip the workspace if it's showing
-                // the active draft (it already has a DraftThread entry).
-                if !is_draft_for_group {
+                // Emit a DraftThread for each open linked worktree workspace
+                // that has no threads. Skip the specific workspace that is
+                // showing the active draft (it already has a DraftThread entry
+                // from the block above).
+                {
+                    let draft_ws_id = if is_draft_for_group {
+                        self.active_entry.as_ref().and_then(|e| match e {
+                            ActiveEntry::Draft(ws) => Some(ws.entity_id()),
+                            _ => None,
+                        })
+                    } else {
+                        None
+                    };
                     let thread_store = ThreadMetadataStore::global(cx);
-                    for ws in &group_workspaces {
-                        let ws_path_list = workspace_path_list(ws, cx);
+                    for ws in group_workspaces {
+                        if Some(ws.entity_id()) == draft_ws_id {
+                            continue;
+                        }
+                        let ws_worktree_paths =
+                            ThreadWorktreePaths::from_project(ws.read(cx).project().read(cx), cx);
                         let has_linked_worktrees =
-                            worktree_info_from_thread_paths(&ws_path_list, &group_key)
+                            worktree_info_from_thread_paths(&ws_worktree_paths)
+                                .iter()
                                 .any(|wt| wt.kind == ui::WorktreeKind::Linked);
                         if !has_linked_worktrees {
                             continue;
                         }
+                        let ws_path_list = workspace_path_list(ws, cx);
                         let store = thread_store.read(cx);
                         let has_threads = store.entries_for_path(&ws_path_list).next().is_some()
                             || store
@@ -1143,19 +1292,19 @@ impl Sidebar {
                         if has_threads {
                             continue;
                         }
-                        let worktrees: Vec<WorktreeInfo> =
-                            worktree_info_from_thread_paths(&ws_path_list, &group_key).collect();
-                        entries.push(ListEntry::NewThread {
+                        let worktrees = worktree_info_from_thread_paths(&ws_worktree_paths);
+
+                        entries.push(ListEntry::DraftThread {
                             key: group_key.clone(),
-                            worktrees,
                             workspace: Some(ws.clone()),
+                            worktrees,
                         });
                     }
                 }
 
                 let total = threads.len();
 
-                let extra_batches = self.expanded_groups.get(&path_list).copied().unwrap_or(0);
+                let extra_batches = self.expanded_groups.get(&group_key).copied().unwrap_or(0);
                 let threads_to_show =
                     DEFAULT_THREADS_SHOWN + (extra_batches * DEFAULT_THREADS_SHOWN);
                 let count = threads_to_show.min(total);
@@ -1288,6 +1437,7 @@ impl Sidebar {
                 has_running_threads,
                 waiting_thread_count,
                 is_active: is_active_group,
+                has_threads,
             } => self.render_project_header(
                 ix,
                 false,
@@ -1298,21 +1448,25 @@ impl Sidebar {
                 *waiting_thread_count,
                 *is_active_group,
                 is_selected,
+                *has_threads,
                 cx,
             ),
             ListEntry::Thread(thread) => self.render_thread(ix, thread, is_active, is_selected, cx),
             ListEntry::ViewMore {
                 key,
                 is_fully_expanded,
-            } => self.render_view_more(ix, key.path_list(), *is_fully_expanded, is_selected, cx),
-            ListEntry::DraftThread { worktrees, .. } => {
-                self.render_draft_thread(ix, is_active, worktrees, is_selected, cx)
-            }
-            ListEntry::NewThread {
+            } => self.render_view_more(ix, key, *is_fully_expanded, is_selected, cx),
+            ListEntry::DraftThread {
                 key,
-                worktrees,
                 workspace,
-            } => self.render_new_thread(ix, key, worktrees, workspace.as_ref(), is_selected, cx),
+                worktrees,
+            } => {
+                if workspace.is_some() {
+                    self.render_new_thread(ix, key, worktrees, workspace.as_ref(), is_selected, cx)
+                } else {
+                    self.render_draft_thread(ix, is_active, worktrees, is_selected, cx)
+                }
+            }
         };
 
         if is_group_header_after_first {
@@ -1362,9 +1516,9 @@ impl Sidebar {
         waiting_thread_count: usize,
         is_active: bool,
         is_focused: bool,
+        has_threads: bool,
         cx: &mut Context<Self>,
     ) -> AnyElement {
-        let path_list = key.path_list();
         let host = key.host();
 
         let id_prefix = if is_sticky { "sticky-" } else { "" };
@@ -1372,26 +1526,27 @@ impl Sidebar {
         let disclosure_id = SharedString::from(format!("disclosure-{ix}"));
         let group_name = SharedString::from(format!("{id_prefix}header-group-{ix}"));
 
-        let is_collapsed = self.collapsed_groups.contains(path_list);
+        let is_collapsed = self.collapsed_groups.contains(key);
         let (disclosure_icon, disclosure_tooltip) = if is_collapsed {
             (IconName::ChevronRight, "Expand Project")
         } else {
             (IconName::ChevronDown, "Collapse Project")
         };
 
-        let has_new_thread_entry = self.contents.entries.get(ix + 1).is_some_and(|entry| {
-            matches!(
-                entry,
-                ListEntry::NewThread { .. } | ListEntry::DraftThread { .. }
-            )
-        });
+        let has_new_thread_entry = self
+            .contents
+            .entries
+            .get(ix + 1)
+            .is_some_and(|entry| matches!(entry, ListEntry::DraftThread { .. }));
         let show_new_thread_button = !has_new_thread_entry && !self.has_filter_query(cx);
+        let workspace = self.multi_workspace.upgrade().and_then(|mw| {
+            mw.read(cx)
+                .workspace_for_paths(key.path_list(), key.host().as_ref(), cx)
+        });
 
-        let workspace = self.workspace_for_group(path_list, cx);
-
-        let path_list_for_toggle = path_list.clone();
-        let path_list_for_collapse = path_list.clone();
-        let view_more_expanded = self.expanded_groups.contains_key(path_list);
+        let key_for_toggle = key.clone();
+        let key_for_collapse = key.clone();
+        let view_more_expanded = self.expanded_groups.contains_key(key);
 
         let label = if highlight_positions.is_empty() {
             Label::new(label.clone())
@@ -1408,6 +1563,8 @@ impl Sidebar {
             .element_active
             .blend(color.element_background.opacity(0.2));
 
+        let is_ellipsis_menu_open = self.project_header_menu_ix == Some(ix);
+
         h_flex()
             .id(id)
             .group(&group_name)
@@ -1426,7 +1583,6 @@ impl Sidebar {
             .justify_between()
             .child(
                 h_flex()
-                    .cursor_pointer()
                     .relative()
                     .min_w_0()
                     .w_full()
@@ -1439,7 +1595,7 @@ impl Sidebar {
                             .tooltip(Tooltip::text(disclosure_tooltip))
                             .on_click(cx.listener(move |this, _, window, cx| {
                                 this.selection = None;
-                                this.toggle_collapse(&path_list_for_toggle, window, cx);
+                                this.toggle_collapse(&key_for_toggle, window, cx);
                             })),
                     )
                     .child(label)
@@ -1479,13 +1635,13 @@ impl Sidebar {
             )
             .child(
                 h_flex()
-                    .when(self.project_header_menu_ix != Some(ix), |this| {
-                        this.visible_on_hover(group_name)
+                    .when(!is_ellipsis_menu_open, |this| {
+                        this.visible_on_hover(&group_name)
                     })
                     .on_mouse_down(gpui::MouseButton::Left, |_, _, cx| {
                         cx.stop_propagation();
                     })
-                    .child(self.render_project_header_menu(ix, id_prefix, key, cx))
+                    .child(self.render_project_header_ellipsis_menu(ix, id_prefix, key, cx))
                     .when(view_more_expanded && !is_collapsed, |this| {
                         this.child(
                             IconButton::new(
@@ -1497,10 +1653,10 @@ impl Sidebar {
                             .icon_size(IconSize::Small)
                             .tooltip(Tooltip::text("Collapse Displayed Threads"))
                             .on_click(cx.listener({
-                                let path_list_for_collapse = path_list_for_collapse.clone();
+                                let key_for_collapse = key_for_collapse.clone();
                                 move |this, _, _window, cx| {
                                     this.selection = None;
-                                    this.expanded_groups.remove(&path_list_for_collapse);
+                                    this.expanded_groups.remove(&key_for_collapse);
                                     this.serialize(cx);
                                     this.update_entries(cx);
                                 }
@@ -1510,7 +1666,8 @@ impl Sidebar {
                     .when_some(
                         workspace.filter(|_| show_new_thread_button),
                         |this, workspace| {
-                            let path_list = path_list.clone();
+                            let key = key.clone();
+                            let focus_handle = self.focus_handle.clone();
                             this.child(
                                 IconButton::new(
                                     SharedString::from(format!(
@@ -1519,10 +1676,17 @@ impl Sidebar {
                                     IconName::Plus,
                                 )
                                 .icon_size(IconSize::Small)
-                                .tooltip(Tooltip::text("New Thread"))
+                                .tooltip(move |_, cx| {
+                                    Tooltip::for_action_in(
+                                        "New Thread",
+                                        &NewThread,
+                                        &focus_handle,
+                                        cx,
+                                    )
+                                })
                                 .on_click(cx.listener(
                                     move |this, _, window, cx| {
-                                        this.collapsed_groups.remove(&path_list);
+                                        this.collapsed_groups.remove(&key);
                                         this.selection = None;
                                         this.create_new_thread(&workspace, window, cx);
                                     },
@@ -1532,32 +1696,42 @@ impl Sidebar {
                     ),
             )
             .map(|this| {
-                let path_list = path_list.clone();
-                this.cursor_pointer()
-                    .when(!is_active, |this| this.hover(|s| s.bg(hover_color)))
-                    .tooltip(Tooltip::text("Open Workspace"))
-                    .on_click(cx.listener(move |this, _, window, cx| {
-                        if let Some(workspace) = this.workspace_for_group(&path_list, cx) {
-                            this.active_entry = Some(ActiveEntry::Draft(workspace.clone()));
-                            if let Some(multi_workspace) = this.multi_workspace.upgrade() {
-                                multi_workspace.update(cx, |multi_workspace, cx| {
-                                    multi_workspace.activate(workspace.clone(), window, cx);
-                                });
-                            }
-                            if AgentPanel::is_visible(&workspace, cx) {
-                                workspace.update(cx, |workspace, cx| {
-                                    workspace.focus_panel::<AgentPanel>(window, cx);
-                                });
+                if !has_threads && is_active {
+                    this
+                } else {
+                    let key = key.clone();
+                    this.cursor_pointer()
+                        .when(!is_active, |this| this.hover(|s| s.bg(hover_color)))
+                        .tooltip(Tooltip::text("Open Workspace"))
+                        .on_click(cx.listener(move |this, _, window, cx| {
+                            if let Some(workspace) = this.multi_workspace.upgrade().and_then(|mw| {
+                                mw.read(cx).workspace_for_paths(
+                                    key.path_list(),
+                                    key.host().as_ref(),
+                                    cx,
+                                )
+                            }) {
+                                this.active_entry = Some(ActiveEntry::Draft(workspace.clone()));
+                                if let Some(multi_workspace) = this.multi_workspace.upgrade() {
+                                    multi_workspace.update(cx, |multi_workspace, cx| {
+                                        multi_workspace.activate(workspace.clone(), window, cx);
+                                    });
+                                }
+                                if AgentPanel::is_visible(&workspace, cx) {
+                                    workspace.update(cx, |workspace, cx| {
+                                        workspace.focus_panel::<AgentPanel>(window, cx);
+                                    });
+                                }
+                            } else {
+                                this.open_workspace_for_group(&key, window, cx);
                             }
-                        } else {
-                            this.open_workspace_for_group(&path_list, window, cx);
-                        }
-                    }))
+                        }))
+                }
             })
             .into_any_element()
     }
 
-    fn render_project_header_menu(
+    fn render_project_header_ellipsis_menu(
         &self,
         ix: usize,
         id_prefix: &str,
@@ -1583,72 +1757,79 @@ impl Sidebar {
                 let multi_workspace = multi_workspace.clone();
                 let project_group_key = project_group_key.clone();
 
-                let menu = ContextMenu::build_persistent(window, cx, move |menu, _window, _cx| {
-                    let mut menu = menu
-                        .header("Project Folders")
-                        .end_slot_action(Box::new(menu::EndSlot));
+                let menu =
+                    ContextMenu::build_persistent(window, cx, move |menu, _window, menu_cx| {
+                        let weak_menu = menu_cx.weak_entity();
+                        let mut menu = menu
+                            .header("Project Folders")
+                            .end_slot_action(Box::new(menu::EndSlot));
 
-                    for path in project_group_key.path_list().paths() {
-                        let Some(name) = path.file_name() else {
-                            continue;
-                        };
-                        let name: SharedString = name.to_string_lossy().into_owned().into();
-                        let path = path.clone();
-                        let project_group_key = project_group_key.clone();
-                        let multi_workspace = multi_workspace.clone();
-                        menu = menu.entry_with_end_slot_on_hover(
-                            name.clone(),
-                            None,
-                            |_, _| {},
-                            IconName::Close,
-                            "Remove Folder".into(),
-                            move |_window, cx| {
-                                multi_workspace
-                                    .update(cx, |multi_workspace, cx| {
-                                        multi_workspace.remove_folder_from_project_group(
-                                            &project_group_key,
-                                            &path,
-                                            cx,
-                                        );
-                                    })
-                                    .ok();
+                        for path in project_group_key.path_list().paths() {
+                            let Some(name) = path.file_name() else {
+                                continue;
+                            };
+                            let name: SharedString = name.to_string_lossy().into_owned().into();
+                            let path = path.clone();
+                            let project_group_key = project_group_key.clone();
+                            let multi_workspace = multi_workspace.clone();
+                            let weak_menu = weak_menu.clone();
+                            menu = menu.entry_with_end_slot_on_hover(
+                                name.clone(),
+                                None,
+                                |_, _| {},
+                                IconName::Close,
+                                "Remove Folder".into(),
+                                move |_window, cx| {
+                                    multi_workspace
+                                        .update(cx, |multi_workspace, cx| {
+                                            multi_workspace.remove_folder_from_project_group(
+                                                &project_group_key,
+                                                &path,
+                                                cx,
+                                            );
+                                        })
+                                        .ok();
+                                    weak_menu.update(cx, |_, cx| cx.emit(DismissEvent)).ok();
+                                },
+                            );
+                        }
+
+                        let menu = menu.separator().entry(
+                            "Add Folder to Project",
+                            Some(Box::new(AddFolderToProject)),
+                            {
+                                let project_group_key = project_group_key.clone();
+                                let multi_workspace = multi_workspace.clone();
+                                let weak_menu = weak_menu.clone();
+                                move |window, cx| {
+                                    multi_workspace
+                                        .update(cx, |multi_workspace, cx| {
+                                            multi_workspace.prompt_to_add_folders_to_project_group(
+                                                &project_group_key,
+                                                window,
+                                                cx,
+                                            );
+                                        })
+                                        .ok();
+                                    weak_menu.update(cx, |_, cx| cx.emit(DismissEvent)).ok();
+                                }
                             },
                         );
-                    }
 
-                    let menu = menu.separator().entry(
-                        "Add Folder to Project",
-                        Some(Box::new(AddFolderToProject)),
-                        {
-                            let project_group_key = project_group_key.clone();
-                            let multi_workspace = multi_workspace.clone();
-                            move |window, cx| {
+                        let project_group_key = project_group_key.clone();
+                        let multi_workspace = multi_workspace.clone();
+                        menu.separator()
+                            .entry("Remove Project", None, move |window, cx| {
                                 multi_workspace
                                     .update(cx, |multi_workspace, cx| {
-                                        multi_workspace.prompt_to_add_folders_to_project_group(
-                                            &project_group_key,
-                                            window,
-                                            cx,
-                                        );
+                                        multi_workspace
+                                            .remove_project_group(&project_group_key, window, cx)
+                                            .detach_and_log_err(cx);
                                     })
                                     .ok();
-                            }
-                        },
-                    );
-
-                    let project_group_key = project_group_key.clone();
-                    let multi_workspace = multi_workspace.clone();
-                    menu.separator()
-                        .entry("Remove Project", None, move |window, cx| {
-                            multi_workspace
-                                .update(cx, |multi_workspace, cx| {
-                                    multi_workspace
-                                        .remove_project_group(&project_group_key, window, cx)
-                                        .detach_and_log_err(cx);
-                                })
-                                .ok();
-                        })
-                });
+                                weak_menu.update(cx, |_, cx| cx.emit(DismissEvent)).ok();
+                            })
+                    });
 
                 let this = this.clone();
                 window
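
The label disambiguation introduced in this diff pairs each project path with the number of trailing components needed to make its display name unique, then stores that count in a `path_detail_map` consulted by `display_name`. A minimal, self-contained sketch of that idea (the helpers below are hypothetical stand-ins for `util::disambiguate::compute_disambiguation_details` and `project::path_suffix`, not the actual Zed implementations):

```rust
use std::collections::HashMap;
use std::path::{Path, PathBuf};

/// Last `n` components of a path, joined back into a path.
/// (Stand-in for `project::path_suffix`.)
fn path_suffix(path: &Path, n: usize) -> PathBuf {
    let components: Vec<_> = path.components().collect();
    let start = components.len().saturating_sub(n);
    components[start..].iter().collect()
}

/// For each path, return how many trailing components are needed to
/// distinguish it from every other path in the slice.
/// (Stand-in for `util::disambiguate::compute_disambiguation_details`.)
fn disambiguation_details(paths: &[PathBuf]) -> Vec<usize> {
    paths
        .iter()
        .map(|path| {
            let max_detail = path.components().count();
            let mut detail = 1;
            loop {
                let suffix = path_suffix(path, detail);
                let collides = paths
                    .iter()
                    .any(|other| other != path && path_suffix(other, detail) == suffix);
                if !collides || detail >= max_detail {
                    break detail;
                }
                detail += 1;
            }
        })
        .collect()
}

fn main() {
    // Two groups both named "project" need an extra parent component;
    // "other" is already unique with a single component.
    let paths = vec![
        PathBuf::from("/home/a/project"),
        PathBuf::from("/home/b/project"),
        PathBuf::from("/home/a/other"),
    ];
    let details = disambiguation_details(&paths);
    let map: HashMap<PathBuf, usize> = paths.into_iter().zip(details).collect();
    println!("{:?}", map);
}
```

With this shape, a group whose folder name collides with another group renders as `a/project` vs `b/project` while non-colliding names stay short, which matches the sorted-and-deduped `all_paths` → `path_detail_map` flow in the hunk above.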

crates/sidebar/src/sidebar_tests.rs

@@ -1,12 +1,12 @@
 use super::*;
-use acp_thread::StubAgentConnection;
+use acp_thread::{AcpThread, PermissionOptions, StubAgentConnection};
 use agent::ThreadStore;
 use agent_ui::{
     test_support::{active_session_id, open_thread_with_connection, send_message},
-    thread_metadata_store::ThreadMetadata,
+    thread_metadata_store::{ThreadMetadata, ThreadWorktreePaths},
 };
 use chrono::DateTime;
-use fs::FakeFs;
+use fs::{FakeFs, Fs};
 use gpui::TestAppContext;
 use pretty_assertions::assert_eq;
 use project::AgentId;
@@ -60,6 +60,75 @@ fn has_thread_entry(sidebar: &Sidebar, session_id: &acp::SessionId) -> bool {
         .any(|entry| matches!(entry, ListEntry::Thread(t) if &t.metadata.session_id == session_id))
 }
 
+#[track_caller]
+fn assert_remote_project_integration_sidebar_state(
+    sidebar: &mut Sidebar,
+    main_thread_id: &acp::SessionId,
+    remote_thread_id: &acp::SessionId,
+) {
+    let mut project_headers = sidebar.contents.entries.iter().filter_map(|entry| {
+        if let ListEntry::ProjectHeader { label, .. } = entry {
+            Some(label.as_ref())
+        } else {
+            None
+        }
+    });
+
+    let Some(project_header) = project_headers.next() else {
+        panic!("expected exactly one sidebar project header named `project`, found none");
+    };
+    assert_eq!(
+        project_header, "project",
+        "expected the only sidebar project header to be `project`"
+    );
+    if let Some(unexpected_header) = project_headers.next() {
+        panic!(
+            "expected exactly one sidebar project header named `project`, found extra header `{unexpected_header}`"
+        );
+    }
+
+    let mut saw_main_thread = false;
+    let mut saw_remote_thread = false;
+    for entry in &sidebar.contents.entries {
+        match entry {
+            ListEntry::ProjectHeader { label, .. } => {
+                assert_eq!(
+                    label.as_ref(),
+                    "project",
+                    "expected the only sidebar project header to be `project`"
+                );
+            }
+            ListEntry::Thread(thread) if &thread.metadata.session_id == main_thread_id => {
+                saw_main_thread = true;
+            }
+            ListEntry::Thread(thread) if &thread.metadata.session_id == remote_thread_id => {
+                saw_remote_thread = true;
+            }
+            ListEntry::Thread(thread) => {
+                let title = thread.metadata.title.as_ref();
+                panic!(
+                    "unexpected sidebar thread while simulating remote project integration flicker: title=`{title}`"
+                );
+            }
+            ListEntry::ViewMore { .. } => {
+                panic!(
+                    "unexpected `View More` entry while simulating remote project integration flicker"
+                );
+            }
+            ListEntry::DraftThread { .. } => {}
+        }
+    }
+
+    assert!(
+        saw_main_thread,
+        "expected the sidebar to keep showing `Main Thread` under `project`"
+    );
+    assert!(
+        saw_remote_thread,
+        "expected the sidebar to keep showing `Worktree Thread` under `project`"
+    );
+}
+
 async fn init_test_project(
     worktree_path: &str,
     cx: &mut TestAppContext,
@@ -157,31 +226,49 @@ fn save_thread_metadata(
     cx: &mut TestAppContext,
 ) {
     cx.update(|cx| {
-        let (folder_paths, main_worktree_paths) = {
-            let project_ref = project.read(cx);
-            let paths: Vec<Arc<Path>> = project_ref
-                .visible_worktrees(cx)
-                .map(|worktree| worktree.read(cx).abs_path())
-                .collect();
-            let folder_paths = PathList::new(&paths);
-            let main_worktree_paths = project_ref.project_group_key(cx).path_list().clone();
-            (folder_paths, main_worktree_paths)
-        };
+        let worktree_paths = ThreadWorktreePaths::from_project(project.read(cx), cx);
         let metadata = ThreadMetadata {
             session_id,
             agent_id: agent::ZED_AGENT_ID.clone(),
             title,
             updated_at,
             created_at,
-            folder_paths,
-            main_worktree_paths,
+            worktree_paths,
             archived: false,
+            remote_connection: None,
         };
         ThreadMetadataStore::global(cx).update(cx, |store, cx| store.save_manually(metadata, cx));
     });
     cx.run_until_parked();
 }
 
+fn save_thread_metadata_with_main_paths(
+    session_id: &str,
+    title: &str,
+    folder_paths: PathList,
+    main_worktree_paths: PathList,
+    cx: &mut TestAppContext,
+) {
+    let session_id = acp::SessionId::new(Arc::from(session_id));
+    let title = SharedString::from(title.to_string());
+    let updated_at = chrono::TimeZone::with_ymd_and_hms(&Utc, 2024, 1, 1, 0, 0, 0).unwrap();
+    let metadata = ThreadMetadata {
+        session_id,
+        agent_id: agent::ZED_AGENT_ID.clone(),
+        title,
+        updated_at,
+        created_at: None,
+        worktree_paths: ThreadWorktreePaths::from_path_lists(main_worktree_paths, folder_paths)
+            .unwrap(),
+        archived: false,
+        remote_connection: None,
+    };
+    cx.update(|cx| {
+        ThreadMetadataStore::global(cx).update(cx, |store, cx| store.save_manually(metadata, cx));
+    });
+    cx.run_until_parked();
+}
+
 fn focus_sidebar(sidebar: &Entity<Sidebar>, cx: &mut gpui::VisualTestContext) {
     sidebar.update_in(cx, |_, window, cx| {
         cx.focus_self(window);
@@ -189,6 +276,35 @@ fn focus_sidebar(sidebar: &Entity<Sidebar>, cx: &mut gpui::VisualTestContext) {
     cx.run_until_parked();
 }
 
+fn request_test_tool_authorization(
+    thread: &Entity<AcpThread>,
+    tool_call_id: &str,
+    option_id: &str,
+    cx: &mut gpui::VisualTestContext,
+) {
+    let tool_call_id = acp::ToolCallId::new(tool_call_id);
+    let label = format!("Tool {tool_call_id}");
+    let option_id = acp::PermissionOptionId::new(option_id);
+    let _authorization_task = cx.update(|_, cx| {
+        thread.update(cx, |thread, cx| {
+            thread
+                .request_tool_call_authorization(
+                    acp::ToolCall::new(tool_call_id, label)
+                        .kind(acp::ToolKind::Edit)
+                        .into(),
+                    PermissionOptions::Flat(vec![acp::PermissionOption::new(
+                        option_id,
+                        "Allow",
+                        acp::PermissionOptionKind::AllowOnce,
+                    )]),
+                    cx,
+                )
+                .unwrap()
+        })
+    });
+    cx.run_until_parked();
+}
+
 fn format_linked_worktree_chips(worktrees: &[WorktreeInfo]) -> String {
     let mut seen = Vec::new();
     let mut chips = Vec::new();
@@ -224,6 +340,11 @@ fn visible_entries_as_strings(
                 } else {
                     ""
                 };
+                let is_active = sidebar
+                    .active_entry
+                    .as_ref()
+                    .is_some_and(|active| active.matches_entry(entry));
+                let active_indicator = if is_active { " (active)" } else { "" };
                 match entry {
                     ListEntry::ProjectHeader {
                         label,
@@ -231,7 +352,7 @@ fn visible_entries_as_strings(
                         highlight_positions: _,
                         ..
                     } => {
-                        let icon = if sidebar.collapsed_groups.contains(key.path_list()) {
+                        let icon = if sidebar.collapsed_groups.contains(key) {
                             ">"
                         } else {
                             "v"
@@ -240,7 +361,7 @@ fn visible_entries_as_strings(
                     }
                     ListEntry::Thread(thread) => {
                         let title = thread.metadata.title.as_ref();
-                        let active = if thread.is_live { " *" } else { "" };
+                        let live = if thread.is_live { " *" } else { "" };
                         let status_str = match thread.status {
                             AgentThreadStatus::Running => " (running)",
                             AgentThreadStatus::Error => " (error)",
@@ -256,7 +377,7 @@ fn visible_entries_as_strings(
                             ""
                         };
                         let worktree = format_linked_worktree_chips(&thread.worktrees);
-                        format!("  {title}{worktree}{active}{status_str}{notified}{selected}")
+                        format!("  {title}{worktree}{live}{status_str}{notified}{active_indicator}{selected}")
                     }
                     ListEntry::ViewMore {
                         is_fully_expanded, ..
@@ -267,13 +388,17 @@ fn visible_entries_as_strings(
                             format!("  + View More{}", selected)
                         }
                     }
-                    ListEntry::DraftThread { worktrees, .. } => {
-                        let worktree = format_linked_worktree_chips(worktrees);
-                        format!("  [~ Draft{}]{}", worktree, selected)
-                    }
-                    ListEntry::NewThread { worktrees, .. } => {
+                    ListEntry::DraftThread {
+                        workspace,
+                        worktrees,
+                        ..
+                    } => {
                         let worktree = format_linked_worktree_chips(worktrees);
-                        format!("  [+ New Thread{}]{}", worktree, selected)
+                        if workspace.is_some() {
+                            format!("  [+ New Thread{}]{}", worktree, selected)
+                        } else {
+                            format!("  [~ Draft{}]{}{}", worktree, active_indicator, selected)
+                        }
                     }
                 }
             })
@@ -290,15 +415,13 @@ async fn test_serialization_round_trip(cx: &mut TestAppContext) {
 
     save_n_test_threads(3, &project, cx).await;
 
-    let path_list = project.read_with(cx, |project, cx| {
-        project.project_group_key(cx).path_list().clone()
-    });
+    let project_group_key = project.read_with(cx, |project, cx| project.project_group_key(cx));
 
     // Set a custom width, collapse the group, and expand "View More".
     sidebar.update_in(cx, |sidebar, window, cx| {
         sidebar.set_width(Some(px(420.0)), cx);
-        sidebar.toggle_collapse(&path_list, window, cx);
-        sidebar.expanded_groups.insert(path_list.clone(), 2);
+        sidebar.toggle_collapse(&project_group_key, window, cx);
+        sidebar.expanded_groups.insert(project_group_key.clone(), 2);
     });
     cx.run_until_parked();
 
@@ -336,8 +459,8 @@ async fn test_serialization_round_trip(cx: &mut TestAppContext) {
     assert_eq!(collapsed1, collapsed2);
     assert_eq!(expanded1, expanded2);
     assert_eq!(width1, px(420.0));
-    assert!(collapsed1.contains(&path_list));
-    assert_eq!(expanded1.get(&path_list), Some(&2));
+    assert!(collapsed1.contains(&project_group_key));
+    assert_eq!(expanded1.get(&project_group_key), Some(&2));
 }
 
 #[gpui::test]
@@ -443,7 +566,10 @@ async fn test_single_workspace_no_threads(cx: &mut TestAppContext) {
 
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
-        vec!["v [my-project]", "  [+ New Thread]"]
+        vec![
+            //
+            "v [my-project]",
+        ]
     );
 }
 
@@ -479,6 +605,7 @@ async fn test_single_workspace_with_saved_threads(cx: &mut TestAppContext) {
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
         vec![
+            //
             "v [my-project]",
             "  Fix crash in project panel",
             "  Add inline diff view",
@@ -509,7 +636,11 @@ async fn test_workspace_lifecycle(cx: &mut TestAppContext) {
 
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
-        vec!["v [project-a]", "  Thread A1"]
+        vec![
+            //
+            "v [project-a]",
+            "  Thread A1",
+        ]
     );
 
     // Add a second workspace
@@ -520,7 +651,11 @@ async fn test_workspace_lifecycle(cx: &mut TestAppContext) {
 
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
-        vec!["v [project-a]", "  Thread A1",]
+        vec![
+            //
+            "v [project-a]",
+            "  Thread A1",
+        ]
     );
 }
 
@@ -539,6 +674,7 @@ async fn test_view_more_pagination(cx: &mut TestAppContext) {
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
         vec![
+            //
             "v [my-project]",
             "  Thread 12",
             "  Thread 11",
@@ -560,9 +696,7 @@ async fn test_view_more_batched_expansion(cx: &mut TestAppContext) {
     // Create 17 threads: initially shows 5, then 10, then 15, then all 17 with Collapse
     save_n_test_threads(17, &project, cx).await;
 
-    let path_list = project.read_with(cx, |project, cx| {
-        project.project_group_key(cx).path_list().clone()
-    });
+    let project_group_key = project.read_with(cx, |project, cx| project.project_group_key(cx));
 
     multi_workspace.update_in(cx, |_, _window, cx| cx.notify());
     cx.run_until_parked();
@@ -587,8 +721,13 @@ async fn test_view_more_batched_expansion(cx: &mut TestAppContext) {
 
     // Expand again by one batch
     sidebar.update_in(cx, |s, _window, cx| {
-        let current = s.expanded_groups.get(&path_list).copied().unwrap_or(0);
-        s.expanded_groups.insert(path_list.clone(), current + 1);
+        let current = s
+            .expanded_groups
+            .get(&project_group_key)
+            .copied()
+            .unwrap_or(0);
+        s.expanded_groups
+            .insert(project_group_key.clone(), current + 1);
         s.update_entries(cx);
     });
     cx.run_until_parked();
@@ -600,8 +739,13 @@ async fn test_view_more_batched_expansion(cx: &mut TestAppContext) {
 
     // Expand one more time - should show all 17 threads with Collapse button
     sidebar.update_in(cx, |s, _window, cx| {
-        let current = s.expanded_groups.get(&path_list).copied().unwrap_or(0);
-        s.expanded_groups.insert(path_list.clone(), current + 1);
+        let current = s
+            .expanded_groups
+            .get(&project_group_key)
+            .copied()
+            .unwrap_or(0);
+        s.expanded_groups
+            .insert(project_group_key.clone(), current + 1);
         s.update_entries(cx);
     });
     cx.run_until_parked();
@@ -614,7 +758,7 @@ async fn test_view_more_batched_expansion(cx: &mut TestAppContext) {
 
     // Click collapse - should go back to showing 5 threads
     sidebar.update_in(cx, |s, _window, cx| {
-        s.expanded_groups.remove(&path_list);
+        s.expanded_groups.remove(&project_group_key);
         s.update_entries(cx);
     });
     cx.run_until_parked();
@@ -634,38 +778,47 @@ async fn test_collapse_and_expand_group(cx: &mut TestAppContext) {
 
     save_n_test_threads(1, &project, cx).await;
 
-    let path_list = project.read_with(cx, |project, cx| {
-        project.project_group_key(cx).path_list().clone()
-    });
+    let project_group_key = project.read_with(cx, |project, cx| project.project_group_key(cx));
 
     multi_workspace.update_in(cx, |_, _window, cx| cx.notify());
     cx.run_until_parked();
 
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
-        vec!["v [my-project]", "  Thread 1"]
+        vec![
+            //
+            "v [my-project]",
+            "  Thread 1",
+        ]
     );
 
     // Collapse
     sidebar.update_in(cx, |s, window, cx| {
-        s.toggle_collapse(&path_list, window, cx);
+        s.toggle_collapse(&project_group_key, window, cx);
     });
     cx.run_until_parked();
 
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
-        vec!["> [my-project]"]
+        vec![
+            //
+            "> [my-project]",
+        ]
     );
 
     // Expand
     sidebar.update_in(cx, |s, window, cx| {
-        s.toggle_collapse(&path_list, window, cx);
+        s.toggle_collapse(&project_group_key, window, cx);
     });
     cx.run_until_parked();
 
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
-        vec!["v [my-project]", "  Thread 1"]
+        vec![
+            //
+            "v [my-project]",
+            "  Thread 1",
+        ]
     );
 }
 
@@ -681,7 +834,8 @@ async fn test_visible_entries_as_strings(cx: &mut TestAppContext) {
     let collapsed_path = PathList::new(&[std::path::PathBuf::from("/collapsed")]);
 
     sidebar.update_in(cx, |s, _window, _cx| {
-        s.collapsed_groups.insert(collapsed_path.clone());
+        s.collapsed_groups
+            .insert(project::ProjectGroupKey::new(None, collapsed_path.clone()));
         s.contents
             .notified_threads
             .insert(acp::SessionId::new(Arc::from("t-5")));
@@ -694,17 +848,18 @@ async fn test_visible_entries_as_strings(cx: &mut TestAppContext) {
                 has_running_threads: false,
                 waiting_thread_count: 0,
                 is_active: true,
+                has_threads: true,
             },
             ListEntry::Thread(ThreadEntry {
                 metadata: ThreadMetadata {
                     session_id: acp::SessionId::new(Arc::from("t-1")),
                     agent_id: AgentId::new("zed-agent"),
-                    folder_paths: PathList::default(),
-                    main_worktree_paths: PathList::default(),
+                    worktree_paths: ThreadWorktreePaths::default(),
                     title: "Completed thread".into(),
                     updated_at: Utc::now(),
                     created_at: Some(Utc::now()),
                     archived: false,
+                    remote_connection: None,
                 },
                 icon: IconName::ZedAgent,
                 icon_from_external_svg: None,
@@ -722,12 +877,12 @@ async fn test_visible_entries_as_strings(cx: &mut TestAppContext) {
                 metadata: ThreadMetadata {
                     session_id: acp::SessionId::new(Arc::from("t-2")),
                     agent_id: AgentId::new("zed-agent"),
-                    folder_paths: PathList::default(),
-                    main_worktree_paths: PathList::default(),
+                    worktree_paths: ThreadWorktreePaths::default(),
                     title: "Running thread".into(),
                     updated_at: Utc::now(),
                     created_at: Some(Utc::now()),
                     archived: false,
+                    remote_connection: None,
                 },
                 icon: IconName::ZedAgent,
                 icon_from_external_svg: None,
@@ -745,12 +900,12 @@ async fn test_visible_entries_as_strings(cx: &mut TestAppContext) {
                 metadata: ThreadMetadata {
                     session_id: acp::SessionId::new(Arc::from("t-3")),
                     agent_id: AgentId::new("zed-agent"),
-                    folder_paths: PathList::default(),
-                    main_worktree_paths: PathList::default(),
+                    worktree_paths: ThreadWorktreePaths::default(),
                     title: "Error thread".into(),
                     updated_at: Utc::now(),
                     created_at: Some(Utc::now()),
                     archived: false,
+                    remote_connection: None,
                 },
                 icon: IconName::ZedAgent,
                 icon_from_external_svg: None,
@@ -764,16 +919,17 @@ async fn test_visible_entries_as_strings(cx: &mut TestAppContext) {
                 diff_stats: DiffStats::default(),
             }),
             // Thread with WaitingForConfirmation status, not active
             ListEntry::Thread(ThreadEntry {
                 metadata: ThreadMetadata {
                     session_id: acp::SessionId::new(Arc::from("t-4")),
                     agent_id: AgentId::new("zed-agent"),
-                    folder_paths: PathList::default(),
-                    main_worktree_paths: PathList::default(),
+                    worktree_paths: ThreadWorktreePaths::default(),
                     title: "Waiting thread".into(),
                     updated_at: Utc::now(),
                     created_at: Some(Utc::now()),
                     archived: false,
+                    remote_connection: None,
                 },
                 icon: IconName::ZedAgent,
                 icon_from_external_svg: None,
@@ -787,16 +943,17 @@ async fn test_visible_entries_as_strings(cx: &mut TestAppContext) {
                 diff_stats: DiffStats::default(),
             }),
             // Background thread that completed (should show notification)
             ListEntry::Thread(ThreadEntry {
                 metadata: ThreadMetadata {
                     session_id: acp::SessionId::new(Arc::from("t-5")),
                     agent_id: AgentId::new("zed-agent"),
-                    folder_paths: PathList::default(),
-                    main_worktree_paths: PathList::default(),
+                    worktree_paths: ThreadWorktreePaths::default(),
                     title: "Notified thread".into(),
                     updated_at: Utc::now(),
                     created_at: Some(Utc::now()),
                     archived: false,
+                    remote_connection: None,
                 },
                 icon: IconName::ZedAgent,
                 icon_from_external_svg: None,
@@ -822,6 +979,7 @@ async fn test_visible_entries_as_strings(cx: &mut TestAppContext) {
                 has_running_threads: false,
                 waiting_thread_count: 0,
                 is_active: false,
+                has_threads: false,
             },
         ];
 
@@ -832,6 +990,7 @@ async fn test_visible_entries_as_strings(cx: &mut TestAppContext) {
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
         vec![
+            //
             "v [expanded-project]",
             "  Completed thread",
             "  Running thread * (running)  <== selected",
@@ -995,10 +1154,14 @@ async fn test_keyboard_confirm_on_project_header_toggles_collapse(cx: &mut TestA
 
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
-        vec!["v [my-project]", "  Thread 1"]
+        vec![
+            //
+            "v [my-project]",
+            "  Thread 1",
+        ]
     );
 
-    // Focus the sidebar and select the header (index 0)
+    // Focus the sidebar and select the header
     focus_sidebar(&sidebar, cx);
     sidebar.update_in(cx, |sidebar, _window, _cx| {
         sidebar.selection = Some(0);
@@ -1010,7 +1173,10 @@ async fn test_keyboard_confirm_on_project_header_toggles_collapse(cx: &mut TestA
 
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
-        vec!["> [my-project]  <== selected"]
+        vec![
+            //
+            "> [my-project]  <== selected",
+        ]
     );
 
     // Confirm again expands the group
@@ -1019,7 +1185,11 @@ async fn test_keyboard_confirm_on_project_header_toggles_collapse(cx: &mut TestA
 
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
-        vec!["v [my-project]  <== selected", "  Thread 1",]
+        vec![
+            //
+            "v [my-project]  <== selected",
+            "  Thread 1",
+        ]
     );
 }
 
@@ -1070,7 +1240,11 @@ async fn test_keyboard_expand_and_collapse_selected_entry(cx: &mut TestAppContex
 
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
-        vec!["v [my-project]", "  Thread 1"]
+        vec![
+            //
+            "v [my-project]",
+            "  Thread 1",
+        ]
     );
 
     // Focus sidebar and manually select the header (index 0). Press left to collapse.
@@ -1084,7 +1258,10 @@ async fn test_keyboard_expand_and_collapse_selected_entry(cx: &mut TestAppContex
 
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
-        vec!["> [my-project]  <== selected"]
+        vec![
+            //
+            "> [my-project]  <== selected",
+        ]
     );
 
     // Press right to expand
@@ -1093,7 +1270,11 @@ async fn test_keyboard_expand_and_collapse_selected_entry(cx: &mut TestAppContex
 
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
-        vec!["v [my-project]  <== selected", "  Thread 1",]
+        vec![
+            //
+            "v [my-project]  <== selected",
+            "  Thread 1",
+        ]
     );
 
     // Press right again on already-expanded header moves selection down
@@ -1120,7 +1301,11 @@ async fn test_keyboard_collapse_from_child_selects_parent(cx: &mut TestAppContex
 
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
-        vec!["v [my-project]", "  Thread 1  <== selected",]
+        vec![
+            //
+            "v [my-project]",
+            "  Thread 1  <== selected",
+        ]
     );
 
     // Pressing left on a child collapses the parent group and selects it
@@ -1130,7 +1315,10 @@ async fn test_keyboard_collapse_from_child_selects_parent(cx: &mut TestAppContex
     assert_eq!(sidebar.read_with(cx, |s, _| s.selection), Some(0));
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
-        vec!["> [my-project]  <== selected"]
+        vec![
+            //
+            "> [my-project]  <== selected",
+        ]
     );
 }
 
@@ -1141,10 +1329,13 @@ async fn test_keyboard_navigation_on_empty_list(cx: &mut TestAppContext) {
         cx.add_window_view(|window, cx| MultiWorkspace::test_new(project, window, cx));
     let sidebar = setup_sidebar(&multi_workspace, cx);
 
-    // An empty project has the header and a new thread button.
+    // An empty project has only the header.
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
-        vec!["v [empty-project]", "  [+ New Thread]"]
+        vec![
+            //
+            "v [empty-project]",
+        ]
     );
 
     // Focus sidebar — focus_in does not set a selection
@@ -1155,11 +1346,7 @@ async fn test_keyboard_navigation_on_empty_list(cx: &mut TestAppContext) {
     cx.dispatch_action(SelectNext);
     assert_eq!(sidebar.read_with(cx, |s, _| s.selection), Some(0));
 
-    // SelectNext moves to the new thread button
-    cx.dispatch_action(SelectNext);
-    assert_eq!(sidebar.read_with(cx, |s, _| s.selection), Some(1));
-
-    // At the end, wraps back to first entry
+    // At the end (only one entry), wraps back to first entry
     cx.dispatch_action(SelectNext);
     assert_eq!(sidebar.read_with(cx, |s, _| s.selection), Some(0));
 
@@ -1280,10 +1467,69 @@ async fn test_parallel_threads_shown_with_live_status(cx: &mut TestAppContext) {
     entries[1..].sort();
     assert_eq!(
         entries,
-        vec!["v [my-project]", "  Hello *", "  Hello * (running)",]
+        vec![
+            //
+            "v [my-project]",
+            "  Hello * (active)",
+            "  Hello * (running)",
+        ]
     );
 }
 
+#[gpui::test]
+async fn test_subagent_permission_request_marks_parent_sidebar_thread_waiting(
+    cx: &mut TestAppContext,
+) {
+    let project = init_test_project_with_agent_panel("/my-project", cx).await;
+    let (multi_workspace, cx) =
+        cx.add_window_view(|window, cx| MultiWorkspace::test_new(project.clone(), window, cx));
+    let (sidebar, panel) = setup_sidebar_with_agent_panel(&multi_workspace, cx);
+
+    let connection = StubAgentConnection::new().with_supports_load_session(true);
+    connection.set_next_prompt_updates(vec![acp::SessionUpdate::AgentMessageChunk(
+        acp::ContentChunk::new("Done".into()),
+    )]);
+    open_thread_with_connection(&panel, connection, cx);
+    send_message(&panel, cx);
+
+    let parent_session_id = active_session_id(&panel, cx);
+    save_test_thread_metadata(&parent_session_id, &project, cx).await;
+
+    let subagent_session_id = acp::SessionId::new("subagent-session");
+    cx.update(|_, cx| {
+        let parent_thread = panel.read(cx).active_agent_thread(cx).unwrap();
+        parent_thread.update(cx, |thread: &mut AcpThread, cx| {
+            thread.subagent_spawned(subagent_session_id.clone(), cx);
+        });
+    });
+    cx.run_until_parked();
+
+    let subagent_thread = panel.read_with(cx, |panel, cx| {
+        panel
+            .active_conversation_view()
+            .and_then(|conversation| conversation.read(cx).thread_view(&subagent_session_id))
+            .map(|thread_view| thread_view.read(cx).thread.clone())
+            .expect("Expected subagent thread to be loaded into the conversation")
+    });
+    request_test_tool_authorization(&subagent_thread, "subagent-tool-call", "allow-subagent", cx);
+
+    let parent_status = sidebar.read_with(cx, |sidebar, _cx| {
+        sidebar
+            .contents
+            .entries
+            .iter()
+            .find_map(|entry| match entry {
+                ListEntry::Thread(thread) if thread.metadata.session_id == parent_session_id => {
+                    Some(thread.status)
+                }
+                _ => None,
+            })
+            .expect("Expected parent thread entry in sidebar")
+    });
+
+    assert_eq!(parent_status, AgentThreadStatus::WaitingForConfirmation);
+}
+
 #[gpui::test]
 async fn test_background_thread_completion_triggers_notification(cx: &mut TestAppContext) {
     let project_a = init_test_project_with_agent_panel("/project-a", cx).await;
@@ -1319,7 +1565,11 @@ async fn test_background_thread_completion_triggers_notification(cx: &mut TestAp
     // Thread A is still running; no notification yet.
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
-        vec!["v [project-a]", "  Hello * (running)",]
+        vec![
+            //
+            "v [project-a]",
+            "  Hello * (running) (active)",
+        ]
     );
 
     // Complete thread A's turn (transition Running → Completed).
@@ -1329,7 +1579,11 @@ async fn test_background_thread_completion_triggers_notification(cx: &mut TestAp
     // The completed background thread shows a notification indicator.
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
-        vec!["v [project-a]", "  Hello * (!)",]
+        vec![
+            //
+            "v [project-a]",
+            "  Hello * (!) (active)",
+        ]
     );
 }
 
@@ -1369,6 +1623,7 @@ async fn test_search_narrows_visible_threads_to_matches(cx: &mut TestAppContext)
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
         vec![
+            //
             "v [my-project]",
             "  Fix crash in project panel",
             "  Add inline diff view",
@@ -1381,7 +1636,11 @@ async fn test_search_narrows_visible_threads_to_matches(cx: &mut TestAppContext)
     type_in_search(&sidebar, "diff", cx);
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
-        vec!["v [my-project]", "  Add inline diff view  <== selected",]
+        vec![
+            //
+            "v [my-project]",
+            "  Add inline diff view  <== selected",
+        ]
     );
 
     // User changes query to something with no matches — list is empty.
@@ -1416,6 +1675,7 @@ async fn test_search_matches_regardless_of_case(cx: &mut TestAppContext) {
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
         vec![
+            //
             "v [my-project]",
             "  Fix Crash In Project Panel  <== selected",
         ]
@@ -1426,6 +1686,7 @@ async fn test_search_matches_regardless_of_case(cx: &mut TestAppContext) {
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
         vec![
+            //
             "v [my-project]",
             "  Fix Crash In Project Panel  <== selected",
         ]
@@ -1456,7 +1717,12 @@ async fn test_escape_clears_search_and_restores_full_list(cx: &mut TestAppContex
     // Confirm the full list is showing.
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
-        vec!["v [my-project]", "  Alpha thread", "  Beta thread",]
+        vec![
+            //
+            "v [my-project]",
+            "  Alpha thread",
+            "  Beta thread",
+        ]
     );
 
     // User types a search query to filter down.
@@ -1464,7 +1730,11 @@ async fn test_escape_clears_search_and_restores_full_list(cx: &mut TestAppContex
     type_in_search(&sidebar, "alpha", cx);
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
-        vec!["v [my-project]", "  Alpha thread  <== selected",]
+        vec![
+            //
+            "v [my-project]",
+            "  Alpha thread  <== selected",
+        ]
     );
 
     // User presses Escape — filter clears, full list is restored.
@@ -1474,6 +1744,7 @@ async fn test_escape_clears_search_and_restores_full_list(cx: &mut TestAppContex
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
         vec![
+            //
             "v [my-project]",
             "  Alpha thread  <== selected",
             "  Beta thread",
@@ -1530,6 +1801,7 @@ async fn test_search_only_shows_workspace_headers_with_matches(cx: &mut TestAppC
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
         vec![
+            //
             "v [project-a]",
             "  Fix bug in sidebar",
             "  Add tests for editor",
@@ -1540,7 +1812,11 @@ async fn test_search_only_shows_workspace_headers_with_matches(cx: &mut TestAppC
     type_in_search(&sidebar, "sidebar", cx);
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
-        vec!["v [project-a]", "  Fix bug in sidebar  <== selected",]
+        vec![
+            //
+            "v [project-a]",
+            "  Fix bug in sidebar  <== selected",
+        ]
     );
 
     // "typo" only matches in the second workspace — the first header disappears.
@@ -1556,6 +1832,7 @@ async fn test_search_only_shows_workspace_headers_with_matches(cx: &mut TestAppC
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
         vec![
+            //
             "v [project-a]",
             "  Fix bug in sidebar  <== selected",
             "  Add tests for editor",
@@ -1615,6 +1892,7 @@ async fn test_search_matches_workspace_name(cx: &mut TestAppContext) {
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
         vec![
+            //
             "v [alpha-project]",
             "  Fix bug in sidebar  <== selected",
             "  Add tests for editor",
@@ -1626,7 +1904,11 @@ async fn test_search_matches_workspace_name(cx: &mut TestAppContext) {
     type_in_search(&sidebar, "sidebar", cx);
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
-        vec!["v [alpha-project]", "  Fix bug in sidebar  <== selected",]
+        vec![
+            //
+            "v [alpha-project]",
+            "  Fix bug in sidebar  <== selected",
+        ]
     );
 
     // "alpha sidebar" matches the workspace name "alpha-project" (fuzzy: a-l-p-h-a-s-i-d-e-b-a-r
@@ -1636,7 +1918,11 @@ async fn test_search_matches_workspace_name(cx: &mut TestAppContext) {
     type_in_search(&sidebar, "fix", cx);
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
-        vec!["v [alpha-project]", "  Fix bug in sidebar  <== selected",]
+        vec![
+            //
+            "v [alpha-project]",
+            "  Fix bug in sidebar  <== selected",
+        ]
     );
 
     // A query that matches a workspace name AND a thread in that same workspace.
@@ -1645,6 +1931,7 @@ async fn test_search_matches_workspace_name(cx: &mut TestAppContext) {
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
         vec![
+            //
             "v [alpha-project]",
             "  Fix bug in sidebar  <== selected",
             "  Add tests for editor",
@@ -1658,6 +1945,7 @@ async fn test_search_matches_workspace_name(cx: &mut TestAppContext) {
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
         vec![
+            //
             "v [alpha-project]",
             "  Fix bug in sidebar  <== selected",
             "  Add tests for editor",
@@ -1707,7 +1995,11 @@ async fn test_search_finds_threads_hidden_behind_view_more(cx: &mut TestAppConte
     let filtered = visible_entries_as_strings(&sidebar, cx);
     assert_eq!(
         filtered,
-        vec!["v [my-project]", "  Hidden gem thread  <== selected",]
+        vec![
+            //
+            "v [my-project]",
+            "  Hidden gem thread  <== selected",
+        ]
     );
     assert!(
         !filtered.iter().any(|e| e.contains("View More")),
@@ -1743,14 +2035,21 @@ async fn test_search_finds_threads_inside_collapsed_groups(cx: &mut TestAppConte
 
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
-        vec!["> [my-project]  <== selected"]
+        vec![
+            //
+            "> [my-project]  <== selected",
+        ]
     );
 
     // User types a search — the thread appears even though its group is collapsed.
     type_in_search(&sidebar, "important", cx);
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
-        vec!["> [my-project]", "  Important thread  <== selected",]
+        vec![
+            //
+            "> [my-project]",
+            "  Important thread  <== selected",
+        ]
     );
 }
 
@@ -1784,6 +2083,7 @@ async fn test_search_then_keyboard_navigate_and_confirm(cx: &mut TestAppContext)
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
         vec![
+            //
             "v [my-project]",
             "  Fix crash in panel  <== selected",
             "  Fix lint warnings",
@@ -1796,6 +2096,7 @@ async fn test_search_then_keyboard_navigate_and_confirm(cx: &mut TestAppContext)
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
         vec![
+            //
             "v [my-project]",
             "  Fix crash in panel",
             "  Fix lint warnings  <== selected",
@@ -1807,6 +2108,7 @@ async fn test_search_then_keyboard_navigate_and_confirm(cx: &mut TestAppContext)
     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
         vec![
+            //
             "v [my-project]",
             "  Fix crash in panel  <== selected",
             "  Fix lint warnings",

crates/ui/src/components/ai/thread_item.rs

@@ -54,6 +54,7 @@ pub struct ThreadItem {
     project_paths: Option<Arc<[PathBuf]>>,
     project_name: Option<SharedString>,
     worktrees: Vec<ThreadItemWorktreeInfo>,
+    is_remote: bool,
     on_click: Option<Box<dyn Fn(&ClickEvent, &mut Window, &mut App) + 'static>>,
     on_hover: Box<dyn Fn(&bool, &mut Window, &mut App) + 'static>,
     action_slot: Option<AnyElement>,
@@ -86,6 +87,7 @@ impl ThreadItem {
             project_paths: None,
             project_name: None,
             worktrees: Vec::new(),
+            is_remote: false,
             on_click: None,
             on_hover: Box::new(|_, _, _| {}),
             action_slot: None,
@@ -179,6 +181,11 @@ impl ThreadItem {
         self
     }
 
+    pub fn is_remote(mut self, is_remote: bool) -> Self {
+        self.is_remote = is_remote;
+        self
+    }
+
     pub fn hovered(mut self, hovered: bool) -> Self {
         self.hovered = hovered;
         self
@@ -443,10 +450,11 @@ impl RenderOnce for ThreadItem {
                         .join("\n")
                         .into();
 
-                    let worktree_tooltip_title = if self.worktrees.len() > 1 {
-                        "Thread Running in Local Git Worktrees"
-                    } else {
-                        "Thread Running in a Local Git Worktree"
+                    let worktree_tooltip_title = match (self.is_remote, self.worktrees.len() > 1) {
+                        (true, true) => "Thread Running in Remote Git Worktrees",
+                        (true, false) => "Thread Running in a Remote Git Worktree",
+                        (false, true) => "Thread Running in Local Git Worktrees",
+                        (false, false) => "Thread Running in a Local Git Worktree",
                     };
 
                     // Deduplicate chips by name — e.g. two paths both named

crates/util/src/disambiguate.rs

@@ -0,0 +1,202 @@
+use std::collections::HashMap;
+use std::hash::Hash;
+
+/// Computes the minimum detail level needed for each item so that no two items
+/// share the same description. Items whose descriptions are unique at level 0
+/// stay at 0; items that collide get their detail level incremented until either
+/// the collision is resolved or increasing the level no longer changes the
+/// description (preventing infinite loops for truly identical items).
+///
+/// The `get_description` closure must eventually reach a "fixed point": a
+/// detail level beyond which increasing `detail` no longer changes the output.
+/// Once an item's description stops changing, it is treated as final and is
+/// excluded from further collision checks.
+pub fn compute_disambiguation_details<T, D>(
+    items: &[T],
+    get_description: impl Fn(&T, usize) -> D,
+) -> Vec<usize>
+where
+    D: Eq + Hash + Clone,
+{
+    let mut details = vec![0usize; items.len()];
+    let mut descriptions: HashMap<D, Vec<usize>> = HashMap::default();
+    let mut current_descriptions: Vec<D> =
+        items.iter().map(|item| get_description(item, 0)).collect();
+
+    loop {
+        let mut any_collisions = false;
+
+        for (index, (item, &detail)) in items.iter().zip(&details).enumerate() {
+            if detail > 0 {
+                let new_description = get_description(item, detail);
+                if new_description == current_descriptions[index] {
+                    continue;
+                }
+                current_descriptions[index] = new_description;
+            }
+            descriptions
+                .entry(current_descriptions[index].clone())
+                .or_default()
+                .push(index);
+        }
+
+        for (_, indices) in descriptions.drain() {
+            if indices.len() > 1 {
+                any_collisions = true;
+                for index in indices {
+                    details[index] += 1;
+                }
+            }
+        }
+
+        if !any_collisions {
+            break;
+        }
+    }
+
+    details
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn test_no_conflicts() {
+        let items = vec!["alpha", "beta", "gamma"];
+        let details = compute_disambiguation_details(&items, |item, _detail| item.to_string());
+        assert_eq!(details, vec![0, 0, 0]);
+    }
+
+    #[test]
+    fn test_simple_two_way_conflict() {
+        // Two items with the same base name but different parents.
+        let items = vec![("src/foo.rs", "foo.rs"), ("lib/foo.rs", "foo.rs")];
+        let details = compute_disambiguation_details(&items, |item, detail| match detail {
+            0 => item.1.to_string(),
+            _ => item.0.to_string(),
+        });
+        assert_eq!(details, vec![1, 1]);
+    }
+
+    #[test]
+    fn test_three_way_conflict() {
+        let items = vec![
+            ("foo.rs", "a/foo.rs"),
+            ("foo.rs", "b/foo.rs"),
+            ("foo.rs", "c/foo.rs"),
+        ];
+        let details = compute_disambiguation_details(&items, |item, detail| match detail {
+            0 => item.0.to_string(),
+            _ => item.1.to_string(),
+        });
+        assert_eq!(details, vec![1, 1, 1]);
+    }
+
+    #[test]
+    fn test_deeper_conflict() {
+        // At detail 0, all three show "file.rs".
+        // At detail 1, items 0 and 1 both show "src/file.rs", item 2 shows "lib/file.rs".
+        // At detail 2, item 0 shows "a/src/file.rs", item 1 shows "b/src/file.rs".
+        let items = vec![
+            vec!["file.rs", "src/file.rs", "a/src/file.rs"],
+            vec!["file.rs", "src/file.rs", "b/src/file.rs"],
+            vec!["file.rs", "lib/file.rs", "x/lib/file.rs"],
+        ];
+        let details = compute_disambiguation_details(&items, |item, detail| {
+            let clamped = detail.min(item.len() - 1);
+            item[clamped].to_string()
+        });
+        assert_eq!(details, vec![2, 2, 1]);
+    }
+
+    #[test]
+    fn test_mixed_conflicting_and_unique() {
+        let items = vec![
+            ("src/foo.rs", "foo.rs"),
+            ("lib/foo.rs", "foo.rs"),
+            ("src/bar.rs", "bar.rs"),
+        ];
+        let details = compute_disambiguation_details(&items, |item, detail| match detail {
+            0 => item.1.to_string(),
+            _ => item.0.to_string(),
+        });
+        assert_eq!(details, vec![1, 1, 0]);
+    }
+
+    #[test]
+    fn test_identical_items_terminates() {
+        // All items return the same description at every detail level.
+        // The algorithm must terminate rather than looping forever.
+        let items = vec!["same", "same", "same"];
+        let details = compute_disambiguation_details(&items, |item, _detail| item.to_string());
+        // After bumping to 1, the description doesn't change from level 0,
+        // so the items are skipped and the loop terminates.
+        assert_eq!(details, vec![1, 1, 1]);
+    }
+
+    #[test]
+    fn test_single_item() {
+        let items = vec!["only"];
+        let details = compute_disambiguation_details(&items, |item, _detail| item.to_string());
+        assert_eq!(details, vec![0]);
+    }
+
+    #[test]
+    fn test_empty_input() {
+        let items: Vec<&str> = vec![];
+        let details = compute_disambiguation_details(&items, |item, _detail| item.to_string());
+        let expected: Vec<usize> = vec![];
+        assert_eq!(details, expected);
+    }
+
+    #[test]
+    fn test_duplicate_paths_from_multiple_groups() {
+        use std::path::Path;
+
+        // Simulates the sidebar scenario: a path like /Users/rtfeldman/code/zed
+        // appears in two project groups (e.g. "zed" alone and "zed, roc").
+        // After deduplication, only unique paths should be disambiguated.
+        //
+        // Paths:
+        //   /Users/rtfeldman/code/worktrees/zed/focal-arrow/zed  (group 1)
+        //   /Users/rtfeldman/code/zed                             (group 2)
+        //   /Users/rtfeldman/code/zed                             (group 3, same path as group 2)
+        //   /Users/rtfeldman/code/roc                             (group 3)
+        //
+        // A naive flat_map collects duplicates. The duplicate /code/zed entries
+        // collide with each other and drive the detail to the full path.
+        // The fix is to deduplicate before disambiguating.
+
+        fn path_suffix(path: &Path, detail: usize) -> String {
+            let mut components: Vec<_> = path
+                .components()
+                .rev()
+                .filter_map(|c| match c {
+                    std::path::Component::Normal(s) => Some(s.to_string_lossy()),
+                    _ => None,
+                })
+                .take(detail + 1)
+                .collect();
+            components.reverse();
+            components.join("/")
+        }
+
+        let all_paths: Vec<&Path> = vec![
+            Path::new("/Users/rtfeldman/code/worktrees/zed/focal-arrow/zed"),
+            Path::new("/Users/rtfeldman/code/zed"),
+            Path::new("/Users/rtfeldman/code/roc"),
+        ];
+
+        let details =
+            compute_disambiguation_details(&all_paths, |path, detail| path_suffix(path, detail));
+
+        // focal-arrow/zed and code/zed both end in "zed", so they need detail 1.
+        // "roc" is unique at detail 0.
+        assert_eq!(details, vec![1, 1, 0]);
+
+        assert_eq!(path_suffix(all_paths[0], details[0]), "focal-arrow/zed");
+        assert_eq!(path_suffix(all_paths[1], details[1]), "code/zed");
+        assert_eq!(path_suffix(all_paths[2], details[2]), "roc");
+    }
+}
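
The fixed-point loop above can be condensed into a standalone sketch (hypothetical code, not part of this patch). It keeps the same core idea — hash every description, bump the detail level of any colliding item whose description can still change, and repeat — but it is a simplified variant: truly identical items stay at their current level rather than being bumped once as in the patched algorithm.

```rust
use std::collections::HashMap;
use std::hash::Hash;

// Sketch of the fixed-point disambiguation idea: raise each item's
// detail level until every description is unique or stops changing.
fn disambiguate<T, D: Eq + Hash + Clone>(
    items: &[T],
    describe: impl Fn(&T, usize) -> D,
) -> Vec<usize> {
    let mut details = vec![0usize; items.len()];
    loop {
        // Group item indices by their current description.
        let mut seen: HashMap<D, Vec<usize>> = HashMap::new();
        for (i, item) in items.iter().enumerate() {
            seen.entry(describe(item, details[i])).or_default().push(i);
        }
        let mut bumped = false;
        for indices in seen.into_values().filter(|v| v.len() > 1) {
            for i in indices {
                // Only bump when more detail actually changes the description;
                // identical items stay put, so the loop terminates.
                if describe(&items[i], details[i] + 1) != describe(&items[i], details[i]) {
                    details[i] += 1;
                    bumped = true;
                }
            }
        }
        if !bumped {
            return details;
        }
    }
}

fn main() {
    // Describe each path by its last `detail + 1` components.
    let paths = ["code/worktrees/zed", "code/zed", "code/roc"];
    let suffix = |p: &&str, detail: usize| {
        let parts: Vec<&str> = p.split('/').collect();
        parts[parts.len().saturating_sub(detail + 1)..].join("/")
    };
    let details = disambiguate(&paths, suffix);
    // The two "zed" paths need one extra component; "roc" is unique.
    assert_eq!(details, vec![1, 1, 0]);
    println!("{details:?}");
}
```

Collisions discovered only after a bump (e.g. a bumped item now matching a previously unique one) are caught because the whole map is rebuilt on every pass.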

crates/util/src/util.rs

@@ -1,5 +1,6 @@
 pub mod archive;
 pub mod command;
+pub mod disambiguate;
 pub mod fs;
 pub mod markdown;
 pub mod path_list;

crates/workspace/src/multi_workspace.rs

@@ -6,8 +6,10 @@ use gpui::{
     actions, deferred, px,
 };
 use project::{DirectoryLister, DisableAiSettings, Project, ProjectGroupKey};
+use remote::RemoteConnectionOptions;
 use settings::Settings;
 pub use settings::SidebarSide;
+use std::collections::{HashMap, HashSet};
 use std::future::Future;
 use std::path::Path;
 use std::path::PathBuf;
@@ -22,6 +24,7 @@ use ui::{ContextMenu, right_click_menu};
 
 const SIDEBAR_RESIZE_HANDLE_SIZE: Pixels = px(6.0);
 
+use crate::open_remote_project_with_existing_connection;
 use crate::{
     CloseIntent, CloseWindow, DockPosition, Event as WorkspaceEvent, Item, ModalView, OpenMode,
     Panel, Workspace, WorkspaceId, client_side_decorations,
@@ -98,6 +101,14 @@ pub enum MultiWorkspaceEvent {
     ActiveWorkspaceChanged,
     WorkspaceAdded(Entity<Workspace>),
     WorkspaceRemoved(EntityId),
+    WorktreePathAdded {
+        old_main_paths: PathList,
+        added_path: PathBuf,
+    },
+    WorktreePathRemoved {
+        old_main_paths: PathList,
+        removed_path: PathBuf,
+    },
 }
 
 pub enum SidebarEvent {
@@ -299,6 +310,7 @@ pub struct MultiWorkspace {
     workspaces: Vec<Entity<Workspace>>,
     active_workspace: ActiveWorkspace,
     project_group_keys: Vec<ProjectGroupKey>,
+    workspace_group_keys: HashMap<EntityId, ProjectGroupKey>,
     sidebar: Option<Box<dyn SidebarHandle>>,
     sidebar_open: bool,
     sidebar_overlay: Option<AnyView>,
@@ -351,6 +363,7 @@ impl MultiWorkspace {
         Self {
             window_id: window.window_handle().window_id(),
             project_group_keys: Vec::new(),
+            workspace_group_keys: HashMap::default(),
             workspaces: Vec::new(),
             active_workspace: ActiveWorkspace::Transient(workspace),
             sidebar: None,
@@ -491,7 +504,15 @@ impl MultiWorkspace {
                 workspace.set_sidebar_focus_handle(None);
             });
         }
-        self.restore_previous_focus(true, window, cx);
+        let sidebar_has_focus = self
+            .sidebar
+            .as_ref()
+            .is_some_and(|s| s.focus_handle(cx).contains_focused(window, cx));
+        if sidebar_has_focus {
+            self.restore_previous_focus(true, window, cx);
+        } else {
+            self.previous_focus_handle.take();
+        }
         self.serialize(cx);
         cx.notify();
     }
@@ -546,9 +567,11 @@ impl MultiWorkspace {
         cx.subscribe_in(&project, window, {
             let workspace = workspace.downgrade();
             move |this, _project, event, _window, cx| match event {
-                project::Event::WorktreeAdded(_) | project::Event::WorktreeRemoved(_) => {
+                project::Event::WorktreeAdded(_)
+                | project::Event::WorktreeRemoved(_)
+                | project::Event::WorktreeUpdatedRootRepoCommonDir(_) => {
                     if let Some(workspace) = workspace.upgrade() {
-                        this.add_project_group_key(workspace.read(cx).project_group_key(cx));
+                        this.handle_workspace_key_change(&workspace, cx);
                     }
                 }
                 _ => {}
@@ -564,7 +587,124 @@ impl MultiWorkspace {
         .detach();
     }
 
-    pub fn add_project_group_key(&mut self, project_group_key: ProjectGroupKey) {
+    fn handle_workspace_key_change(
+        &mut self,
+        workspace: &Entity<Workspace>,
+        cx: &mut Context<Self>,
+    ) {
+        let workspace_id = workspace.entity_id();
+        let old_key = self.project_group_key_for_workspace(workspace, cx);
+        let new_key = workspace.read(cx).project_group_key(cx);
+
+        if new_key.path_list().paths().is_empty() || old_key == new_key {
+            return;
+        }
+
+        let active_workspace = self.workspace().clone();
+
+        self.set_workspace_group_key(workspace, new_key.clone());
+
+        let changed_root_paths = workspace.read(cx).root_paths(cx);
+        let old_paths = old_key.path_list().paths();
+        let new_paths = new_key.path_list().paths();
+
+        // Remove workspaces that already had the new key and have the same
+        // root paths (true duplicates that this workspace is replacing).
+        //
+        // NOTE: These are dropped without prompting for unsaved changes because
+        // the user explicitly added a folder that makes this workspace
+        // identical to the duplicate — they are intentionally overwriting it.
+        let duplicate_workspaces: Vec<Entity<Workspace>> = self
+            .workspaces
+            .iter()
+            .filter(|ws| {
+                ws.entity_id() != workspace_id
+                    && self.project_group_key_for_workspace(ws, cx) == new_key
+                    && ws.read(cx).root_paths(cx) == changed_root_paths
+            })
+            .cloned()
+            .collect();
+
+        if duplicate_workspaces.contains(&active_workspace) {
+            // The active workspace is among the duplicates — drop the
+            // incoming workspace instead so the user stays where they are.
+            self.detach_workspace(workspace, cx);
+            self.workspaces.retain(|w| w != workspace);
+        } else {
+            for ws in &duplicate_workspaces {
+                self.detach_workspace(ws, cx);
+                self.workspaces.retain(|w| w != ws);
+            }
+        }
+
+        // Propagate folder adds/removes to linked worktree siblings
+        // (different root paths, same old key) so they stay in the group.
+        let group_workspaces: Vec<Entity<Workspace>> = self
+            .workspaces
+            .iter()
+            .filter(|ws| {
+                ws.entity_id() != workspace_id
+                    && self.project_group_key_for_workspace(ws, cx) == old_key
+            })
+            .cloned()
+            .collect();
+
+        for workspace in &group_workspaces {
+            // Pre-set the key so the WorktreeAdded events emitted below don't
+            // re-trigger this handler for the same change.
+            self.set_workspace_group_key(workspace, new_key.clone());
+
+            let project = workspace.read(cx).project().clone();
+
+            for added_path in new_paths.iter().filter(|p| !old_paths.contains(p)) {
+                project
+                    .update(cx, |project, cx| {
+                        project.find_or_create_worktree(added_path, true, cx)
+                    })
+                    .detach_and_log_err(cx);
+            }
+
+            for removed_path in old_paths.iter().filter(|p| !new_paths.contains(p)) {
+                project.update(cx, |project, cx| {
+                    project.remove_worktree_for_main_worktree_path(removed_path, cx);
+                });
+            }
+        }
+
+        // Restore the active workspace, since the removals above may have
+        // shifted its index. If the previously active workspace was removed,
+        // fall back to the workspace whose key just changed.
+        if let ActiveWorkspace::Persistent(_) = &self.active_workspace {
+            let target = if self.workspaces.contains(&active_workspace) {
+                &active_workspace
+            } else {
+                workspace
+            };
+            if let Some(new_index) = self.workspaces.iter().position(|ws| ws == target) {
+                self.active_workspace = ActiveWorkspace::Persistent(new_index);
+            }
+        }
+
+        self.remove_stale_project_group_keys(cx);
+
+        let old_main_paths = old_key.path_list().clone();
+        for added_path in new_paths.iter().filter(|p| !old_paths.contains(p)) {
+            cx.emit(MultiWorkspaceEvent::WorktreePathAdded {
+                old_main_paths: old_main_paths.clone(),
+                added_path: added_path.clone(),
+            });
+        }
+        for removed_path in old_paths.iter().filter(|p| !new_paths.contains(p)) {
+            cx.emit(MultiWorkspaceEvent::WorktreePathRemoved {
+                old_main_paths: old_main_paths.clone(),
+                removed_path: removed_path.clone(),
+            });
+        }
+
+        self.serialize(cx);
+        cx.notify();
+    }
+
+    fn add_project_group_key(&mut self, project_group_key: ProjectGroupKey) {
         if project_group_key.path_list().paths().is_empty() {
             return;
         }
@@ -575,9 +715,43 @@ impl MultiWorkspace {
         self.project_group_keys.insert(0, project_group_key);
     }
 
+    pub(crate) fn set_workspace_group_key(
+        &mut self,
+        workspace: &Entity<Workspace>,
+        project_group_key: ProjectGroupKey,
+    ) {
+        self.workspace_group_keys
+            .insert(workspace.entity_id(), project_group_key.clone());
+        self.add_project_group_key(project_group_key);
+    }
+
+    pub fn project_group_key_for_workspace(
+        &self,
+        workspace: &Entity<Workspace>,
+        cx: &App,
+    ) -> ProjectGroupKey {
+        self.workspace_group_keys
+            .get(&workspace.entity_id())
+            .cloned()
+            .unwrap_or_else(|| workspace.read(cx).project_group_key(cx))
+    }
+
+    fn remove_stale_project_group_keys(&mut self, cx: &App) {
+        let workspace_keys: HashSet<ProjectGroupKey> = self
+            .workspaces
+            .iter()
+            .map(|workspace| self.project_group_key_for_workspace(workspace, cx))
+            .collect();
+        self.project_group_keys
+            .retain(|key| workspace_keys.contains(key));
+    }
+
     pub fn restore_project_group_keys(&mut self, keys: Vec<ProjectGroupKey>) {
         let mut restored: Vec<ProjectGroupKey> = Vec::with_capacity(keys.len());
         for key in keys {
+            if key.path_list().paths().is_empty() {
+                continue;
+            }
             if !restored.contains(&key) {
                 restored.push(key);
             }
@@ -605,7 +779,7 @@ impl MultiWorkspace {
             .map(|key| (key.clone(), Vec::new()))
             .collect::<Vec<_>>();
         for workspace in &self.workspaces {
-            let key = workspace.read(cx).project_group_key(cx);
+            let key = self.project_group_key_for_workspace(workspace, cx);
             if let Some((_, workspaces)) = groups.iter_mut().find(|(k, _)| k == &key) {
                 workspaces.push(workspace.clone());
             }
@@ -618,9 +792,9 @@ impl MultiWorkspace {
         project_group_key: &ProjectGroupKey,
         cx: &App,
     ) -> impl Iterator<Item = &Entity<Workspace>> {
-        self.workspaces
-            .iter()
-            .filter(move |ws| ws.read(cx).project_group_key(cx) == *project_group_key)
+        self.workspaces.iter().filter(move |workspace| {
+            self.project_group_key_for_workspace(workspace, cx) == *project_group_key
+        })
     }
 
     pub fn remove_folder_from_project_group(
@@ -781,14 +955,104 @@ impl MultiWorkspace {
         )
     }
 
-    /// Finds an existing workspace whose root paths exactly match the given path list.
-    pub fn workspace_for_paths(&self, path_list: &PathList, cx: &App) -> Option<Entity<Workspace>> {
+    /// Finds an existing workspace whose root paths and host exactly match.
+    pub fn workspace_for_paths(
+        &self,
+        path_list: &PathList,
+        host: Option<&RemoteConnectionOptions>,
+        cx: &App,
+    ) -> Option<Entity<Workspace>> {
         self.workspaces
             .iter()
-            .find(|ws| PathList::new(&ws.read(cx).root_paths(cx)) == *path_list)
+            .find(|ws| {
+                let key = ws.read(cx).project_group_key(cx);
+                key.host().as_ref() == host
+                    && PathList::new(&ws.read(cx).root_paths(cx)) == *path_list
+            })
             .cloned()
     }
 
+    /// Finds an existing workspace whose paths match, or creates a new one.
+    ///
+    /// For local projects (`host` is `None`), this delegates to
+    /// [`Self::find_or_create_local_workspace`]. For remote projects, it
+    /// tries an exact path match and, if no existing workspace is found,
+    /// calls `connect_remote` to establish a connection and creates a new
+    /// remote workspace.
+    ///
+    /// The `connect_remote` closure is responsible for any user-facing
+    /// connection UI (e.g. password prompts). It receives the connection
+    /// options and should return a [`Task`] that resolves to the
+    /// [`RemoteClient`] session, or `None` if the connection was
+    /// cancelled.
+    pub fn find_or_create_workspace(
+        &mut self,
+        paths: PathList,
+        host: Option<RemoteConnectionOptions>,
+        provisional_project_group_key: Option<ProjectGroupKey>,
+        connect_remote: impl FnOnce(
+            RemoteConnectionOptions,
+            &mut Window,
+            &mut Context<Self>,
+        ) -> Task<Result<Option<Entity<remote::RemoteClient>>>>
+        + 'static,
+        window: &mut Window,
+        cx: &mut Context<Self>,
+    ) -> Task<Result<Entity<Workspace>>> {
+        if let Some(workspace) = self.workspace_for_paths(&paths, host.as_ref(), cx) {
+            self.activate(workspace.clone(), window, cx);
+            return Task::ready(Ok(workspace));
+        }
+
+        let Some(connection_options) = host else {
+            return self.find_or_create_local_workspace(paths, window, cx);
+        };
+
+        let app_state = self.workspace().read(cx).app_state().clone();
+        let window_handle = window.window_handle().downcast::<MultiWorkspace>();
+        let connect_task = connect_remote(connection_options.clone(), window, cx);
+        let paths_vec = paths.paths().to_vec();
+
+        cx.spawn(async move |_this, cx| {
+            let session = connect_task
+                .await?
+                .ok_or_else(|| anyhow::anyhow!("Remote connection was cancelled"))?;
+
+            let new_project = cx.update(|cx| {
+                Project::remote(
+                    session,
+                    app_state.client.clone(),
+                    app_state.node_runtime.clone(),
+                    app_state.user_store.clone(),
+                    app_state.languages.clone(),
+                    app_state.fs.clone(),
+                    true,
+                    cx,
+                )
+            });
+
+            let window_handle =
+                window_handle.ok_or_else(|| anyhow::anyhow!("Window is not a MultiWorkspace"))?;
+
+            open_remote_project_with_existing_connection(
+                connection_options,
+                new_project,
+                paths_vec,
+                app_state,
+                window_handle,
+                provisional_project_group_key,
+                cx,
+            )
+            .await?;
+
+            window_handle.update(cx, |multi_workspace, window, cx| {
+                let workspace = multi_workspace.workspace().clone();
+                multi_workspace.add(workspace.clone(), window, cx);
+                workspace
+            })
+        })
+    }
+
     /// Finds an existing workspace in this multi-workspace whose paths match,
     /// or creates a new one (deserializing its saved state from the database).
     /// Never searches other windows or matches workspaces with a superset of
@@ -799,7 +1063,7 @@ impl MultiWorkspace {
         window: &mut Window,
         cx: &mut Context<Self>,
     ) -> Task<Result<Entity<Workspace>>> {
-        if let Some(workspace) = self.workspace_for_paths(&path_list, cx) {
+        if let Some(workspace) = self.workspace_for_paths(&path_list, None, cx) {
             self.activate(workspace.clone(), window, cx);
             return Task::ready(Ok(workspace));
         }
@@ -882,7 +1146,6 @@ impl MultiWorkspace {
                     self.promote_transient(old, cx);
                 } else {
                     self.detach_workspace(&old, cx);
-                    cx.emit(MultiWorkspaceEvent::WorkspaceRemoved(old.entity_id()));
                 }
             }
         } else {
@@ -893,7 +1156,6 @@ impl MultiWorkspace {
             });
             if let Some(old) = self.active_workspace.set_transient(workspace) {
                 self.detach_workspace(&old, cx);
-                cx.emit(MultiWorkspaceEvent::WorkspaceRemoved(old.entity_id()));
             }
         }
 
@@ -919,8 +1181,8 @@ impl MultiWorkspace {
     /// Promotes a former transient workspace into the persistent list.
     /// Returns the index of the newly inserted workspace.
     fn promote_transient(&mut self, workspace: Entity<Workspace>, cx: &mut Context<Self>) -> usize {
-        let project_group_key = workspace.read(cx).project().read(cx).project_group_key(cx);
-        self.add_project_group_key(project_group_key);
+        let project_group_key = self.project_group_key_for_workspace(&workspace, cx);
+        self.set_workspace_group_key(&workspace, project_group_key);
         self.workspaces.push(workspace.clone());
         cx.emit(MultiWorkspaceEvent::WorkspaceAdded(workspace));
         self.workspaces.len() - 1
@@ -936,10 +1198,10 @@ impl MultiWorkspace {
         for workspace in std::mem::take(&mut self.workspaces) {
             if workspace != active {
                 self.detach_workspace(&workspace, cx);
-                cx.emit(MultiWorkspaceEvent::WorkspaceRemoved(workspace.entity_id()));
             }
         }
         self.project_group_keys.clear();
+        self.workspace_group_keys.clear();
         self.active_workspace = ActiveWorkspace::Transient(active);
         cx.notify();
     }
@@ -956,7 +1218,7 @@ impl MultiWorkspace {
         if let Some(index) = self.workspaces.iter().position(|w| *w == workspace) {
             index
         } else {
-            let project_group_key = workspace.read(cx).project().read(cx).project_group_key(cx);
+            let project_group_key = self.project_group_key_for_workspace(&workspace, cx);
 
             Self::subscribe_to_workspace(&workspace, window, cx);
             self.sync_sidebar_to_workspace(&workspace, cx);
@@ -965,7 +1227,7 @@ impl MultiWorkspace {
                 workspace.set_multi_workspace(weak_self, cx);
             });
 
-            self.add_project_group_key(project_group_key);
+            self.set_workspace_group_key(&workspace, project_group_key);
             self.workspaces.push(workspace.clone());
             cx.emit(MultiWorkspaceEvent::WorkspaceAdded(workspace));
             cx.notify();
@@ -973,10 +1235,12 @@ impl MultiWorkspace {
         }
     }
 
-    /// Clears session state and DB binding for a workspace that is being
-    /// removed or replaced. The DB row is preserved so the workspace still
-    /// appears in the recent-projects list.
+    /// Detaches a workspace: clears session state, DB binding, cached
+    /// group key, and emits `WorkspaceRemoved`. The DB row is preserved
+    /// so the workspace still appears in the recent-projects list.
     fn detach_workspace(&mut self, workspace: &Entity<Workspace>, cx: &mut Context<Self>) {
+        self.workspace_group_keys.remove(&workspace.entity_id());
+        cx.emit(MultiWorkspaceEvent::WorkspaceRemoved(workspace.entity_id()));
         workspace.update(cx, |workspace, _cx| {
             workspace.session_id.take();
             workspace._schedule_serialize_workspace.take();
@@ -1150,6 +1414,46 @@ impl MultiWorkspace {
         tasks
     }
 
+    #[cfg(any(test, feature = "test-support"))]
+    pub fn assert_project_group_key_integrity(&self, cx: &App) -> anyhow::Result<()> {
+        let stored_keys: HashSet<&ProjectGroupKey> = self.project_group_keys().collect();
+
+        let workspace_group_keys: HashSet<&ProjectGroupKey> =
+            self.workspace_group_keys.values().collect();
+        let extra_keys = &workspace_group_keys - &stored_keys;
+        anyhow::ensure!(
+            extra_keys.is_empty(),
+            "workspace_group_keys values not in project_group_keys: {:?}",
+            extra_keys,
+        );
+
+        let cached_ids: HashSet<EntityId> = self.workspace_group_keys.keys().copied().collect();
+        let workspace_ids: HashSet<EntityId> =
+            self.workspaces.iter().map(|ws| ws.entity_id()).collect();
+        anyhow::ensure!(
+            cached_ids == workspace_ids,
+            "workspace_group_keys entity IDs don't match workspaces.\n\
+             only in cache: {:?}\n\
+             only in workspaces: {:?}",
+            &cached_ids - &workspace_ids,
+            &workspace_ids - &cached_ids,
+        );
+
+        for workspace in self.workspaces() {
+            let live_key = workspace.read(cx).project_group_key(cx);
+            let cached_key = &self.workspace_group_keys[&workspace.entity_id()];
+            anyhow::ensure!(
+                *cached_key == live_key,
+                "workspace {:?} has live key {:?} but cached key {:?}",
+                workspace.entity_id(),
+                live_key,
+                cached_key,
+            );
+        }
+
+        Ok(())
+    }
+
     #[cfg(any(test, feature = "test-support"))]
     pub fn set_random_database_id(&mut self, cx: &mut Context<Self>) {
         self.workspace().update(cx, |workspace, _cx| {
@@ -1308,7 +1612,6 @@ impl MultiWorkspace {
 
                 for workspace in &removed_workspaces {
                     this.detach_workspace(workspace, cx);
-                    cx.emit(MultiWorkspaceEvent::WorkspaceRemoved(workspace.entity_id()));
                 }
 
                 let removed_any = !removed_workspaces.is_empty();

crates/workspace/src/multi_workspace_tests.rs 🔗

@@ -185,157 +185,3 @@ async fn test_project_group_keys_duplicate_not_added(cx: &mut TestAppContext) {
         );
     });
 }
-
-#[gpui::test]
-async fn test_project_group_keys_on_worktree_added(cx: &mut TestAppContext) {
-    init_test(cx);
-    let fs = FakeFs::new(cx.executor());
-    fs.insert_tree("/root_a", json!({ "file.txt": "" })).await;
-    fs.insert_tree("/root_b", json!({ "file.txt": "" })).await;
-    let project = Project::test(fs, ["/root_a".as_ref()], cx).await;
-
-    let initial_key = project.read_with(cx, |p, cx| p.project_group_key(cx));
-
-    let (multi_workspace, cx) =
-        cx.add_window_view(|window, cx| MultiWorkspace::test_new(project.clone(), window, cx));
-
-    multi_workspace.update(cx, |mw, cx| {
-        mw.open_sidebar(cx);
-    });
-
-    // Add a second worktree to the same project.
-    let (worktree, _) = project
-        .update(cx, |project, cx| {
-            project.find_or_create_worktree("/root_b", true, cx)
-        })
-        .await
-        .unwrap();
-    worktree
-        .read_with(cx, |tree, _| tree.as_local().unwrap().scan_complete())
-        .await;
-    cx.run_until_parked();
-
-    let updated_key = project.read_with(cx, |p, cx| p.project_group_key(cx));
-    assert_ne!(
-        initial_key, updated_key,
-        "key should change after adding a worktree"
-    );
-
-    multi_workspace.read_with(cx, |mw, _cx| {
-        let keys: Vec<&ProjectGroupKey> = mw.project_group_keys().collect();
-        assert_eq!(
-            keys.len(),
-            2,
-            "should have both the original and updated key"
-        );
-        assert_eq!(*keys[0], updated_key);
-        assert_eq!(*keys[1], initial_key);
-    });
-}
-
-#[gpui::test]
-async fn test_project_group_keys_on_worktree_removed(cx: &mut TestAppContext) {
-    init_test(cx);
-    let fs = FakeFs::new(cx.executor());
-    fs.insert_tree("/root_a", json!({ "file.txt": "" })).await;
-    fs.insert_tree("/root_b", json!({ "file.txt": "" })).await;
-    let project = Project::test(fs, ["/root_a".as_ref(), "/root_b".as_ref()], cx).await;
-
-    let initial_key = project.read_with(cx, |p, cx| p.project_group_key(cx));
-
-    let (multi_workspace, cx) =
-        cx.add_window_view(|window, cx| MultiWorkspace::test_new(project.clone(), window, cx));
-
-    multi_workspace.update(cx, |mw, cx| {
-        mw.open_sidebar(cx);
-    });
-
-    // Remove one worktree.
-    let worktree_b_id = project.read_with(cx, |project, cx| {
-        project
-            .worktrees(cx)
-            .find(|wt| wt.read(cx).root_name().as_unix_str() == "root_b")
-            .unwrap()
-            .read(cx)
-            .id()
-    });
-    project.update(cx, |project, cx| {
-        project.remove_worktree(worktree_b_id, cx);
-    });
-    cx.run_until_parked();
-
-    let updated_key = project.read_with(cx, |p, cx| p.project_group_key(cx));
-    assert_ne!(
-        initial_key, updated_key,
-        "key should change after removing a worktree"
-    );
-
-    multi_workspace.read_with(cx, |mw, _cx| {
-        let keys: Vec<&ProjectGroupKey> = mw.project_group_keys().collect();
-        assert_eq!(
-            keys.len(),
-            2,
-            "should accumulate both the original and post-removal key"
-        );
-        assert_eq!(*keys[0], updated_key);
-        assert_eq!(*keys[1], initial_key);
-    });
-}
-
-#[gpui::test]
-async fn test_project_group_keys_across_multiple_workspaces_and_worktree_changes(
-    cx: &mut TestAppContext,
-) {
-    init_test(cx);
-    let fs = FakeFs::new(cx.executor());
-    fs.insert_tree("/root_a", json!({ "file.txt": "" })).await;
-    fs.insert_tree("/root_b", json!({ "file.txt": "" })).await;
-    fs.insert_tree("/root_c", json!({ "file.txt": "" })).await;
-    let project_a = Project::test(fs.clone(), ["/root_a".as_ref()], cx).await;
-    let project_b = Project::test(fs.clone(), ["/root_b".as_ref()], cx).await;
-
-    let key_a = project_a.read_with(cx, |p, cx| p.project_group_key(cx));
-    let key_b = project_b.read_with(cx, |p, cx| p.project_group_key(cx));
-
-    let (multi_workspace, cx) =
-        cx.add_window_view(|window, cx| MultiWorkspace::test_new(project_a.clone(), window, cx));
-
-    multi_workspace.update(cx, |mw, cx| {
-        mw.open_sidebar(cx);
-    });
-
-    multi_workspace.update_in(cx, |mw, window, cx| {
-        mw.test_add_workspace(project_b, window, cx);
-    });
-
-    multi_workspace.read_with(cx, |mw, _cx| {
-        assert_eq!(mw.project_group_keys().count(), 2);
-    });
-
-    // Now add a worktree to project_a. This should produce a third key.
-    let (worktree, _) = project_a
-        .update(cx, |project, cx| {
-            project.find_or_create_worktree("/root_c", true, cx)
-        })
-        .await
-        .unwrap();
-    worktree
-        .read_with(cx, |tree, _| tree.as_local().unwrap().scan_complete())
-        .await;
-    cx.run_until_parked();
-
-    let key_a_updated = project_a.read_with(cx, |p, cx| p.project_group_key(cx));
-    assert_ne!(key_a, key_a_updated);
-
-    multi_workspace.read_with(cx, |mw, _cx| {
-        let keys: Vec<&ProjectGroupKey> = mw.project_group_keys().collect();
-        assert_eq!(
-            keys.len(),
-            3,
-            "should have key_a, key_b, and the updated key_a with root_c"
-        );
-        assert_eq!(*keys[0], key_a_updated);
-        assert_eq!(*keys[1], key_b);
-        assert_eq!(*keys[2], key_a);
-    });
-}

crates/workspace/src/pane.rs 🔗

@@ -4897,36 +4897,9 @@ fn dirty_message_for(buffer_path: Option<ProjectPath>, path_style: PathStyle) ->
 }
 
 pub fn tab_details(items: &[Box<dyn ItemHandle>], _window: &Window, cx: &App) -> Vec<usize> {
-    let mut tab_details = items.iter().map(|_| 0).collect::<Vec<_>>();
-    let mut tab_descriptions = HashMap::default();
-    let mut done = false;
-    while !done {
-        done = true;
-
-        // Store item indices by their tab description.
-        for (ix, (item, detail)) in items.iter().zip(&tab_details).enumerate() {
-            let description = item.tab_content_text(*detail, cx);
-            if *detail == 0 || description != item.tab_content_text(detail - 1, cx) {
-                tab_descriptions
-                    .entry(description)
-                    .or_insert(Vec::new())
-                    .push(ix);
-            }
-        }
-
-        // If two or more items have the same tab description, increase their level
-        // of detail and try again.
-        for (_, item_ixs) in tab_descriptions.drain() {
-            if item_ixs.len() > 1 {
-                done = false;
-                for ix in item_ixs {
-                    tab_details[ix] += 1;
-                }
-            }
-        }
-    }
-
-    tab_details
+    util::disambiguate::compute_disambiguation_details(items, |item, detail| {
+        item.tab_content_text(detail, cx)
+    })
 }
 
 pub fn render_item_indicator(item: Box<dyn ItemHandle>, cx: &App) -> Option<Indicator> {
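The tab-detail loop deleted above is now factored into `util::disambiguate::compute_disambiguation_details`. Its core is a fixed-point iteration: start every tab at detail level 0, then repeatedly raise the level of any tabs whose labels collide until all labels are unique. A minimal standalone sketch of that loop (hypothetical names, not the actual `util` API):

```rust
use std::collections::HashMap;

// Fixed-point disambiguation: start every item at detail 0 and bump the
// detail level of any items whose labels collide, until labels are unique.
fn compute_details<T>(items: &[T], label: impl Fn(&T, usize) -> String) -> Vec<usize> {
    let mut details = vec![0; items.len()];
    let mut done = false;
    while !done {
        done = true;
        let mut by_label: HashMap<String, Vec<usize>> = HashMap::new();
        for (ix, (item, &detail)) in items.iter().zip(&details).enumerate() {
            let text = label(item, detail);
            // Only group items whose label actually changed at this level;
            // if raising the level stopped changing the label, raising it
            // again cannot disambiguate, so leave the item out.
            if detail == 0 || text != label(item, detail - 1) {
                by_label.entry(text).or_default().push(ix);
            }
        }
        for (_, ixs) in by_label {
            if ixs.len() > 1 {
                done = false;
                for ix in ixs {
                    details[ix] += 1;
                }
            }
        }
    }
    details
}

fn main() {
    // Two files named "mod.rs" need an extra path component; "lib.rs" does not.
    let details = compute_details(&["a/mod.rs", "b/mod.rs", "lib.rs"], |p: &&str, detail| {
        let parts: Vec<&str> = p.rsplit('/').collect();
        parts[..(detail + 1).min(parts.len())].join("/")
    });
    assert_eq!(details, vec![1, 1, 0]);
}
```

The guard on the previous detail level is what guarantees termination: once an item's label stops changing, it drops out of collision groups instead of being bumped forever.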

crates/workspace/src/persistence.rs 🔗

@@ -1804,16 +1804,12 @@ impl WorkspaceDb {
         }
     }
 
-    async fn all_paths_exist_with_a_directory(
-        paths: &[PathBuf],
-        fs: &dyn Fs,
-        timestamp: Option<DateTime<Utc>>,
-    ) -> bool {
+    async fn all_paths_exist_with_a_directory(paths: &[PathBuf], fs: &dyn Fs) -> bool {
         let mut any_dir = false;
         for path in paths {
             match fs.metadata(path).await.ok().flatten() {
                 None => {
-                    return timestamp.is_some_and(|t| Utc::now() - t < chrono::Duration::days(7));
+                    return false;
                 }
                 Some(meta) => {
                     if meta.is_dir {
@@ -1839,9 +1835,9 @@ impl WorkspaceDb {
         )>,
     > {
         let mut result = Vec::new();
-        let mut delete_tasks = Vec::new();
+        let mut workspaces_to_delete = Vec::new();
         let remote_connections = self.remote_connections()?;
-
+        let now = Utc::now();
         for (id, paths, remote_connection_id, timestamp) in self.recent_workspaces()? {
             if let Some(remote_connection_id) = remote_connection_id {
                 if let Some(connection_options) = remote_connections.get(&remote_connection_id) {
@@ -1852,34 +1848,40 @@ impl WorkspaceDb {
                         timestamp,
                     ));
                 } else {
-                    delete_tasks.push(self.delete_workspace_by_id(id));
+                    workspaces_to_delete.push(id);
                 }
                 continue;
             }
 
-            let has_wsl_path = if cfg!(windows) {
-                paths
+            // Delete the workspace if any of the paths are WSL paths. If a
+            // local workspace points to WSL, attempting to read its metadata
+            // will wait for the WSL VM and file server to boot up. This can
+            // block for many seconds. Supported scenarios use remote
+            // workspaces.
+            if cfg!(windows) {
+                let has_wsl_path = paths
                     .paths()
                     .iter()
-                    .any(|path| util::paths::WslPath::from_path(path).is_some())
-            } else {
-                false
-            };
+                    .any(|path| util::paths::WslPath::from_path(path).is_some());
+                if has_wsl_path {
+                    workspaces_to_delete.push(id);
+                    continue;
+                }
+            }
 
-            // Delete the workspace if any of the paths are WSL paths.
-            // If a local workspace points to WSL, this check will cause us to wait for the
-            // WSL VM and file server to boot up. This can block for many seconds.
-            // Supported scenarios use remote workspaces.
-            if !has_wsl_path
-                && Self::all_paths_exist_with_a_directory(paths.paths(), fs, Some(timestamp)).await
-            {
+            if Self::all_paths_exist_with_a_directory(paths.paths(), fs).await {
                 result.push((id, SerializedWorkspaceLocation::Local, paths, timestamp));
-            } else {
-                delete_tasks.push(self.delete_workspace_by_id(id));
+            } else if now - timestamp >= chrono::Duration::days(7) {
+                workspaces_to_delete.push(id);
             }
         }
 
-        futures::future::join_all(delete_tasks).await;
+        futures::future::join_all(
+            workspaces_to_delete
+                .into_iter()
+                .map(|id| self.delete_workspace_by_id(id)),
+        )
+        .await;
         Ok(result)
     }
 
@@ -1932,7 +1934,7 @@ impl WorkspaceDb {
                     window_id,
                 });
             } else {
-                if Self::all_paths_exist_with_a_directory(paths.paths(), fs, None).await {
+                if Self::all_paths_exist_with_a_directory(paths.paths(), fs).await {
                     workspaces.push(SessionWorkspace {
                         workspace_id,
                         location: SerializedWorkspaceLocation::Local,

crates/workspace/src/workspace.rs 🔗

@@ -86,7 +86,7 @@ pub use persistence::{
     WorkspaceDb, delete_unloaded_items,
     model::{
         DockStructure, ItemId, MultiWorkspaceState, SerializedMultiWorkspace,
-        SerializedWorkspaceLocation, SessionWorkspace,
+        SerializedProjectGroupKey, SerializedWorkspaceLocation, SessionWorkspace,
     },
     read_serialized_multi_workspaces, resolve_worktree_workspaces,
 };
@@ -3299,6 +3299,18 @@ impl Workspace {
         state.task.clone().unwrap()
     }
 
+    /// Prompts the user to save or discard each dirty item, returning
+    /// `true` if they confirmed (saved/discarded everything) or `false`
+    /// if they cancelled. Used before removing worktree roots during
+    /// thread archival.
+    pub fn prompt_to_save_or_discard_dirty_items(
+        &mut self,
+        window: &mut Window,
+        cx: &mut Context<Self>,
+    ) -> Task<Result<bool>> {
+        self.save_all_internal(SaveIntent::Close, window, cx)
+    }
+
     fn save_all_internal(
         &mut self,
         mut save_intent: SaveIntent,
@@ -8682,12 +8694,6 @@ pub async fn restore_multiworkspace(
         active_workspace,
         state,
     } = multi_workspace;
-    let MultiWorkspaceState {
-        sidebar_open,
-        project_group_keys,
-        sidebar_state,
-        ..
-    } = state;
 
     let workspace_result = if active_workspace.paths.is_empty() {
         cx.update(|cx| {
@@ -8715,9 +8721,8 @@ pub async fn restore_multiworkspace(
         Err(err) => {
             log::error!("Failed to restore active workspace: {err:#}");
 
-            // Try each project group's paths as a fallback.
             let mut fallback_handle = None;
-            for key in &project_group_keys {
+            for key in &state.project_group_keys {
                 let key: ProjectGroupKey = key.clone().into();
                 let paths = key.path_list().paths().to_vec();
                 match cx
@@ -8748,17 +8753,47 @@ pub async fn restore_multiworkspace(
         }
     };
 
-    if !project_group_keys.is_empty() {
-        let fs = app_state.fs.clone();
+    apply_restored_multiworkspace_state(window_handle, &state, app_state.fs.clone(), cx).await;
+
+    window_handle
+        .update(cx, |_, window, _cx| {
+            window.activate_window();
+        })
+        .ok();
+
+    Ok(window_handle)
+}
 
+pub async fn apply_restored_multiworkspace_state(
+    window_handle: WindowHandle<MultiWorkspace>,
+    state: &MultiWorkspaceState,
+    fs: Arc<dyn fs::Fs>,
+    cx: &mut AsyncApp,
+) {
+    let MultiWorkspaceState {
+        sidebar_open,
+        project_group_keys,
+        sidebar_state,
+        ..
+    } = state;
+
+    if !project_group_keys.is_empty() {
         // Resolve linked worktree paths to their main repo paths so
         // stale keys from previous sessions get normalized and deduped.
         let mut resolved_keys: Vec<ProjectGroupKey> = Vec::new();
-        for key in project_group_keys.into_iter().map(ProjectGroupKey::from) {
+        for key in project_group_keys
+            .iter()
+            .cloned()
+            .map(ProjectGroupKey::from)
+        {
+            if key.path_list().paths().is_empty() {
+                continue;
+            }
             let mut resolved_paths = Vec::new();
             for path in key.path_list().paths() {
-                if let Some(common_dir) =
-                    project::discover_root_repo_common_dir(path, fs.as_ref()).await
+                if key.host().is_none()
+                    && let Some(common_dir) =
+                        project::discover_root_repo_common_dir(path, fs.as_ref()).await
                 {
                     let main_path = common_dir.parent().unwrap_or(&common_dir);
                     resolved_paths.push(main_path.to_path_buf());
@@ -8779,7 +8814,7 @@ pub async fn restore_multiworkspace(
             .ok();
     }
 
-    if sidebar_open {
+    if *sidebar_open {
         window_handle
             .update(cx, |multi_workspace, _, cx| {
                 multi_workspace.open_sidebar(cx);
@@ -8791,20 +8826,12 @@ pub async fn restore_multiworkspace(
         window_handle
             .update(cx, |multi_workspace, window, cx| {
                 if let Some(sidebar) = multi_workspace.sidebar() {
-                    sidebar.restore_serialized_state(&sidebar_state, window, cx);
+                    sidebar.restore_serialized_state(sidebar_state, window, cx);
                 }
                 multi_workspace.serialize(cx);
             })
             .ok();
     }
-
-    window_handle
-        .update(cx, |_, window, _cx| {
-            window.activate_window();
-        })
-        .ok();
-
-    Ok(window_handle)
 }
 
 actions!(
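The path normalization in `apply_restored_multiworkspace_state` resolves a linked git worktree to its main checkout: the repository's common dir (e.g. `/main/.git`) lives under the main checkout, so the main project path is that dir's parent. The new `key.host().is_none()` guard skips this for remote keys, whose common dir cannot be discovered on the local filesystem. A minimal sketch of the parent step, assuming a plain path rather than the project's `SanitizedPath` type:

```rust
use std::path::{Path, PathBuf};

// Sketch: given a repository common dir such as "/main/.git", the main
// checkout is its parent directory. Fall back to the common dir itself if
// it has no parent (e.g. a bare root path).
fn main_repo_path(common_dir: &Path) -> PathBuf {
    common_dir.parent().unwrap_or(common_dir).to_path_buf()
}

fn main() {
    assert_eq!(main_repo_path(Path::new("/main/.git")), PathBuf::from("/main"));
    assert_eq!(main_repo_path(Path::new("/")), PathBuf::from("/"));
}
```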
@@ -9733,6 +9760,7 @@ pub fn open_remote_project_with_new_connection(
             serialized_workspace,
             app_state,
             window,
+            None,
             cx,
         )
         .await
@@ -9745,6 +9773,7 @@ pub fn open_remote_project_with_existing_connection(
     paths: Vec<PathBuf>,
     app_state: Arc<AppState>,
     window: WindowHandle<MultiWorkspace>,
+    provisional_project_group_key: Option<ProjectGroupKey>,
     cx: &mut AsyncApp,
 ) -> Task<Result<Vec<Option<Box<dyn ItemHandle>>>>> {
     cx.spawn(async move |cx| {
@@ -9758,6 +9787,7 @@ pub fn open_remote_project_with_existing_connection(
             serialized_workspace,
             app_state,
             window,
+            provisional_project_group_key,
             cx,
         )
         .await
@@ -9771,6 +9801,7 @@ async fn open_remote_project_inner(
     serialized_workspace: Option<SerializedWorkspace>,
     app_state: Arc<AppState>,
     window: WindowHandle<MultiWorkspace>,
+    provisional_project_group_key: Option<ProjectGroupKey>,
     cx: &mut AsyncApp,
 ) -> Result<Vec<Option<Box<dyn ItemHandle>>>> {
     let db = cx.update(|cx| WorkspaceDb::global(cx));
@@ -9831,6 +9862,9 @@ async fn open_remote_project_inner(
             workspace
         });
 
+        if let Some(project_group_key) = provisional_project_group_key.clone() {
+            multi_workspace.set_workspace_group_key(&new_workspace, project_group_key);
+        }
         multi_workspace.activate(new_workspace.clone(), window, cx);
         new_workspace
     })?;

crates/worktree/src/worktree.rs 🔗

@@ -510,7 +510,7 @@ impl Worktree {
         cx: &mut App,
     ) -> Entity<Self> {
         cx.new(|cx: &mut Context<Self>| {
-            let snapshot = Snapshot::new(
+            let mut snapshot = Snapshot::new(
                 WorktreeId::from_proto(worktree.id),
                 RelPath::from_proto(&worktree.root_name)
                     .unwrap_or_else(|_| RelPath::empty().into()),
@@ -518,6 +518,10 @@ impl Worktree {
                 path_style,
             );
 
+            snapshot.root_repo_common_dir = worktree
+                .root_repo_common_dir
+                .map(|p| SanitizedPath::new_arc(Path::new(&p)));
+
             let background_snapshot = Arc::new(Mutex::new((
                 snapshot.clone(),
                 Vec::<proto::UpdateWorktree>::new(),
@@ -676,6 +680,9 @@ impl Worktree {
             root_name: self.root_name().to_proto(),
             visible: self.is_visible(),
             abs_path: self.abs_path().to_string_lossy().into_owned(),
+            root_repo_common_dir: self
+                .root_repo_common_dir()
+                .map(|p| p.to_string_lossy().into_owned()),
         }
     }
 
@@ -2430,9 +2437,12 @@ impl Snapshot {
         self.entries_by_path.edit(entries_by_path_edits, ());
         self.entries_by_id.edit(entries_by_id_edits, ());
 
-        self.root_repo_common_dir = update
+        if let Some(dir) = update
             .root_repo_common_dir
-            .map(|p| SanitizedPath::new_arc(Path::new(&p)));
+            .map(|p| SanitizedPath::new_arc(Path::new(&p)))
+        {
+            self.root_repo_common_dir = Some(dir);
+        }
 
         self.scan_id = update.scan_id as usize;
         if update.is_last_update {
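The `Snapshot` change above fixes a merge-semantics bug: a streamed worktree update may omit `root_repo_common_dir`, and an omitted field should mean "unchanged", not "cleared". Unconditionally assigning `update.root_repo_common_dir` wiped the cached value on every partial update. A standalone sketch of the corrected rule, using `Option<String>` in place of the actual `SanitizedPath`:

```rust
// Merge rule: only overwrite the cached value when the update actually
// carries one; an absent field in a partial update leaves the cache intact.
fn apply_update(current: &mut Option<String>, update: Option<String>) {
    if let Some(dir) = update {
        *current = Some(dir);
    }
}

fn main() {
    let mut cached = Some("/repo/.git".to_string());
    // A partial update without the field leaves the cache untouched...
    apply_update(&mut cached, None);
    assert_eq!(cached.as_deref(), Some("/repo/.git"));
    // ...while an update that carries the field replaces it.
    apply_update(&mut cached, Some("/main/.git".to_string()));
    assert_eq!(cached.as_deref(), Some("/main/.git"));
}
```

The same reasoning explains the companion change to `to_proto`/`remote`: the common dir is now sent with the initial worktree message, so later incremental updates no longer need to repeat it.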

crates/zed/src/main.rs 🔗

@@ -7,7 +7,7 @@ mod zed;
 use agent::{SharedThread, ThreadStore};
 use agent_client_protocol;
 use agent_ui::AgentPanel;
-use anyhow::{Context as _, Error, Result};
+use anyhow::{Context as _, Result};
 use clap::Parser;
 use cli::FORCE_CLI_MODE_ENV_VAR_NAME;
 use client::{Client, ProxySettings, RefreshLlmTokenListener, UserStore, parse_zed_link};
@@ -1357,54 +1357,56 @@ pub(crate) async fn restore_or_create_workspace(
     cx: &mut AsyncApp,
 ) -> Result<()> {
     let kvp = cx.update(|cx| KeyValueStore::global(cx));
-    if let Some((multi_workspaces, remote_workspaces)) = restorable_workspaces(cx, &app_state).await
-    {
-        let mut results: Vec<Result<(), Error>> = Vec::new();
-        let mut tasks = Vec::new();
-
+    if let Some(multi_workspaces) = restorable_workspaces(cx, &app_state).await {
+        let mut error_count = 0;
         for multi_workspace in multi_workspaces {
-            if let Err(error) = restore_multiworkspace(multi_workspace, app_state.clone(), cx).await
-            {
-                log::error!("Failed to restore workspace: {error:#}");
-                results.push(Err(error));
-            }
-        }
+            let result = match &multi_workspace.active_workspace.location {
+                SerializedWorkspaceLocation::Local => {
+                    restore_multiworkspace(multi_workspace, app_state.clone(), cx)
+                        .await
+                        .map(|_| ())
+                }
+                SerializedWorkspaceLocation::Remote(connection_options) => {
+                    let mut connection_options = connection_options.clone();
+                    if let RemoteConnectionOptions::Ssh(options) = &mut connection_options {
+                        cx.update(|cx| {
+                            RemoteSettings::get_global(cx)
+                                .fill_connection_options_from_settings(options)
+                        });
+                    }
 
-        for session_workspace in remote_workspaces {
-            let app_state = app_state.clone();
-            let SerializedWorkspaceLocation::Remote(mut connection_options) =
-                session_workspace.location
-            else {
-                continue;
+                    let paths = multi_workspace
+                        .active_workspace
+                        .paths
+                        .paths()
+                        .iter()
+                        .map(PathBuf::from)
+                        .collect::<Vec<_>>();
+                    let state = multi_workspace.state.clone();
+                    async {
+                        let window = open_remote_project(
+                            connection_options,
+                            paths,
+                            app_state.clone(),
+                            workspace::OpenOptions::default(),
+                            cx,
+                        )
+                        .await?;
+                        workspace::apply_restored_multiworkspace_state(
+                            window,
+                            &state,
+                            app_state.fs.clone(),
+                            cx,
+                        )
+                        .await;
+                        Ok::<(), anyhow::Error>(())
+                    }
+                    .await
+                }
             };
-            let paths = session_workspace.paths;
-            if let RemoteConnectionOptions::Ssh(options) = &mut connection_options {
-                cx.update(|cx| {
-                    RemoteSettings::get_global(cx).fill_connection_options_from_settings(options)
-                });
-            }
-            let task = cx.spawn(async move |cx| {
-                recent_projects::open_remote_project(
-                    connection_options,
-                    paths.paths().iter().map(PathBuf::from).collect(),
-                    app_state,
-                    workspace::OpenOptions::default(),
-                    cx,
-                )
-                .await
-                .map_err(|e| anyhow::anyhow!(e))
-            });
-            tasks.push(task);
-        }
 
-        // Wait for all window groups and remote workspaces to open concurrently
-        results.extend(future::join_all(tasks).await);
-
-        // Show notifications for any errors that occurred
-        let mut error_count = 0;
-        for result in results {
-            if let Err(e) = result {
-                log::error!("Failed to restore workspace: {}", e);
+            if let Err(error) = result {
+                log::error!("Failed to restore workspace: {error:#}");
                 error_count += 1;
             }
         }
@@ -1487,17 +1489,9 @@ pub(crate) async fn restore_or_create_workspace(
 async fn restorable_workspaces(
     cx: &mut AsyncApp,
     app_state: &Arc<AppState>,
-) -> Option<(
-    Vec<workspace::SerializedMultiWorkspace>,
-    Vec<SessionWorkspace>,
-)> {
+) -> Option<Vec<workspace::SerializedMultiWorkspace>> {
     let locations = restorable_workspace_locations(cx, app_state).await?;
-    let (remote_workspaces, local_workspaces) = locations
-        .into_iter()
-        .partition(|sw| matches!(sw.location, SerializedWorkspaceLocation::Remote(_)));
-    let multi_workspaces =
-        cx.update(|cx| workspace::read_serialized_multi_workspaces(local_workspaces, cx));
-    Some((multi_workspaces, remote_workspaces))
+    Some(cx.update(|cx| workspace::read_serialized_multi_workspaces(locations, cx)))
 }
 
 pub(crate) async fn restorable_workspace_locations(

crates/zed/src/visual_test_runner.rs 🔗

@@ -573,6 +573,27 @@ fn run_visual_tests(project_path: PathBuf, update_baseline: bool) -> Result<()>
         }
     }
 
+    // Run Test: Sidebar with duplicate project names
+    println!("\n--- Test: sidebar_duplicate_names ---");
+    match run_sidebar_duplicate_project_names_visual_tests(
+        app_state.clone(),
+        &mut cx,
+        update_baseline,
+    ) {
+        Ok(TestResult::Passed) => {
+            println!("✓ sidebar_duplicate_names: PASSED");
+            passed += 1;
+        }
+        Ok(TestResult::BaselineUpdated(_)) => {
+            println!("✓ sidebar_duplicate_names: Baselines updated");
+            updated += 1;
+        }
+        Err(e) => {
+            eprintln!("✗ sidebar_duplicate_names: FAILED - {}", e);
+            failed += 1;
+        }
+    }
+
     // Run Test 9: Tool Permissions Settings UI visual test
     println!("\n--- Test 9: tool_permissions_settings ---");
     match run_tool_permissions_visual_tests(app_state.clone(), &mut cx, update_baseline) {
@@ -3069,6 +3090,279 @@ fn run_git_command(args: &[&str], dir: &std::path::Path) -> Result<()> {
     Ok(())
 }
 
+#[cfg(target_os = "macos")]
+/// Helper to create a project, add a worktree at the given path, and return the project.
+fn create_project_with_worktree(
+    worktree_dir: &Path,
+    app_state: &Arc<AppState>,
+    cx: &mut VisualTestAppContext,
+) -> Result<Entity<Project>> {
+    let project = cx.update(|cx| {
+        project::Project::local(
+            app_state.client.clone(),
+            app_state.node_runtime.clone(),
+            app_state.user_store.clone(),
+            app_state.languages.clone(),
+            app_state.fs.clone(),
+            None,
+            project::LocalProjectFlags {
+                init_worktree_trust: false,
+                ..Default::default()
+            },
+            cx,
+        )
+    });
+
+    let add_task = cx.update(|cx| {
+        project.update(cx, |project, cx| {
+            project.find_or_create_worktree(worktree_dir, true, cx)
+        })
+    });
+
+    cx.background_executor.allow_parking();
+    cx.foreground_executor
+        .block_test(add_task)
+        .context("Failed to add worktree")?;
+    cx.background_executor.forbid_parking();
+
+    cx.run_until_parked();
+    Ok(project)
+}
+
+#[cfg(target_os = "macos")]
+fn open_sidebar_test_window(
+    projects: Vec<Entity<Project>>,
+    app_state: &Arc<AppState>,
+    cx: &mut VisualTestAppContext,
+) -> Result<WindowHandle<MultiWorkspace>> {
+    let window_size = size(px(400.0), px(600.0));
+    let bounds = Bounds {
+        origin: point(px(0.0), px(0.0)),
+        size: window_size,
+    };
+
+    let mut projects_iter = projects.into_iter();
+    let first_project = projects_iter
+        .next()
+        .ok_or_else(|| anyhow::anyhow!("need at least one project"))?;
+    let remaining: Vec<_> = projects_iter.collect();
+
+    let multi_workspace_window: WindowHandle<MultiWorkspace> = cx
+        .update(|cx| {
+            cx.open_window(
+                WindowOptions {
+                    window_bounds: Some(WindowBounds::Windowed(bounds)),
+                    focus: false,
+                    show: false,
+                    ..Default::default()
+                },
+                |window, cx| {
+                    let first_ws = cx.new(|cx| {
+                        Workspace::new(None, first_project.clone(), app_state.clone(), window, cx)
+                    });
+                    cx.new(|cx| {
+                        let mut mw = MultiWorkspace::new(first_ws, window, cx);
+                        for project in remaining {
+                            let ws = cx.new(|cx| {
+                                Workspace::new(None, project, app_state.clone(), window, cx)
+                            });
+                            mw.activate(ws, window, cx);
+                        }
+                        mw
+                    })
+                },
+            )
+        })
+        .context("Failed to open MultiWorkspace window")?;
+
+    cx.run_until_parked();
+
+    // Create the sidebar outside the MultiWorkspace update to avoid a
+    // re-entrant read panic (Sidebar::new reads the MultiWorkspace).
+    let sidebar = cx
+        .update_window(multi_workspace_window.into(), |root_view, window, cx| {
+            let mw_handle: Entity<MultiWorkspace> = root_view
+                .downcast()
+                .map_err(|_| anyhow::anyhow!("Failed to downcast root view to MultiWorkspace"))?;
+            Ok::<_, anyhow::Error>(cx.new(|cx| sidebar::Sidebar::new(mw_handle, window, cx)))
+        })
+        .context("Failed to create sidebar")??;
+
+    multi_workspace_window
+        .update(cx, |mw, _window, cx| {
+            mw.register_sidebar(sidebar.clone(), cx);
+        })
+        .context("Failed to register sidebar")?;
+
+    cx.run_until_parked();
+
+    // Open the sidebar
+    multi_workspace_window
+        .update(cx, |mw, window, cx| {
+            mw.toggle_sidebar(window, cx);
+        })
+        .context("Failed to toggle sidebar")?;
+
+    // Let rendering settle
+    for _ in 0..10 {
+        cx.advance_clock(Duration::from_millis(100));
+        cx.run_until_parked();
+    }
+
+    // Refresh the window
+    cx.update_window(multi_workspace_window.into(), |_, window, _cx| {
+        window.refresh();
+    })?;
+
+    cx.run_until_parked();
+
+    Ok(multi_workspace_window)
+}
+
+#[cfg(target_os = "macos")]
+fn cleanup_sidebar_test_window(
+    window: WindowHandle<MultiWorkspace>,
+    cx: &mut VisualTestAppContext,
+) -> Result<()> {
+    window.update(cx, |mw, _window, cx| {
+        for workspace in mw.workspaces() {
+            let project = workspace.read(cx).project().clone();
+            project.update(cx, |project, cx| {
+                let ids: Vec<_> = project.worktrees(cx).map(|wt| wt.read(cx).id()).collect();
+                for id in ids {
+                    project.remove_worktree(id, cx);
+                }
+            });
+        }
+    })?;
+
+    cx.run_until_parked();
+
+    cx.update_window(window.into(), |_, window, _cx| {
+        window.remove_window();
+    })?;
+
+    cx.run_until_parked();
+
+    for _ in 0..15 {
+        cx.advance_clock(Duration::from_millis(100));
+        cx.run_until_parked();
+    }
+
+    Ok(())
+}
+
+#[cfg(target_os = "macos")]
+fn run_sidebar_duplicate_project_names_visual_tests(
+    app_state: Arc<AppState>,
+    cx: &mut VisualTestAppContext,
+    update_baseline: bool,
+) -> Result<TestResult> {
+    let temp_dir = tempfile::tempdir()?;
+    let temp_path = temp_dir.keep();
+    let canonical_temp = temp_path.canonicalize()?;
+
+    // Create directory structure where every leaf directory is named "zed" but
+    // lives at a distinct path. This lets us test that the sidebar correctly
+    // disambiguates projects whose names would otherwise collide.
+    //
+    //   code/zed/       — project1 (single worktree)
+    //   code/foo/zed/   — project2 (single worktree)
+    //   code/bar/zed/   — project3, first worktree
+    //   code/baz/zed/   — project3, second worktree
+    //
+    // No two projects share a worktree path, so ProjectGroupBuilder will
+    // place each in its own group.
+    let code_zed = canonical_temp.join("code").join("zed");
+    let foo_zed = canonical_temp.join("code").join("foo").join("zed");
+    let bar_zed = canonical_temp.join("code").join("bar").join("zed");
+    let baz_zed = canonical_temp.join("code").join("baz").join("zed");
+    std::fs::create_dir_all(&code_zed)?;
+    std::fs::create_dir_all(&foo_zed)?;
+    std::fs::create_dir_all(&bar_zed)?;
+    std::fs::create_dir_all(&baz_zed)?;
+
+    cx.update(|cx| {
+        cx.update_flags(true, vec!["agent-v2".to_string()]);
+    });
+
+    let mut has_baseline_update = None;
+
+    // Two single-worktree projects whose leaf name is "zed"
+    {
+        let project1 = create_project_with_worktree(&code_zed, &app_state, cx)?;
+        let project2 = create_project_with_worktree(&foo_zed, &app_state, cx)?;
+
+        let window = open_sidebar_test_window(vec![project1, project2], &app_state, cx)?;
+
+        let result = run_visual_test(
+            "sidebar_two_projects_same_leaf_name",
+            window.into(),
+            cx,
+            update_baseline,
+        );
+
+        cleanup_sidebar_test_window(window, cx)?;
+        match result? {
+            TestResult::Passed => {}
+            TestResult::BaselineUpdated(path) => {
+                has_baseline_update = Some(path);
+            }
+        }
+    }
+
+    // Three projects, third has two worktrees (all leaf names "zed")
+    //
+    // project1: code/zed
+    // project2: code/foo/zed
+    // project3: code/bar/zed + code/baz/zed
+    //
+    // Each project has a unique set of worktree paths, so they form
+    // separate groups. The sidebar must disambiguate all three.
+    {
+        let project1 = create_project_with_worktree(&code_zed, &app_state, cx)?;
+        let project2 = create_project_with_worktree(&foo_zed, &app_state, cx)?;
+
+        let project3 = create_project_with_worktree(&bar_zed, &app_state, cx)?;
+        let add_second_worktree = cx.update(|cx| {
+            project3.update(cx, |project, cx| {
+                project.find_or_create_worktree(&baz_zed, true, cx)
+            })
+        });
+        cx.background_executor.allow_parking();
+        cx.foreground_executor
+            .block_test(add_second_worktree)
+            .context("Failed to add second worktree to project 3")?;
+        cx.background_executor.forbid_parking();
+        cx.run_until_parked();
+
+        let window = open_sidebar_test_window(vec![project1, project2, project3], &app_state, cx)?;
+
+        let result = run_visual_test(
+            "sidebar_three_projects_with_multi_worktree",
+            window.into(),
+            cx,
+            update_baseline,
+        );
+
+        cleanup_sidebar_test_window(window, cx)?;
+        match result? {
+            TestResult::Passed => {}
+            TestResult::BaselineUpdated(path) => {
+                has_baseline_update = Some(path);
+            }
+        }
+    }
+
+    if let Some(path) = has_baseline_update {
+        Ok(TestResult::BaselineUpdated(path))
+    } else {
+        Ok(TestResult::Passed)
+    }
+}
+
 #[cfg(all(target_os = "macos", feature = "visual-tests"))]
 fn run_start_thread_in_selector_visual_tests(
     app_state: Arc<AppState>,

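The new visual tests exercise the sidebar's handling of projects whose worktree leaf names collide (every directory is named `zed`, distinguished only by parent path). As a rough illustration of the kind of disambiguation being verified, the sketch below prefixes one level of parent context onto colliding leaf names. This is a hypothetical stand-in, not the actual `Sidebar` or `ProjectGroupBuilder` logic; the `disambiguate` function and its one-parent scheme are assumptions for illustration only.

```rust
use std::path::Path;

/// Hypothetical sketch: given worktree paths whose final components collide,
/// produce labels that include one level of parent context to tell them
/// apart. Not the real Sidebar implementation — just the shape of the
/// behavior the visual test above captures in its baselines.
fn disambiguate(paths: &[&Path]) -> Vec<String> {
    paths
        .iter()
        .map(|path| {
            let leaf = path.file_name().unwrap().to_string_lossy();
            // Count paths (including this one) sharing the same leaf name.
            let collisions = paths
                .iter()
                .filter(|other| other.file_name() == path.file_name())
                .count();
            if collisions > 1 {
                // Prefix the immediate parent directory to break the tie.
                match path.parent().and_then(|p| p.file_name()) {
                    Some(parent) => format!("{}/{}", parent.to_string_lossy(), leaf),
                    None => leaf.into_owned(),
                }
            } else {
                leaf.into_owned()
            }
        })
        .collect()
}

fn main() {
    // Mirrors the test's layout: code/zed, code/foo/zed, code/bar/zed.
    let labels = disambiguate(&[
        Path::new("code/zed"),
        Path::new("code/foo/zed"),
        Path::new("code/bar/zed"),
    ]);
    assert_eq!(labels, ["code/zed", "foo/zed", "bar/zed"]);
    println!("{labels:?}");
}
```

A single parent level is enough for the directory layout used by the test; paths that also share a parent would need deeper context.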
crates/zed/src/zed.rs 🔗

@@ -2052,6 +2052,7 @@ pub fn open_new_ssh_project_from_project(
             cx,
         )
         .await
+        .map(|_| ())
     })
 }
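The one-line `zed.rs` change adapts a task's output type: the awaited call yields a `Result` carrying a success value the caller does not need, while the surrounding task must resolve to `Result<()>`, so `.map(|_| ())` discards the value and leaves any error untouched. A minimal sketch of the pattern — `open_stub` and its return type are illustrative stand-ins, not Zed APIs:

```rust
// Stand-in for a call that returns Result<T> for some handle type T.
fn open_stub(fail: bool) -> Result<u64, String> {
    if fail {
        Err("connection refused".to_string())
    } else {
        Ok(7) // pretend this is a window handle id
    }
}

// The caller only needs success/failure, so the handle is dropped with
// `.map(|_| ())`; errors still propagate unchanged.
fn open_discarding_handle(fail: bool) -> Result<(), String> {
    open_stub(fail).map(|_| ())
}

fn main() {
    assert_eq!(open_discarding_handle(false), Ok(()));
    assert_eq!(
        open_discarding_handle(true),
        Err("connection refused".to_string())
    );
}
```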