agent_ui: Unify draft and background threads into retained threads (#53737)

Nathan Sobo and Mikayla Maki created

> **Foundation PRs:** #53732, #53733, #53734 — These three draft PRs
> contain the base refactors (retained workspaces, agent panel overlay
> split, ThreadId introduction) that this PR builds on.

## Goal

Remove `DraftId` and merge `draft_threads` + `background_threads` into a
single `retained_threads: HashMap<ThreadId, Entity<ConversationView>>`
in `AgentPanel`. A draft is just a thread that hasn't sent its first
message — no separate identity or storage needed.
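The resulting shape can be sketched with a simplified, GPUI-free model. This is illustrative only: `Thread` and `sent_messages` stand in for the real `ConversationView` and its draft check, and the real map holds `Entity<ConversationView>` values.

```rust
use std::collections::HashMap;

/// Illustrative stand-in for the real `ThreadId` newtype.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct ThreadId(u64);

/// Stand-in for a conversation: a draft is just a thread that
/// hasn't sent its first message, with no separate identity.
struct Thread {
    sent_messages: usize,
}

impl Thread {
    fn is_draft(&self) -> bool {
        self.sent_messages == 0
    }
}

struct AgentPanel {
    // One map replaces the old `draft_threads` + `background_threads` pair.
    retained_threads: HashMap<ThreadId, Thread>,
}

impl AgentPanel {
    /// Drafts are derived by predicate, not tracked in their own map.
    fn draft_thread_ids(&self) -> Vec<ThreadId> {
        self.retained_threads
            .iter()
            .filter(|(_, thread)| thread.is_draft())
            .map(|(id, _)| *id)
            .collect()
    }
}

fn main() {
    let mut panel = AgentPanel {
        retained_threads: HashMap::new(),
    };
    panel
        .retained_threads
        .insert(ThreadId(1), Thread { sent_messages: 0 });
    panel
        .retained_threads
        .insert(ThreadId(2), Thread { sent_messages: 3 });
    // Only the thread with no sent messages counts as a draft.
    assert_eq!(panel.draft_thread_ids(), vec![ThreadId(1)]);
}
```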

## Changes

### agent_panel.rs
- Remove `DraftId` / `DraftIdCounter` / the `Global` impl
- Merge the two maps into `retained_threads`
- Add `thread_id: ThreadId` field to `BaseView::AgentThread`
- Rename methods: `create_draft` → `create_thread`, `activate_draft` →
`activate_retained_thread`, `remove_draft` → `remove_thread`, etc.
- Rework `clear_active_thread` to delegate to a new
`show_or_create_empty_draft`
- Update `update_thread_work_dirs` to sync `ThreadMetadataStore` paths
when worktrees change
- Keep `load_agent_thread(...)` cleanup so activating a real thread
removes empty retained drafts
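The precedence `show_or_create_empty_draft` applies can be sketched as a pure decision function. This is a reduction for illustration: the real method also clears overlay state, refreshes subscriptions, serializes, and emits events, and `DraftAction` is a hypothetical name, not a type in the PR.

```rust
#[derive(Debug, PartialEq)]
enum DraftAction {
    /// The active thread is already an empty draft: keep showing it.
    KeepActive,
    /// A retained draft exists: activate it instead of creating another.
    Activate(u64),
    /// No draft anywhere: create a fresh one.
    CreateNew,
}

fn show_or_create_empty_draft(active_is_draft: bool, retained_draft_ids: &[u64]) -> DraftAction {
    if active_is_draft {
        DraftAction::KeepActive
    } else if let Some(&id) = retained_draft_ids.first() {
        DraftAction::Activate(id)
    } else {
        DraftAction::CreateNew
    }
}

fn main() {
    assert_eq!(
        show_or_create_empty_draft(true, &[7]),
        DraftAction::KeepActive
    );
    assert_eq!(
        show_or_create_empty_draft(false, &[7]),
        DraftAction::Activate(7)
    );
    assert_eq!(show_or_create_empty_draft(false, &[]), DraftAction::CreateNew);
}
```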

### sidebar.rs
- Remove `active_entry` derivation from `rebuild_contents` (was racing
with deferred effects)
- Add `sync_active_entry_from_panel` called from event handlers instead
- Simplify `ActiveEntry` — remove `ThreadActivation` struct, make
`session_id` optional
- Move `seen_thread_ids` to global scope (was per-group, causing
duplicate thread entries)
- Remove dead code: `clear_draft`, `render_draft_thread`
- Generalize `pending_remote_thread_activation` into
`pending_thread_activation` so all persisted-thread activations suppress
fallback draft reconciliation
- Set the activation guard before local persisted-thread activation
switches workspaces
- Make `reconcile_groups(...)` bail while a persisted-thread activation
is in flight
- On `ActiveViewChanged`, clear empty group drafts as soon as a pending
persisted-thread activation resolves

### thread_metadata_store.rs
- Make `ThreadMetadata.title` an `Option<SharedString>` with
`display_title()` fallback
- Add `update_worktree_paths` for batched path updates when project
worktrees change
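The optional-title shape looks roughly like this. Note the assumptions: `String` stands in for `SharedString`, and "New Thread" is a placeholder for whatever default label `display_title()` actually returns.

```rust
/// Simplified metadata record: an untitled draft stores `None`
/// rather than a sentinel title.
struct ThreadMetadata {
    title: Option<String>,
}

impl ThreadMetadata {
    /// Fall back to a default label when no title has been set.
    fn display_title(&self) -> &str {
        self.title.as_deref().unwrap_or("New Thread")
    }
}

fn main() {
    let untitled = ThreadMetadata { title: None };
    assert_eq!(untitled.display_title(), "New Thread");

    let titled = ThreadMetadata {
        title: Some("Fix flaky backfill test".to_string()),
    };
    assert_eq!(titled.display_title(), "Fix flaky backfill test");
}
```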

### Other crates
- Update all `ThreadMetadata` construction sites and title display sites
across `agent_ui` and `sidebar`

## Fallback draft invariant

This PR tightens the retained-thread invariant governing fallback
drafts versus real thread activation and restoration:

- Fallback drafts are always empty
- User-created drafts worth preserving are non-empty
- While a persisted thread is being activated/restored/loaded, sidebar
reconciliation must not create an empty fallback draft for that target
group/workspace
- Once the real thread becomes active, empty fallback drafts in that
target group are removed

This is enforced by the sidebar-side activation guard plus existing
`AgentPanel` empty-draft cleanup after real-thread load.
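The cleanup half of the invariant can be sketched as a retain over the retained map. This is a hedged reduction: the boolean "empty draft" flag stands in for the real per-`ConversationView` draft and content checks.

```rust
use std::collections::HashMap;

/// Once a real (persisted) thread becomes active, every empty
/// fallback draft retained for the target group is dropped;
/// non-empty user drafts survive.
fn cleanup_after_real_thread_load(retained: &mut HashMap<u64, bool>) {
    // Key: thread id. Value: whether the thread is an empty draft.
    retained.retain(|_, is_empty_draft| !*is_empty_draft);
}

fn main() {
    let mut retained = HashMap::new();
    retained.insert(1u64, true); // empty fallback draft
    retained.insert(2u64, false); // non-empty draft worth preserving
    cleanup_after_real_thread_load(&mut retained);
    assert!(!retained.contains_key(&1));
    assert!(retained.contains_key(&2));
}
```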

## Tests

Added or retained focused sidebar regression tests covering:

- `test_confirm_on_historical_thread_in_new_project_group_opens_real_thread`
- `test_unarchive_into_inactive_existing_workspace_does_not_leave_active_draft`
- `test_unarchive_after_removing_parent_project_group_restores_real_thread`
- `test_pending_thread_activation_suppresses_reconcile_draft_creation`

Focused test runs:

- `cargo test -p sidebar activate_archived_thread -- --nocapture`
- `cargo test -p sidebar unarchive -- --nocapture`
- `cargo test -p sidebar archive_last_thread_on_linked_worktree -- --nocapture`
- `cargo test -p sidebar test_confirm_on_historical_thread_in_new_project_group_opens_real_thread -- --nocapture`
- `cargo test -p sidebar test_unarchive_into_inactive_existing_workspace_does_not_leave_active_draft -- --nocapture`
- `cargo test -p sidebar test_unarchive_after_removing_parent_project_group_restores_real_thread -- --nocapture`
- `cargo test -p sidebar test_pending_thread_activation_suppresses_reconcile_draft_creation -- --nocapture`

## Test fix

Fixed the flaky `test_backfill_sets_kvp_flag`: `setup_backfill_test` now
gives each test `App` its own `AppDatabase`, so backfill tests no longer
share a static in-memory DB.
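The fix pattern, in miniature. The real `AppDatabase` wiring is GPUI-specific; this sketch only shows why a process-wide static store makes tests order-dependent while per-instance state does not, and `set_flag`/`flag_set` are illustrative names.

```rust
use std::collections::HashMap;

/// Per-test database stand-in: each test constructs its own instance
/// instead of reading and writing a shared static.
#[derive(Default)]
struct AppDatabase {
    kvp: HashMap<String, String>,
}

impl AppDatabase {
    fn set_flag(&mut self, key: &str) {
        self.kvp.insert(key.to_string(), "true".to_string());
    }

    fn flag_set(&self, key: &str) -> bool {
        self.kvp.contains_key(key)
    }
}

fn main() {
    // Two "tests", each with an isolated database.
    let mut first = AppDatabase::default();
    first.set_flag("backfill_done");

    let second = AppDatabase::default();
    // The second test's state is unaffected by the first's writes,
    // so run order no longer matters.
    assert!(first.flag_set("backfill_done"));
    assert!(!second.flag_set("backfill_done"));
}
```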

Release Notes:

- N/A

---------

Co-authored-by: Mikayla Maki <mikayla.c.maki@gmail.com>
Co-authored-by: Mikayla Maki <mikayla@zed.dev>

Change summary

.gitignore                                            |   1 
.handoff-introduce-thread-id.md                       |  41 
Cargo.lock                                            |   1 
crates/acp_thread/src/connection.rs                   |  21 
crates/agent_ui/src/agent_panel.rs                    | 660 +++++++---
crates/agent_ui/src/agent_ui.rs                       |   8 
crates/agent_ui/src/conversation_view.rs              | 371 +++--
crates/agent_ui/src/conversation_view/thread_view.rs  | 153 +-
crates/agent_ui/src/test_support.rs                   |  10 
crates/agent_ui/src/thread_history_view.rs            |   2 
crates/agent_ui/src/thread_import.rs                  |  36 
crates/agent_ui/src/thread_metadata_store.rs          | 672 ++++++-----
crates/agent_ui/src/thread_worktree_archive.rs        |  35 
crates/agent_ui/src/threads_archive_view.rs           |  85 
crates/edit_prediction/src/license_detection.rs       |   2 
crates/project/src/lsp_store.rs                       |   2 
crates/project/src/manifest_tree.rs                   |   2 
crates/project/src/project.rs                         |  97 +
crates/project/src/worktree_store.rs                  | 144 ++
crates/recent_projects/src/recent_projects.rs         | 259 ++++
crates/recent_projects/src/sidebar_recent_projects.rs |   5 
crates/rpc/src/proto_client.rs                        |  40 
crates/sidebar/Cargo.toml                             |   1 
crates/sidebar/src/sidebar.rs                         | 617 ++++++---
crates/sidebar/src/sidebar_tests.rs                   | 595 +++++++--
crates/title_bar/src/title_bar.rs                     |   4 
crates/workspace/src/multi_workspace.rs               | 771 ++++++------
crates/workspace/src/multi_workspace_tests.rs         | 379 ++++++
crates/workspace/src/persistence.rs                   |  65 
crates/workspace/src/persistence/model.rs             |  54 
crates/workspace/src/workspace.rs                     |  89 
crates/worktree/src/worktree.rs                       |  34 
crates/worktree/tests/integration/main.rs             | 193 +++
crates/zed/src/zed.rs                                 | 166 ++
34 files changed, 3,899 insertions(+), 1,716 deletions(-)

Detailed changes

.gitignore

@@ -1,4 +1,5 @@
 **/*.db
+**/*.proptest-regressions
 **/cargo-target
 **/target
 **/venv

.handoff-introduce-thread-id.md

@@ -0,0 +1,41 @@
+# Handoff: Introduce ThreadId — Fix sidebar compilation
+
+## Branch: `introduce-thread-id`
+## Worktree: `/Users/nathan/src/worktrees/zed/pure-stork/zed`
+
+## What's done
+
+All `agent_ui` crate changes are complete and compiling:
+- `thread_metadata_store.rs`: New `ThreadId` type, DB migrations, all store methods updated
+- `thread_import.rs`, `thread_worktree_archive.rs`, `threads_archive_view.rs`: Updated
+- `thread_view.rs`: Added `is_draft()` method
+- `agent_ui.rs`: Re-exports `ThreadId`
+- `agent_panel.rs`: Minimal fixes (`entry_by_session`, `unarchive` lookup)
+
+Verify: `cargo check -p agent_ui` passes.
+
+## What remains
+
+`crates/sidebar/src/sidebar.rs` has ~58 compilation errors because `ThreadMetadata.session_id` changed from `acp::SessionId` to `Option<acp::SessionId>`, and several store methods changed signatures.
+
+Run `cargo check -p sidebar 2>&1` to see all errors.
+
+### Mechanical fix patterns needed:
+
+1. **Comparisons**: `metadata.session_id == *session_id` → `metadata.session_id.as_ref() == Some(session_id)`
+2. **HashSet collection**: Where `session_id` is collected into `HashSet<SessionId>`, use `.filter_map(|m| m.session_id.clone())` to extract from Option
+3. **Field access**: `session_id.0` → `session_id.as_ref().unwrap().0` or `if let Some(sid) = &session_id { sid.0 ... }`
+4. **CancelRestore event**: Pattern `CancelRestore { session_id }` → `CancelRestore { thread_id }`. Import `agent_ui::ThreadId`.
+5. **Store lookups**: `store.entry(&session_id)` → `store.entry_by_session(&session_id)`
+6. **Store mutations** (`delete`, `archive`, `unarchive`, `update_working_directories`): These now take `ThreadId`. Look up via `store.entry_by_session(&session_id).map(|t| t.thread_id)` first, then call with the `ThreadId`.
+7. **`entry_ids()`**: Now returns `Iterator<Item = ThreadId>` not `SessionId`
+8. **`cleanup_thread_archived_worktrees`**: Now takes `ThreadId` not `&acp::SessionId`
+
+### After fixing sidebar:
+
+1. Run tests: `cargo test -p agent_ui -- thread_metadata` and `cargo test -p agent_ui -- thread_import`
+2. Verify: `cargo check -p sidebar`
+
+## Reference
+
+The full target implementation exists on the `project-group-refactor` branch at `/Users/nathan/src/zed` if you need to check anything.

Cargo.lock

@@ -16079,6 +16079,7 @@ dependencies = [
  "chrono",
  "client",
  "clock",
+ "db",
  "editor",
  "extension",
  "fs",

crates/acp_thread/src/connection.rs

@@ -665,6 +665,23 @@ mod test_support {
         )
     }
 
+    /// Test-scoped counter for generating unique session IDs across all
+    /// `StubAgentConnection` instances within a test case. Set as a GPUI
+    /// global in test init so each case starts fresh.
+    pub struct StubSessionCounter(pub AtomicUsize);
+    impl gpui::Global for StubSessionCounter {}
+
+    impl StubSessionCounter {
+        pub fn next(cx: &App) -> usize {
+            cx.try_global::<Self>()
+                .map(|g| g.0.fetch_add(1, Ordering::SeqCst))
+                .unwrap_or_else(|| {
+                    static FALLBACK: AtomicUsize = AtomicUsize::new(0);
+                    FALLBACK.fetch_add(1, Ordering::SeqCst)
+                })
+        }
+    }
+
     #[derive(Clone)]
     pub struct StubAgentConnection {
         sessions: Arc<Mutex<HashMap<acp::SessionId, Session>>>,
@@ -823,9 +840,7 @@ mod test_support {
             work_dirs: PathList,
             cx: &mut gpui::App,
         ) -> Task<gpui::Result<Entity<AcpThread>>> {
-            static NEXT_SESSION_ID: AtomicUsize = AtomicUsize::new(0);
-            let session_id =
-                acp::SessionId::new(NEXT_SESSION_ID.fetch_add(1, Ordering::SeqCst).to_string());
+            let session_id = acp::SessionId::new(StubSessionCounter::next(cx).to_string());
             let thread = self.create_session(session_id, project, work_dirs, None, cx);
             Task::ready(Ok(thread))
         }

crates/agent_ui/src/agent_panel.rs

@@ -24,7 +24,7 @@ use zed_actions::agent::{
     ResolveConflictsWithAgent, ReviewBranchDiff,
 };
 
-use crate::thread_metadata_store::ThreadMetadataStore;
+use crate::thread_metadata_store::{ThreadId, ThreadMetadata, ThreadMetadataStore};
 use crate::{
     AddContextServer, AgentDiffPane, ConversationView, CopyThreadToClipboard, CycleStartThreadIn,
     Follow, InlineAssistant, LoadThreadFromClipboard, NewThread, NewWorktreeBranchTarget,
@@ -56,7 +56,7 @@ use extension_host::ExtensionStore;
 use fs::Fs;
 use gpui::{
     Action, Animation, AnimationExt, AnyElement, App, AsyncWindowContext, ClipboardItem, Corner,
-    DismissEvent, Entity, EntityId, EventEmitter, ExternalPaths, FocusHandle, Focusable, Global,
+    DismissEvent, Entity, EntityId, EventEmitter, ExternalPaths, FocusHandle, Focusable,
     KeyContext, Pixels, Subscription, Task, UpdateGlobal, WeakEntity, prelude::*,
     pulsating_between,
 };
@@ -64,7 +64,7 @@ use language::LanguageRegistry;
 use language_model::LanguageModelRegistry;
 use project::git_store::{GitStoreEvent, RepositoryEvent};
 use project::project_settings::ProjectSettings;
-use project::{Project, ProjectPath, Worktree, linked_worktree_short_name};
+use project::{Project, ProjectPath, Worktree, WorktreePaths, linked_worktree_short_name};
 use prompt_store::{PromptStore, UserPromptId};
 use remote::RemoteConnectionOptions;
 use rules_library::{RulesLibrary, open_rules_library};
@@ -80,7 +80,7 @@ use ui::{
 use util::{ResultExt as _, debug_panic};
 use workspace::{
     CollaboratorId, DraggedSelection, DraggedTab, PathList, SerializedPathList,
-    ToggleWorkspaceSidebar, ToggleZoom, Workspace, WorkspaceId,
+    ToggleWorkspaceSidebar, ToggleZoom, Workspace, WorkspaceId, WorkspaceSidebarDelegate,
     dock::{DockPosition, Panel, PanelEvent},
 };
 use zed_actions::{
@@ -93,6 +93,49 @@ const AGENT_PANEL_KEY: &str = "agent_panel";
 const MIN_PANEL_WIDTH: Pixels = px(300.);
 const RECENTLY_UPDATED_MENU_LIMIT: usize = 6;
 const LAST_USED_AGENT_KEY: &str = "agent_panel__last_used_external_agent";
+/// Maximum number of idle threads kept in the agent panel's retained list.
+/// Set as a GPUI global to override; otherwise defaults to 5.
+pub struct MaxIdleRetainedThreads(pub usize);
+impl gpui::Global for MaxIdleRetainedThreads {}
+
+impl MaxIdleRetainedThreads {
+    pub fn global(cx: &App) -> usize {
+        cx.try_global::<Self>().map_or(5, |g| g.0)
+    }
+}
+
+#[derive(Default)]
+struct AgentPanelSidebarDelegate;
+
+impl WorkspaceSidebarDelegate for AgentPanelSidebarDelegate {
+    fn reconcile_group(
+        &self,
+        workspace: &mut Workspace,
+        group_key: &workspace::ProjectGroupKey,
+        window: &mut Window,
+        cx: &mut Context<Workspace>,
+    ) -> bool {
+        if workspace.project_group_key(cx) != *group_key {
+            return false;
+        }
+
+        let Some(panel) = workspace.panel::<AgentPanel>(cx) else {
+            return false;
+        };
+
+        panel.update(cx, |panel, cx| {
+            if panel.pending_thread_loads > 0 {
+                return false;
+            }
+            if panel.draft_thread_ids(cx).is_empty() {
+                panel.create_thread(window, cx);
+                true
+            } else {
+                false
+            }
+        })
+    }
+}
 
 #[derive(Serialize, Deserialize)]
 struct LastUsedAgent {
@@ -170,6 +213,7 @@ struct SerializedActiveThread {
 pub fn init(cx: &mut App) {
     cx.observe_new(
         |workspace: &mut Workspace, _window, _cx: &mut Context<Workspace>| {
+            workspace.set_sidebar_delegate(Arc::new(AgentPanelSidebarDelegate));
             workspace
                 .register_action(|workspace, action: &NewThread, window, cx| {
                     if let Some(panel) = workspace.panel::<AgentPanel>(cx) {
@@ -209,8 +253,8 @@ pub fn init(cx: &mut App) {
                     if let Some(panel) = workspace.panel::<AgentPanel>(cx) {
                         workspace.focus_panel::<AgentPanel>(window, cx);
                         panel.update(cx, |panel, cx| {
-                            let id = panel.create_draft(window, cx);
-                            panel.activate_draft(id, true, window, cx);
+                            let id = panel.create_thread(window, cx);
+                            panel.activate_retained_thread(id, true, window, cx);
                         });
                     }
                 })
@@ -594,36 +638,37 @@ fn build_conflicted_files_resolution_prompt(
     content
 }
 
-/// Unique identifier for a sidebar draft thread. Not persisted across restarts.
-/// IDs are globally unique across all AgentPanel instances within the same app.
-#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
-pub struct DraftId(pub usize);
-
-#[derive(Default)]
-struct DraftIdCounter(usize);
-
-impl Global for DraftIdCounter {}
-
-impl DraftId {
-    fn next(cx: &mut App) -> Self {
-        let counter = cx.default_global::<DraftIdCounter>();
-        let id = counter.0;
-        counter.0 += 1;
-        Self(id)
-    }
+pub(crate) struct AgentThread {
+    conversation_view: Entity<ConversationView>,
 }
 
-enum ActiveView {
+enum BaseView {
     Uninitialized,
     AgentThread {
         conversation_view: Entity<ConversationView>,
     },
-    History {
-        view: Entity<ThreadHistoryView>,
-    },
+}
+
+impl From<AgentThread> for BaseView {
+    fn from(thread: AgentThread) -> Self {
+        BaseView::AgentThread {
+            conversation_view: thread.conversation_view,
+        }
+    }
+}
+
+enum OverlayView {
+    History { view: Entity<ThreadHistoryView> },
     Configuration,
 }
 
+enum VisibleSurface<'a> {
+    Uninitialized,
+    AgentThread(&'a Entity<ConversationView>),
+    History(&'a Entity<ThreadHistoryView>),
+    Configuration(Option<&'a Entity<AgentConfiguration>>),
+}
+
 enum WhichFontSize {
     AgentFont,
     None,
@@ -785,13 +830,17 @@ enum WorktreeCreationArgs {
     },
 }
 
-impl ActiveView {
+impl BaseView {
+    pub fn which_font_size_used(&self) -> WhichFontSize {
+        WhichFontSize::AgentFont
+    }
+}
+
+impl OverlayView {
     pub fn which_font_size_used(&self) -> WhichFontSize {
         match self {
-            ActiveView::Uninitialized
-            | ActiveView::AgentThread { .. }
-            | ActiveView::History { .. } => WhichFontSize::AgentFont,
-            ActiveView::Configuration => WhichFontSize::None,
+            OverlayView::History { .. } => WhichFontSize::AgentFont,
+            OverlayView::Configuration => WhichFontSize::None,
         }
     }
 }
@@ -811,10 +860,9 @@ pub struct AgentPanel {
     configuration: Option<Entity<AgentConfiguration>>,
     configuration_subscription: Option<Subscription>,
     focus_handle: FocusHandle,
-    active_view: ActiveView,
-    previous_view: Option<ActiveView>,
-    background_threads: HashMap<acp::SessionId, Entity<ConversationView>>,
-    draft_threads: HashMap<DraftId, Entity<ConversationView>>,
+    base_view: BaseView,
+    overlay_view: Option<OverlayView>,
+    retained_threads: HashMap<ThreadId, Entity<ConversationView>>,
     new_thread_menu_handle: PopoverMenuHandle<ContextMenu>,
     start_thread_in_menu_handle: PopoverMenuHandle<ThreadWorktreePicker>,
     thread_branch_menu_handle: PopoverMenuHandle<ThreadBranchPicker>,
@@ -832,12 +880,13 @@ pub struct AgentPanel {
     agent_layout_onboarding_dismissed: AtomicBool,
     selected_agent: Agent,
     start_thread_in: StartThreadIn,
+    pending_thread_loads: usize,
     worktree_creation_status: Option<(EntityId, WorktreeCreationStatus)>,
     _thread_view_subscription: Option<Subscription>,
     _active_thread_focus_subscription: Option<Subscription>,
     _worktree_creation_task: Option<Task<()>>,
     show_trust_workspace_message: bool,
-    _active_view_observation: Option<Subscription>,
+    _base_view_observation: Option<Subscription>,
 }
 
 impl AgentPanel {
@@ -913,17 +962,20 @@ impl AgentPanel {
                 .and_then(|p| p.last_active_thread.as_ref())
             {
                 let session_id = acp::SessionId::new(thread_info.session_id.clone());
-                let has_metadata = cx
+                let is_restorable = cx
                     .update(|_window, cx| {
                         let store = ThreadMetadataStore::global(cx);
-                        store.read(cx).entry(&session_id).is_some()
+                        store
+                            .read(cx)
+                            .entry_by_session(&session_id)
+                            .is_some_and(|entry| !entry.archived)
                     })
                     .unwrap_or(false);
-                if has_metadata {
+                if is_restorable {
                     Some(thread_info)
                 } else {
-                    log::warn!(
-                        "last active thread {} has no metadata, skipping restoration",
+                    log::info!(
+                        "last active thread {} is archived or missing, skipping restoration",
                         thread_info.session_id
                     );
                     None
@@ -1019,7 +1071,7 @@ impl AgentPanel {
 
         let thread_store = ThreadStore::global(cx);
 
-        let active_view = ActiveView::Uninitialized;
+        let base_view = BaseView::Uninitialized;
 
         let weak_panel = cx.entity().downgrade();
 
@@ -1179,7 +1231,8 @@ impl AgentPanel {
 
         let mut panel = Self {
             workspace_id,
-            active_view,
+            base_view,
+            overlay_view: None,
             workspace,
             user_store,
             project: project.clone(),
@@ -1191,9 +1244,7 @@ impl AgentPanel {
             configuration_subscription: None,
             focus_handle: cx.focus_handle(),
             context_server_registry,
-            previous_view: None,
-            background_threads: HashMap::default(),
-            draft_threads: HashMap::default(),
+            retained_threads: HashMap::default(),
             new_thread_menu_handle: PopoverMenuHandle::default(),
             start_thread_in_menu_handle: PopoverMenuHandle::default(),
             thread_branch_menu_handle: PopoverMenuHandle::default(),
@@ -1210,6 +1261,7 @@ impl AgentPanel {
             thread_store,
             selected_agent: Agent::default(),
             start_thread_in: StartThreadIn::default(),
+            pending_thread_loads: 0,
             worktree_creation_status: None,
             _thread_view_subscription: None,
             _active_thread_focus_subscription: None,
@@ -1219,7 +1271,7 @@ impl AgentPanel {
             agent_layout_onboarding_dismissed: AtomicBool::new(AgentLayoutOnboarding::dismissed(
                 cx,
             )),
-            _active_view_observation: None,
+            _base_view_observation: None,
         };
 
         // Initial sync of agent servers from extensions
@@ -1310,84 +1362,142 @@ impl AgentPanel {
             .unwrap_or(false)
     }
 
-    /// Reset the panel to the uninitialized state, clearing any active
-    /// thread without creating a new draft. Running threads are retained
-    /// in the background. The sidebar suppresses the uninitialized state
-    /// so no "Draft" entry appears.
+    /// Clear any active conversation while preserving a real empty draft.
+    /// Running non-draft threads are retained in the background.
     pub fn clear_active_thread(&mut self, window: &mut Window, cx: &mut Context<Self>) {
-        self.set_active_view(ActiveView::Uninitialized, false, window, cx);
+        self.show_or_create_empty_draft(window, cx);
+    }
+
+    fn show_or_create_empty_draft(&mut self, window: &mut Window, cx: &mut Context<Self>) {
+        if self.active_thread_is_draft(cx) {
+            self.clear_overlay_state();
+            self.refresh_base_view_subscriptions(window, cx);
+            self.serialize(cx);
+            cx.emit(AgentPanelEvent::ActiveViewChanged);
+            cx.notify();
+            return;
+        }
+
+        if let Some(draft_id) = self.draft_thread_ids(cx).into_iter().next() {
+            self.activate_retained_thread(draft_id, false, window, cx);
+            cx.notify();
+            return;
+        }
+
+        let id = self.create_thread(window, cx);
+        self.activate_retained_thread(id, false, window, cx);
+        cx.notify();
     }
 
     pub fn new_thread(&mut self, _action: &NewThread, window: &mut Window, cx: &mut Context<Self>) {
-        let id = self.create_draft(window, cx);
-        self.activate_draft(id, true, window, cx);
+        let id = self.create_thread(window, cx);
+        self.activate_retained_thread(id, true, window, cx);
     }
 
-    /// Creates a new empty draft thread and stores it. Returns the DraftId.
-    /// The draft is NOT activated — call `activate_draft` to show it.
-    pub fn create_draft(&mut self, window: &mut Window, cx: &mut Context<Self>) -> DraftId {
-        let id = DraftId::next(cx);
-        let workspace = self.workspace.clone();
-        let project = self.project.clone();
-        let fs = self.fs.clone();
-        let thread_store = self.thread_store.clone();
+    pub fn create_thread(&mut self, window: &mut Window, cx: &mut Context<Self>) -> ThreadId {
         let agent = if self.project.read(cx).is_via_collab() {
             Agent::NativeAgent
         } else {
             self.selected_agent.clone()
         };
-        let server = agent.server(fs, thread_store);
-        let conversation_view = self.create_agent_thread(
-            server, None, None, None, None, workspace, project, agent, window, cx,
-        );
-        self.draft_threads.insert(id, conversation_view);
-        id
+        let thread = self.create_agent_thread(agent, None, None, None, None, window, cx);
+        let thread_id = thread.conversation_view.read(cx).thread_id;
+        self.retained_threads
+            .insert(thread_id, thread.conversation_view);
+        thread_id
     }
 
-    pub fn activate_draft(
+    pub fn activate_retained_thread(
         &mut self,
-        id: DraftId,
+        id: ThreadId,
         focus: bool,
         window: &mut Window,
         cx: &mut Context<Self>,
     ) {
-        let Some(conversation_view) = self.draft_threads.get(&id).cloned() else {
+        let Some(conversation_view) = self.retained_threads.remove(&id) else {
             return;
         };
-        self.set_active_view(
-            ActiveView::AgentThread { conversation_view },
+        self.set_base_view(
+            BaseView::AgentThread { conversation_view },
             focus,
             window,
             cx,
         );
     }
 
-    /// Removes a draft thread. If it's currently active, does nothing to
-    /// the active view — the caller should activate something else first.
-    pub fn remove_draft(&mut self, id: DraftId) {
-        self.draft_threads.remove(&id);
+    pub fn remove_thread(&mut self, id: ThreadId, cx: &mut Context<Self>) {
+        self.retained_threads.remove(&id);
+        ThreadMetadataStore::global(cx).update(cx, |store, cx| {
+            store.delete(id, cx);
+        });
+
+        if self.active_thread_id(cx) == Some(id) && self.active_thread_is_draft(cx) {
+            self.base_view = BaseView::Uninitialized;
+            self.clear_overlay_state();
+            self._thread_view_subscription = None;
+            self._active_thread_focus_subscription = None;
+            self._base_view_observation = None;
+            self.serialize(cx);
+            cx.emit(AgentPanelEvent::ActiveViewChanged);
+            cx.notify();
+        }
     }
 
-    /// Returns the DraftId of the currently active draft, if the active
-    /// view is a draft thread tracked in `draft_threads`.
-    pub fn active_draft_id(&self) -> Option<DraftId> {
-        let active_cv = self.active_conversation_view()?;
-        self.draft_threads
-            .iter()
-            .find_map(|(id, cv)| (cv.entity_id() == active_cv.entity_id()).then_some(*id))
+    pub fn active_thread_id(&self, cx: &App) -> Option<ThreadId> {
+        match &self.base_view {
+            BaseView::AgentThread { conversation_view } => {
+                Some(conversation_view.read(cx).thread_id)
+            }
+            _ => None,
+        }
     }
 
-    /// Returns all draft IDs, sorted newest-first.
-    pub fn draft_ids(&self) -> Vec<DraftId> {
-        let mut ids: Vec<DraftId> = self.draft_threads.keys().copied().collect();
-        ids.sort_by_key(|id| std::cmp::Reverse(id.0));
+    pub fn draft_thread_ids(&self, cx: &App) -> Vec<ThreadId> {
+        let is_draft = |cv: &Entity<ConversationView>| -> bool {
+            let cv = cv.read(cx);
+            match cv.root_thread(cx) {
+                Some(tv) => tv.read(cx).is_draft(cx),
+                None => cv.is_new_draft(),
+            }
+        };
+
+        let mut ids: Vec<ThreadId> = self
+            .retained_threads
+            .iter()
+            .filter(|(_, cv)| is_draft(cv))
+            .map(|(id, _)| *id)
+            .collect();
+
+        if let BaseView::AgentThread { conversation_view } = &self.base_view {
+            let thread_id = conversation_view.read(cx).thread_id;
+            if is_draft(conversation_view) && !ids.contains(&thread_id) {
+                ids.push(thread_id);
+            }
+        }
+
+        if let Some(store) = ThreadMetadataStore::try_global(cx) {
+            let store = store.read(cx);
+            ids.sort_by(|a, b| {
+                let a_time = store.entry(*a).and_then(|m| m.created_at);
+                let b_time = store.entry(*b).and_then(|m| m.created_at);
+                b_time.cmp(&a_time)
+            });
+        }
         ids
     }
 
-    /// Returns the text from a draft's message editor, or `None` if the
-    /// draft doesn't exist or has no text.
-    pub fn draft_editor_text(&self, id: DraftId, cx: &App) -> Option<String> {
-        let cv = self.draft_threads.get(&id)?;
+    pub fn editor_text(&self, id: ThreadId, cx: &App) -> Option<String> {
+        let cv = self
+            .retained_threads
+            .get(&id)
+            .or_else(|| match &self.base_view {
+                BaseView::AgentThread { conversation_view }
+                    if conversation_view.read(cx).thread_id == id =>
+                {
+                    Some(conversation_view)
+                }
+                _ => None,
+            })?;
         let tv = cv.read(cx).active_thread()?;
         let text = tv.read(cx).message_editor.read(cx).text(cx);
         if text.trim().is_empty() {
@@ -1397,11 +1507,19 @@ impl AgentPanel {
         }
     }
 
-    /// Clears the message editor text of a tracked draft.
-    pub fn clear_draft_editor(&self, id: DraftId, window: &mut Window, cx: &mut Context<Self>) {
-        let Some(cv) = self.draft_threads.get(&id) else {
-            return;
-        };
+    pub fn clear_editor(&self, id: ThreadId, window: &mut Window, cx: &mut Context<Self>) {
+        let cv = self
+            .retained_threads
+            .get(&id)
+            .or_else(|| match &self.base_view {
+                BaseView::AgentThread { conversation_view }
+                    if conversation_view.read(cx).thread_id == id =>
+                {
+                    Some(conversation_view)
+                }
+                _ => None,
+            });
+        let Some(cv) = cv else { return };
         let Some(tv) = cv.read(cx).active_thread() else {
             return;
         };
@@ -1411,7 +1529,7 @@ impl AgentPanel {
         });
     }
 
-    fn take_active_draft_initial_content(
+    fn take_active_initial_content(
         &mut self,
         cx: &mut Context<Self>,
     ) -> Option<AgentInitialContent> {
@@ -1496,11 +1614,6 @@ impl AgentPanel {
         window: &mut Window,
         cx: &mut Context<Self>,
     ) {
-        let workspace = self.workspace.clone();
-        let project = self.project.clone();
-        let fs = self.fs.clone();
-        let thread_store = self.thread_store.clone();
-
         let agent = agent_choice.unwrap_or_else(|| {
             if self.project.read(cx).is_via_collab() {
                 Agent::NativeAgent
@@ -1508,26 +1621,16 @@ impl AgentPanel {
                 self.selected_agent.clone()
             }
         });
-
-        let server = agent.server(fs, thread_store);
-        let conversation_view = self.create_agent_thread(
-            server,
+        let thread = self.create_agent_thread(
+            agent,
             resume_session_id,
             work_dirs,
             title,
             initial_content,
-            workspace,
-            project,
-            agent,
-            window,
-            cx,
-        );
-        self.set_active_view(
-            ActiveView::AgentThread { conversation_view },
-            focus,
             window,
             cx,
         );
+        self.set_base_view(thread.into(), focus, window, cx);
     }
 
     fn deploy_rules_library(
@@ -1619,32 +1722,23 @@ impl AgentPanel {
     }
 
     fn open_history(&mut self, window: &mut Window, cx: &mut Context<Self>) {
+        if matches!(self.overlay_view, Some(OverlayView::History { .. })) {
+            self.clear_overlay(true, window, cx);
+            return;
+        }
+
         let Some(view) = self.history_for_selected_agent(window, cx) else {
             return;
         };
 
-        if let ActiveView::History { view: active_view } = &self.active_view {
-            if active_view == &view {
-                if let Some(previous_view) = self.previous_view.take() {
-                    self.set_active_view(previous_view, true, window, cx);
-                }
-                return;
-            }
-        }
-
-        self.set_active_view(ActiveView::History { view }, true, window, cx);
+        self.set_overlay(OverlayView::History { view }, true, window, cx);
         cx.notify();
     }
 
     pub fn go_back(&mut self, _: &workspace::GoBack, window: &mut Window, cx: &mut Context<Self>) {
-        match self.active_view {
-            ActiveView::Configuration | ActiveView::History { .. } => {
-                if let Some(previous_view) = self.previous_view.take() {
-                    self.set_active_view(previous_view, true, window, cx);
-                }
-                cx.notify();
-            }
-            _ => {}
+        if self.overlay_view.is_some() {
+            self.clear_overlay(true, window, cx);
+            cx.notify();
         }
     }
 
@@ -1697,7 +1791,7 @@ impl AgentPanel {
     }
 
     fn handle_font_size_action(&mut self, persist: bool, delta: Pixels, cx: &mut Context<Self>) {
-        match self.active_view.which_font_size_used() {
+        match self.visible_font_size() {
             WhichFontSize::AgentFont => {
                 if persist {
                     update_settings_file(self.fs.clone(), cx, move |settings, cx| {
@@ -1757,11 +1851,15 @@ impl AgentPanel {
     }
 
     pub(crate) fn open_configuration(&mut self, window: &mut Window, cx: &mut Context<Self>) {
+        if matches!(self.overlay_view, Some(OverlayView::Configuration)) {
+            self.clear_overlay(true, window, cx);
+            return;
+        }
+
         let agent_server_store = self.project.read(cx).agent_server_store().clone();
         let context_server_store = self.project.read(cx).context_server_store();
         let fs = self.fs.clone();
 
-        self.set_active_view(ActiveView::Configuration, true, window, cx);
         self.configuration = Some(cx.new(|cx| {
             AgentConfiguration::new(
                 fs,
@@ -1782,7 +1880,11 @@ impl AgentPanel {
                 window,
                 Self::handle_agent_configuration_event,
             ));
+        }
+
+        self.set_overlay(OverlayView::Configuration, true, window, cx);
 
+        if let Some(configuration) = self.configuration.as_ref() {
             configuration.focus_handle(cx).focus(window, cx);
         }
     }
@@ -1990,13 +2092,13 @@ impl AgentPanel {
         self.workspace_id
     }
 
-    pub fn background_threads(&self) -> &HashMap<acp::SessionId, Entity<ConversationView>> {
-        &self.background_threads
+    pub fn retained_threads(&self) -> &HashMap<ThreadId, Entity<ConversationView>> {
+        &self.retained_threads
     }
 
     pub fn active_conversation_view(&self) -> Option<&Entity<ConversationView>> {
-        match &self.active_view {
-            ActiveView::AgentThread { conversation_view } => Some(conversation_view),
+        match &self.base_view {
+            BaseView::AgentThread { conversation_view } => Some(conversation_view),
             _ => None,
         }
     }
@@ -2005,7 +2107,7 @@ impl AgentPanel {
         self.active_conversation_view()
             .into_iter()
             .cloned()
-            .chain(self.background_threads.values().cloned())
+            .chain(self.retained_threads.values().cloned())
             .collect()
     }
 
@@ -2015,10 +2117,8 @@ impl AgentPanel {
     }
 
     pub fn active_agent_thread(&self, cx: &App) -> Option<Entity<AcpThread>> {
-        match &self.active_view {
-            ActiveView::AgentThread {
-                conversation_view, ..
-            } => conversation_view
+        match &self.base_view {
+            BaseView::AgentThread { conversation_view } => conversation_view
                 .read(cx)
                 .active_thread()
                 .map(|r| r.read(cx).thread.clone()),
@@ -2026,21 +2126,22 @@ impl AgentPanel {
         }
     }
 
-    /// Returns the primary thread views for all retained connections: the
-    pub fn is_background_thread(&self, session_id: &acp::SessionId) -> bool {
-        self.background_threads.contains_key(session_id)
+    pub fn is_retained_thread(&self, id: &ThreadId) -> bool {
+        self.retained_threads.contains_key(id)
     }
 
-    pub fn cancel_thread(&self, session_id: &acp::SessionId, cx: &mut Context<Self>) -> bool {
+    pub fn cancel_thread(&self, thread_id: &ThreadId, cx: &mut Context<Self>) -> bool {
         let conversation_views = self
             .active_conversation_view()
             .into_iter()
-            .chain(self.background_threads.values());
+            .chain(self.retained_threads.values());
 
         for conversation_view in conversation_views {
-            if let Some(thread_view) = conversation_view.read(cx).thread_view(session_id) {
-                thread_view.update(cx, |view, cx| view.cancel_generation(cx));
-                return true;
+            if *thread_id == conversation_view.read(cx).thread_id {
+                if let Some(thread_view) = conversation_view.read(cx).root_thread_view(cx) {
+                    thread_view.update(cx, |view, cx| view.cancel_generation(cx));
+                    return true;
+                }
             }
         }
         false
@@ -2057,7 +2158,7 @@ impl AgentPanel {
             }
         }
 
-        for server_view in self.background_threads.values() {
+        for server_view in self.retained_threads.values() {
             if let Some(thread_view) = server_view.read(cx).root_thread(cx) {
                 views.push(thread_view);
             }
@@ -2068,6 +2169,7 @@ impl AgentPanel {
 
     fn update_thread_work_dirs(&self, cx: &mut Context<Self>) {
         let new_work_dirs = self.project.read(cx).default_path_list(cx);
+        let new_worktree_paths = self.project.read(cx).worktree_paths(cx);
 
         if let Some(conversation_view) = self.active_conversation_view() {
             conversation_view.update(cx, |conversation_view, cx| {
@@ -2075,48 +2177,100 @@ impl AgentPanel {
             });
         }
 
-        for conversation_view in self.background_threads.values() {
+        for conversation_view in self.retained_threads.values() {
             conversation_view.update(cx, |conversation_view, cx| {
                 conversation_view.set_work_dirs(new_work_dirs.clone(), cx);
             });
         }
+
+        // Update metadata store so threads' path lists stay in sync with
+        // the project's current worktrees. Without this, threads saved
+        // before a worktree was added would have stale paths and not
+        // appear under the correct sidebar group.
+        let mut thread_ids: Vec<ThreadId> = self.retained_threads.keys().copied().collect();
+        if let Some(active_id) = self.active_thread_id(cx) {
+            thread_ids.push(active_id);
+        }
+        if !thread_ids.is_empty() {
+            ThreadMetadataStore::global(cx).update(cx, |store, cx| {
+                store.update_worktree_paths(&thread_ids, new_worktree_paths, cx);
+            });
+        }
     }
 
-    fn retain_running_thread(&mut self, old_view: ActiveView, cx: &mut Context<Self>) {
-        let ActiveView::AgentThread { conversation_view } = old_view else {
+    fn retain_running_thread(&mut self, old_view: BaseView, cx: &mut Context<Self>) {
+        let BaseView::AgentThread { conversation_view } = old_view else {
             return;
         };
 
-        // If this ConversationView is a tracked draft, it's already
-        // stored in `draft_threads` — don't drop it.
-        let is_tracked_draft = self
-            .draft_threads
-            .values()
-            .any(|cv| cv.entity_id() == conversation_view.entity_id());
-        if is_tracked_draft {
+        let thread_id = conversation_view.read(cx).thread_id;
+
+        if self.retained_threads.contains_key(&thread_id) {
             return;
         }
 
-        let Some(thread_view) = conversation_view.read(cx).root_thread(cx) else {
-            return;
+        let is_empty_draft = match conversation_view.read(cx).root_thread(cx) {
+            Some(tv) => tv.read(cx).is_draft(cx),
+            None => conversation_view.read(cx).is_new_draft(),
         };
-
-        if thread_view.read(cx).thread.read(cx).entries().is_empty() {
+        if is_empty_draft {
+            ThreadMetadataStore::global(cx).update(cx, |store, cx| {
+                store.delete(thread_id, cx);
+            });
             return;
         }
 
-        self.background_threads
-            .insert(thread_view.read(cx).id.clone(), conversation_view);
-        self.cleanup_background_threads(cx);
+        self.retained_threads.insert(thread_id, conversation_view);
+        self.cleanup_retained_threads(cx);
+    }
+
+    fn remove_empty_draft(&mut self, cx: &mut Context<Self>) {
+        let draft_ids: Vec<ThreadId> = self
+            .retained_threads
+            .iter()
+            .filter(|(_, cv)| match cv.read(cx).root_thread(cx) {
+                Some(tv) => tv.read(cx).is_draft(cx),
+                None => cv.read(cx).is_new_draft(),
+            })
+            .map(|(id, _)| *id)
+            .collect();
+        for id in draft_ids {
+            self.retained_threads.remove(&id);
+            ThreadMetadataStore::global(cx).update(cx, |store, cx| {
+                store.delete(id, cx);
+            });
+        }
+
+        // Also clean up orphaned draft metadata in the store for this
+        // panel's worktree paths (e.g. from a previously removed workspace).
+        let path_list = {
+            let project = self.project.read(cx);
+            let worktree_paths = project.worktree_paths(cx);
+            worktree_paths.main_worktree_path_list().clone()
+        };
+        if let Some(store) = ThreadMetadataStore::try_global(cx) {
+            let orphaned: Vec<ThreadId> = {
+                let store = store.read(cx);
+                store
+                    .entries_for_path(&path_list)
+                    .chain(store.entries_for_main_worktree_path(&path_list))
+                    .filter(|entry| entry.is_draft())
+                    .map(|entry| entry.thread_id)
+                    .collect()
+            };
+            if !orphaned.is_empty() {
+                store.update(cx, |store, cx| {
+                    for id in orphaned {
+                        store.delete(id, cx);
+                    }
+                });
+            }
+        }
     }
 
-    /// We keep threads that are:
-    /// - Still running
-    /// - Do not support reloading the full session
-    /// - Have had the most recent events (up to 5 idle threads)
-    fn cleanup_background_threads(&mut self, cx: &App) {
+    /// Keep threads that are still running or that can't reload their
+    /// full session; among the remaining idle threads, retain only the
+    /// most recently updated, up to `MaxIdleRetainedThreads`.
+    fn cleanup_retained_threads(&mut self, cx: &App) {
         let mut potential_removals = self
-            .background_threads
+            .retained_threads
             .iter()
             .filter(|(_id, view)| {
                 let Some(thread_view) = view.read(cx).root_thread(cx) else {
@@ -2127,65 +2281,42 @@ impl AgentPanel {
             })
             .collect::<Vec<_>>();
 
-        const MAX_IDLE_BACKGROUND_THREADS: usize = 5;
+        let max_idle = MaxIdleRetainedThreads::global(cx);
 
         potential_removals.sort_unstable_by_key(|(_, view)| view.read(cx).updated_at(cx));
-        let n = potential_removals
-            .len()
-            .saturating_sub(MAX_IDLE_BACKGROUND_THREADS);
+        let n = potential_removals.len().saturating_sub(max_idle);
         let to_remove = potential_removals
             .into_iter()
-            .map(|(id, _)| id.clone())
+            .map(|(id, _)| *id)
             .take(n)
             .collect::<Vec<_>>();
         for id in to_remove {
-            self.background_threads.remove(&id);
+            self.retained_threads.remove(&id);
         }
     }
 
     pub(crate) fn active_native_agent_thread(&self, cx: &App) -> Option<Entity<agent::Thread>> {
-        match &self.active_view {
-            ActiveView::AgentThread {
-                conversation_view, ..
-            } => conversation_view.read(cx).as_native_thread(cx),
+        match &self.base_view {
+            BaseView::AgentThread { conversation_view } => {
+                conversation_view.read(cx).as_native_thread(cx)
+            }
             _ => None,
         }
     }
 
-    fn set_active_view(
+    fn set_base_view(
         &mut self,
-        new_view: ActiveView,
+        new_view: BaseView,
         focus: bool,
         window: &mut Window,
         cx: &mut Context<Self>,
     ) {
-        let was_in_agent_history = matches!(self.active_view, ActiveView::History { .. });
-        let current_is_uninitialized = matches!(self.active_view, ActiveView::Uninitialized);
-        let current_is_history = matches!(self.active_view, ActiveView::History { .. });
-        let new_is_history = matches!(new_view, ActiveView::History { .. });
-
-        let current_is_config = matches!(self.active_view, ActiveView::Configuration);
-        let new_is_config = matches!(new_view, ActiveView::Configuration);
+        self.clear_overlay_state();
 
-        let current_is_overlay = current_is_history || current_is_config;
-        let new_is_overlay = new_is_history || new_is_config;
-
-        if current_is_uninitialized || (current_is_overlay && !new_is_overlay) {
-            self.active_view = new_view;
-        } else if !current_is_overlay && new_is_overlay {
-            self.previous_view = Some(std::mem::replace(&mut self.active_view, new_view));
-        } else {
-            let old_view = std::mem::replace(&mut self.active_view, new_view);
-            if !new_is_overlay {
-                if let Some(previous) = self.previous_view.take() {
-                    self.retain_running_thread(previous, cx);
-                }
-            }
-            self.retain_running_thread(old_view, cx);
-        }
+        let old_view = std::mem::replace(&mut self.base_view, new_view);
+        self.retain_running_thread(old_view, cx);
 
-        // Keep the toolbar's selected agent in sync with the active thread's agent.
-        if let ActiveView::AgentThread { conversation_view } = &self.active_view {
+        if let BaseView::AgentThread { conversation_view } = &self.base_view {
             let thread_agent = conversation_view.read(cx).agent_key().clone();
             if self.selected_agent != thread_agent {
                 self.selected_agent = thread_agent;
@@ -2193,12 +2324,57 @@ impl AgentPanel {
             }
         }
 
-        // Subscribe to the active ThreadView's events (e.g. FirstSendRequested)
-        // so the panel can intercept the first send for worktree creation.
-        // Re-subscribe whenever the ConnectionView changes, since the inner
-        // ThreadView may have been replaced (e.g. navigating between threads).
-        self._active_view_observation = match &self.active_view {
-            ActiveView::AgentThread { conversation_view } => {
+        self.refresh_base_view_subscriptions(window, cx);
+
+        if focus {
+            self.focus_handle(cx).focus(window, cx);
+        }
+        cx.emit(AgentPanelEvent::ActiveViewChanged);
+    }
+
+    fn set_overlay(
+        &mut self,
+        overlay: OverlayView,
+        focus: bool,
+        window: &mut Window,
+        cx: &mut Context<Self>,
+    ) {
+        let was_in_history = matches!(self.overlay_view, Some(OverlayView::History { .. }));
+        self.overlay_view = Some(overlay);
+
+        if let Some(OverlayView::History { view }) = &self.overlay_view
+            && !was_in_history
+        {
+            view.update(cx, |view, cx| {
+                view.history()
+                    .update(cx, |history, cx| history.refresh_full_history(cx))
+            });
+        }
+
+        if focus {
+            self.focus_handle(cx).focus(window, cx);
+        }
+        cx.emit(AgentPanelEvent::ActiveViewChanged);
+    }
+
+    fn clear_overlay(&mut self, focus: bool, window: &mut Window, cx: &mut Context<Self>) {
+        self.clear_overlay_state();
+
+        if focus {
+            self.focus_handle(cx).focus(window, cx);
+        }
+        cx.emit(AgentPanelEvent::ActiveViewChanged);
+    }
+
+    fn clear_overlay_state(&mut self) {
+        self.overlay_view = None;
+        self.configuration_subscription = None;
+        self.configuration = None;
+    }
+
+    fn refresh_base_view_subscriptions(&mut self, window: &mut Window, cx: &mut Context<Self>) {
+        self._base_view_observation = match &self.base_view {
+            BaseView::AgentThread { conversation_view } => {
                 self._thread_view_subscription =
                     Self::subscribe_to_active_thread_view(conversation_view, window, cx);
                 let focus_handle = conversation_view.focus_handle(cx);
@@ -2219,26 +2395,46 @@ impl AgentPanel {
                     },
                 ))
             }
-            _ => {
+            BaseView::Uninitialized => {
                 self._thread_view_subscription = None;
                 self._active_thread_focus_subscription = None;
                 None
             }
         };
+        self.serialize(cx);
+    }
 
-        if let ActiveView::History { view } = &self.active_view {
-            if !was_in_agent_history {
-                view.update(cx, |view, cx| {
-                    view.history()
-                        .update(cx, |history, cx| history.refresh_full_history(cx))
-                });
-            }
+    fn visible_surface(&self) -> VisibleSurface<'_> {
+        if let Some(overlay_view) = &self.overlay_view {
+            return match overlay_view {
+                OverlayView::History { view } => VisibleSurface::History(view),
+                OverlayView::Configuration => {
+                    VisibleSurface::Configuration(self.configuration.as_ref())
+                }
+            };
         }
 
-        if focus {
-            self.focus_handle(cx).focus(window, cx);
+        match &self.base_view {
+            BaseView::Uninitialized => VisibleSurface::Uninitialized,
+            BaseView::AgentThread { conversation_view } => {
+                VisibleSurface::AgentThread(conversation_view)
+            }
         }
-        cx.emit(AgentPanelEvent::ActiveViewChanged);
+    }
+
+    fn is_overlay_open(&self) -> bool {
+        self.overlay_view.is_some()
+    }
+
+    fn is_history_or_configuration_visible(&self) -> bool {
+        self.is_overlay_open()
+    }
+
+    fn visible_font_size(&self) -> WhichFontSize {
+        self.overlay_view.as_ref().map_or_else(
+            || self.base_view.which_font_size_used(),
+            OverlayView::which_font_size_used,
+        )
     }
 
     fn populate_recently_updated_menu_section(

crates/agent_ui/src/agent_ui.rs 🔗

@@ -65,9 +65,12 @@ use std::any::TypeId;
 use workspace::Workspace;
 
 use crate::agent_configuration::{ConfigureContextServerModal, ManageProfilesModal};
-pub use crate::agent_panel::{AgentPanel, AgentPanelEvent, DraftId, WorktreeCreationStatus};
+pub use crate::agent_panel::{
+    AgentPanel, AgentPanelEvent, MaxIdleRetainedThreads, WorktreeCreationStatus,
+};
 use crate::agent_registry_ui::AgentRegistryPage;
 pub use crate::inline_assistant::InlineAssistant;
+pub use crate::thread_metadata_store::ThreadId;
 pub use agent_diff::{AgentDiffPane, AgentDiffToolbar};
 pub use conversation_view::ConversationView;
 pub use external_source_prompt::ExternalSourcePrompt;
@@ -79,7 +82,7 @@ pub(crate) use thread_history_view::*;
 pub use thread_import::{AcpThreadImportOnboarding, ThreadImportModal};
 use zed_actions;
 
-pub const DEFAULT_THREAD_TITLE: &str = "New Thread";
+pub const DEFAULT_THREAD_TITLE: &str = "New Agent Thread";
 const PARALLEL_AGENT_LAYOUT_BACKFILL_KEY: &str = "parallel_agent_layout_backfilled";
 actions!(
     agent,
@@ -823,6 +826,7 @@ mod tests {
         .unwrap();
 
         cx.update(|cx| {
+            cx.set_global(db::AppDatabase::test_new());
             let store = SettingsStore::test(cx);
             cx.set_global(store);
             AgentSettings::register(cx);

crates/agent_ui/src/conversation_view.rs 🔗

@@ -82,6 +82,7 @@ use crate::entry_view_state::{EntryViewEvent, ViewEvent};
 use crate::message_editor::{MessageEditor, MessageEditorEvent};
 use crate::profile_selector::{ProfileProvider, ProfileSelector};
 
+use crate::thread_metadata_store::ThreadId;
 use crate::ui::{AgentNotification, AgentNotificationEvent};
 use crate::{
     Agent, AgentDiffPane, AgentInitialContent, AgentPanel, AllowAlways, AllowOnce,
@@ -246,44 +247,46 @@ pub(crate) struct Conversation {
 impl Conversation {
     pub fn register_thread(&mut self, thread: Entity<AcpThread>, cx: &mut Context<Self>) {
         let session_id = thread.read(cx).session_id().clone();
-        let subscription = cx.subscribe(&thread, move |this, _thread, event, _cx| {
-            this.updated_at = Some(Instant::now());
-            match event {
-                AcpThreadEvent::ToolAuthorizationRequested(id) => {
-                    this.permission_requests
-                        .entry(session_id.clone())
-                        .or_default()
-                        .push(id.clone());
-                }
-                AcpThreadEvent::ToolAuthorizationReceived(id) => {
-                    if let Some(tool_calls) = this.permission_requests.get_mut(&session_id) {
-                        tool_calls.retain(|tool_call_id| tool_call_id != id);
-                        if tool_calls.is_empty() {
-                            this.permission_requests.shift_remove(&session_id);
+        let subscription = cx.subscribe(&thread, {
+            let session_id = session_id.clone();
+            move |this, _thread, event, _cx| {
+                this.updated_at = Some(Instant::now());
+                match event {
+                    AcpThreadEvent::ToolAuthorizationRequested(id) => {
+                        this.permission_requests
+                            .entry(session_id.clone())
+                            .or_default()
+                            .push(id.clone());
+                    }
+                    AcpThreadEvent::ToolAuthorizationReceived(id) => {
+                        if let Some(tool_calls) = this.permission_requests.get_mut(&session_id) {
+                            tool_calls.retain(|tool_call_id| tool_call_id != id);
+                            if tool_calls.is_empty() {
+                                this.permission_requests.shift_remove(&session_id);
+                            }
                         }
                     }
+                    AcpThreadEvent::NewEntry
+                    | AcpThreadEvent::TitleUpdated
+                    | AcpThreadEvent::TokenUsageUpdated
+                    | AcpThreadEvent::EntryUpdated(_)
+                    | AcpThreadEvent::EntriesRemoved(_)
+                    | AcpThreadEvent::Retry(_)
+                    | AcpThreadEvent::SubagentSpawned(_)
+                    | AcpThreadEvent::Stopped(_)
+                    | AcpThreadEvent::Error
+                    | AcpThreadEvent::LoadError(_)
+                    | AcpThreadEvent::PromptCapabilitiesUpdated
+                    | AcpThreadEvent::Refusal
+                    | AcpThreadEvent::AvailableCommandsUpdated(_)
+                    | AcpThreadEvent::ModeUpdated(_)
+                    | AcpThreadEvent::ConfigOptionsUpdated(_)
+                    | AcpThreadEvent::WorkingDirectoriesUpdated => {}
                 }
-                AcpThreadEvent::NewEntry
-                | AcpThreadEvent::TitleUpdated
-                | AcpThreadEvent::TokenUsageUpdated
-                | AcpThreadEvent::EntryUpdated(_)
-                | AcpThreadEvent::EntriesRemoved(_)
-                | AcpThreadEvent::Retry(_)
-                | AcpThreadEvent::SubagentSpawned(_)
-                | AcpThreadEvent::Stopped(_)
-                | AcpThreadEvent::Error
-                | AcpThreadEvent::LoadError(_)
-                | AcpThreadEvent::PromptCapabilitiesUpdated
-                | AcpThreadEvent::Refusal
-                | AcpThreadEvent::AvailableCommandsUpdated(_)
-                | AcpThreadEvent::ModeUpdated(_)
-                | AcpThreadEvent::ConfigOptionsUpdated(_)
-                | AcpThreadEvent::WorkingDirectoriesUpdated => {}
             }
         });
         self.subscriptions.push(subscription);
-        self.threads
-            .insert(thread.read(cx).session_id().clone(), thread);
+        self.threads.insert(session_id, thread);
     }
 
     pub fn pending_tool_call<'a>(
@@ -293,25 +296,21 @@ impl Conversation {
     ) -> Option<(acp::SessionId, acp::ToolCallId, &'a PermissionOptions)> {
         let thread = self.threads.get(session_id)?;
         let is_subagent = thread.read(cx).parent_session_id().is_some();
-        let (thread, tool_id) = if is_subagent {
+        let (result_session_id, thread, tool_id) = if is_subagent {
             let id = self.permission_requests.get(session_id)?.iter().next()?;
-            (thread, id)
+            (session_id.clone(), thread, id)
         } else {
             let (id, tool_calls) = self.permission_requests.first()?;
             let thread = self.threads.get(id)?;
-            let id = tool_calls.iter().next()?;
-            (thread, id)
+            let tool_id = tool_calls.iter().next()?;
+            (id.clone(), thread, tool_id)
         };
         let (_, tool_call) = thread.read(cx).tool_call(tool_id)?;
 
         let ToolCallStatus::WaitingForConfirmation { options, .. } = &tool_call.status else {
             return None;
         };
-        Some((
-            thread.read(cx).session_id().clone(),
-            tool_id.clone(),
-            options,
-        ))
+        Some((result_session_id, tool_id.clone(), options))
     }
 
     pub fn subagents_awaiting_permission(&self, cx: &App) -> Vec<(acp::SessionId, usize)> {
@@ -334,10 +333,11 @@ impl Conversation {
         kind: acp::PermissionOptionKind,
         cx: &mut Context<Self>,
     ) -> Option<()> {
-        let (_, tool_call_id, options) = self.pending_tool_call(session_id, cx)?;
+        let (authorize_session_id, tool_call_id, options) =
+            self.pending_tool_call(session_id, cx)?;
         let option = options.first_option_of_kind(kind)?;
         self.authorize_tool_call(
-            session_id.clone(),
+            authorize_session_id,
             tool_call_id,
             SelectedPermissionOutcome::new(option.option_id.clone(), option.kind),
             cx,
@@ -356,6 +356,7 @@ impl Conversation {
             return;
         };
         let agent_telemetry_id = thread.read(cx).connection().telemetry_id();
+        let session_id = thread.read(cx).session_id().clone();
 
         telemetry::event!(
             "Agent Tool Call Authorized",
@@ -379,6 +380,34 @@ impl Conversation {
     }
 }
 
+pub(crate) struct RootThreadUpdated;
+
+impl EventEmitter<RootThreadUpdated> for ConversationView {}
+
+fn affects_thread_metadata(event: &AcpThreadEvent) -> bool {
+    match event {
+        AcpThreadEvent::NewEntry
+        | AcpThreadEvent::TitleUpdated
+        | AcpThreadEvent::EntryUpdated(_)
+        | AcpThreadEvent::EntriesRemoved(_)
+        | AcpThreadEvent::ToolAuthorizationRequested(_)
+        | AcpThreadEvent::ToolAuthorizationReceived(_)
+        | AcpThreadEvent::Retry(_)
+        | AcpThreadEvent::Stopped(_)
+        | AcpThreadEvent::Error
+        | AcpThreadEvent::LoadError(_)
+        | AcpThreadEvent::Refusal
+        | AcpThreadEvent::WorkingDirectoriesUpdated => true,
+        // Presentation-only events; no effect on persisted thread metadata.
+        AcpThreadEvent::TokenUsageUpdated
+        | AcpThreadEvent::PromptCapabilitiesUpdated
+        | AcpThreadEvent::AvailableCommandsUpdated(_)
+        | AcpThreadEvent::ModeUpdated(_)
+        | AcpThreadEvent::ConfigOptionsUpdated(_)
+        | AcpThreadEvent::SubagentSpawned(_) => false,
+    }
+}
+
 pub enum AcpServerViewEvent {
     ActiveThreadChanged,
 }
@@ -394,12 +423,17 @@ pub struct ConversationView {
     project: Entity<Project>,
     thread_store: Option<Entity<ThreadStore>>,
     prompt_store: Option<Entity<PromptStore>>,
+    pub(crate) thread_id: ThreadId,
+    root_session_id: Option<acp::SessionId>,
     server_state: ServerState,
     focus_handle: FocusHandle,
     notifications: Vec<WindowHandle<AgentNotification>>,
     notification_subscriptions: HashMap<WindowHandle<AgentNotification>, Vec<Subscription>>,
     auth_task: Option<Task<()>>,
     _subscriptions: Vec<Subscription>,
+    /// True when this conversation was created as a new draft (no resume
+    /// session). False when resuming an existing session from history.
+    is_new_draft: bool,
 }
 
 impl ConversationView {
@@ -409,6 +443,10 @@ impl ConversationView {
         })
     }
 
+    pub fn is_new_draft(&self) -> bool {
+        self.is_new_draft
+    }
+
     pub fn active_thread(&self) -> Option<&Entity<ThreadView>> {
         match &self.server_state {
             ServerState::Connected(connected) => connected.active_view(),
@@ -420,23 +458,29 @@ impl ConversationView {
         &'a self,
         cx: &'a App,
     ) -> Option<(acp::SessionId, acp::ToolCallId, &'a PermissionOptions)> {
-        let id = &self.active_thread()?.read(cx).id;
+        let session_id = self
+            .active_thread()?
+            .read(cx)
+            .thread
+            .read(cx)
+            .session_id()
+            .clone();
         self.as_connected()?
             .conversation
             .read(cx)
-            .pending_tool_call(id, cx)
+            .pending_tool_call(&session_id, cx)
     }
 
     pub fn root_thread_has_pending_tool_call(&self, cx: &App) -> bool {
         let Some(root_thread) = self.root_thread(cx) else {
             return false;
         };
-        let root_id = root_thread.read(cx).id.clone();
+        let root_session_id = root_thread.read(cx).thread.read(cx).session_id().clone();
         self.as_connected().is_some_and(|connected| {
             connected
                 .conversation
                 .read(cx)
-                .pending_tool_call(&root_id, cx)
+                .pending_tool_call(&root_session_id, cx)
                 .is_some()
         })
     }
@@ -445,8 +489,10 @@ impl ConversationView {
         match &self.server_state {
             ServerState::Connected(connected) => {
                 let mut current = connected.active_view()?;
-                while let Some(parent_id) = current.read(cx).parent_id.clone() {
-                    if let Some(parent) = connected.threads.get(&parent_id) {
+                while let Some(parent_session_id) =
+                    current.read(cx).thread.read(cx).parent_session_id()
+                {
+                    if let Some(parent) = connected.threads.get(parent_session_id) {
                         current = parent;
                     } else {
                         break;
@@ -458,7 +504,28 @@ impl ConversationView {
         }
     }
 
-    pub fn thread_view(&self, session_id: &acp::SessionId) -> Option<Entity<ThreadView>> {
+    pub(crate) fn root_acp_thread(&self, cx: &App) -> Option<Entity<AcpThread>> {
+        let connected = self.as_connected()?;
+        let root_session_id = self.root_session_id.as_ref()?;
+        connected
+            .conversation
+            .read(cx)
+            .threads
+            .get(root_session_id)
+            .cloned()
+    }
+
+    pub fn root_thread_view(&self, cx: &App) -> Option<Entity<ThreadView>> {
+        self.root_session_id
+            .as_ref()
+            .and_then(|sid| self.thread_view(sid, cx))
+    }
+
+    pub fn thread_view(
+        &self,
+        session_id: &acp::SessionId,
+        _cx: &App,
+    ) -> Option<Entity<ThreadView>> {
         let connected = self.as_connected()?;
         connected.threads.get(session_id).cloned()
     }
@@ -482,7 +549,7 @@ impl ConversationView {
             .and_then(|connected| connected.conversation.read(cx).updated_at)
     }
 
-    pub fn navigate_to_session(
+    pub fn navigate_to_thread(
         &mut self,
         session_id: acp::SessionId,
         window: &mut Window,
@@ -492,7 +559,7 @@ impl ConversationView {
             return;
         };
 
-        connected.navigate_to_session(session_id);
+        connected.navigate_to_thread(session_id);
         if let Some(view) = self.active_thread() {
             view.focus_handle(cx).focus(window, cx);
         }
@@ -510,11 +577,8 @@ impl ConversationView {
 }
 
 enum ServerState {
-    Loading(Entity<LoadingView>),
-    LoadError {
-        error: LoadError,
-        session_id: Option<acp::SessionId>,
-    },
+    Loading { _loading: Entity<LoadingView> },
+    LoadError { error: LoadError },
     Connected(ConnectedServerState),
 }
 
@@ -547,7 +611,6 @@ impl AuthState {
 }
 
 struct LoadingView {
-    session_id: Option<acp::SessionId>,
     _load_task: Task<()>,
 }
 
@@ -561,16 +624,17 @@ impl ConnectedServerState {
             .map_or(false, |view| view.read(cx).thread_error.is_some())
     }
 
-    pub fn navigate_to_session(&mut self, session_id: acp::SessionId) {
+    pub fn navigate_to_thread(&mut self, session_id: acp::SessionId) {
         if self.threads.contains_key(&session_id) {
             self.active_id = Some(session_id);
         }
     }
 
     pub fn close_all_sessions(&self, cx: &mut App) -> Task<()> {
-        let tasks = self.threads.keys().filter_map(|id| {
+        let tasks = self.threads.values().filter_map(|view| {
             if self.connection.supports_close_session() {
-                Some(self.connection.clone().close_session(id, cx))
+                let session_id = view.read(cx).thread.read(cx).session_id().clone();
+                Some(self.connection.clone().close_session(&session_id, cx))
             } else {
                 None
             }
@@ -588,6 +652,7 @@ impl ConversationView {
         connection_store: Entity<AgentConnectionStore>,
         connection_key: Agent,
         resume_session_id: Option<acp::SessionId>,
+        thread_id: Option<ThreadId>,
         work_dirs: Option<PathList>,
         title: Option<SharedString>,
         initial_content: Option<AgentInitialContent>,
@@ -623,6 +688,9 @@ impl ConversationView {
         })
         .detach();
 
+        let is_new_draft = resume_session_id.is_none();
+        let thread_id = thread_id.unwrap_or_else(ThreadId::new);
+
         Self {
             agent: agent.clone(),
             connection_store: connection_store.clone(),
@@ -632,6 +700,8 @@ impl ConversationView {
             project: project.clone(),
             thread_store,
             prompt_store,
+            thread_id,
+            root_session_id: None,
             server_state: Self::initial_state(
                 agent.clone(),
                 connection_store,
@@ -649,6 +719,7 @@ impl ConversationView {
             auth_task: None,
             _subscriptions: subscriptions,
             focus_handle: cx.focus_handle(),
+            is_new_draft,
         }
     }
 
@@ -666,7 +737,8 @@ impl ConversationView {
         let (resume_session_id, cwd, title) = self
             .active_thread()
             .map(|thread_view| {
-                let thread = thread_view.read(cx).thread.read(cx);
+                let tv = thread_view.read(cx);
+                let thread = tv.thread.read(cx);
                 (
                     Some(thread.session_id().clone()),
                     thread.work_dirs().cloned(),
@@ -718,7 +790,6 @@ impl ConversationView {
                 error: LoadError::Other(
                     "External agents are not yet supported in shared projects.".into(),
                 ),
-                session_id: resume_session_id.clone(),
             };
         }
         let session_work_dirs = work_dirs.unwrap_or_else(|| project.read(cx).default_path_list(cx));
@@ -741,7 +812,6 @@ impl ConversationView {
 
         let connect_result = connection_entry.read(cx).wait_for_connection();
 
-        let load_session_id = resume_session_id.clone();
         let load_task = cx.spawn_in(window, async move |this, cx| {
             let (connection, history) = match connect_result.await {
                 Ok(AgentConnectedState {
@@ -750,7 +820,7 @@ impl ConversationView {
                 }) => (connection, history),
                 Err(err) => {
                     this.update_in(cx, |this, window, cx| {
-                        this.handle_load_error(load_session_id.clone(), err, window, cx);
+                        this.handle_load_error(err, window, cx);
                         cx.notify();
                     })
                     .log_err();
@@ -761,7 +831,7 @@ impl ConversationView {
             telemetry::event!("Agent Thread Started", agent = connection.telemetry_id());
 
             let mut resumed_without_history = false;
-            let result = if let Some(session_id) = load_session_id.clone() {
+            let result = if let Some(session_id) = resume_session_id.clone() {
                 cx.update(|_, cx| {
                     if connection.supports_load_session() {
                         connection.clone().load_session(
@@ -824,6 +894,8 @@ impl ConversationView {
             this.update_in(cx, |this, window, cx| {
                 match result {
                     Ok(thread) => {
+                        let root_session_id = thread.read(cx).session_id().clone();
+
                         let conversation = cx.new(|cx| {
                             let mut conversation = Conversation::default();
                             conversation.register_thread(thread.clone(), cx);
@@ -831,7 +903,6 @@ impl ConversationView {
                         });
 
                         let current = this.new_thread_view(
-                            None,
                             thread,
                             conversation.clone(),
                             resumed_without_history,
@@ -849,13 +920,13 @@ impl ConversationView {
                                 .focus(window, cx);
                         }
 
-                        let id = current.read(cx).thread.read(cx).session_id().clone();
+                        this.root_session_id = Some(root_session_id.clone());
                         this.set_server_state(
                             ServerState::Connected(ConnectedServerState {
                                 connection,
                                 auth_state: AuthState::Ok,
-                                active_id: Some(id.clone()),
-                                threads: HashMap::from_iter([(id, current)]),
+                                active_id: Some(root_session_id.clone()),
+                                threads: HashMap::from_iter([(root_session_id, current)]),
                                 conversation,
                                 history,
                                 _connection_entry_subscription: connection_entry_subscription,
@@ -865,7 +936,6 @@ impl ConversationView {
                     }
                     Err(err) => {
                         this.handle_load_error(
-                            load_session_id.clone(),
                             LoadError::Other(err.to_string().into()),
                             window,
                             cx,
@@ -877,16 +947,16 @@ impl ConversationView {
         });
 
         let loading_view = cx.new(|_cx| LoadingView {
-            session_id: resume_session_id,
             _load_task: load_task,
         });
 
-        ServerState::Loading(loading_view)
+        ServerState::Loading {
+            _loading: loading_view,
+        }
     }
 
     fn new_thread_view(
         &self,
-        parent_id: Option<acp::SessionId>,
         thread: Entity<AcpThread>,
         conversation: Entity<Conversation>,
         resumed_without_history: bool,
@@ -990,7 +1060,6 @@ impl ConversationView {
             cx.observe(&action_log, |_, _, cx| cx.notify()),
         ];
 
-        let parent_session_id = thread.read(cx).session_id().clone();
         let subagent_sessions = thread
             .read(cx)
             .entries()
@@ -1005,6 +1074,7 @@ impl ConversationView {
             .collect::<Vec<_>>();
 
         if !subagent_sessions.is_empty() {
+            let parent_session_id = thread.read(cx).session_id().clone();
             cx.spawn_in(window, async move |this, cx| {
                 this.update_in(cx, |this, window, cx| {
                     for subagent_id in subagent_sessions {
@@ -1058,7 +1128,6 @@ impl ConversationView {
         let weak = cx.weak_entity();
         cx.new(|cx| {
             ThreadView::new(
-                parent_id,
                 thread,
                 conversation,
                 weak,
@@ -1170,13 +1239,7 @@ impl ConversationView {
         .ok();
     }
 
-    fn handle_load_error(
-        &mut self,
-        session_id: Option<acp::SessionId>,
-        err: LoadError,
-        window: &mut Window,
-        cx: &mut Context<Self>,
-    ) {
+    fn handle_load_error(&mut self, err: LoadError, window: &mut Window, cx: &mut Context<Self>) {
         if let Some(view) = self.active_thread() {
             if view
                 .read(cx)
@@ -1188,13 +1251,7 @@ impl ConversationView {
             }
         }
         self.emit_load_error_telemetry(&err);
-        self.set_server_state(
-            ServerState::LoadError {
-                error: err,
-                session_id,
-            },
-            cx,
-        );
+        self.set_server_state(ServerState::LoadError { error: err }, cx);
     }
 
     fn handle_agent_servers_updated(
@@ -1208,7 +1265,7 @@ impl ConversationView {
         // when agent.connect() fails during loading), retry loading the thread.
         // This handles the case where a thread is restored before authentication completes.
         let should_retry = match &self.server_state {
-            ServerState::Loading(_) => false,
+            ServerState::Loading { .. } => false,
             ServerState::LoadError { .. } => true,
             ServerState::Connected(connected) => {
                 connected.auth_state.is_ok() && connected.has_thread_error(cx)
@@ -1239,7 +1296,7 @@ impl ConversationView {
                 .active_view()
                 .and_then(|v| v.read(cx).thread.read(cx).title())
                 .unwrap_or_else(|| DEFAULT_THREAD_TITLE.into()),
-            ServerState::Loading(_) => "Loading…".into(),
+            ServerState::Loading { .. } => "Loading…".into(),
             ServerState::LoadError { error, .. } => match error {
                 LoadError::Unsupported { .. } => {
                     format!("Upgrade {}", self.agent.agent_id()).into()
@@ -1261,15 +1318,8 @@ impl ConversationView {
         }
     }
 
-    // The parent ID is None if we haven't created a thread yet
-    pub fn parent_id(&self, cx: &App) -> Option<acp::SessionId> {
-        match &self.server_state {
-            ServerState::Connected(_) => self
-                .root_thread(cx)
-                .map(|thread| thread.read(cx).id.clone()),
-            ServerState::Loading(loading) => loading.read(cx).session_id.clone(),
-            ServerState::LoadError { session_id, .. } => session_id.clone(),
-        }
+    pub fn parent_id(&self) -> ThreadId {
+        self.thread_id
     }
 
     pub fn is_loading(&self) -> bool {
@@ -1326,13 +1376,22 @@ impl ConversationView {
         window: &mut Window,
         cx: &mut Context<Self>,
     ) {
-        let thread_id = thread.read(cx).session_id().clone();
+        let session_id = thread.read(cx).session_id().clone();
+        let has_thread = self
+            .as_connected()
+            .is_some_and(|connected| connected.threads.contains_key(&session_id));
+        if !has_thread {
+            return;
+        }
         let is_subagent = thread.read(cx).parent_session_id().is_some();
+        if !is_subagent && affects_thread_metadata(event) {
+            cx.emit(RootThreadUpdated);
+        }
         match event {
             AcpThreadEvent::NewEntry => {
                 let len = thread.read(cx).entries().len();
                 let index = len - 1;
-                if let Some(active) = self.thread_view(&thread_id) {
+                if let Some(active) = self.thread_view(&session_id, cx) {
                     let entry_view_state = active.read(cx).entry_view_state.clone();
                     let list_state = active.read(cx).list_state.clone();
                     entry_view_state.update(cx, |view_state, cx| {
@@ -1350,7 +1409,7 @@ impl ConversationView {
                 }
             }
             AcpThreadEvent::EntryUpdated(index) => {
-                if let Some(active) = self.thread_view(&thread_id) {
+                if let Some(active) = self.thread_view(&session_id, cx) {
                     let entry_view_state = active.read(cx).entry_view_state.clone();
                     let list_state = active.read(cx).list_state.clone();
                     entry_view_state.update(cx, |view_state, cx| {
@@ -1363,7 +1422,7 @@ impl ConversationView {
                 }
             }
             AcpThreadEvent::EntriesRemoved(range) => {
-                if let Some(active) = self.thread_view(&thread_id) {
+                if let Some(active) = self.thread_view(&session_id, cx) {
                     let entry_view_state = active.read(cx).entry_view_state.clone();
                     let list_state = active.read(cx).list_state.clone();
                     entry_view_state.update(cx, |view_state, _cx| view_state.remove(range.clone()));
@@ -1373,25 +1432,22 @@ impl ConversationView {
                     });
                 }
             }
-            AcpThreadEvent::SubagentSpawned(session_id) => self.load_subagent_session(
-                session_id.clone(),
-                thread.read(cx).session_id().clone(),
-                window,
-                cx,
-            ),
+            AcpThreadEvent::SubagentSpawned(subagent_session_id) => {
+                self.load_subagent_session(subagent_session_id.clone(), session_id, window, cx)
+            }
             AcpThreadEvent::ToolAuthorizationRequested(_) => {
                 self.notify_with_sound("Waiting for tool confirmation", IconName::Info, window, cx);
             }
             AcpThreadEvent::ToolAuthorizationReceived(_) => {}
             AcpThreadEvent::Retry(retry) => {
-                if let Some(active) = self.thread_view(&thread_id) {
+                if let Some(active) = self.thread_view(&session_id, cx) {
                     active.update(cx, |active, _cx| {
                         active.thread_retry_status = Some(retry.clone());
                     });
                 }
             }
             AcpThreadEvent::Stopped(stop_reason) => {
-                if let Some(active) = self.thread_view(&thread_id) {
+                if let Some(active) = self.thread_view(&session_id, cx) {
                     let is_generating =
                         matches!(thread.read(cx).status(), ThreadStatus::Generating);
                     active.update(cx, |active, cx| {
@@ -1455,7 +1511,7 @@ impl ConversationView {
             }
             AcpThreadEvent::Refusal => {
                 let error = ThreadError::Refusal;
-                if let Some(active) = self.thread_view(&thread_id) {
+                if let Some(active) = self.thread_view(&session_id, cx) {
                     active.update(cx, |active, cx| {
                         active.handle_thread_error(error, cx);
                         active.thread_retry_status.take();
@@ -1469,7 +1525,7 @@ impl ConversationView {
                 }
             }
             AcpThreadEvent::Error => {
-                if let Some(active) = self.thread_view(&thread_id) {
+                if let Some(active) = self.thread_view(&session_id, cx) {
                     let is_generating =
                         matches!(thread.read(cx).status(), ThreadStatus::Generating);
                     active.update(cx, |active, cx| {
@@ -1505,14 +1561,13 @@ impl ConversationView {
                 self.set_server_state(
                     ServerState::LoadError {
                         error: error.clone(),
-                        session_id: Some(thread_id),
                     },
                     cx,
                 );
             }
             AcpThreadEvent::TitleUpdated => {
                 if let Some(title) = thread.read(cx).title()
-                    && let Some(active_thread) = self.thread_view(&thread_id)
+                    && let Some(active_thread) = self.thread_view(&session_id, cx)
                 {
                     let title_editor = active_thread.read(cx).title_editor.clone();
                     title_editor.update(cx, |editor, cx| {
@@ -1524,7 +1579,7 @@ impl ConversationView {
                 cx.notify();
             }
             AcpThreadEvent::PromptCapabilitiesUpdated => {
-                if let Some(active) = self.thread_view(&thread_id) {
+                if let Some(active) = self.thread_view(&session_id, cx) {
                     active.update(cx, |active, _cx| {
                         active
                             .session_capabilities
@@ -1538,7 +1593,7 @@ impl ConversationView {
                 self.emit_token_limit_telemetry_if_needed(thread, cx);
             }
             AcpThreadEvent::AvailableCommandsUpdated(available_commands) => {
-                if let Some(thread_view) = self.thread_view(&thread_id) {
+                if let Some(thread_view) = self.thread_view(&session_id, cx) {
                     let has_commands = !available_commands.is_empty();
 
                     let agent_display_name = self
@@ -1716,7 +1771,7 @@ impl ConversationView {
     fn load_subagent_session(
         &mut self,
         subagent_id: acp::SessionId,
-        parent_id: acp::SessionId,
+        parent_session_id: acp::SessionId,
         window: &mut Window,
         cx: &mut Context<Self>,
     ) {
@@ -1728,7 +1783,7 @@ impl ConversationView {
         {
             return;
         }
-        let Some(parent_thread) = connected.threads.get(&parent_id) else {
+        let Some(parent_thread) = connected.threads.get(&parent_session_id) else {
             return;
         };
         let work_dirs = parent_thread
@@ -1740,7 +1795,7 @@ impl ConversationView {
             .unwrap_or_else(|| self.project.read(cx).default_path_list(cx));
 
         let subagent_thread_task = connected.connection.clone().load_session(
-            subagent_id.clone(),
+            subagent_id,
             self.project.clone(),
             work_dirs,
             None,
@@ -1756,11 +1811,11 @@ impl ConversationView {
                 else {
                     return;
                 };
+                let subagent_session_id = subagent_thread.read(cx).session_id().clone();
                 conversation.update(cx, |conversation, cx| {
                     conversation.register_thread(subagent_thread.clone(), cx);
                 });
                 let view = this.new_thread_view(
-                    Some(parent_id),
                     subagent_thread,
                     conversation,
                     false,
@@ -1772,7 +1827,7 @@ impl ConversationView {
                 let Some(connected) = this.as_connected_mut() else {
                     return;
                 };
-                connected.threads.insert(subagent_id, view);
+                connected.threads.insert(subagent_session_id, view);
             })
         })
         .detach();
@@ -3095,6 +3150,7 @@ pub(crate) mod tests {
                     None,
                     None,
                     None,
+                    None,
                     workspace.downgrade(),
                     project,
                     Some(thread_store),
@@ -3230,6 +3286,7 @@ pub(crate) mod tests {
                     None,
                     None,
                     None,
+                    None,
                     workspace.downgrade(),
                     project,
                     Some(thread_store),
@@ -3307,6 +3364,7 @@ pub(crate) mod tests {
                     connection_store,
                     Agent::Custom { id: "Test".into() },
                     Some(SessionId::new("session-1")),
+                    None,
                     Some(PathList::new(&[PathBuf::from("/project/subdir")])),
                     None,
                     None,
@@ -3382,7 +3440,7 @@ pub(crate) mod tests {
                 other => panic!(
                     "Expected LoadError::Other, got: {}",
                     match other {
-                        ServerState::Loading(_) => "Loading (stuck!)",
+                        ServerState::Loading { .. } => "Loading (stuck!)",
                         ServerState::LoadError { .. } => "LoadError (wrong variant)",
                         ServerState::Connected(_) => "Connected",
                     }
@@ -3631,6 +3689,7 @@ pub(crate) mod tests {
                     None,
                     None,
                     None,
+                    None,
                     workspace.downgrade(),
                     project.clone(),
                     Some(thread_store),
@@ -3742,6 +3801,7 @@ pub(crate) mod tests {
                     None,
                     None,
                     None,
+                    None,
                     workspace1.downgrade(),
                     project1.clone(),
                     Some(thread_store),
@@ -3987,6 +4047,7 @@ pub(crate) mod tests {
                     None,
                     None,
                     None,
+                    None,
                     initial_content,
                     workspace.downgrade(),
                     project,
@@ -4757,6 +4818,7 @@ pub(crate) mod tests {
                     None,
                     None,
                     None,
+                    None,
                     workspace.downgrade(),
                     project.clone(),
                     Some(thread_store.clone()),
@@ -6769,6 +6831,7 @@ pub(crate) mod tests {
         let project = Project::test(fs, [], cx).await;
         let connection: Rc<dyn AgentConnection> = Rc::new(StubAgentConnection::new());
 
+        let session_id = acp::SessionId::new("session-1");
         let (thread, conversation) = cx.update(|cx| {
             let thread =
                 create_test_acp_thread(None, "session-1", connection.clone(), project.clone(), cx);
@@ -6784,7 +6847,6 @@ pub(crate) mod tests {
         let _task2 = request_test_tool_authorization(&thread, "tc-2", "allow-2", cx);
 
         cx.read(|cx| {
-            let session_id = acp::SessionId::new("session-1");
             let (_, tool_call_id, _) = conversation
                 .read(cx)
                 .pending_tool_call(&session_id, cx)
@@ -6795,7 +6857,7 @@ pub(crate) mod tests {
         cx.update(|cx| {
             conversation.update(cx, |conversation, cx| {
                 conversation.authorize_tool_call(
-                    acp::SessionId::new("session-1"),
+                    session_id.clone(),
                     acp::ToolCallId::new("tc-1"),
                     SelectedPermissionOutcome::new(
                         acp::PermissionOptionId::new("allow-1"),
@@ -6809,7 +6871,6 @@ pub(crate) mod tests {
         cx.run_until_parked();
 
         cx.read(|cx| {
-            let session_id = acp::SessionId::new("session-1");
             let (_, tool_call_id, _) = conversation
                 .read(cx)
                 .pending_tool_call(&session_id, cx)
@@ -6820,7 +6881,7 @@ pub(crate) mod tests {
         cx.update(|cx| {
             conversation.update(cx, |conversation, cx| {
                 conversation.authorize_tool_call(
-                    acp::SessionId::new("session-1"),
+                    session_id.clone(),
                     acp::ToolCallId::new("tc-2"),
                     SelectedPermissionOutcome::new(
                         acp::PermissionOptionId::new("allow-2"),
@@ -6834,7 +6895,6 @@ pub(crate) mod tests {
         cx.run_until_parked();
 
         cx.read(|cx| {
-            let session_id = acp::SessionId::new("session-1");
             assert!(
                 conversation
                     .read(cx)
@@ -6853,6 +6913,8 @@ pub(crate) mod tests {
         let project = Project::test(fs, [], cx).await;
         let connection: Rc<dyn AgentConnection> = Rc::new(StubAgentConnection::new());
 
+        let parent_session_id = acp::SessionId::new("parent");
+        let subagent_session_id = acp::SessionId::new("subagent");
         let (parent_thread, subagent_thread, conversation) = cx.update(|cx| {
             let parent_thread =
                 create_test_acp_thread(None, "parent", connection.clone(), project.clone(), cx);
@@ -6880,24 +6942,22 @@ pub(crate) mod tests {
         // Querying with the subagent's session ID returns only the
         // subagent's own tool call (subagent path is scoped to its session)
         cx.read(|cx| {
-            let subagent_id = acp::SessionId::new("subagent");
-            let (session_id, tool_call_id, _) = conversation
+            let (returned_session_id, tool_call_id, _) = conversation
                 .read(cx)
-                .pending_tool_call(&subagent_id, cx)
+                .pending_tool_call(&subagent_session_id, cx)
                 .expect("Expected subagent's pending tool call");
-            assert_eq!(session_id, acp::SessionId::new("subagent"));
+            assert_eq!(returned_session_id, subagent_session_id);
             assert_eq!(tool_call_id, acp::ToolCallId::new("subagent-tc"));
         });
 
         // Querying with the parent's session ID returns the first pending
         // request in FIFO order across all sessions
         cx.read(|cx| {
-            let parent_id = acp::SessionId::new("parent");
-            let (session_id, tool_call_id, _) = conversation
+            let (returned_session_id, tool_call_id, _) = conversation
                 .read(cx)
-                .pending_tool_call(&parent_id, cx)
+                .pending_tool_call(&parent_session_id, cx)
                 .expect("Expected a pending tool call from parent query");
-            assert_eq!(session_id, acp::SessionId::new("parent"));
+            assert_eq!(returned_session_id, parent_session_id);
             assert_eq!(tool_call_id, acp::ToolCallId::new("parent-tc"));
         });
     }
@@ -6912,6 +6972,8 @@ pub(crate) mod tests {
         let project = Project::test(fs, [], cx).await;
         let connection: Rc<dyn AgentConnection> = Rc::new(StubAgentConnection::new());
 
+        let session_id_a = acp::SessionId::new("thread-a");
+        let session_id_b = acp::SessionId::new("thread-b");
         let (thread_a, thread_b, conversation) = cx.update(|cx| {
             let thread_a =
                 create_test_acp_thread(None, "thread-a", connection.clone(), project.clone(), cx);
@@ -6932,26 +6994,23 @@ pub(crate) mod tests {
         // Both threads are non-subagent, so pending_tool_call always returns
         // the first entry from permission_requests (FIFO across all sessions)
         cx.read(|cx| {
-            let session_a = acp::SessionId::new("thread-a");
-            let (session_id, tool_call_id, _) = conversation
+            let (returned_session_id, tool_call_id, _) = conversation
                 .read(cx)
-                .pending_tool_call(&session_a, cx)
+                .pending_tool_call(&session_id_a, cx)
                 .expect("Expected a pending tool call");
-            assert_eq!(session_id, acp::SessionId::new("thread-a"));
+            assert_eq!(returned_session_id, session_id_a);
             assert_eq!(tool_call_id, acp::ToolCallId::new("tc-a"));
         });
 
         // Querying with thread-b also returns thread-a's tool call,
         // because non-subagent queries always use permission_requests.first()
         cx.read(|cx| {
-            let session_b = acp::SessionId::new("thread-b");
-            let (session_id, tool_call_id, _) = conversation
+            let (returned_session_id, tool_call_id, _) = conversation
                 .read(cx)
-                .pending_tool_call(&session_b, cx)
+                .pending_tool_call(&session_id_b, cx)
                 .expect("Expected a pending tool call from thread-b query");
             assert_eq!(
-                session_id,
-                acp::SessionId::new("thread-a"),
+                returned_session_id, session_id_a,
                 "Non-subagent queries always return the first pending request in FIFO order"
             );
             assert_eq!(tool_call_id, acp::ToolCallId::new("tc-a"));
@@ -6961,7 +7020,7 @@ pub(crate) mod tests {
         cx.update(|cx| {
             conversation.update(cx, |conversation, cx| {
                 conversation.authorize_tool_call(
-                    acp::SessionId::new("thread-a"),
+                    session_id_a.clone(),
                     acp::ToolCallId::new("tc-a"),
                     SelectedPermissionOutcome::new(
                         acp::PermissionOptionId::new("allow-a"),
@@ -6975,12 +7034,11 @@ pub(crate) mod tests {
         cx.run_until_parked();
 
         cx.read(|cx| {
-            let session_b = acp::SessionId::new("thread-b");
-            let (session_id, tool_call_id, _) = conversation
+            let (returned_session_id, tool_call_id, _) = conversation
                 .read(cx)
-                .pending_tool_call(&session_b, cx)
+                .pending_tool_call(&session_id_b, cx)
                 .expect("Expected thread-b's tool call after thread-a's was authorized");
-            assert_eq!(session_id, acp::SessionId::new("thread-b"));
+            assert_eq!(returned_session_id, session_id_b);
             assert_eq!(tool_call_id, acp::ToolCallId::new("tc-b"));
         });
     }
@@ -7092,6 +7150,7 @@ pub(crate) mod tests {
                     None,
                     None,
                     None,
+                    None,
                     workspace.downgrade(),
                     project,
                     Some(thread_store),
@@ -7183,12 +7242,12 @@ pub(crate) mod tests {
                     !connected.connection.supports_close_session(),
                     "StubAgentConnection should not support close"
                 );
-                let session_id = connected
+                let thread_view = connected
                     .threads
-                    .keys()
+                    .values()
                     .next()
-                    .expect("Should have at least one thread")
-                    .clone();
+                    .expect("Should have at least one thread");
+                let session_id = thread_view.read(cx).thread.read(cx).session_id().clone();
                 connected.connection.clone().close_session(&session_id, cx)
             })
             .await;
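The FIFO dispatch semantics asserted by the `pending_tool_call` tests above can be sketched as a standalone model. Everything below is a hypothetical simplification — `PendingQueue` and the `is_subagent` flag are illustrative stand-ins, not the real `Conversation` type:

```rust
use std::collections::VecDeque;

// Toy permission queue: (session_id, tool_call_id) pairs, oldest first.
struct PendingQueue {
    requests: VecDeque<(String, String)>,
}

impl PendingQueue {
    // Subagent queries are scoped to their own session; non-subagent (root)
    // queries return the first pending request in FIFO order across all
    // sessions, mirroring `permission_requests.first()` in the tests above.
    fn pending_tool_call(&self, session_id: &str, is_subagent: bool) -> Option<(&str, &str)> {
        let found = if is_subagent {
            self.requests.iter().find(|(sid, _)| sid == session_id)
        } else {
            self.requests.front()
        };
        found.map(|(sid, tc)| (sid.as_str(), tc.as_str()))
    }
}

fn main() {
    let queue = PendingQueue {
        requests: VecDeque::from([
            ("parent".to_string(), "parent-tc".to_string()),
            ("subagent".to_string(), "subagent-tc".to_string()),
        ]),
    };
    // Scoped: a subagent query sees only its own pending call.
    assert_eq!(
        queue.pending_tool_call("subagent", true),
        Some(("subagent", "subagent-tc"))
    );
    // Unscoped: a root query returns the overall FIFO head,
    // regardless of which session ID it passes.
    assert_eq!(
        queue.pending_tool_call("subagent", false),
        Some(("parent", "parent-tc"))
    );
}
```

This is why the `thread-b` test expects `tc-a` back until `tc-a` is authorized: root queries always drain the shared queue in arrival order.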

crates/agent_ui/src/conversation_view/thread_view.rs

@@ -10,6 +10,7 @@ use editor::actions::OpenExcerpts;
 
 use crate::StartThreadIn;
 use crate::message_editor::SharedSessionCapabilities;
+
 use gpui::{Corner, List};
 use heapless::Vec as ArrayVec;
 use language_model::{LanguageModelEffortLevel, Speed};
@@ -262,8 +263,8 @@ impl PermissionSelection {
 }
 
 pub struct ThreadView {
-    pub id: acp::SessionId,
-    pub parent_id: Option<acp::SessionId>,
+    pub session_id: acp::SessionId,
+    pub parent_session_id: Option<acp::SessionId>,
     pub thread: Entity<AcpThread>,
     pub(crate) conversation: Entity<super::Conversation>,
     pub server_view: WeakEntity<ConversationView>,
@@ -294,7 +295,7 @@ pub struct ThreadView {
     pub expanded_thinking_blocks: HashSet<(usize, usize)>,
     auto_expanded_thinking_block: Option<(usize, usize)>,
     user_toggled_thinking_blocks: HashSet<(usize, usize)>,
-    pub subagent_scroll_handles: RefCell<HashMap<agent_client_protocol::SessionId, ScrollHandle>>,
+    pub subagent_scroll_handles: RefCell<HashMap<acp::SessionId, ScrollHandle>>,
     pub edits_expanded: bool,
     pub plan_expanded: bool,
     pub queue_expanded: bool,
@@ -337,7 +338,7 @@ pub struct ThreadView {
 }
 impl Focusable for ThreadView {
     fn focus_handle(&self, cx: &App) -> FocusHandle {
-        if self.parent_id.is_some() {
+        if self.parent_session_id.is_some() {
             self.focus_handle.clone()
         } else {
             self.active_editor(cx).focus_handle(cx)
@@ -357,7 +358,6 @@ pub struct TurnFields {
 
 impl ThreadView {
     pub(crate) fn new(
-        parent_id: Option<acp::SessionId>,
         thread: Entity<AcpThread>,
         conversation: Entity<super::Conversation>,
         server_view: WeakEntity<ConversationView>,
@@ -383,7 +383,8 @@ impl ThreadView {
         window: &mut Window,
         cx: &mut Context<Self>,
     ) -> Self {
-        let id = thread.read(cx).session_id().clone();
+        let session_id = thread.read(cx).session_id().clone();
+        let parent_session_id = thread.read(cx).parent_session_id().cloned();
 
         let has_commands = !session_capabilities.read().available_commands().is_empty();
         let placeholder = placeholder_text(agent_display_name.as_ref(), has_commands);
@@ -507,8 +508,8 @@ impl ThreadView {
             .unwrap_or_default();
 
         let mut this = Self {
-            id,
-            parent_id,
+            session_id,
+            parent_session_id,
             focus_handle: cx.focus_handle(),
             thread,
             conversation,
@@ -644,6 +645,10 @@ impl ThreadView {
         }
     }
 
+    pub fn is_draft(&self, cx: &App) -> bool {
+        self.thread.read(cx).entries().is_empty()
+    }
+
     pub(crate) fn as_native_connection(
         &self,
         cx: &App,
@@ -692,7 +697,7 @@ impl ThreadView {
     }
 
     fn is_subagent(&self) -> bool {
-        self.parent_id.is_some()
+        self.parent_session_id.is_some()
     }
 
     /// Returns the currently active editor, either for a message that is being
@@ -1739,8 +1744,9 @@ impl ThreadView {
         window: &mut Window,
         cx: &mut Context<Self>,
     ) -> Option<()> {
+        let session_id = self.thread.read(cx).session_id().clone();
         self.conversation.update(cx, |conversation, cx| {
-            conversation.authorize_pending_tool_call(&self.id, kind, cx)
+            conversation.authorize_pending_tool_call(&session_id, kind, cx)
         })?;
         if self.should_be_following {
             self.workspace
@@ -1780,8 +1786,9 @@ impl ThreadView {
             _ => acp::PermissionOptionKind::AllowOnce,
         };
 
+        let session_id = self.thread.read(cx).session_id().clone();
         self.authorize_tool_call(
-            self.id.clone(),
+            session_id,
             tool_call_id,
             SelectedPermissionOutcome::new(option_id, option_kind),
             window,
@@ -1859,10 +1866,20 @@ impl ThreadView {
         window: &mut Window,
         cx: &mut Context<Self>,
     ) -> Option<()> {
-        let (session_id, tool_call_id, options) =
-            self.conversation.read(cx).pending_tool_call(&self.id, cx)?;
+        let session_id = self.thread.read(cx).session_id().clone();
+        let (returned_session_id, tool_call_id, options) = self
+            .conversation
+            .read(cx)
+            .pending_tool_call(&session_id, cx)?;
         let options = options.clone();
-        self.authorize_with_granularity(session_id, tool_call_id, &options, is_allow, window, cx)
+        self.authorize_with_granularity(
+            returned_session_id,
+            tool_call_id,
+            &options,
+            is_allow,
+            window,
+            cx,
+        )
     }
 
     fn authorize_with_granularity(
@@ -2548,6 +2565,35 @@ impl ThreadView {
             )
     }
 
+    fn collect_subagent_items_for_sessions(
+        entries: &[AgentThreadEntry],
+        awaiting_session_ids: &[acp::SessionId],
+        cx: &App,
+    ) -> Vec<(SharedString, usize)> {
+        let tool_calls_by_session: HashMap<_, _> = entries
+            .iter()
+            .enumerate()
+            .filter_map(|(entry_ix, entry)| {
+                let AgentThreadEntry::ToolCall(tool_call) = entry else {
+                    return None;
+                };
+                let info = tool_call.subagent_session_info.as_ref()?;
+                let summary_text = tool_call.label.read(cx).source().to_string();
+                let subagent_summary = if summary_text.is_empty() {
+                    SharedString::from("Subagent")
+                } else {
+                    SharedString::from(summary_text)
+                };
+                Some((info.session_id.clone(), (subagent_summary, entry_ix)))
+            })
+            .collect();
+
+        awaiting_session_ids
+            .iter()
+            .filter_map(|session_id| tool_calls_by_session.get(session_id).cloned())
+            .collect()
+    }
+
     fn render_subagents_awaiting_permission(&self, cx: &Context<Self>) -> Option<AnyElement> {
         let awaiting = self.conversation.read(cx).subagents_awaiting_permission(cx);
 
@@ -2555,30 +2601,15 @@ impl ThreadView {
             return None;
         }
 
+        let awaiting_session_ids: Vec<_> = awaiting
+            .iter()
+            .map(|(session_id, _)| session_id.clone())
+            .collect();
+
         let thread = self.thread.read(cx);
         let entries = thread.entries();
-        let mut subagent_items: Vec<(SharedString, usize)> = Vec::new();
-
-        for (session_id, _) in &awaiting {
-            for (entry_ix, entry) in entries.iter().enumerate() {
-                if let AgentThreadEntry::ToolCall(tool_call) = entry {
-                    if let Some(info) = &tool_call.subagent_session_info {
-                        if &info.session_id == session_id {
-                            let subagent_summary: SharedString = {
-                                let summary_text = tool_call.label.read(cx).source().to_string();
-                                if !summary_text.is_empty() {
-                                    summary_text.into()
-                                } else {
-                                    "Subagent".into()
-                                }
-                            };
-                            subagent_items.push((subagent_summary, entry_ix));
-                            break;
-                        }
-                    }
-                }
-            }
-        }
+        let subagent_items =
+            Self::collect_subagent_items_for_sessions(entries, &awaiting_session_ids, cx);
 
         if subagent_items.is_empty() {
             return None;
@@ -3092,7 +3123,7 @@ impl ThreadView {
     }
 
     fn is_subagent_canceled_or_failed(&self, cx: &App) -> bool {
-        let Some(parent_session_id) = self.parent_id.as_ref() else {
+        let Some(parent_session_id) = self.parent_session_id.as_ref() else {
             return false;
         };
 
@@ -3100,7 +3131,7 @@ impl ThreadView {
 
         self.server_view
             .upgrade()
-            .and_then(|sv| sv.read(cx).thread_view(parent_session_id))
+            .and_then(|sv| sv.read(cx).thread_view(parent_session_id, cx))
             .is_some_and(|parent_view| {
                 parent_view
                     .read(cx)
@@ -3119,9 +3150,10 @@ impl ThreadView {
     }
 
     pub(crate) fn render_subagent_titlebar(&mut self, cx: &mut Context<Self>) -> Option<Div> {
-        let Some(parent_session_id) = self.parent_id.clone() else {
+        if self.parent_session_id.is_none() {
             return None;
-        };
+        }
+        let parent_session_id = self.thread.read(cx).parent_session_id()?.clone();
 
         let server_view = self.server_view.clone();
         let thread = self.thread.clone();
@@ -3189,7 +3221,7 @@ impl ThreadView {
                                         .tooltip(Tooltip::text("Minimize Subagent"))
                                         .on_click(move |_, window, cx| {
                                             let _ = server_view.update(cx, |server_view, cx| {
-                                                server_view.navigate_to_session(
+                                                server_view.navigate_to_thread(
                                                     parent_session_id.clone(),
                                                     window,
                                                     cx,
@@ -4690,7 +4722,7 @@ impl ThreadView {
             }
             AgentThreadEntry::ToolCall(tool_call) => self
                 .render_any_tool_call(
-                    &self.id,
+                    self.thread.read(cx).session_id(),
                     entry_ix,
                     tool_call,
                     &self.focus_handle(cx),
@@ -6124,7 +6156,7 @@ impl ThreadView {
             .when_some(confirmation_options, |this, options| {
                 let is_first = self.is_first_tool_call(active_session_id, &tool_call.id, cx);
                 this.child(self.render_permission_buttons(
-                    self.id.clone(),
+                    self.thread.read(cx).session_id().clone(),
                     is_first,
                     options,
                     entry_ix,
@@ -6146,7 +6178,8 @@ impl ThreadView {
             .read(cx)
             .pending_tool_call(active_session_id, cx)
             .map_or(false, |(pending_session_id, pending_tool_call_id, _)| {
-                self.id == pending_session_id && tool_call_id == &pending_tool_call_id
+                self.thread.read(cx).session_id() == &pending_session_id
+                    && tool_call_id == &pending_tool_call_id
             })
     }
 
@@ -6358,7 +6391,7 @@ impl ThreadView {
                         )
                     })
                     .child(self.render_permission_buttons(
-                        self.id.clone(),
+                        self.thread.read(cx).session_id().clone(),
                         self.is_first_tool_call(active_session_id, &tool_call.id, cx),
                         options,
                         entry_ix,
@@ -7117,10 +7150,10 @@ impl ThreadView {
                     })
                     .label_size(LabelSize::Small)
                     .on_click(cx.listener({
-                        let session_id = session_id.clone();
                         let tool_call_id = tool_call_id.clone();
                         let option_id = option.option_id.clone();
                         let option_kind = option.kind;
+                        let session_id = session_id.clone();
                         move |this, _, window, cx| {
                             this.authorize_tool_call(
                                 session_id.clone(),
@@ -7673,11 +7706,11 @@ impl ThreadView {
         window: &Window,
         cx: &Context<Self>,
     ) -> Div {
-        let subagent_thread_view = subagent_session_id.and_then(|id| {
+        let subagent_thread_view = subagent_session_id.and_then(|session_id| {
             self.server_view
                 .upgrade()
                 .and_then(|server_view| server_view.read(cx).as_connected())
-                .and_then(|connected| connected.threads.get(&id))
+                .and_then(|connected| connected.threads.get(&session_id))
         });
 
         let content = self.render_subagent_card(
@@ -7714,12 +7747,11 @@ impl ThreadView {
             .map(|log| log.read(cx).changed_buffers(cx))
             .unwrap_or_default();
 
-        let is_pending_tool_call = thread
+        let is_pending_tool_call = thread_view
             .as_ref()
-            .and_then(|thread| {
-                self.conversation
-                    .read(cx)
-                    .pending_tool_call(thread.read(cx).session_id(), cx)
+            .and_then(|tv| {
+                let sid = tv.read(cx).thread.read(cx).session_id();
+                self.conversation.read(cx).pending_tool_call(sid, cx)
             })
             .is_some();
 
@@ -7945,12 +7977,13 @@ impl ThreadView {
             )
             .when_some(thread_view, |this, thread_view| {
                 let thread = &thread_view.read(cx).thread;
+                let tv_session_id = thread.read(cx).session_id();
                 let pending_tool_call = self
                     .conversation
                     .read(cx)
-                    .pending_tool_call(thread.read(cx).session_id(), cx);
+                    .pending_tool_call(tv_session_id, cx);
 
-                let session_id = thread.read(cx).session_id().clone();
+                let nav_session_id = tv_session_id.clone();
 
                 let fullscreen_toggle = h_flex()
                     .id(entry_ix)
@@ -7972,7 +8005,7 @@ impl ThreadView {
                         telemetry::event!("Subagent Maximized");
                         this.server_view
                             .update(cx, |this, cx| {
-                                this.navigate_to_session(session_id.clone(), window, cx);
+                                this.navigate_to_thread(nav_session_id.clone(), window, cx);
                             })
                             .ok();
                     }));
@@ -8069,7 +8102,7 @@ impl ThreadView {
         let scroll_handle = self
             .subagent_scroll_handles
             .borrow_mut()
-            .entry(session_id.clone())
+            .entry(subagent_view.session_id.clone())
             .or_default()
             .clone();
 
@@ -8869,15 +8902,15 @@ impl Render for ThreadView {
             .key_context("AcpThread")
             .track_focus(&self.focus_handle)
             .on_action(cx.listener(|this, _: &menu::Cancel, _, cx| {
-                if this.parent_id.is_none() {
+                if this.parent_session_id.is_none() {
                     this.cancel_generation(cx);
                 }
             }))
             .on_action(cx.listener(|this, _: &workspace::GoBack, window, cx| {
-                if let Some(parent_session_id) = this.parent_id.clone() {
+                if let Some(parent_session_id) = this.thread.read(cx).parent_session_id().cloned() {
                     this.server_view
                         .update(cx, |view, cx| {
-                            view.navigate_to_session(parent_session_id, window, cx);
+                            view.navigate_to_thread(parent_session_id, window, cx);
                         })
                         .ok();
                 }

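The `ThreadView::is_draft` method added above encodes the PR's core idea: a draft is simply a thread with no entries yet, so no separate draft identity is needed. A hypothetical miniature of that state transition (simplified types, not the real `AcpThread`):

```rust
// Illustrative stand-in for a thread entry; the real type has many variants.
enum ThreadEntry {
    UserMessage(String),
}

struct Thread {
    entries: Vec<ThreadEntry>,
}

impl Thread {
    // Mirrors the diff's check: a thread is a draft exactly until
    // its first entry lands.
    fn is_draft(&self) -> bool {
        self.entries.is_empty()
    }

    fn send(&mut self, text: &str) {
        self.entries.push(ThreadEntry::UserMessage(text.to_string()));
    }
}

fn main() {
    let mut thread = Thread { entries: Vec::new() };
    assert!(thread.is_draft()); // no identity beyond the thread itself
    thread.send("hello");
    assert!(!thread.is_draft()); // first message promotes it out of draft state
}
```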
crates/agent_ui/src/test_support.rs 🔗

@@ -73,6 +73,9 @@ pub fn init_test(cx: &mut TestAppContext) {
     cx.update(|cx| {
         let settings_store = SettingsStore::test(cx);
         cx.set_global(settings_store);
+        cx.set_global(acp_thread::StubSessionCounter(
+            std::sync::atomic::AtomicUsize::new(0),
+        ));
         theme_settings::init(theme::LoadThemes::JustBase, cx);
         editor::init(cx);
         release_channel::init("0.0.0".parse().unwrap(), cx);
@@ -128,3 +131,10 @@ pub fn active_session_id(panel: &Entity<AgentPanel>, cx: &VisualTestContext) ->
         thread.read(cx).session_id().clone()
     })
 }
+
+pub fn active_thread_id(
+    panel: &Entity<AgentPanel>,
+    cx: &VisualTestContext,
+) -> crate::thread_metadata_store::ThreadId {
+    panel.read_with(cx, |panel, cx| panel.active_thread_id(cx).unwrap())
+}

crates/agent_ui/src/thread_history_view.rs 🔗

@@ -20,7 +20,7 @@ pub(crate) fn thread_title(entry: &AgentSessionInfo) -> SharedString {
     entry
         .title
         .clone()
-        .and_then(|title| if title.is_empty() { None } else { Some(title) })
+        .filter(|title| !title.is_empty())
         .unwrap_or_else(|| DEFAULT_THREAD_TITLE.into())
 }
 

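The `thread_title` fallback above can be exercised in isolation. This sketch substitutes `String` for `SharedString`; note that `Option::filter` and the expanded `and_then` form shown in the hunk behave identically for this predicate:

```rust
// Sketch of the title fallback; DEFAULT_THREAD_TITLE's real value is an
// assumption here.
const DEFAULT_THREAD_TITLE: &str = "New Thread";

fn thread_title(title: Option<String>) -> String {
    title
        // Treat an empty title the same as a missing one.
        .filter(|title| !title.is_empty())
        .unwrap_or_else(|| DEFAULT_THREAD_TITLE.to_string())
}

fn main() {
    assert_eq!(thread_title(Some("Fix bug".into())), "Fix bug");
    assert_eq!(thread_title(Some(String::new())), "New Thread");
    assert_eq!(thread_title(None), "New Thread");
}
```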
crates/agent_ui/src/thread_import.rs 🔗

@@ -23,7 +23,7 @@ use workspace::{ModalView, MultiWorkspace, Workspace};
 use crate::{
     Agent, AgentPanel,
     agent_connection_store::AgentConnectionStore,
-    thread_metadata_store::{ThreadMetadata, ThreadMetadataStore, ThreadWorktreePaths},
+    thread_metadata_store::{ThreadId, ThreadMetadata, ThreadMetadataStore, WorktreePaths},
 };
 
 pub struct AcpThreadImportOnboarding;
@@ -206,10 +206,11 @@ impl ThreadImportModal {
             .filter(|agent_id| !self.unchecked_agents.contains(agent_id))
             .collect::<Vec<_>>();
 
-        let existing_sessions = ThreadMetadataStore::global(cx)
+        let existing_sessions: HashSet<acp::SessionId> = ThreadMetadataStore::global(cx)
             .read(cx)
-            .entry_ids()
-            .collect::<HashSet<_>>();
+            .entries()
+            .filter_map(|m| m.session_id.clone())
+            .collect();
 
         let task = find_threads_to_import(agent_ids, existing_sessions, stores, cx);
         cx.spawn(async move |this, cx| {
@@ -520,14 +521,13 @@ fn collect_importable_threads(
                 continue;
             };
             to_insert.push(ThreadMetadata {
-                session_id: session.session_id,
+                thread_id: ThreadId::new(),
+                session_id: Some(session.session_id),
                 agent_id: agent_id.clone(),
-                title: session
-                    .title
-                    .unwrap_or_else(|| crate::DEFAULT_THREAD_TITLE.into()),
+                title: session.title,
                 updated_at: session.updated_at.unwrap_or_else(|| Utc::now()),
                 created_at: session.created_at,
-                worktree_paths: ThreadWorktreePaths::from_folder_paths(&folder_paths),
+                worktree_paths: WorktreePaths::from_folder_paths(&folder_paths),
                 remote_connection: remote_connection.clone(),
                 archived: true,
             });
@@ -584,8 +584,8 @@ mod tests {
         let result = collect_importable_threads(sessions_by_agent, existing);
 
         assert_eq!(result.len(), 1);
-        assert_eq!(result[0].session_id.0.as_ref(), "new-1");
-        assert_eq!(result[0].title.as_ref(), "Brand New");
+        assert_eq!(result[0].session_id.as_ref().unwrap().0.as_ref(), "new-1");
+        assert_eq!(result[0].display_title(), "Brand New");
     }
 
     #[test]
@@ -605,7 +605,10 @@ mod tests {
         let result = collect_importable_threads(sessions_by_agent, existing);
 
         assert_eq!(result.len(), 1);
-        assert_eq!(result[0].session_id.0.as_ref(), "has-dirs");
+        assert_eq!(
+            result[0].session_id.as_ref().unwrap().0.as_ref(),
+            "has-dirs"
+        );
     }
 
     #[test]
@@ -657,11 +660,11 @@ mod tests {
         assert_eq!(result.len(), 2);
         let s1 = result
             .iter()
-            .find(|t| t.session_id.0.as_ref() == "s1")
+            .find(|t| t.session_id.as_ref().map(|s| s.0.as_ref()) == Some("s1"))
             .unwrap();
         let s2 = result
             .iter()
-            .find(|t| t.session_id.0.as_ref() == "s2")
+            .find(|t| t.session_id.as_ref().map(|s| s.0.as_ref()) == Some("s2"))
             .unwrap();
         assert_eq!(s1.agent_id.as_ref(), "agent-a");
         assert_eq!(s2.agent_id.as_ref(), "agent-b");
@@ -700,7 +703,10 @@ mod tests {
         let result = collect_importable_threads(sessions_by_agent, existing);
 
         assert_eq!(result.len(), 1);
-        assert_eq!(result[0].session_id.0.as_ref(), "shared-session");
+        assert_eq!(
+            result[0].session_id.as_ref().unwrap().0.as_ref(),
+            "shared-session"
+        );
         assert_eq!(
             result[0].agent_id.as_ref(),
             "agent-a",

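The thread_import changes above follow from the reshaped `ThreadMetadata`: identity is now an internal `ThreadId`, while the agent session id becomes optional, absent until the first message creates a real session. A minimal stand-in (field names mirror the diff; the types are deliberately simplified):

```rust
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct ThreadId(u64);

struct ThreadMetadata {
    thread_id: ThreadId,
    // None means the thread is still a draft: no agent session exists yet.
    session_id: Option<String>,
    title: Option<String>,
}

impl ThreadMetadata {
    fn is_draft(&self) -> bool {
        self.session_id.is_none()
    }

    fn display_title(&self) -> String {
        // Fall back to a default when the thread was never titled.
        self.title.clone().unwrap_or_else(|| "New Thread".to_string())
    }
}

fn main() {
    let draft = ThreadMetadata {
        thread_id: ThreadId(1),
        session_id: None,
        title: None,
    };
    assert!(draft.is_draft());
    assert_eq!(draft.display_title(), "New Thread");

    let sent = ThreadMetadata {
        thread_id: ThreadId(1),
        session_id: Some("sess-1".into()),
        title: Some("Refactor".into()),
    };
    assert!(!sent.is_draft());
    assert_eq!(sent.display_title(), "Refactor");
}
```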
crates/agent_ui/src/thread_metadata_store.rs 🔗

@@ -1,9 +1,5 @@
-use std::{
-    path::{Path, PathBuf},
-    sync::Arc,
-};
+use std::{path::PathBuf, sync::Arc};
 
-use acp_thread::AcpThreadEvent;
 use agent::{ThreadStore, ZED_AGENT_ID};
 use agent_client_protocol as acp;
 use anyhow::Context as _;
@@ -12,7 +8,9 @@ use collections::{HashMap, HashSet};
 use db::{
     kvp::KeyValueStore,
     sqlez::{
-        bindable::Column, domain::Domain, statement::Statement,
+        bindable::{Bind, Column},
+        domain::Domain,
+        statement::Statement,
         thread_safe_connection::ThreadSafeConnection,
     },
     sqlez_macros::sql,
@@ -21,6 +19,7 @@ use fs::Fs;
 use futures::{FutureExt, future::Shared};
 use gpui::{AppContext as _, Entity, Global, Subscription, Task};
 use project::AgentId;
+pub use project::WorktreePaths;
 use remote::RemoteConnectionOptions;
 use ui::{App, Context, SharedString};
 use util::ResultExt as _;
@@ -28,12 +27,36 @@ use workspace::{PathList, SerializedWorkspaceLocation, WorkspaceDb};
 
 use crate::DEFAULT_THREAD_TITLE;
 
+#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, serde::Serialize, serde::Deserialize)]
+pub struct ThreadId(uuid::Uuid);
+
+impl ThreadId {
+    pub fn new() -> Self {
+        Self(uuid::Uuid::new_v4())
+    }
+}
+
+impl Bind for ThreadId {
+    fn bind(&self, statement: &Statement, start_index: i32) -> anyhow::Result<i32> {
+        self.0.bind(statement, start_index)
+    }
+}
+
+impl Column for ThreadId {
+    fn column(statement: &mut Statement, start_index: i32) -> anyhow::Result<(Self, i32)> {
+        let (uuid, next) = Column::column(statement, start_index)?;
+        Ok((ThreadId(uuid), next))
+    }
+}
+
 const THREAD_REMOTE_CONNECTION_MIGRATION_KEY: &str = "thread-metadata-remote-connection-backfill";
+const THREAD_ID_MIGRATION_KEY: &str = "thread-metadata-thread-id-backfill";
 
 pub fn init(cx: &mut App) {
     ThreadMetadataStore::init_global(cx);
     let migration_task = migrate_thread_metadata(cx);
     migrate_thread_remote_connections(cx, migration_task);
+    migrate_thread_ids(cx);
 }
 
 /// Migrate existing thread metadata from native agent thread store to the new metadata storage.
@@ -45,26 +68,36 @@ fn migrate_thread_metadata(cx: &mut App) -> Task<anyhow::Result<()>> {
     let db = store.read(cx).db.clone();
 
     cx.spawn(async move |cx| {
-        let existing_entries = db.list_ids()?.into_iter().collect::<HashSet<_>>();
-
-        let is_first_migration = existing_entries.is_empty();
+        let existing_list = db.list()?;
+        let is_first_migration = existing_list.is_empty();
+        let existing_session_ids: HashSet<Arc<str>> = existing_list
+            .into_iter()
+            .filter_map(|m| m.session_id.map(|s| s.0))
+            .collect();
 
         let mut to_migrate = store.read_with(cx, |_store, cx| {
             ThreadStore::global(cx)
                 .read(cx)
                 .entries()
                 .filter_map(|entry| {
-                    if existing_entries.contains(&entry.id.0) {
+                    if existing_session_ids.contains(&entry.id.0) {
                         return None;
                     }
 
                     Some(ThreadMetadata {
-                        session_id: entry.id,
+                        thread_id: ThreadId::new(),
+                        session_id: Some(entry.id),
                         agent_id: ZED_AGENT_ID.clone(),
-                        title: entry.title,
+                        title: if entry.title.is_empty()
+                            || entry.title.as_ref() == DEFAULT_THREAD_TITLE
+                        {
+                            None
+                        } else {
+                            Some(entry.title)
+                        },
                         updated_at: entry.updated_at,
                         created_at: entry.created_at,
-                        worktree_paths: ThreadWorktreePaths::from_folder_paths(&entry.folder_paths),
+                        worktree_paths: WorktreePaths::from_folder_paths(&entry.folder_paths),
                         remote_connection: None,
                         archived: true,
                     })
@@ -191,153 +224,86 @@ fn migrate_thread_remote_connections(cx: &mut App, migration_task: Task<anyhow::
     .detach_and_log_err(cx);
 }
 
-struct GlobalThreadMetadataStore(Entity<ThreadMetadataStore>);
-impl Global for GlobalThreadMetadataStore {}
-
-/// Paired worktree paths for a thread. Each folder path has a corresponding
-/// main worktree path at the same position. The two lists are always the
-/// same length and are modified together via `add_path` / `remove_main_path`.
-///
-/// For non-linked worktrees, the main path and folder path are identical.
-/// For linked worktrees, the main path is the original repo and the folder
-/// path is the linked worktree location.
-///
-/// Internally stores two `PathList`s with matching insertion order so that
-/// `ordered_paths()` on both yields positionally-paired results.
-#[derive(Default, Debug, Clone)]
-pub struct ThreadWorktreePaths {
-    folder_paths: PathList,
-    main_worktree_paths: PathList,
-}
-
-impl PartialEq for ThreadWorktreePaths {
-    fn eq(&self, other: &Self) -> bool {
-        self.folder_paths == other.folder_paths
-            && self.main_worktree_paths == other.main_worktree_paths
-    }
-}
+fn migrate_thread_ids(cx: &mut App) {
+    let store = ThreadMetadataStore::global(cx);
+    let db = store.read(cx).db.clone();
+    let kvp = KeyValueStore::global(cx);
 
-impl ThreadWorktreePaths {
-    /// Build from a project's current state. Each visible worktree is paired
-    /// with its main repo path (resolved via git), falling back to the
-    /// worktree's own path if no git repo is found.
-    pub fn from_project(project: &project::Project, cx: &App) -> Self {
-        let (mains, folders): (Vec<PathBuf>, Vec<PathBuf>) = project
-            .visible_worktrees(cx)
-            .map(|worktree| {
-                let snapshot = worktree.read(cx).snapshot();
-                let folder_path = snapshot.abs_path().to_path_buf();
-                let main_path = snapshot
-                    .root_repo_common_dir()
-                    .and_then(|dir| Some(dir.parent()?.to_path_buf()))
-                    .unwrap_or_else(|| folder_path.clone());
-                (main_path, folder_path)
-            })
-            .unzip();
-        Self {
-            folder_paths: PathList::new(&folders),
-            main_worktree_paths: PathList::new(&mains),
+    cx.spawn(async move |cx| -> anyhow::Result<()> {
+        if kvp.read_kvp(THREAD_ID_MIGRATION_KEY)?.is_some() {
+            return Ok(());
         }
-    }
 
-    /// Build from two parallel `PathList`s that already share the same
-    /// insertion order. Used for deserialization from DB.
-    ///
-    /// Returns an error if the two lists have different lengths, which
-    /// indicates corrupted data from a prior migration bug.
-    pub fn from_path_lists(
-        main_worktree_paths: PathList,
-        folder_paths: PathList,
-    ) -> anyhow::Result<Self> {
-        anyhow::ensure!(
-            main_worktree_paths.paths().len() == folder_paths.paths().len(),
-            "main_worktree_paths has {} entries but folder_paths has {}",
-            main_worktree_paths.paths().len(),
-            folder_paths.paths().len(),
-        );
-        Ok(Self {
-            folder_paths,
-            main_worktree_paths,
-        })
-    }
-
-    /// Build for non-linked worktrees where main == folder for every path.
-    pub fn from_folder_paths(folder_paths: &PathList) -> Self {
-        Self {
-            folder_paths: folder_paths.clone(),
-            main_worktree_paths: folder_paths.clone(),
+        let mut reloaded = false;
+        for metadata in db.list()? {
+            db.save(metadata).await?;
+            reloaded = true;
         }
-    }
-
-    pub fn is_empty(&self) -> bool {
-        self.folder_paths.is_empty()
-    }
-
-    /// The folder paths (for workspace matching / `threads_by_paths` index).
-    pub fn folder_path_list(&self) -> &PathList {
-        &self.folder_paths
-    }
 
-    /// The main worktree paths (for group key / `threads_by_main_paths` index).
-    pub fn main_worktree_path_list(&self) -> &PathList {
-        &self.main_worktree_paths
-    }
-
-    /// Iterate the (main_worktree_path, folder_path) pairs in insertion order.
-    pub fn ordered_pairs(&self) -> impl Iterator<Item = (&PathBuf, &PathBuf)> {
-        self.main_worktree_paths
-            .ordered_paths()
-            .zip(self.folder_paths.ordered_paths())
-    }
+        let reloaded_task = reloaded
+            .then(|| store.update(cx, |store, cx| store.reload(cx)))
+            .unwrap_or_else(|| Task::ready(()).shared());
 
-    /// Add a new path pair. If the exact (main, folder) pair already exists,
-    /// this is a no-op. Rebuilds both internal `PathList`s to maintain
-    /// consistent ordering.
-    pub fn add_path(&mut self, main_path: &Path, folder_path: &Path) {
-        let already_exists = self
-            .ordered_pairs()
-            .any(|(m, f)| m.as_path() == main_path && f.as_path() == folder_path);
-        if already_exists {
-            return;
-        }
-        let (mut mains, mut folders): (Vec<PathBuf>, Vec<PathBuf>) = self
-            .ordered_pairs()
-            .map(|(m, f)| (m.clone(), f.clone()))
-            .unzip();
-        mains.push(main_path.to_path_buf());
-        folders.push(folder_path.to_path_buf());
-        self.main_worktree_paths = PathList::new(&mains);
-        self.folder_paths = PathList::new(&folders);
-    }
+        kvp.write_kvp(THREAD_ID_MIGRATION_KEY.to_string(), "1".to_string())
+            .await?;
+        reloaded_task.await;
 
-    /// Remove all pairs whose main worktree path matches the given path.
-    /// This removes the corresponding entries from both lists.
-    pub fn remove_main_path(&mut self, main_path: &Path) {
-        let (mains, folders): (Vec<PathBuf>, Vec<PathBuf>) = self
-            .ordered_pairs()
-            .filter(|(m, _)| m.as_path() != main_path)
-            .map(|(m, f)| (m.clone(), f.clone()))
-            .unzip();
-        self.main_worktree_paths = PathList::new(&mains);
-        self.folder_paths = PathList::new(&folders);
-    }
+        Ok(())
+    })
+    .detach_and_log_err(cx);
 }
 
+struct GlobalThreadMetadataStore(Entity<ThreadMetadataStore>);
+impl Global for GlobalThreadMetadataStore {}
+
 /// Lightweight metadata for any thread (native or ACP), enough to populate
 /// the sidebar list and route to the correct load path when clicked.
 #[derive(Debug, Clone, PartialEq)]
 pub struct ThreadMetadata {
-    pub session_id: acp::SessionId,
+    pub thread_id: ThreadId,
+    pub session_id: Option<acp::SessionId>,
     pub agent_id: AgentId,
-    pub title: SharedString,
+    pub title: Option<SharedString>,
     pub updated_at: DateTime<Utc>,
     pub created_at: Option<DateTime<Utc>>,
-    pub worktree_paths: ThreadWorktreePaths,
+    pub worktree_paths: WorktreePaths,
     pub remote_connection: Option<RemoteConnectionOptions>,
     pub archived: bool,
 }
 
 impl ThreadMetadata {
+    pub fn new_draft(
+        thread_id: ThreadId,
+        session_id: Option<acp::SessionId>,
+        agent_id: AgentId,
+        title: Option<SharedString>,
+        worktree_paths: WorktreePaths,
+        remote_connection: Option<RemoteConnectionOptions>,
+    ) -> Self {
+        let now = Utc::now();
+        Self {
+            thread_id,
+            session_id,
+            agent_id,
+            title,
+            updated_at: now,
+            created_at: Some(now),
+            worktree_paths: worktree_paths.clone(),
+            remote_connection,
+            archived: worktree_paths.is_empty(),
+        }
+    }
+
+    pub fn is_draft(&self) -> bool {
+        self.session_id.is_none()
+    }
+
+    pub fn display_title(&self) -> SharedString {
+        self.title
+            .clone()
+            .unwrap_or_else(|| crate::DEFAULT_THREAD_TITLE.into())
+    }
+
     pub fn folder_paths(&self) -> &PathList {
         self.worktree_paths.folder_path_list()
     }
@@ -348,10 +314,14 @@ impl ThreadMetadata {
 
 impl From<&ThreadMetadata> for acp_thread::AgentSessionInfo {
     fn from(meta: &ThreadMetadata) -> Self {
+        let session_id = meta
+            .session_id
+            .clone()
+            .unwrap_or_else(|| acp::SessionId::new(meta.thread_id.0.to_string()));
         Self {
-            session_id: meta.session_id.clone(),
+            session_id,
             work_dirs: Some(meta.folder_paths().clone()),
-            title: Some(meta.title.clone()),
+            title: meta.title.clone(),
             updated_at: Some(meta.updated_at),
             created_at: meta.created_at,
             meta: None,
@@ -403,34 +373,57 @@ pub struct ArchivedGitWorktree {
 
 /// The store holds all metadata needed to show threads in the sidebar and the archive.
 ///
-/// Automatically listens to AcpThread events and updates metadata if it has changed.
+/// Listens to ConversationView events and updates metadata when the root thread changes.
 pub struct ThreadMetadataStore {
     db: ThreadMetadataDb,
-    threads: HashMap<acp::SessionId, ThreadMetadata>,
-    threads_by_paths: HashMap<PathList, HashSet<acp::SessionId>>,
-    threads_by_main_paths: HashMap<PathList, HashSet<acp::SessionId>>,
+    threads: HashMap<ThreadId, ThreadMetadata>,
+    threads_by_paths: HashMap<PathList, HashSet<ThreadId>>,
+    threads_by_main_paths: HashMap<PathList, HashSet<ThreadId>>,
+    threads_by_session: HashMap<acp::SessionId, ThreadId>,
     reload_task: Option<Shared<Task<()>>>,
-    session_subscriptions: HashMap<acp::SessionId, Subscription>,
+    conversation_subscriptions: HashMap<gpui::EntityId, Subscription>,
     pending_thread_ops_tx: smol::channel::Sender<DbOperation>,
-    in_flight_archives: HashMap<acp::SessionId, (Task<()>, smol::channel::Sender<()>)>,
+    in_flight_archives: HashMap<ThreadId, (Task<()>, smol::channel::Sender<()>)>,
     _db_operations_task: Task<()>,
 }
 
 #[derive(Debug, PartialEq)]
 enum DbOperation {
     Upsert(ThreadMetadata),
-    Delete(acp::SessionId),
+    Delete(ThreadId),
 }
 
 impl DbOperation {
-    fn id(&self) -> &acp::SessionId {
+    fn id(&self) -> ThreadId {
         match self {
-            DbOperation::Upsert(thread) => &thread.session_id,
-            DbOperation::Delete(session_id) => session_id,
+            DbOperation::Upsert(thread) => thread.thread_id,
+            DbOperation::Delete(thread_id) => *thread_id,
         }
     }
 }
 
+/// Override for the test DB name used by `ThreadMetadataStore::init_global`.
+/// When set as a GPUI global, `init_global` uses this name instead of
+/// deriving one from the thread name. This prevents data from leaking
+/// across proptest cases that share a thread name.
+#[cfg(any(test, feature = "test-support"))]
+pub struct TestMetadataDbName(pub String);
+#[cfg(any(test, feature = "test-support"))]
+impl gpui::Global for TestMetadataDbName {}
+
+#[cfg(any(test, feature = "test-support"))]
+impl TestMetadataDbName {
+    pub fn global(cx: &App) -> String {
+        cx.try_global::<Self>()
+            .map(|g| g.0.clone())
+            .unwrap_or_else(|| {
+                let thread = std::thread::current();
+                let test_name = thread.name().unwrap_or("unknown_test");
+                format!("THREAD_METADATA_DB_{}", test_name)
+            })
+    }
+}
+
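The fallback above derives the DB name from the current thread's name, which is exactly why the override exists: every case of a property test runs on the same named thread, so they would all share one database. A minimal sketch of that naming logic, with a plain `Option<&str>` standing in for the real GPUI global (illustrative only):

```rust
// Sketch of the test-DB naming fallback. `override_name` plays the role of
// the `TestMetadataDbName` GPUI global; everything else is the same shape
// as the code above.
fn test_db_name(override_name: Option<&str>) -> String {
    match override_name {
        Some(name) => name.to_string(),
        None => {
            let thread = std::thread::current();
            let test_name = thread.name().unwrap_or("unknown_test");
            format!("THREAD_METADATA_DB_{}", test_name)
        }
    }
}

fn main() {
    // Without an override, repeated calls on the same thread yield the same
    // name -- which is how proptest cases end up sharing a database.
    assert_eq!(test_db_name(None), test_db_name(None));
    assert!(test_db_name(None).starts_with("THREAD_METADATA_DB_"));

    // A per-case override restores isolation.
    assert_eq!(test_db_name(Some("case_42")), "case_42");
}
```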
 impl ThreadMetadataStore {
     #[cfg(not(any(test, feature = "test-support")))]
     pub fn init_global(cx: &mut App) {
@@ -445,9 +438,7 @@ impl ThreadMetadataStore {
 
     #[cfg(any(test, feature = "test-support"))]
     pub fn init_global(cx: &mut App) {
-        let thread = std::thread::current();
-        let test_name = thread.name().unwrap_or("unknown_test");
-        let db_name = format!("THREAD_METADATA_DB_{}", test_name);
+        let db_name = TestMetadataDbName::global(cx);
         let db = smol::block_on(db::open_test_db::<ThreadMetadataDb>(&db_name));
         let thread_store = cx.new(|cx| Self::new(ThreadMetadataDb(db), cx));
         cx.set_global(GlobalThreadMetadataStore(thread_store));
@@ -467,13 +458,19 @@ impl ThreadMetadataStore {
     }
 
     /// Returns all thread IDs.
-    pub fn entry_ids(&self) -> impl Iterator<Item = acp::SessionId> + '_ {
-        self.threads.keys().cloned()
+    pub fn entry_ids(&self) -> impl Iterator<Item = ThreadId> + '_ {
+        self.threads.keys().copied()
     }
 
     /// Returns the metadata for a specific thread, if it exists.
-    pub fn entry(&self, session_id: &acp::SessionId) -> Option<&ThreadMetadata> {
-        self.threads.get(session_id)
+    pub fn entry(&self, thread_id: ThreadId) -> Option<&ThreadMetadata> {
+        self.threads.get(&thread_id)
+    }
+
+    /// Returns the metadata for a thread identified by its ACP session ID.
+    pub fn entry_by_session(&self, session_id: &acp::SessionId) -> Option<&ThreadMetadata> {
+        let thread_id = self.threads_by_session.get(session_id)?;
+        self.threads.get(thread_id)
     }
 
     /// Returns all threads.
@@ -531,19 +528,23 @@ impl ThreadMetadataStore {
                     this.threads.clear();
                     this.threads_by_paths.clear();
                     this.threads_by_main_paths.clear();
+                    this.threads_by_session.clear();
 
                     for row in rows {
+                        if let Some(sid) = &row.session_id {
+                            this.threads_by_session.insert(sid.clone(), row.thread_id);
+                        }
                         this.threads_by_paths
                             .entry(row.folder_paths().clone())
                             .or_default()
-                            .insert(row.session_id.clone());
+                            .insert(row.thread_id);
                         if !row.main_worktree_paths().is_empty() {
                             this.threads_by_main_paths
                                 .entry(row.main_worktree_paths().clone())
                                 .or_default()
-                                .insert(row.session_id.clone());
+                                .insert(row.thread_id);
                         }
-                        this.threads.insert(row.session_id.clone(), row);
+                        this.threads.insert(row.thread_id, row);
                     }
 
                     cx.notify();
@@ -573,37 +574,41 @@ impl ThreadMetadataStore {
     }
 
     fn save_internal(&mut self, metadata: ThreadMetadata) {
-        if let Some(thread) = self.threads.get(&metadata.session_id) {
+        if let Some(thread) = self.threads.get(&metadata.thread_id) {
             if thread.folder_paths() != metadata.folder_paths() {
-                if let Some(session_ids) = self.threads_by_paths.get_mut(thread.folder_paths()) {
-                    session_ids.remove(&metadata.session_id);
+                if let Some(thread_ids) = self.threads_by_paths.get_mut(thread.folder_paths()) {
+                    thread_ids.remove(&metadata.thread_id);
                 }
             }
             if thread.main_worktree_paths() != metadata.main_worktree_paths()
                 && !thread.main_worktree_paths().is_empty()
             {
-                if let Some(session_ids) = self
+                if let Some(thread_ids) = self
                     .threads_by_main_paths
                     .get_mut(thread.main_worktree_paths())
                 {
-                    session_ids.remove(&metadata.session_id);
+                    thread_ids.remove(&metadata.thread_id);
                 }
             }
         }
 
-        self.threads
-            .insert(metadata.session_id.clone(), metadata.clone());
+        if let Some(sid) = &metadata.session_id {
+            self.threads_by_session
+                .insert(sid.clone(), metadata.thread_id);
+        }
+
+        self.threads.insert(metadata.thread_id, metadata.clone());
 
         self.threads_by_paths
             .entry(metadata.folder_paths().clone())
             .or_default()
-            .insert(metadata.session_id.clone());
+            .insert(metadata.thread_id);
 
         if !metadata.main_worktree_paths().is_empty() {
             self.threads_by_main_paths
                 .entry(metadata.main_worktree_paths().clone())
                 .or_default()
-                .insert(metadata.session_id.clone());
+                .insert(metadata.thread_id);
         }
 
         self.pending_thread_ops_tx
@@ -613,44 +618,69 @@ impl ThreadMetadataStore {
 
     pub fn update_working_directories(
         &mut self,
-        session_id: &acp::SessionId,
+        thread_id: ThreadId,
         work_dirs: PathList,
         cx: &mut Context<Self>,
     ) {
-        if let Some(thread) = self.threads.get(session_id) {
+        if let Some(thread) = self.threads.get(&thread_id) {
             self.save_internal(ThreadMetadata {
-                worktree_paths: ThreadWorktreePaths::from_path_lists(
+                worktree_paths: WorktreePaths::from_path_lists(
                     thread.main_worktree_paths().clone(),
                     work_dirs.clone(),
                 )
-                .unwrap_or_else(|_| ThreadWorktreePaths::from_folder_paths(&work_dirs)),
+                .unwrap_or_else(|_| WorktreePaths::from_folder_paths(&work_dirs)),
+                ..thread.clone()
+            });
+            cx.notify();
+        }
+    }
+
+    pub fn update_worktree_paths(
+        &mut self,
+        thread_ids: &[ThreadId],
+        worktree_paths: WorktreePaths,
+        cx: &mut Context<Self>,
+    ) {
+        let mut changed = false;
+        for &thread_id in thread_ids {
+            let Some(thread) = self.threads.get(&thread_id) else {
+                continue;
+            };
+            if thread.worktree_paths == worktree_paths {
+                continue;
+            }
+            self.save_internal(ThreadMetadata {
+                worktree_paths: worktree_paths.clone(),
                 ..thread.clone()
             });
+            changed = true;
+        }
+        if changed {
             cx.notify();
         }
     }
 
     pub fn archive(
         &mut self,
-        session_id: &acp::SessionId,
+        thread_id: ThreadId,
         archive_job: Option<(Task<()>, smol::channel::Sender<()>)>,
         cx: &mut Context<Self>,
     ) {
-        self.update_archived(session_id, true, cx);
+        self.update_archived(thread_id, true, cx);
 
         if let Some(job) = archive_job {
-            self.in_flight_archives.insert(session_id.clone(), job);
+            self.in_flight_archives.insert(thread_id, job);
         }
     }
 
-    pub fn unarchive(&mut self, session_id: &acp::SessionId, cx: &mut Context<Self>) {
-        self.update_archived(session_id, false, cx);
+    pub fn unarchive(&mut self, thread_id: ThreadId, cx: &mut Context<Self>) {
+        self.update_archived(thread_id, false, cx);
         // Dropping the Sender triggers cancellation in the background task.
-        self.in_flight_archives.remove(session_id);
+        self.in_flight_archives.remove(&thread_id);
     }
 
-    pub fn cleanup_completed_archive(&mut self, session_id: &acp::SessionId) {
-        self.in_flight_archives.remove(session_id);
+    pub fn cleanup_completed_archive(&mut self, thread_id: ThreadId) {
+        self.in_flight_archives.remove(&thread_id);
     }
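As the comment in `unarchive` notes, `in_flight_archives` relies on channel-drop semantics for cancellation: the background job holds the `Receiver`, and removing the map entry drops the `Sender`. A self-contained sketch of the same pattern, using `std::sync::mpsc` and a plain thread in place of the real smol channel and GPUI task (names here are illustrative):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Drop-to-cancel: the job checks its cancellation channel between work
// steps; once the Sender is dropped, recv_timeout returns Disconnected
// and the job bails out.
fn spawn_cancellable_job() -> (thread::JoinHandle<&'static str>, mpsc::Sender<()>) {
    let (cancel_tx, cancel_rx) = mpsc::channel::<()>();
    let handle = thread::spawn(move || {
        for _ in 0..50 {
            match cancel_rx.recv_timeout(Duration::from_millis(10)) {
                // Sender dropped: the job was cancelled.
                Err(mpsc::RecvTimeoutError::Disconnected) => return "cancelled",
                // Timeout means the sender is still alive: keep working.
                _ => {}
            }
        }
        "completed"
    });
    (handle, cancel_tx)
}

fn main() {
    let (handle, cancel_tx) = spawn_cancellable_job();
    drop(cancel_tx); // what removing the in_flight_archives entry does
    assert_eq!(handle.join().unwrap(), "cancelled");
}
```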
 
     /// Updates a thread's `folder_paths` after an archived worktree has been
@@ -659,11 +689,11 @@ impl ThreadMetadataStore {
     /// `path_replacements` is applied to the thread's stored folder paths.
     pub fn update_restored_worktree_paths(
         &mut self,
-        session_id: &acp::SessionId,
+        thread_id: ThreadId,
         path_replacements: &[(PathBuf, PathBuf)],
         cx: &mut Context<Self>,
     ) {
-        if let Some(thread) = self.threads.get(session_id).cloned() {
+        if let Some(thread) = self.threads.get(&thread_id).cloned() {
             let mut paths: Vec<PathBuf> = thread.folder_paths().paths().to_vec();
             for (old_path, new_path) in path_replacements {
                 if let Some(pos) = paths.iter().position(|p| p == old_path) {
@@ -672,11 +702,11 @@ impl ThreadMetadataStore {
             }
             let new_folder_paths = PathList::new(&paths);
             self.save_internal(ThreadMetadata {
-                worktree_paths: ThreadWorktreePaths::from_path_lists(
+                worktree_paths: WorktreePaths::from_path_lists(
                     thread.main_worktree_paths().clone(),
                     new_folder_paths.clone(),
                 )
-                .unwrap_or_else(|_| ThreadWorktreePaths::from_folder_paths(&new_folder_paths)),
+                .unwrap_or_else(|_| WorktreePaths::from_folder_paths(&new_folder_paths)),
                 ..thread
             });
             cx.notify();
@@ -685,11 +715,11 @@ impl ThreadMetadataStore {
 
     pub fn complete_worktree_restore(
         &mut self,
-        session_id: &acp::SessionId,
+        thread_id: ThreadId,
         path_replacements: &[(PathBuf, PathBuf)],
         cx: &mut Context<Self>,
     ) {
-        if let Some(thread) = self.threads.get(session_id).cloned() {
+        if let Some(thread) = self.threads.get(&thread_id).cloned() {
             let mut paths: Vec<PathBuf> = thread.folder_paths().paths().to_vec();
             for (old_path, new_path) in path_replacements {
                 for path in &mut paths {
@@ -700,11 +730,11 @@ impl ThreadMetadataStore {
             }
             let new_folder_paths = PathList::new(&paths);
             self.save_internal(ThreadMetadata {
-                worktree_paths: ThreadWorktreePaths::from_path_lists(
+                worktree_paths: WorktreePaths::from_path_lists(
                     thread.main_worktree_paths().clone(),
                     new_folder_paths.clone(),
                 )
-                .unwrap_or_else(|_| ThreadWorktreePaths::from_folder_paths(&new_folder_paths)),
+                .unwrap_or_else(|_| WorktreePaths::from_folder_paths(&new_folder_paths)),
                 ..thread
             });
             cx.notify();
@@ -712,35 +742,87 @@ impl ThreadMetadataStore {
     }
 
     /// Apply a mutation to the worktree paths of all threads whose current
-    /// `main_worktree_paths` matches `current_main_paths`, then re-index.
+    /// `folder_paths` matches `current_folder_paths`, then re-index.
+    /// When `remote_connection` is provided, only threads with a matching
+    /// remote connection are affected.
     pub fn change_worktree_paths(
+        &mut self,
+        current_folder_paths: &PathList,
+        remote_connection: Option<&RemoteConnectionOptions>,
+        mutate: impl Fn(&mut WorktreePaths),
+        cx: &mut Context<Self>,
+    ) {
+        let thread_ids: Vec<_> = self
+            .threads_by_paths
+            .get(current_folder_paths)
+            .into_iter()
+            .flatten()
+            .filter(|id| {
+                remote_connection.is_none()
+                    || self
+                        .threads
+                        .get(id)
+                        .and_then(|t| t.remote_connection.as_ref())
+                        == remote_connection
+            })
+            .copied()
+            .collect();
+
+        self.mutate_thread_paths(&thread_ids, mutate, cx);
+    }
+
+    /// Like `change_worktree_paths`, but looks up threads by their
+    /// `main_worktree_paths` instead of `folder_paths`. Used when
+/// migrating threads across project-group-key changes, where the
+/// lookup key is the group key's main paths.
+    /// When `remote_connection` is provided, only threads with a matching
+    /// remote connection are affected.
+    pub fn change_worktree_paths_by_main(
         &mut self,
         current_main_paths: &PathList,
-        mutate: impl Fn(&mut ThreadWorktreePaths),
+        remote_connection: Option<&RemoteConnectionOptions>,
+        mutate: impl Fn(&mut WorktreePaths),
         cx: &mut Context<Self>,
     ) {
-        let session_ids: Vec<_> = self
+        let thread_ids: Vec<_> = self
             .threads_by_main_paths
             .get(current_main_paths)
             .into_iter()
             .flatten()
-            .cloned()
+            .filter(|id| {
+                remote_connection.is_none()
+                    || self
+                        .threads
+                        .get(id)
+                        .and_then(|t| t.remote_connection.as_ref())
+                        == remote_connection
+            })
+            .copied()
             .collect();
 
-        if session_ids.is_empty() {
+        self.mutate_thread_paths(&thread_ids, mutate, cx);
+    }
+
+    fn mutate_thread_paths(
+        &mut self,
+        thread_ids: &[ThreadId],
+        mutate: impl Fn(&mut WorktreePaths),
+        cx: &mut Context<Self>,
+    ) {
+        if thread_ids.is_empty() {
             return;
         }
 
-        for session_id in &session_ids {
-            if let Some(thread) = self.threads.get_mut(session_id) {
+        for thread_id in thread_ids {
+            if let Some(thread) = self.threads.get_mut(thread_id) {
                 if let Some(ids) = self
                     .threads_by_main_paths
                     .get_mut(thread.main_worktree_paths())
                 {
-                    ids.remove(session_id);
+                    ids.remove(thread_id);
                 }
                 if let Some(ids) = self.threads_by_paths.get_mut(thread.folder_paths()) {
-                    ids.remove(session_id);
+                    ids.remove(thread_id);
                 }
 
                 mutate(&mut thread.worktree_paths);
@@ -748,11 +830,11 @@ impl ThreadMetadataStore {
                 self.threads_by_main_paths
                     .entry(thread.main_worktree_paths().clone())
                     .or_default()
-                    .insert(session_id.clone());
+                    .insert(*thread_id);
                 self.threads_by_paths
                     .entry(thread.folder_paths().clone())
                     .or_default()
-                    .insert(session_id.clone());
+                    .insert(*thread_id);
 
                 self.pending_thread_ops_tx
                     .try_send(DbOperation::Upsert(thread.clone()))
@@ -789,24 +871,24 @@ impl ThreadMetadataStore {
 
     pub fn link_thread_to_archived_worktree(
         &self,
-        session_id: String,
+        thread_id: ThreadId,
         archived_worktree_id: i64,
         cx: &App,
     ) -> Task<anyhow::Result<()>> {
         let db = self.db.clone();
         cx.background_spawn(async move {
-            db.link_thread_to_archived_worktree(session_id, archived_worktree_id)
+            db.link_thread_to_archived_worktree(thread_id, archived_worktree_id)
                 .await
         })
     }
 
     pub fn get_archived_worktrees_for_thread(
         &self,
-        session_id: String,
+        thread_id: ThreadId,
         cx: &App,
     ) -> Task<anyhow::Result<Vec<ArchivedGitWorktree>>> {
         let db = self.db.clone();
-        cx.background_spawn(async move { db.get_archived_worktrees_for_thread(session_id).await })
+        cx.background_spawn(async move { db.get_archived_worktrees_for_thread(thread_id).await })
     }
 
     pub fn delete_archived_worktree(&self, id: i64, cx: &App) -> Task<anyhow::Result<()>> {
@@ -816,12 +898,12 @@ impl ThreadMetadataStore {
 
     pub fn unlink_thread_from_all_archived_worktrees(
         &self,
-        session_id: String,
+        thread_id: ThreadId,
         cx: &App,
     ) -> Task<anyhow::Result<()>> {
         let db = self.db.clone();
         cx.background_spawn(async move {
-            db.unlink_thread_from_all_archived_worktrees(session_id)
+            db.unlink_thread_from_all_archived_worktrees(thread_id)
                 .await
         })
     }
@@ -838,13 +920,8 @@ impl ThreadMetadataStore {
         })
     }
 
-    fn update_archived(
-        &mut self,
-        session_id: &acp::SessionId,
-        archived: bool,
-        cx: &mut Context<Self>,
-    ) {
-        if let Some(thread) = self.threads.get(session_id) {
+    fn update_archived(&mut self, thread_id: ThreadId, archived: bool, cx: &mut Context<Self>) {
+        if let Some(thread) = self.threads.get(&thread_id) {
             self.save_internal(ThreadMetadata {
                 archived,
                 ..thread.clone()
@@ -853,23 +930,26 @@ impl ThreadMetadataStore {
         }
     }
 
-    pub fn delete(&mut self, session_id: acp::SessionId, cx: &mut Context<Self>) {
-        if let Some(thread) = self.threads.get(&session_id) {
-            if let Some(session_ids) = self.threads_by_paths.get_mut(thread.folder_paths()) {
-                session_ids.remove(&session_id);
+    pub fn delete(&mut self, thread_id: ThreadId, cx: &mut Context<Self>) {
+        if let Some(thread) = self.threads.get(&thread_id) {
+            if let Some(sid) = &thread.session_id {
+                self.threads_by_session.remove(sid);
+            }
+            if let Some(thread_ids) = self.threads_by_paths.get_mut(thread.folder_paths()) {
+                thread_ids.remove(&thread_id);
             }
             if !thread.main_worktree_paths().is_empty() {
-                if let Some(session_ids) = self
+                if let Some(thread_ids) = self
                     .threads_by_main_paths
                     .get_mut(thread.main_worktree_paths())
                 {
-                    session_ids.remove(&session_id);
+                    thread_ids.remove(&thread_id);
                 }
             }
         }
-        self.threads.remove(&session_id);
+        self.threads.remove(&thread_id);
         self.pending_thread_ops_tx
-            .try_send(DbOperation::Delete(session_id))
+            .try_send(DbOperation::Delete(thread_id))
             .log_err();
         cx.notify();
     }
@@ -877,21 +957,16 @@ impl ThreadMetadataStore {
     fn new(db: ThreadMetadataDb, cx: &mut Context<Self>) -> Self {
         let weak_store = cx.weak_entity();
 
-        cx.observe_new::<acp_thread::AcpThread>(move |thread, _window, cx| {
-            // Don't track subagent threads in the sidebar.
-            if thread.parent_session_id().is_some() {
-                return;
-            }
-
-            let thread_entity = cx.entity();
+        cx.observe_new::<crate::ConversationView>(move |_view, _window, cx| {
+            let view_entity = cx.entity();
+            let entity_id = view_entity.entity_id();
 
             cx.on_release({
                 let weak_store = weak_store.clone();
-                move |thread, cx| {
+                move |_view, cx| {
                     weak_store
                         .update(cx, |store, _cx| {
-                            let session_id = thread.session_id().clone();
-                            store.session_subscriptions.remove(&session_id);
+                            store.conversation_subscriptions.remove(&entity_id);
                         })
                         .ok();
                 }
@@ -900,9 +975,9 @@ impl ThreadMetadataStore {
 
             weak_store
                 .update(cx, |this, cx| {
-                    let subscription = cx.subscribe(&thread_entity, Self::handle_thread_event);
-                    this.session_subscriptions
-                        .insert(thread.session_id().clone(), subscription);
+                    let subscription = cx.subscribe(&view_entity, Self::handle_conversation_event);
+                    this.conversation_subscriptions
+                        .insert(entity_id, subscription);
                 })
                 .ok();
         })
@@ -923,8 +998,8 @@ impl ThreadMetadataStore {
                             DbOperation::Upsert(metadata) => {
                                 db.save(metadata).await.log_err();
                             }
-                            DbOperation::Delete(session_id) => {
-                                db.delete(session_id).await.log_err();
+                            DbOperation::Delete(thread_id) => {
+                                db.delete(thread_id).await.log_err();
                             }
                         }
                     }
@@ -937,8 +1012,9 @@ impl ThreadMetadataStore {
             threads: HashMap::default(),
             threads_by_paths: HashMap::default(),
             threads_by_main_paths: HashMap::default(),
+            threads_by_session: HashMap::default(),
             reload_task: None,
-            session_subscriptions: HashMap::default(),
+            conversation_subscriptions: HashMap::default(),
             pending_thread_ops_tx: tx,
             in_flight_archives: HashMap::default(),
             _db_operations_task,
@@ -950,91 +1026,69 @@ impl ThreadMetadataStore {
     fn dedup_db_operations(operations: Vec<DbOperation>) -> Vec<DbOperation> {
         let mut ops = HashMap::default();
         for operation in operations.into_iter().rev() {
-            if ops.contains_key(operation.id()) {
+            if ops.contains_key(&operation.id()) {
                 continue;
             }
-            ops.insert(operation.id().clone(), operation);
+            ops.insert(operation.id(), operation);
         }
         ops.into_values().collect()
     }
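`dedup_db_operations` walks the queued writes newest-first and keeps only the first occurrence of each id, so the most recent upsert or delete per thread wins. A simplified, runnable sketch of that coalescing (the `Op` enum is a stand-in for the real `DbOperation`, with a `(u64, &str)` pair in place of full `ThreadMetadata`):

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, PartialEq)]
enum Op {
    Upsert(u64, &'static str),
    Delete(u64),
}

impl Op {
    fn id(&self) -> u64 {
        match self {
            Op::Upsert(id, _) => *id,
            Op::Delete(id) => *id,
        }
    }
}

// Same shape as dedup_db_operations: iterate in reverse (newest first) and
// keep only the first operation seen per id, i.e. the latest one queued.
fn dedup(ops: Vec<Op>) -> Vec<Op> {
    let mut latest: HashMap<u64, Op> = HashMap::new();
    for op in ops.into_iter().rev() {
        latest.entry(op.id()).or_insert(op);
    }
    latest.into_values().collect()
}

fn main() {
    let ops = vec![
        Op::Upsert(1, "draft"),
        Op::Upsert(1, "titled"),
        Op::Delete(2),
    ];
    let deduped = dedup(ops);
    // The earlier Upsert(1, "draft") is superseded by Upsert(1, "titled").
    assert_eq!(deduped.len(), 2);
    assert!(deduped.contains(&Op::Upsert(1, "titled")));
    assert!(deduped.contains(&Op::Delete(2)));
}
```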
 
-    fn handle_thread_event(
+    fn handle_conversation_event(
         &mut self,
-        thread: Entity<acp_thread::AcpThread>,
-        event: &AcpThreadEvent,
+        conversation_view: Entity<crate::ConversationView>,
+        _event: &crate::conversation_view::RootThreadUpdated,
         cx: &mut Context<Self>,
     ) {
-        // Don't track subagent threads in the sidebar.
-        if thread.read(cx).parent_session_id().is_some() {
+        let view = conversation_view.read(cx);
+        let thread_id = view.thread_id;
+        let Some(thread) = view.root_acp_thread(cx) else {
+            return;
+        };
+
+        let thread_ref = thread.read(cx);
+        if thread_ref.entries().is_empty() {
             return;
         }
 
-        match event {
-            AcpThreadEvent::NewEntry
-            | AcpThreadEvent::TitleUpdated
-            | AcpThreadEvent::EntryUpdated(_)
-            | AcpThreadEvent::EntriesRemoved(_)
-            | AcpThreadEvent::ToolAuthorizationRequested(_)
-            | AcpThreadEvent::ToolAuthorizationReceived(_)
-            | AcpThreadEvent::Retry(_)
-            | AcpThreadEvent::Stopped(_)
-            | AcpThreadEvent::Error
-            | AcpThreadEvent::LoadError(_)
-            | AcpThreadEvent::Refusal
-            | AcpThreadEvent::WorkingDirectoriesUpdated => {
-                let thread_ref = thread.read(cx);
-                if thread_ref.entries().is_empty() {
-                    return;
-                }
+        let existing_thread = self.entry(thread_id);
+        let session_id = Some(thread_ref.session_id().clone());
+        let title = thread_ref.title();
 
-                let existing_thread = self.threads.get(thread_ref.session_id());
-                let session_id = thread_ref.session_id().clone();
-                let title = thread_ref
-                    .title()
-                    .unwrap_or_else(|| DEFAULT_THREAD_TITLE.into());
-
-                let updated_at = Utc::now();
-
-                let created_at = existing_thread
-                    .and_then(|t| t.created_at)
-                    .unwrap_or_else(|| updated_at);
-
-                let agent_id = thread_ref.connection().agent_id();
-
-                let project = thread_ref.project().read(cx);
-                let worktree_paths = ThreadWorktreePaths::from_project(project, cx);
-
-                let project_group_key = project.project_group_key(cx);
-                let remote_connection = project_group_key.host();
-
-                // Threads without a folder path (e.g. started in an empty
-                // window) are archived by default so they don't get lost,
-                // because they won't show up in the sidebar. Users can reload
-                // them from the archive.
-                let archived = existing_thread
-                    .map(|t| t.archived)
-                    .unwrap_or(worktree_paths.is_empty());
-
-                let metadata = ThreadMetadata {
-                    session_id,
-                    agent_id,
-                    title,
-                    created_at: Some(created_at),
-                    updated_at,
-                    worktree_paths,
-                    remote_connection,
-                    archived,
-                };
+        let updated_at = Utc::now();
 
-                self.save(metadata, cx);
-            }
-            AcpThreadEvent::TokenUsageUpdated
-            | AcpThreadEvent::SubagentSpawned(_)
-            | AcpThreadEvent::PromptCapabilitiesUpdated
-            | AcpThreadEvent::AvailableCommandsUpdated(_)
-            | AcpThreadEvent::ModeUpdated(_)
-            | AcpThreadEvent::ConfigOptionsUpdated(_) => {}
-        }
+        let created_at = existing_thread
+            .and_then(|t| t.created_at)
+            .unwrap_or_else(|| updated_at);
+
+        let agent_id = thread_ref.connection().agent_id();
+
+        let project = thread_ref.project().read(cx);
+        let worktree_paths = project.worktree_paths(cx);
+
+        let remote_connection = project.remote_connection_options(cx);
+
+        // Threads without a folder path (e.g. started in an empty
+        // window) are archived by default so they don't get lost,
+        // because they won't show up in the sidebar. Users can reload
+        // them from the archive.
+        let archived = existing_thread
+            .map(|t| t.archived)
+            .unwrap_or(worktree_paths.is_empty());
+
+        let metadata = ThreadMetadata {
+            thread_id,
+            session_id,
+            agent_id,
+            title,
+            created_at: Some(created_at),
+            updated_at,
+            worktree_paths,
+            remote_connection,
+            archived,
+        };
+
+        self.save(metadata, cx);
     }
 }
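The new `threads_by_session` map is a secondary index over the primary `ThreadId`-keyed map, and the store has to keep it in sync on every save and delete — a draft gains a session id only after its first message. A minimal sketch of that invariant, with `u64` and `String` standing in for `ThreadId` and `acp::SessionId`:

```rust
use std::collections::HashMap;

// Primary map keyed by thread id, plus an optional session-id -> thread-id
// index. Both save() and delete() must touch the index, mirroring
// save_internal and delete above.
#[derive(Default)]
struct Store {
    threads: HashMap<u64, Option<String>>, // thread_id -> session_id
    by_session: HashMap<String, u64>,      // session_id -> thread_id
}

impl Store {
    fn save(&mut self, thread_id: u64, session_id: Option<String>) {
        if let Some(sid) = &session_id {
            self.by_session.insert(sid.clone(), thread_id);
        }
        self.threads.insert(thread_id, session_id);
    }

    fn delete(&mut self, thread_id: u64) {
        // Removing the thread must also clear its index entry, if any.
        if let Some(Some(sid)) = self.threads.remove(&thread_id) {
            self.by_session.remove(&sid);
        }
    }

    fn entry_by_session(&self, sid: &str) -> Option<u64> {
        self.by_session.get(sid).copied()
    }
}

fn main() {
    let mut store = Store::default();
    store.save(1, None); // a draft: no session yet
    assert_eq!(store.entry_by_session("s-1"), None);

    store.save(1, Some("s-1".into())); // first message assigns a session
    assert_eq!(store.entry_by_session("s-1"), Some(1));

    store.delete(1); // delete clears the index too
    assert_eq!(store.entry_by_session("s-1"), None);
}
```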
 

crates/agent_ui/src/thread_worktree_archive.rs

@@ -3,7 +3,6 @@ use std::{
     sync::Arc,
 };
 
-use agent_client_protocol as acp;
 use anyhow::{Context as _, Result, anyhow};
 use gpui::{App, AsyncApp, Entity, Task};
 use project::{
@@ -13,7 +12,7 @@ use project::{
 use util::ResultExt;
 use workspace::{AppState, MultiWorkspace, Workspace};
 
-use crate::thread_metadata_store::{ArchivedGitWorktree, ThreadMetadataStore};
+use crate::thread_metadata_store::{ArchivedGitWorktree, ThreadId, ThreadMetadataStore};
 
 /// The plan for archiving a single git worktree root.
 ///
@@ -170,14 +169,14 @@ pub fn build_root_plan(
 /// references `path` in its folder paths. Used to determine whether a
 /// worktree can safely be removed from disk.
 pub fn path_is_referenced_by_other_unarchived_threads(
-    current_session_id: &acp::SessionId,
+    current_thread_id: ThreadId,
     path: &Path,
     cx: &App,
 ) -> bool {
     ThreadMetadataStore::global(cx)
         .read(cx)
         .entries()
-        .filter(|thread| thread.session_id != *current_session_id)
+        .filter(|thread| thread.thread_id != current_thread_id)
         .filter(|thread| !thread.archived)
         .any(|thread| {
             thread
@@ -412,7 +411,7 @@ pub async fn persist_worktree_state(root: &RootPlan, cx: &mut AsyncApp) -> Resul
     };
 
     // Link all threads on this worktree to the archived record
-    let session_ids: Vec<acp::SessionId> = store.read_with(cx, |store, _cx| {
+    let thread_ids: Vec<ThreadId> = store.read_with(cx, |store, _cx| {
         store
             .entries()
             .filter(|thread| {
@@ -422,18 +421,14 @@ pub async fn persist_worktree_state(root: &RootPlan, cx: &mut AsyncApp) -> Resul
                     .iter()
                     .any(|p| p.as_path() == root.root_path)
             })
-            .map(|thread| thread.session_id.clone())
+            .map(|thread| thread.thread_id)
             .collect()
     });
 
-    for session_id in &session_ids {
+    for thread_id in &thread_ids {
         let link_result = store
             .read_with(cx, |store, cx| {
-                store.link_thread_to_archived_worktree(
-                    session_id.0.to_string(),
-                    archived_worktree_id,
-                    cx,
-                )
+                store.link_thread_to_archived_worktree(*thread_id, archived_worktree_id, cx)
             })
             .await;
         if let Err(error) = link_result {
@@ -636,21 +631,18 @@ pub async fn cleanup_archived_worktree_record(row: &ArchivedGitWorktree, cx: &mu
 /// This unlinks the thread from all its archived worktrees and, for any
 /// archived worktree that is no longer referenced by any other thread,
 /// deletes the git ref and DB records.
-pub async fn cleanup_thread_archived_worktrees(session_id: &acp::SessionId, cx: &mut AsyncApp) {
+pub async fn cleanup_thread_archived_worktrees(thread_id: ThreadId, cx: &mut AsyncApp) {
     let store = cx.update(|cx| ThreadMetadataStore::global(cx));
 
     let archived_worktrees = store
         .read_with(cx, |store, cx| {
-            store.get_archived_worktrees_for_thread(session_id.0.to_string(), cx)
+            store.get_archived_worktrees_for_thread(thread_id, cx)
         })
         .await;
     let archived_worktrees = match archived_worktrees {
         Ok(rows) => rows,
         Err(error) => {
-            log::error!(
-                "Failed to fetch archived worktrees for thread {}: {error:#}",
-                session_id.0
-            );
+            log::error!("Failed to fetch archived worktrees for thread {thread_id:?}: {error:#}");
             return;
         }
     };
@@ -661,14 +653,11 @@ pub async fn cleanup_thread_archived_worktrees(session_id: &acp::SessionId, cx:
 
     if let Err(error) = store
         .read_with(cx, |store, cx| {
-            store.unlink_thread_from_all_archived_worktrees(session_id.0.to_string(), cx)
+            store.unlink_thread_from_all_archived_worktrees(thread_id, cx)
         })
         .await
     {
-        log::error!(
-            "Failed to unlink thread {} from archived worktrees: {error:#}",
-            session_id.0
-        );
+        log::error!("Failed to unlink thread {thread_id:?} from archived worktrees: {error:#}");
         return;
     }
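
The hunks above swap string-backed `acp::SessionId` keys for a `Copy` `ThreadId`, which is why the `.clone()` and `.0.to_string()` calls disappear from every call site. A minimal sketch of that pattern, using a hypothetical `ThreadId(u64)` stand-in (the real newtype lives in `thread_metadata_store.rs` and may differ):

```rust
// Hypothetical stand-in for the ThreadId newtype: a Copy wrapper
// around an integer key instead of a cloneable string session id.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct ThreadId(u64);

// Mirrors path_is_referenced_by_other_unarchived_threads: because
// ThreadId is Copy, filtering and collecting ids needs no .clone().
fn other_unarchived_threads(all: &[(ThreadId, bool)], current: ThreadId) -> Vec<ThreadId> {
    all.iter()
        .filter(|(id, archived)| *id != current && !archived)
        .map(|(id, _)| *id) // a plain copy, not session_id.clone()
        .collect()
}

fn main() {
    let threads = vec![
        (ThreadId(1), false),
        (ThreadId(2), true), // archived, excluded
        (ThreadId(3), false),
    ];
    assert_eq!(
        other_unarchived_threads(&threads, ThreadId(1)),
        vec![ThreadId(3)]
    );
    println!("ok");
}
```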
 

crates/agent_ui/src/threads_archive_view.rs

@@ -3,8 +3,8 @@ use std::sync::Arc;
 
 use crate::agent_connection_store::AgentConnectionStore;
 
-use crate::thread_metadata_store::{ThreadMetadata, ThreadMetadataStore};
-use crate::{Agent, RemoveSelectedThread};
+use crate::thread_metadata_store::{ThreadId, ThreadMetadata, ThreadMetadataStore};
+use crate::{Agent, DEFAULT_THREAD_TITLE, RemoveSelectedThread};
 
 use agent::ThreadStore;
 use agent_client_protocol as acp;
@@ -113,7 +113,7 @@ fn fuzzy_match_positions(query: &str, text: &str) -> Option<Vec<usize>> {
 pub enum ThreadsArchiveViewEvent {
     Close,
     Unarchive { thread: ThreadMetadata },
-    CancelRestore { session_id: acp::SessionId },
+    CancelRestore { thread_id: ThreadId },
 }
 
 impl EventEmitter<ThreadsArchiveViewEvent> for ThreadsArchiveView {}
@@ -132,7 +132,7 @@ pub struct ThreadsArchiveView {
     workspace: WeakEntity<Workspace>,
     agent_connection_store: WeakEntity<AgentConnectionStore>,
     agent_server_store: WeakEntity<AgentServerStore>,
-    restoring: HashSet<acp::SessionId>,
+    restoring: HashSet<ThreadId>,
 }
 
 impl ThreadsArchiveView {
@@ -216,13 +216,13 @@ impl ThreadsArchiveView {
         self.selection = None;
     }
 
-    pub fn mark_restoring(&mut self, session_id: &acp::SessionId, cx: &mut Context<Self>) {
-        self.restoring.insert(session_id.clone());
+    pub fn mark_restoring(&mut self, thread_id: &ThreadId, cx: &mut Context<Self>) {
+        self.restoring.insert(*thread_id);
         cx.notify();
     }
 
-    pub fn clear_restoring(&mut self, session_id: &acp::SessionId, cx: &mut Context<Self>) {
-        self.restoring.remove(session_id);
+    pub fn clear_restoring(&mut self, thread_id: &ThreadId, cx: &mut Context<Self>) {
+        self.restoring.remove(thread_id);
         cx.notify();
     }
 
@@ -255,7 +255,14 @@ impl ThreadsArchiveView {
 
         for session in sessions {
             let highlight_positions = if !query.is_empty() {
-                match fuzzy_match_positions(&query, &session.title) {
+                match fuzzy_match_positions(
+                    &query,
+                    session
+                        .title
+                        .as_ref()
+                        .map(|t| t.as_ref())
+                        .unwrap_or(DEFAULT_THREAD_TITLE),
+                ) {
                     Some(positions) => positions,
                     None => continue,
                 }
@@ -336,7 +343,7 @@ impl ThreadsArchiveView {
         window: &mut Window,
         cx: &mut Context<Self>,
     ) {
-        if self.restoring.contains(&thread.session_id) {
+        if self.restoring.contains(&thread.thread_id) {
             return;
         }
 
@@ -345,7 +352,7 @@ impl ThreadsArchiveView {
             return;
         }
 
-        self.mark_restoring(&thread.session_id, cx);
+        self.mark_restoring(&thread.thread_id, cx);
         self.selection = None;
         self.reset_filter_editor_text(window, cx);
         cx.emit(ThreadsArchiveViewEvent::Unarchive { thread });
@@ -528,9 +535,9 @@ impl ThreadsArchiveView {
                     IconName::Sparkle
                 };
 
-                let is_restoring = self.restoring.contains(&thread.session_id);
+                let is_restoring = self.restoring.contains(&thread.thread_id);
 
-                let base = ThreadItem::new(id, thread.title.clone())
+                let base = ThreadItem::new(id, thread.display_title())
                     .icon(icon)
                     .when_some(icon_from_external_svg, |this, svg| {
                         this.custom_icon_from_external_svg(svg)
@@ -557,11 +564,11 @@ impl ThreadsArchiveView {
                                 .icon_color(Color::Muted)
                                 .tooltip(Tooltip::text("Cancel Restore"))
                                 .on_click({
-                                    let session_id = thread.session_id.clone();
+                                    let thread_id = thread.thread_id;
                                     cx.listener(move |this, _, _, cx| {
-                                        this.clear_restoring(&session_id, cx);
+                                        this.clear_restoring(&thread_id, cx);
                                         cx.emit(ThreadsArchiveViewEvent::CancelRestore {
-                                            session_id: session_id.clone(),
+                                            thread_id,
                                         });
                                         cx.stop_propagation();
                                     })
@@ -586,10 +593,16 @@ impl ThreadsArchiveView {
                             })
                             .on_click({
                                 let agent = thread.agent_id.clone();
+                                let thread_id = thread.thread_id;
                                 let session_id = thread.session_id.clone();
                                 cx.listener(move |this, _, _, cx| {
                                     this.preserve_selection_on_next_update = true;
-                                    this.delete_thread(session_id.clone(), agent.clone(), cx);
+                                    this.delete_thread(
+                                        thread_id,
+                                        session_id.clone(),
+                                        agent.clone(),
+                                        cx,
+                                    );
                                     cx.stop_propagation();
                                 })
                             }),
@@ -619,17 +632,22 @@ impl ThreadsArchiveView {
         };
 
         self.preserve_selection_on_next_update = true;
-        self.delete_thread(thread.session_id.clone(), thread.agent_id.clone(), cx);
+        self.delete_thread(
+            thread.thread_id,
+            thread.session_id.clone(),
+            thread.agent_id.clone(),
+            cx,
+        );
     }
 
     fn delete_thread(
         &mut self,
-        session_id: acp::SessionId,
+        thread_id: ThreadId,
+        session_id: Option<acp::SessionId>,
         agent: AgentId,
         cx: &mut Context<Self>,
     ) {
-        ThreadMetadataStore::global(cx)
-            .update(cx, |store, cx| store.delete(session_id.clone(), cx));
+        ThreadMetadataStore::global(cx).update(cx, |store, cx| store.delete(thread_id, cx));
 
         let agent = Agent::from(agent);
 
@@ -645,13 +663,16 @@ impl ThreadsArchiveView {
                 .wait_for_connection()
         });
         cx.spawn(async move |_this, cx| {
-            crate::thread_worktree_archive::cleanup_thread_archived_worktrees(&session_id, cx)
-                .await;
+            crate::thread_worktree_archive::cleanup_thread_archived_worktrees(thread_id, cx).await;
 
             let state = task.await?;
             let task = cx.update(|cx| {
-                if let Some(list) = state.connection.session_list(cx) {
-                    list.delete_session(&session_id, cx)
+                if let Some(session_id) = &session_id {
+                    if let Some(list) = state.connection.session_list(cx) {
+                        list.delete_session(session_id, cx)
+                    } else {
+                        Task::ready(Ok(()))
+                    }
                 } else {
                     Task::ready(Ok(()))
                 }
@@ -929,9 +950,9 @@ impl ProjectPickerDelegate {
         cx: &mut Context<Picker<Self>>,
     ) {
         self.thread.worktree_paths =
-            super::thread_metadata_store::ThreadWorktreePaths::from_folder_paths(&paths);
+            super::thread_metadata_store::WorktreePaths::from_folder_paths(&paths);
         ThreadMetadataStore::global(cx).update(cx, |store, cx| {
-            store.update_working_directories(&self.thread.session_id, paths, cx);
+            store.update_working_directories(self.thread.thread_id, paths, cx);
         });
 
         self.archive_view
@@ -995,7 +1016,15 @@ impl PickerDelegate for ProjectPickerDelegate {
     type ListItem = AnyElement;
 
     fn placeholder_text(&self, _window: &mut Window, _cx: &mut App) -> Arc<str> {
-        format!("Associate the \"{}\" thread with...", self.thread.title).into()
+        format!(
+            "Associate the \"{}\" thread with...",
+            self.thread
+                .title
+                .as_ref()
+                .map(|t| t.as_ref())
+                .unwrap_or(DEFAULT_THREAD_TITLE)
+        )
+        .into()
     }
 
     fn render_editor(

crates/edit_prediction/src/license_detection.rs

@@ -319,7 +319,7 @@ impl LicenseDetectionWatcher {
                 }
                 worktree::Event::DeletedEntry(_)
                 | worktree::Event::UpdatedGitRepositories(_)
-                | worktree::Event::UpdatedRootRepoCommonDir
+                | worktree::Event::UpdatedRootRepoCommonDir { .. }
                 | worktree::Event::Deleted => {}
             });
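
This and the following `{ .. }` hunks exist because `UpdatedRootRepoCommonDir` changed from a unit variant to a struct variant, so every exhaustive match must now use the rest pattern. A small sketch of the mechanic (the `common_dir` field name is an assumption, not the actual payload):

```rust
// Sketch: once a unit variant gains fields, match arms that ignore it
// must spell `Variant { .. }` to keep the match exhaustive.
#[derive(Debug)]
enum WorktreeEvent {
    Deleted,
    UpdatedRootRepoCommonDir { common_dir: Option<String> }, // field is hypothetical
}

fn is_ignored(event: &WorktreeEvent) -> bool {
    match event {
        // `{ .. }` matches the variant while discarding all of its fields.
        WorktreeEvent::Deleted | WorktreeEvent::UpdatedRootRepoCommonDir { .. } => true,
    }
}

fn main() {
    assert!(is_ignored(&WorktreeEvent::Deleted));
    assert!(is_ignored(&WorktreeEvent::UpdatedRootRepoCommonDir { common_dir: None }));
    println!("ok");
}
```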
 

crates/project/src/lsp_store.rs

@@ -4416,7 +4416,7 @@ impl LspStore {
                     worktree::Event::UpdatedGitRepositories(_)
                     | worktree::Event::DeletedEntry(_)
                     | worktree::Event::Deleted
-                    | worktree::Event::UpdatedRootRepoCommonDir => {}
+                    | worktree::Event::UpdatedRootRepoCommonDir { .. } => {}
                 })
                 .detach()
             }

crates/project/src/manifest_tree.rs

@@ -59,7 +59,7 @@ impl WorktreeRoots {
                         let path = TriePath::from(entry.path.as_ref());
                         this.roots.remove(&path);
                     }
-                    WorktreeEvent::Deleted | WorktreeEvent::UpdatedRootRepoCommonDir => {}
+                    WorktreeEvent::Deleted | WorktreeEvent::UpdatedRootRepoCommonDir { .. } => {}
                 }
             }),
         })

crates/project/src/project.rs

@@ -51,6 +51,7 @@ pub use git_store::{
 };
 pub use manifest_tree::ManifestTree;
 pub use project_search::{Search, SearchResults};
+pub use worktree_store::WorktreePaths;
 
 use anyhow::{Context as _, Result, anyhow};
 use buffer_store::{BufferStore, BufferStoreEvent};
@@ -246,6 +247,7 @@ pub struct Project {
     toolchain_store: Option<Entity<ToolchainStore>>,
     agent_location: Option<AgentLocation>,
     downloading_files: Arc<Mutex<HashMap<(WorktreeId, String), DownloadingFile>>>,
+    last_worktree_paths: WorktreePaths,
 }
 
 struct DownloadingFile {
@@ -361,6 +363,9 @@ pub enum Event {
     WorktreeRemoved(WorktreeId),
     WorktreeUpdatedEntries(WorktreeId, UpdatedEntriesSet),
     WorktreeUpdatedRootRepoCommonDir(WorktreeId),
+    WorktreePathsChanged {
+        old_worktree_paths: WorktreePaths,
+    },
     DiskBasedDiagnosticsStarted {
         language_server_id: LanguageServerId,
     },
@@ -1339,6 +1344,7 @@ impl Project {
 
                 agent_location: None,
                 downloading_files: Default::default(),
+                last_worktree_paths: WorktreePaths::default(),
             }
         })
     }
@@ -1576,6 +1582,7 @@ impl Project {
                 toolchain_store: Some(toolchain_store),
                 agent_location: None,
                 downloading_files: Default::default(),
+                last_worktree_paths: WorktreePaths::default(),
             };
 
             // remote server -> local machine handlers
@@ -1857,6 +1864,7 @@ impl Project {
                 toolchain_store: None,
                 agent_location: None,
                 downloading_files: Default::default(),
+                last_worktree_paths: WorktreePaths::default(),
             };
             project.set_role(role, cx);
             for worktree in worktrees {
@@ -2067,6 +2075,40 @@ impl Project {
         project
     }
 
+    #[cfg(any(test, feature = "test-support"))]
+    pub fn add_test_remote_worktree(
+        &mut self,
+        abs_path: &str,
+        cx: &mut Context<Self>,
+    ) -> Entity<Worktree> {
+        use rpc::NoopProtoClient;
+        use util::paths::PathStyle;
+
+        let root_name = std::path::Path::new(abs_path)
+            .file_name()
+            .map(|n| n.to_string_lossy().to_string())
+            .unwrap_or_default();
+
+        let client = AnyProtoClient::new(NoopProtoClient::new());
+        let worktree = Worktree::remote(
+            0,
+            ReplicaId::new(1),
+            proto::WorktreeMetadata {
+                id: 100 + self.visible_worktrees(cx).count() as u64,
+                root_name,
+                visible: true,
+                abs_path: abs_path.to_string(),
+                root_repo_common_dir: None,
+            },
+            client,
+            PathStyle::Posix,
+            cx,
+        );
+        self.worktree_store
+            .update(cx, |store, cx| store.add(&worktree, cx));
+        worktree
+    }
+
     #[inline]
     pub fn dap_store(&self) -> Entity<DapStore> {
         self.dap_store.clone()
@@ -2351,20 +2393,13 @@ impl Project {
             .find(|tree| tree.read(cx).root_name() == root_name)
     }
 
-    pub fn project_group_key(&self, cx: &App) -> ProjectGroupKey {
-        let roots = self
-            .visible_worktrees(cx)
-            .map(|worktree| {
-                let snapshot = worktree.read(cx).snapshot();
-                snapshot
-                    .root_repo_common_dir()
-                    .and_then(|dir| Some(dir.parent()?.to_path_buf()))
-                    .unwrap_or(snapshot.abs_path().to_path_buf())
-            })
-            .collect::<Vec<_>>();
-        let host = self.remote_connection_options(cx);
-        let path_list = PathList::new(&roots);
-        ProjectGroupKey::new(host, path_list)
+    fn emit_group_key_changed_if_needed(&mut self, cx: &mut Context<Self>) {
+        let new_worktree_paths = self.worktree_paths(cx);
+        if new_worktree_paths != self.last_worktree_paths {
+            let old_worktree_paths =
+                std::mem::replace(&mut self.last_worktree_paths, new_worktree_paths);
+            cx.emit(Event::WorktreePathsChanged { old_worktree_paths });
+        }
     }
 
     #[inline]
@@ -3683,9 +3718,11 @@ impl Project {
             WorktreeStoreEvent::WorktreeAdded(worktree) => {
                 self.on_worktree_added(worktree, cx);
                 cx.emit(Event::WorktreeAdded(worktree.read(cx).id()));
+                self.emit_group_key_changed_if_needed(cx);
             }
             WorktreeStoreEvent::WorktreeRemoved(_, id) => {
                 cx.emit(Event::WorktreeRemoved(*id));
+                self.emit_group_key_changed_if_needed(cx);
             }
             WorktreeStoreEvent::WorktreeReleased(_, id) => {
                 self.on_worktree_released(*id, cx);
@@ -3705,6 +3742,7 @@ impl Project {
             WorktreeStoreEvent::WorktreeUpdatedGitRepositories(_, _) => {}
             WorktreeStoreEvent::WorktreeUpdatedRootRepoCommonDir(worktree_id) => {
                 cx.emit(Event::WorktreeUpdatedRootRepoCommonDir(*worktree_id));
+                self.emit_group_key_changed_if_needed(cx);
             }
         }
     }
@@ -6109,6 +6147,14 @@ impl Project {
                 worktree.read(cx).entry_for_path(rel_path).is_some()
             })
     }
+
+    pub fn worktree_paths(&self, cx: &App) -> WorktreePaths {
+        self.worktree_store.read(cx).paths(cx)
+    }
+
+    pub fn project_group_key(&self, cx: &App) -> ProjectGroupKey {
+        ProjectGroupKey::from_project(self, cx)
+    }
 }
 
 /// Identifies a project group by a set of paths the workspaces in this group
@@ -6116,7 +6162,7 @@ impl Project {
 ///
 /// Paths are mapped to their main worktree path first so we can group
 /// workspaces by main repos.
-#[derive(PartialEq, Eq, Hash, Clone, Debug)]
+#[derive(PartialEq, Eq, Hash, Clone, Debug, Default)]
 pub struct ProjectGroupKey {
     /// The paths of the main worktrees for this project group.
     paths: PathList,
@@ -6131,6 +6177,25 @@ impl ProjectGroupKey {
         Self { paths, host }
     }
 
+    pub fn from_project(project: &Project, cx: &App) -> Self {
+        let paths = project.worktree_paths(cx);
+        let host = project.remote_connection_options(cx);
+        Self {
+            paths: paths.main_worktree_path_list().clone(),
+            host,
+        }
+    }
+
+    pub fn from_worktree_paths(
+        paths: &WorktreePaths,
+        host: Option<RemoteConnectionOptions>,
+    ) -> Self {
+        Self {
+            paths: paths.main_worktree_path_list().clone(),
+            host,
+        }
+    }
+
     pub fn path_list(&self) -> &PathList {
         &self.paths
     }
@@ -6140,7 +6205,7 @@ impl ProjectGroupKey {
         path_detail_map: &std::collections::HashMap<PathBuf, usize>,
     ) -> SharedString {
         let mut names = Vec::with_capacity(self.paths.paths().len());
-        for abs_path in self.paths.paths() {
+        for abs_path in self.paths.ordered_paths() {
             let detail = path_detail_map.get(abs_path).copied().unwrap_or(0);
             let suffix = path_suffix(abs_path, detail);
             if !suffix.is_empty() {

crates/project/src/worktree_store.rs

@@ -24,6 +24,7 @@ use rpc::{
 use text::ReplicaId;
 use util::{
     ResultExt,
+    path_list::PathList,
     paths::{PathStyle, RemotePathBuf, SanitizedPath},
     rel_path::RelPath,
 };
@@ -34,6 +35,121 @@ use worktree::{
 
 use crate::{ProjectPath, trusted_worktrees::TrustedWorktrees};
 
+/// The current paths for a project's worktrees. Each folder path has a corresponding
+/// main worktree path at the same position. The two lists are always the
+/// same length and are modified together via `add_path` / `remove_main_path`.
+///
+/// For non-linked worktrees, the main path and folder path are identical.
+/// For linked worktrees, the main path is the original repo and the folder
+/// path is the linked worktree location.
+#[derive(Default, Debug, Clone)]
+pub struct WorktreePaths {
+    paths: PathList,
+    main_paths: PathList,
+}
+
+impl PartialEq for WorktreePaths {
+    fn eq(&self, other: &Self) -> bool {
+        self.paths == other.paths && self.main_paths == other.main_paths
+    }
+}
+
+impl WorktreePaths {
+    /// Build from two parallel `PathList`s that already share the same
+    /// insertion order. Used for deserialization from DB.
+    ///
+    /// Returns an error if the two lists have different lengths, which
+    /// indicates corrupted data from a prior migration bug.
+    pub fn from_path_lists(
+        main_worktree_paths: PathList,
+        folder_paths: PathList,
+    ) -> anyhow::Result<Self> {
+        anyhow::ensure!(
+            main_worktree_paths.paths().len() == folder_paths.paths().len(),
+            "main_worktree_paths has {} entries but folder_paths has {}",
+            main_worktree_paths.paths().len(),
+            folder_paths.paths().len(),
+        );
+        Ok(Self {
+            paths: folder_paths,
+            main_paths: main_worktree_paths,
+        })
+    }
+
+    /// Build for non-linked worktrees where main == folder for every path.
+    pub fn from_folder_paths(folder_paths: &PathList) -> Self {
+        Self {
+            paths: folder_paths.clone(),
+            main_paths: folder_paths.clone(),
+        }
+    }
+
+    pub fn is_empty(&self) -> bool {
+        self.paths.is_empty()
+    }
+
+    /// The folder paths (for workspace matching / `threads_by_paths` index).
+    pub fn folder_path_list(&self) -> &PathList {
+        &self.paths
+    }
+
+    /// The main worktree paths (for group key / `threads_by_main_paths` index).
+    pub fn main_worktree_path_list(&self) -> &PathList {
+        &self.main_paths
+    }
+
+    /// Iterate the (main_worktree_path, folder_path) pairs in insertion order.
+    pub fn ordered_pairs(&self) -> impl Iterator<Item = (&PathBuf, &PathBuf)> {
+        self.main_paths
+            .ordered_paths()
+            .zip(self.paths.ordered_paths())
+    }
+
+    /// Add a new path pair. If the exact (main, folder) pair already exists,
+    /// this is a no-op. Rebuilds both internal `PathList`s to maintain
+    /// consistent ordering.
+    pub fn add_path(&mut self, main_path: &Path, folder_path: &Path) {
+        let already_exists = self
+            .ordered_pairs()
+            .any(|(m, f)| m.as_path() == main_path && f.as_path() == folder_path);
+        if already_exists {
+            return;
+        }
+        let (mut mains, mut folders): (Vec<PathBuf>, Vec<PathBuf>) = self
+            .ordered_pairs()
+            .map(|(m, f)| (m.clone(), f.clone()))
+            .unzip();
+        mains.push(main_path.to_path_buf());
+        folders.push(folder_path.to_path_buf());
+        self.main_paths = PathList::new(&mains);
+        self.paths = PathList::new(&folders);
+    }
+
+    /// Remove all pairs whose main worktree path matches the given path.
+    /// This removes the corresponding entries from both lists.
+    pub fn remove_main_path(&mut self, main_path: &Path) {
+        let (mains, folders): (Vec<PathBuf>, Vec<PathBuf>) = self
+            .ordered_pairs()
+            .filter(|(m, _)| m.as_path() != main_path)
+            .map(|(m, f)| (m.clone(), f.clone()))
+            .unzip();
+        self.main_paths = PathList::new(&mains);
+        self.paths = PathList::new(&folders);
+    }
+
+    /// Remove all pairs whose folder path matches the given path.
+    /// This removes the corresponding entries from both lists.
+    pub fn remove_folder_path(&mut self, folder_path: &Path) {
+        let (mains, folders): (Vec<PathBuf>, Vec<PathBuf>) = self
+            .ordered_pairs()
+            .filter(|(_, f)| f.as_path() != folder_path)
+            .map(|(m, f)| (m.clone(), f.clone()))
+            .unzip();
+        self.main_paths = PathList::new(&mains);
+        self.paths = PathList::new(&folders);
+    }
+}
+
 enum WorktreeStoreState {
     Local {
         fs: Arc<dyn Fs>,
@@ -814,7 +930,7 @@ impl WorktreeStore {
                     // The worktree root itself has been deleted (for single-file worktrees)
                     // The worktree will be removed via the observe_release callback
                 }
-                worktree::Event::UpdatedRootRepoCommonDir => {
+                worktree::Event::UpdatedRootRepoCommonDir { .. } => {
                     cx.emit(WorktreeStoreEvent::WorktreeUpdatedRootRepoCommonDir(
                         worktree_id,
                     ));
@@ -1263,6 +1379,32 @@ impl WorktreeStore {
             WorktreeStoreState::Remote { .. } => None,
         }
     }
+
+    pub fn paths(&self, cx: &App) -> WorktreePaths {
+        let (mains, folders): (Vec<PathBuf>, Vec<PathBuf>) = self
+            .visible_worktrees(cx)
+            .filter(|worktree| {
+                let worktree = worktree.read(cx);
+                // Remote worktrees that haven't received their first update
+                // don't have enough data to contribute yet.
+                !worktree.is_remote() || worktree.root_entry().is_some()
+            })
+            .map(|worktree| {
+                let snapshot = worktree.read(cx).snapshot();
+                let folder_path = snapshot.abs_path().to_path_buf();
+                let main_path = snapshot
+                    .root_repo_common_dir()
+                    .and_then(|dir| Some(dir.parent()?.to_path_buf()))
+                    .unwrap_or_else(|| folder_path.clone());
+                (main_path, folder_path)
+            })
+            .unzip();
+
+        WorktreePaths {
+            paths: PathList::new(&folders),
+            main_paths: PathList::new(&mains),
+        }
+    }
 }
 
 #[derive(Clone, Debug)]
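
The core invariant of the new `WorktreePaths` type is that the folder-path and main-path lists stay parallel: every mutation touches both lists at the same positions. A sketch of that pairing under simplified types (`Vec<PathBuf>` in place of `PathList`, which also handles ordering and serialization):

```rust
use std::path::{Path, PathBuf};

// Simplified analog of WorktreePaths: two same-length lists where
// index i pairs a main worktree path with its folder path.
#[derive(Default, Debug, PartialEq)]
struct WorktreePaths {
    main_paths: Vec<PathBuf>,
    folder_paths: Vec<PathBuf>,
}

impl WorktreePaths {
    // No-op if the exact (main, folder) pair already exists.
    fn add_path(&mut self, main: &Path, folder: &Path) {
        let exists = self
            .main_paths
            .iter()
            .zip(&self.folder_paths)
            .any(|(m, f)| m.as_path() == main && f.as_path() == folder);
        if !exists {
            self.main_paths.push(main.to_path_buf());
            self.folder_paths.push(folder.to_path_buf());
        }
    }

    // Drops every pair whose main path matches, keeping the lists aligned.
    fn remove_main_path(&mut self, main: &Path) {
        let pairs: Vec<(PathBuf, PathBuf)> = self
            .main_paths
            .iter()
            .zip(&self.folder_paths)
            .filter(|(m, _)| m.as_path() != main)
            .map(|(m, f)| (m.clone(), f.clone()))
            .collect();
        self.main_paths = pairs.iter().map(|(m, _)| m.clone()).collect();
        self.folder_paths = pairs.into_iter().map(|(_, f)| f).collect();
    }
}

fn main() {
    let mut paths = WorktreePaths::default();
    // Linked worktree: main repo differs from the checked-out folder.
    paths.add_path(Path::new("/repo"), Path::new("/repo-wt1"));
    paths.add_path(Path::new("/other"), Path::new("/other"));
    paths.add_path(Path::new("/repo"), Path::new("/repo-wt1")); // duplicate, no-op
    assert_eq!(paths.main_paths.len(), 2);
    paths.remove_main_path(Path::new("/repo"));
    assert_eq!(paths.folder_paths, vec![PathBuf::from("/other")]);
    println!("ok");
}
```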

crates/recent_projects/src/recent_projects.rs

@@ -32,11 +32,12 @@ use picker::{
     Picker, PickerDelegate,
     highlighted_match_with_paths::{HighlightedMatch, HighlightedMatchWithPaths},
 };
-use project::{ProjectGroupKey, Worktree, git_store::Repository};
+use project::{Worktree, git_store::Repository};
 pub use remote_connections::RemoteSettings;
 pub use remote_servers::RemoteServerProjects;
 use settings::{Settings, WorktreeId};
 use ui_input::ErasedEditor;
+use workspace::ProjectGroupKey;
 
 use dev_container::{DevContainerContext, find_devcontainer_configs};
 use ui::{
@@ -380,7 +381,7 @@ pub fn init(cx: &mut App) {
                     multi_workspace
                         .update(cx, |multi_workspace, window, cx| {
                             let window_project_groups: Vec<ProjectGroupKey> =
-                                multi_workspace.project_group_keys().cloned().collect();
+                                multi_workspace.project_group_keys();
 
                             let workspace = multi_workspace.workspace().clone();
                             workspace.update(cx, |workspace, cx| {
@@ -1613,11 +1614,22 @@ impl PickerDelegate for RecentProjectsDelegate {
                     .border_t_1()
                     .border_color(cx.theme().colors().border_variant)
                     .child({
-                        let open_action = workspace::Open::default();
+                        let open_action = workspace::Open {
+                            create_new_window: self.create_new_window,
+                        };
                         Button::new("open_local_folder", "Open Local Project")
                             .key_binding(KeyBinding::for_action_in(&open_action, &focus_handle, cx))
-                            .on_click(move |_, window, cx| {
-                                window.dispatch_action(open_action.boxed_clone(), cx)
+                            .on_click({
+                                let workspace = self.workspace.clone();
+                                let create_new_window = self.create_new_window;
+                                move |_, window, cx| {
+                                    open_local_project(
+                                        workspace.clone(),
+                                        create_new_window,
+                                        window,
+                                        cx,
+                                    );
+                                }
                             })
                     })
                     .child(
@@ -1764,6 +1776,9 @@ impl PickerDelegate for RecentProjectsDelegate {
                         )
                         .menu({
                             let focus_handle = focus_handle.clone();
+                            let workspace_handle = self.workspace.clone();
+                            let create_new_window = self.create_new_window;
+                            let open_action = workspace::Open { create_new_window };
                             let show_add_to_workspace = match selected_entry {
                                 Some(ProjectPickerEntry::RecentProject(hit)) => self
                                     .workspaces
@@ -1778,6 +1793,8 @@ impl PickerDelegate for RecentProjectsDelegate {
                             move |window, cx| {
                                 Some(ContextMenu::build(window, cx, {
                                     let focus_handle = focus_handle.clone();
+                                    let workspace_handle = workspace_handle.clone();
+                                    let open_action = open_action.clone();
                                     move |menu, _, _| {
                                         menu.context(focus_handle)
                                             .when(show_add_to_workspace, |menu| {
@@ -1787,9 +1804,20 @@ impl PickerDelegate for RecentProjectsDelegate {
                                                 )
                                                 .separator()
                                             })
-                                            .action(
+                                            .entry(
                                                 "Open Local Project",
-                                                workspace::Open::default().boxed_clone(),
+                                                Some(open_action.boxed_clone()),
+                                                {
+                                                    let workspace_handle = workspace_handle.clone();
+                                                    move |window, cx| {
+                                                        open_local_project(
+                                                            workspace_handle.clone(),
+                                                            create_new_window,
+                                                            window,
+                                                            cx,
+                                                        );
+                                                    }
+                                                },
                                             )
                                             .action(
                                                 "Open Remote Project",
@@ -1869,6 +1897,67 @@ pub(crate) fn highlights_for_path(
         },
     )
 }
+fn open_local_project(
+    workspace: WeakEntity<Workspace>,
+    create_new_window: bool,
+    window: &mut Window,
+    cx: &mut App,
+) {
+    use gpui::PathPromptOptions;
+    use project::DirectoryLister;
+
+    let Some(workspace) = workspace.upgrade() else {
+        return;
+    };
+
+    let paths = workspace.update(cx, |workspace, cx| {
+        workspace.prompt_for_open_path(
+            PathPromptOptions {
+                files: true,
+                directories: true,
+                multiple: false,
+                prompt: None,
+            },
+            DirectoryLister::Local(
+                workspace.project().clone(),
+                workspace.app_state().fs.clone(),
+            ),
+            window,
+            cx,
+        )
+    });
+
+    let multi_workspace_handle = window.window_handle().downcast::<MultiWorkspace>();
+    window
+        .spawn(cx, async move |cx| {
+            let Some(paths) = paths.await.log_err().flatten() else {
+                return;
+            };
+            if !create_new_window {
+                if let Some(handle) = multi_workspace_handle {
+                    if let Some(task) = handle
+                        .update(cx, |multi_workspace, window, cx| {
+                            multi_workspace.open_project(paths, OpenMode::Activate, window, cx)
+                        })
+                        .log_err()
+                    {
+                        task.await.log_err();
+                    }
+                    return;
+                }
+            }
+            if let Some(task) = workspace
+                .update_in(cx, |workspace, window, cx| {
+                    workspace.open_workspace_for_paths(OpenMode::NewWindow, paths, window, cx)
+                })
+                .log_err()
+            {
+                task.await.log_err();
+            }
+        })
+        .detach();
+}
+
 impl RecentProjectsDelegate {
     fn add_project_to_workspace(
         &mut self,
@@ -2032,9 +2121,10 @@ impl RecentProjectsDelegate {
 
 #[cfg(test)]
 mod tests {
-    use gpui::{TestAppContext, VisualTestContext};
+    use gpui::{TestAppContext, UpdateGlobal, VisualTestContext};
 
     use serde_json::json;
+    use settings::SettingsStore;
     use util::path;
     use workspace::{AppState, open_paths};
 
@@ -2227,6 +2317,159 @@ mod tests {
             .unwrap();
     }
 
+    #[gpui::test]
+    async fn test_open_local_project_reuses_multi_workspace_window(cx: &mut TestAppContext) {
+        let app_state = init_test(cx);
+
+        // Disable system path prompts so the injected mock is used.
+        cx.update(|cx| {
+            SettingsStore::update_global(cx, |store, cx| {
+                store.update_user_settings(cx, |settings| {
+                    settings.workspace.use_system_path_prompts = Some(false);
+                });
+            });
+        });
+
+        app_state
+            .fs
+            .as_fake()
+            .insert_tree(
+                path!("/initial-project"),
+                json!({ "src": { "main.rs": "" } }),
+            )
+            .await;
+        app_state
+            .fs
+            .as_fake()
+            .insert_tree(path!("/new-project"), json!({ "lib": { "mod.rs": "" } }))
+            .await;
+
+        cx.update(|cx| {
+            open_paths(
+                &[PathBuf::from(path!("/initial-project"))],
+                app_state.clone(),
+                workspace::OpenOptions::default(),
+                cx,
+            )
+        })
+        .await
+        .unwrap();
+
+        let initial_window_count = cx.update(|cx| cx.windows().len());
+        assert_eq!(initial_window_count, 1);
+
+        let multi_workspace = cx.update(|cx| cx.windows()[0].downcast::<MultiWorkspace>().unwrap());
+        cx.run_until_parked();
+
+        let workspace = multi_workspace
+            .read_with(cx, |mw, _| mw.workspace().clone())
+            .unwrap();
+
+        // Set up the prompt mock to return the new project path.
+        workspace.update(cx, |workspace, _cx| {
+            workspace.set_prompt_for_open_path(Box::new(|_, _, _, _| {
+                let (tx, rx) = futures::channel::oneshot::channel();
+                tx.send(Some(vec![PathBuf::from(path!("/new-project"))]))
+                    .ok();
+                rx
+            }));
+        });
+
+        // Call open_local_project with create_new_window: false.
+        let weak_workspace = workspace.downgrade();
+        multi_workspace
+            .update(cx, |_, window, cx| {
+                open_local_project(weak_workspace, false, window, cx);
+            })
+            .unwrap();
+
+        cx.run_until_parked();
+
+        // Should NOT have opened a new window.
+        let final_window_count = cx.update(|cx| cx.windows().len());
+        assert_eq!(
+            final_window_count, initial_window_count,
+            "open_local_project with create_new_window=false should reuse the current multi-workspace window"
+        );
+    }
+
+    #[gpui::test]
+    async fn test_open_local_project_new_window_creates_new_window(cx: &mut TestAppContext) {
+        let app_state = init_test(cx);
+
+        // Disable system path prompts so the injected mock is used.
+        cx.update(|cx| {
+            SettingsStore::update_global(cx, |store, cx| {
+                store.update_user_settings(cx, |settings| {
+                    settings.workspace.use_system_path_prompts = Some(false);
+                });
+            });
+        });
+
+        app_state
+            .fs
+            .as_fake()
+            .insert_tree(
+                path!("/initial-project"),
+                json!({ "src": { "main.rs": "" } }),
+            )
+            .await;
+        app_state
+            .fs
+            .as_fake()
+            .insert_tree(path!("/new-project"), json!({ "lib": { "mod.rs": "" } }))
+            .await;
+
+        cx.update(|cx| {
+            open_paths(
+                &[PathBuf::from(path!("/initial-project"))],
+                app_state.clone(),
+                workspace::OpenOptions::default(),
+                cx,
+            )
+        })
+        .await
+        .unwrap();
+
+        let initial_window_count = cx.update(|cx| cx.windows().len());
+        assert_eq!(initial_window_count, 1);
+
+        let multi_workspace = cx.update(|cx| cx.windows()[0].downcast::<MultiWorkspace>().unwrap());
+        cx.run_until_parked();
+
+        let workspace = multi_workspace
+            .read_with(cx, |mw, _| mw.workspace().clone())
+            .unwrap();
+
+        // Set up the prompt mock to return the new project path.
+        workspace.update(cx, |workspace, _cx| {
+            workspace.set_prompt_for_open_path(Box::new(|_, _, _, _| {
+                let (tx, rx) = futures::channel::oneshot::channel();
+                tx.send(Some(vec![PathBuf::from(path!("/new-project"))]))
+                    .ok();
+                rx
+            }));
+        });
+
+        // Call open_local_project with create_new_window: true.
+        let weak_workspace = workspace.downgrade();
+        multi_workspace
+            .update(cx, |_, window, cx| {
+                open_local_project(weak_workspace, true, window, cx);
+            })
+            .unwrap();
+
+        cx.run_until_parked();
+
+        // Should have opened a new window.
+        let final_window_count = cx.update(|cx| cx.windows().len());
+        assert_eq!(
+            final_window_count,
+            initial_window_count + 1,
+            "open_local_project with create_new_window=true should open a new window"
+        );
+    }
+
     fn init_test(cx: &mut TestAppContext) -> Arc<AppState> {
         cx.update(|cx| {
             let state = AppState::test(cx);

crates/recent_projects/src/sidebar_recent_projects.rs

@@ -10,15 +10,14 @@ use picker::{
     Picker, PickerDelegate,
     highlighted_match_with_paths::{HighlightedMatch, HighlightedMatchWithPaths},
 };
-use project::ProjectGroupKey;
 use remote::RemoteConnectionOptions;
 use settings::Settings;
 use ui::{KeyBinding, ListItem, ListItemSpacing, Tooltip, prelude::*};
 use ui_input::ErasedEditor;
 use util::{ResultExt, paths::PathExt};
 use workspace::{
-    MultiWorkspace, OpenMode, OpenOptions, PathList, SerializedWorkspaceLocation, Workspace,
-    WorkspaceDb, WorkspaceId, notifications::DetachAndPromptErr,
+    MultiWorkspace, OpenMode, OpenOptions, PathList, ProjectGroupKey, SerializedWorkspaceLocation,
+    Workspace, WorkspaceDb, WorkspaceId, notifications::DetachAndPromptErr,
 };
 
 use zed_actions::OpenRemote;

crates/rpc/src/proto_client.rs

@@ -547,3 +547,43 @@ fn to_any_envelope<T: EnvelopedMessage>(
         payload: response,
     }) as Box<_>
 }
+
+#[cfg(any(test, feature = "test-support"))]
+pub struct NoopProtoClient {
+    handler_set: parking_lot::Mutex<ProtoMessageHandlerSet>,
+}
+
+#[cfg(any(test, feature = "test-support"))]
+impl NoopProtoClient {
+    pub fn new() -> Arc<Self> {
+        Arc::new(Self {
+            handler_set: parking_lot::Mutex::new(ProtoMessageHandlerSet::default()),
+        })
+    }
+}
+
+#[cfg(any(test, feature = "test-support"))]
+impl ProtoClient for NoopProtoClient {
+    fn request(
+        &self,
+        _: proto::Envelope,
+        _: &'static str,
+    ) -> futures::future::BoxFuture<'static, Result<proto::Envelope>> {
+        unimplemented!()
+    }
+    fn send(&self, _: proto::Envelope, _: &'static str) -> Result<()> {
+        Ok(())
+    }
+    fn send_response(&self, _: proto::Envelope, _: &'static str) -> Result<()> {
+        Ok(())
+    }
+    fn message_handler_set(&self) -> &parking_lot::Mutex<ProtoMessageHandlerSet> {
+        &self.handler_set
+    }
+    fn is_via_collab(&self) -> bool {
+        false
+    }
+    fn has_wsl_interop(&self) -> bool {
+        false
+    }
+}
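
The `NoopProtoClient` above is a classic null-object test double: sends succeed silently and nothing is recorded. A minimal self-contained sketch of the same pattern, using a simplified stand-in trait and `std::sync::Mutex` instead of the real `ProtoClient` API and `parking_lot`:

```rust
use std::sync::Mutex;

// Simplified stand-in for ProtoMessageHandlerSet.
#[derive(Default)]
struct HandlerSet {
    handlers: Vec<String>,
}

// Simplified stand-in for the ProtoClient trait.
trait Client {
    fn send(&self, msg: &str) -> Result<(), String>;
    fn handler_set(&self) -> &Mutex<HandlerSet>;
    fn is_via_collab(&self) -> bool;
}

struct NoopClient {
    handler_set: Mutex<HandlerSet>,
}

impl NoopClient {
    fn new() -> Self {
        Self {
            handler_set: Mutex::new(HandlerSet::default()),
        }
    }
}

impl Client for NoopClient {
    // Sending is a silent no-op, so code under test can "talk" to a
    // peer that does not exist.
    fn send(&self, _msg: &str) -> Result<(), String> {
        Ok(())
    }
    fn handler_set(&self) -> &Mutex<HandlerSet> {
        &self.handler_set
    }
    fn is_via_collab(&self) -> bool {
        false
    }
}

fn main() {
    let client = NoopClient::new();
    assert!(client.send("hello").is_ok());
    assert!(!client.is_via_collab());
    assert!(client.handler_set().lock().unwrap().handlers.is_empty());
}
```

The real implementation panics on `request` (via `unimplemented!()`) rather than returning a fake response, which makes tests fail loudly if they accidentally depend on request/response traffic.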

crates/sidebar/Cargo.toml

@@ -65,6 +65,7 @@ git.workspace = true
 gpui = { workspace = true, features = ["test-support"] }
 client = { workspace = true, features = ["test-support"] }
 clock = { workspace = true, features = ["test-support"] }
+db = { workspace = true, features = ["test-support"] }
 http_client = { workspace = true, features = ["test-support"] }
 node_runtime = { workspace = true, features = ["test-support"] }
 project = { workspace = true, features = ["test-support"] }

crates/sidebar/src/sidebar.rs

@@ -4,14 +4,14 @@ use acp_thread::ThreadStatus;
 use action_log::DiffStats;
 use agent_client_protocol::{self as acp};
 use agent_settings::AgentSettings;
-use agent_ui::thread_metadata_store::{ThreadMetadata, ThreadMetadataStore, ThreadWorktreePaths};
+use agent_ui::thread_metadata_store::{ThreadMetadata, ThreadMetadataStore, WorktreePaths};
 use agent_ui::thread_worktree_archive;
 use agent_ui::threads_archive_view::{
     ThreadsArchiveView, ThreadsArchiveViewEvent, format_history_entry_timestamp,
 };
 use agent_ui::{
-    AcpThreadImportOnboarding, Agent, AgentPanel, AgentPanelEvent, DEFAULT_THREAD_TITLE, DraftId,
-    NewThread, RemoveSelectedThread, ThreadImportModal,
+    AcpThreadImportOnboarding, Agent, AgentPanel, AgentPanelEvent, DEFAULT_THREAD_TITLE, NewThread,
+    RemoveSelectedThread, ThreadId, ThreadImportModal,
 };
 use chrono::{DateTime, Utc};
 use editor::Editor;
@@ -23,9 +23,7 @@ use gpui::{
 use menu::{
     Cancel, Confirm, SelectChild, SelectFirst, SelectLast, SelectNext, SelectParent, SelectPrevious,
 };
-use project::{
-    AgentId, AgentRegistryStore, Event as ProjectEvent, ProjectGroupKey, linked_worktree_short_name,
-};
+use project::{AgentId, AgentRegistryStore, Event as ProjectEvent, linked_worktree_short_name};
 use recent_projects::sidebar_recent_projects::SidebarRecentProjects;
 use remote::RemoteConnectionOptions;
 use ui::utils::platform_title_bar_height;
@@ -46,7 +44,7 @@ use util::ResultExt as _;
 use util::path_list::PathList;
 use workspace::{
     AddFolderToProject, CloseWindow, FocusWorkspaceSidebar, MultiWorkspace, MultiWorkspaceEvent,
-    NextProject, NextThread, Open, PreviousProject, PreviousThread, SerializedProjectGroupKey,
+    NextProject, NextThread, Open, PreviousProject, PreviousThread, ProjectGroupKey,
     ShowFewerThreads, ShowMoreThreads, Sidebar as WorkspaceSidebar, SidebarSide, Toast,
     ToggleWorkspaceSidebar, Workspace, notifications::NotificationId, sidebar_side_context_menu,
 };
@@ -96,10 +94,6 @@ struct SerializedSidebar {
     #[serde(default)]
     width: Option<f32>,
     #[serde(default)]
-    collapsed_groups: Vec<SerializedProjectGroupKey>,
-    #[serde(default)]
-    expanded_groups: Vec<(SerializedProjectGroupKey, usize)>,
-    #[serde(default)]
     active_view: SerializedSidebarView,
 }
 
@@ -116,45 +110,34 @@ enum ArchiveWorktreeOutcome {
 }
 
 #[derive(Clone, Debug)]
-enum ActiveEntry {
-    Thread {
-        session_id: acp::SessionId,
-        workspace: Entity<Workspace>,
-    },
-    Draft {
-        id: DraftId,
-        workspace: Entity<Workspace>,
-    },
+struct ActiveEntry {
+    thread_id: agent_ui::ThreadId,
+    /// Stable remote identifier, used for matching when thread_id
+    /// differs (e.g. after cross-window activation creates a new
+    /// local ThreadId).
+    session_id: Option<acp::SessionId>,
+    workspace: Entity<Workspace>,
 }
 
 impl ActiveEntry {
     fn workspace(&self) -> &Entity<Workspace> {
-        match self {
-            ActiveEntry::Thread { workspace, .. } => workspace,
-            ActiveEntry::Draft { workspace, .. } => workspace,
-        }
-    }
-
-    fn is_active_thread(&self, session_id: &acp::SessionId) -> bool {
-        matches!(self, ActiveEntry::Thread { session_id: id, .. } if id == session_id)
+        &self.workspace
     }
 
-    fn is_active_draft(&self, draft_id: DraftId) -> bool {
-        matches!(self, ActiveEntry::Draft { id, .. } if *id == draft_id)
+    fn is_active_thread(&self, thread_id: &agent_ui::ThreadId) -> bool {
+        self.thread_id == *thread_id
     }
 
     fn matches_entry(&self, entry: &ListEntry) -> bool {
-        match (self, entry) {
-            (ActiveEntry::Thread { session_id, .. }, ListEntry::Thread(thread)) => {
-                thread.metadata.session_id == *session_id
+        match entry {
+            ListEntry::Thread(thread) => {
+                self.thread_id == thread.metadata.thread_id
+                    || self
+                        .session_id
+                        .as_ref()
+                        .zip(thread.metadata.session_id.as_ref())
+                        .is_some_and(|(a, b)| a == b)
             }
-            (
-                ActiveEntry::Draft { id, .. },
-                ListEntry::DraftThread {
-                    draft_id: Some(entry_id),
-                    ..
-                },
-            ) => *id == *entry_id,
             _ => false,
         }
     }
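
The new `matches_entry` rule above can be read as: match on the local thread id first, and fall back to the remote session id only when both sides carry one. A self-contained sketch of that rule, with simplified stand-ins for `ThreadId` and `acp::SessionId`:

```rust
// Illustrative stand-ins, not the real Zed types.
#[derive(PartialEq, Clone, Copy, Debug)]
struct ThreadId(u64);

#[derive(PartialEq, Clone, Debug)]
struct SessionId(&'static str);

struct ActiveEntry {
    thread_id: ThreadId,
    session_id: Option<SessionId>,
}

impl ActiveEntry {
    fn matches(&self, entry_thread_id: ThreadId, entry_session_id: Option<&SessionId>) -> bool {
        self.thread_id == entry_thread_id
            // `Option::zip` yields Some only when *both* ids are present,
            // so a missing session id on either side never matches.
            || self
                .session_id
                .as_ref()
                .zip(entry_session_id)
                .is_some_and(|(a, b)| a == b)
    }
}

fn main() {
    let entry = ActiveEntry {
        thread_id: ThreadId(1),
        session_id: Some(SessionId("remote-a")),
    };
    // Same local id: matches.
    assert!(entry.matches(ThreadId(1), None));
    // Different local id but same remote session: still matches.
    assert!(entry.matches(ThreadId(2), Some(&SessionId("remote-a"))));
    // Different local id, no session id on the entry: no match.
    assert!(!entry.matches(ThreadId(2), None));
}
```

The session-id fallback is what keeps the highlight on the right entry after a cross-window activation mints a new local `ThreadId` for the same remote session.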
@@ -214,6 +197,7 @@ struct ThreadEntry {
     is_live: bool,
     is_background: bool,
     is_title_generating: bool,
+    is_draft: bool,
     highlight_positions: Vec<usize>,
     worktrees: Vec<WorktreeInfo>,
     diff_stats: DiffStats,
@@ -226,7 +210,7 @@ impl ThreadEntry {
     /// but if we have a corresponding thread already loaded we want to apply the
     /// live information.
     fn apply_active_info(&mut self, info: &ActiveThreadInfo) {
-        self.metadata.title = info.title.clone();
+        self.metadata.title = Some(info.title.clone());
         self.status = info.status;
         self.icon = info.icon;
         self.icon_from_external_svg = info.icon_from_external_svg.clone();
@@ -253,21 +237,13 @@ enum ListEntry {
         key: ProjectGroupKey,
         is_fully_expanded: bool,
     },
-    DraftThread {
-        /// `None` for placeholder entries in empty groups with no open
-        /// workspace. `Some` for drafts backed by an AgentPanel.
-        draft_id: Option<DraftId>,
-        key: project::ProjectGroupKey,
-        workspace: Option<Entity<Workspace>>,
-        worktrees: Vec<WorktreeInfo>,
-    },
 }
 
 #[cfg(test)]
 impl ListEntry {
     fn session_id(&self) -> Option<&acp::SessionId> {
         match self {
-            ListEntry::Thread(thread_entry) => Some(&thread_entry.metadata.session_id),
+            ListEntry::Thread(thread_entry) => thread_entry.metadata.session_id.as_ref(),
             _ => None,
         }
     }
@@ -282,11 +258,9 @@ impl ListEntry {
                 ThreadEntryWorkspace::Open(ws) => vec![ws.clone()],
                 ThreadEntryWorkspace::Closed { .. } => Vec::new(),
             },
-            ListEntry::DraftThread { workspace, .. } => workspace.iter().cloned().collect(),
             ListEntry::ProjectHeader { key, .. } => multi_workspace
                 .workspaces_for_project_group(key, cx)
-                .cloned()
-                .collect(),
+                .unwrap_or_default(),
             ListEntry::ViewMore { .. } => Vec::new(),
         }
     }
@@ -301,14 +275,14 @@ impl From<ThreadEntry> for ListEntry {
 #[derive(Default)]
 struct SidebarContents {
     entries: Vec<ListEntry>,
-    notified_threads: HashSet<acp::SessionId>,
+    notified_threads: HashSet<agent_ui::ThreadId>,
     project_header_indices: Vec<usize>,
     has_open_projects: bool,
 }
 
 impl SidebarContents {
-    fn is_thread_notified(&self, session_id: &acp::SessionId) -> bool {
-        self.notified_threads.contains(session_id)
+    fn is_thread_notified(&self, thread_id: &agent_ui::ThreadId) -> bool {
+        self.notified_threads.contains(thread_id)
     }
 }
 
@@ -369,7 +343,7 @@ fn workspace_path_list(workspace: &Entity<Workspace>, cx: &App) -> PathList {
 /// wouldn't identify which main project it belongs to, the main project
 /// name is prefixed for disambiguation (e.g. `project:feature`).
 ///
-fn worktree_info_from_thread_paths(worktree_paths: &ThreadWorktreePaths) -> Vec<WorktreeInfo> {
+fn worktree_info_from_thread_paths(worktree_paths: &WorktreePaths) -> Vec<WorktreeInfo> {
     let mut infos: Vec<WorktreeInfo> = Vec::new();
     let mut linked_short_names: Vec<(SharedString, SharedString)> = Vec::new();
     let mut unique_main_count = HashSet::new();
@@ -454,8 +428,7 @@ pub struct Sidebar {
     /// Tracks which sidebar entry is currently active (highlighted).
     active_entry: Option<ActiveEntry>,
     hovered_thread_index: Option<usize>,
-    collapsed_groups: HashSet<ProjectGroupKey>,
-    expanded_groups: HashMap<ProjectGroupKey, usize>,
+
     /// Updated only in response to explicit user actions (clicking a
     /// thread, confirming in the thread switcher, etc.) — never from
     /// background data changes. Used to sort the thread switcher popup.
@@ -463,16 +436,17 @@ pub struct Sidebar {
     /// Updated when the user presses a key to send or queue a message.
     /// Used for sorting threads in the sidebar and as a secondary sort
     /// key in the thread switcher.
-    thread_last_message_sent_or_queued: HashMap<acp::SessionId, DateTime<Utc>>,
+    thread_last_message_sent_or_queued: HashMap<agent_ui::ThreadId, DateTime<Utc>>,
     thread_switcher: Option<Entity<ThreadSwitcher>>,
     _thread_switcher_subscriptions: Vec<gpui::Subscription>,
-    pending_remote_thread_activation: Option<acp::SessionId>,
+    pending_thread_activation: Option<agent_ui::ThreadId>,
     view: SidebarView,
-    restoring_tasks: HashMap<acp::SessionId, Task<()>>,
+    restoring_tasks: HashMap<agent_ui::ThreadId, Task<()>>,
     recent_projects_popover_handle: PopoverMenuHandle<SidebarRecentProjects>,
     project_header_menu_ix: Option<usize>,
     _subscriptions: Vec<gpui::Subscription>,
-    _draft_observation: Option<gpui::Subscription>,
+    _draft_observations: Vec<gpui::Subscription>,
+    reconciling: bool,
 }
 
 impl Sidebar {
@@ -497,43 +471,24 @@ impl Sidebar {
             window,
             |this, _multi_workspace, event: &MultiWorkspaceEvent, window, cx| match event {
                 MultiWorkspaceEvent::ActiveWorkspaceChanged => {
-                    this.observe_draft_editor(cx);
+                    this.sync_active_entry_from_active_workspace(cx);
+                    this.observe_draft_editors(cx);
                     this.update_entries(cx);
+                    this.reconcile_groups(window, cx);
                 }
                 MultiWorkspaceEvent::WorkspaceAdded(workspace) => {
                     this.subscribe_to_workspace(workspace, window, cx);
                     this.update_entries(cx);
+                    this.reconcile_groups(window, cx);
                 }
                 MultiWorkspaceEvent::WorkspaceRemoved(_) => {
                     this.update_entries(cx);
+                    this.reconcile_groups(window, cx);
                 }
-                MultiWorkspaceEvent::WorktreePathAdded {
-                    old_main_paths,
-                    added_path,
-                } => {
-                    let added_path = added_path.clone();
-                    ThreadMetadataStore::global(cx).update(cx, |store, cx| {
-                        store.change_worktree_paths(
-                            old_main_paths,
-                            |paths| paths.add_path(&added_path, &added_path),
-                            cx,
-                        );
-                    });
-                    this.update_entries(cx);
-                }
-                MultiWorkspaceEvent::WorktreePathRemoved {
-                    old_main_paths,
-                    removed_path,
-                } => {
-                    let removed_path = removed_path.clone();
-                    ThreadMetadataStore::global(cx).update(cx, |store, cx| {
-                        store.change_worktree_paths(
-                            old_main_paths,
-                            |paths| paths.remove_main_path(&removed_path),
-                            cx,
-                        );
-                    });
+                MultiWorkspaceEvent::ProjectGroupKeyUpdated { old_key, new_key } => {
+                    this.move_threads_for_key_change(old_key, new_key, cx);
                     this.update_entries(cx);
+                    this.reconcile_groups(window, cx);
                 }
             },
         )
@@ -564,6 +519,7 @@ impl Sidebar {
                 this.subscribe_to_workspace(workspace, window, cx);
             }
             this.update_entries(cx);
+            this.reconcile_groups(window, cx);
         });
 
         Self {
@@ -576,19 +532,19 @@ impl Sidebar {
             selection: None,
             active_entry: None,
             hovered_thread_index: None,
-            collapsed_groups: HashSet::new(),
-            expanded_groups: HashMap::new(),
+
             thread_last_accessed: HashMap::new(),
             thread_last_message_sent_or_queued: HashMap::new(),
             thread_switcher: None,
             _thread_switcher_subscriptions: Vec::new(),
-            pending_remote_thread_activation: None,
+            pending_thread_activation: None,
             view: SidebarView::default(),
             restoring_tasks: HashMap::new(),
             recent_projects_popover_handle: PopoverMenuHandle::default(),
             project_header_menu_ix: None,
             _subscriptions: Vec::new(),
-            _draft_observation: None,
+            _draft_observations: Vec::new(),
+            reconciling: false,
         }
     }
 
@@ -596,6 +552,55 @@ impl Sidebar {
         cx.emit(workspace::SidebarEvent::SerializeNeeded);
     }
 
+    fn is_group_collapsed(&self, key: &ProjectGroupKey, cx: &App) -> bool {
+        self.multi_workspace
+            .upgrade()
+            .and_then(|mw| {
+                mw.read(cx)
+                    .group_state_by_key(key)
+                    .map(|state| !state.expanded)
+            })
+            .unwrap_or(false)
+    }
+
+    fn group_extra_batches(&self, key: &ProjectGroupKey, cx: &App) -> usize {
+        self.multi_workspace
+            .upgrade()
+            .and_then(|mw| {
+                mw.read(cx)
+                    .group_state_by_key(key)
+                    .and_then(|state| state.visible_thread_count)
+            })
+            .unwrap_or(0)
+    }
+
+    fn set_group_expanded(&self, key: &ProjectGroupKey, expanded: bool, cx: &mut Context<Self>) {
+        if let Some(mw) = self.multi_workspace.upgrade() {
+            mw.update(cx, |mw, cx| {
+                if let Some(state) = mw.group_state_by_key_mut(key) {
+                    state.expanded = expanded;
+                }
+                mw.serialize(cx);
+            });
+        }
+    }
+
+    fn set_group_visible_thread_count(
+        &self,
+        key: &ProjectGroupKey,
+        count: Option<usize>,
+        cx: &mut Context<Self>,
+    ) {
+        if let Some(mw) = self.multi_workspace.upgrade() {
+            mw.update(cx, |mw, cx| {
+                if let Some(state) = mw.group_state_by_key_mut(key) {
+                    state.visible_thread_count = count;
+                }
+                mw.serialize(cx);
+            });
+        }
+    }
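
These accessors replace the sidebar's own `collapsed_groups` / `expanded_groups` maps with per-group state owned by `MultiWorkspace`. A minimal sketch of the shape, assuming a simplified `GroupState` and string keys in place of the real `ProjectGroupKey`:

```rust
use std::collections::HashMap;

// Simplified stand-in for the per-group state held by MultiWorkspace.
#[derive(Default)]
struct GroupState {
    expanded: bool,
    visible_thread_count: Option<usize>,
}

#[derive(Default)]
struct MultiWorkspace {
    groups: HashMap<String, GroupState>,
}

impl MultiWorkspace {
    fn group_state_by_key(&self, key: &str) -> Option<&GroupState> {
        self.groups.get(key)
    }
    fn group_state_by_key_mut(&mut self, key: &str) -> Option<&mut GroupState> {
        self.groups.get_mut(key)
    }
}

// As in the diff, a group with no recorded state defaults to expanded.
fn is_group_collapsed(mw: &MultiWorkspace, key: &str) -> bool {
    mw.group_state_by_key(key)
        .map(|state| !state.expanded)
        .unwrap_or(false)
}

fn main() {
    let mut mw = MultiWorkspace::default();
    assert!(!is_group_collapsed(&mw, "proj")); // unknown group: not collapsed
    mw.groups.insert(
        "proj".into(),
        GroupState { expanded: false, visible_thread_count: None },
    );
    assert!(is_group_collapsed(&mw, "proj"));
    if let Some(state) = mw.group_state_by_key_mut("proj") {
        state.expanded = true;
    }
    assert!(!is_group_collapsed(&mw, "proj"));
    assert_eq!(mw.group_state_by_key("proj").and_then(|s| s.visible_thread_count), None);
}
```

Centralizing the state this way means expand/collapse survives sidebar rebuilds and is serialized with the rest of the multi-workspace state rather than in `SerializedSidebar`.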
+
     fn is_active_workspace(&self, workspace: &Entity<Workspace>, cx: &App) -> bool {
         self.multi_workspace
             .upgrade()
@@ -609,14 +614,23 @@ impl Sidebar {
         cx: &mut Context<Self>,
     ) {
         let project = workspace.read(cx).project().clone();
+
         cx.subscribe_in(
             &project,
             window,
-            |this, _project, event, _window, cx| match event {
+            |this, project, event, window, cx| match event {
                 ProjectEvent::WorktreeAdded(_)
                 | ProjectEvent::WorktreeRemoved(_)
                 | ProjectEvent::WorktreeOrderChanged => {
+                    this.observe_draft_editors(cx);
                     this.update_entries(cx);
+                    this.reconcile_groups(window, cx);
+                }
+                ProjectEvent::WorktreePathsChanged { old_worktree_paths } => {
+                    this.move_thread_paths(project, old_worktree_paths, cx);
+                    this.observe_draft_editors(cx);
+                    this.update_entries(cx);
+                    this.reconcile_groups(window, cx);
                 }
                 _ => {}
             },
@@ -645,10 +659,12 @@ impl Sidebar {
         cx.subscribe_in(
             workspace,
             window,
-            |this, _workspace, event: &workspace::Event, _window, cx| {
+            |this, _workspace, event: &workspace::Event, window, cx| {
                 if let workspace::Event::PanelAdded(view) = event {
                     if let Ok(agent_panel) = view.clone().downcast::<AgentPanel>() {
-                        this.subscribe_to_agent_panel(&agent_panel, _window, cx);
+                        this.subscribe_to_agent_panel(&agent_panel, window, cx);
+                        this.update_entries(cx);
+                        this.reconcile_groups(window, cx);
                     }
                 }
             },
@@ -659,8 +675,102 @@ impl Sidebar {
 
         if let Some(agent_panel) = workspace.read(cx).panel::<AgentPanel>(cx) {
             self.subscribe_to_agent_panel(&agent_panel, window, cx);
-            self.observe_draft_editor(cx);
+            self.observe_draft_editors(cx);
+        }
+    }
+
+    fn move_threads_for_key_change(
+        &mut self,
+        old_key: &ProjectGroupKey,
+        new_key: &ProjectGroupKey,
+        cx: &mut Context<Self>,
+    ) {
+        let old_main_paths = old_key.path_list();
+        let new_main_paths = new_key.path_list();
+
+        let added_paths: Vec<PathBuf> = new_main_paths
+            .paths()
+            .iter()
+            .filter(|p| !old_main_paths.paths().contains(p))
+            .cloned()
+            .collect();
+
+        let removed_paths: Vec<PathBuf> = old_main_paths
+            .paths()
+            .iter()
+            .filter(|p| !new_main_paths.paths().contains(p))
+            .cloned()
+            .collect();
+
+        if added_paths.is_empty() && removed_paths.is_empty() {
+            return;
+        }
+
+        let remote_connection = old_key.host();
+        ThreadMetadataStore::global(cx).update(cx, |store, store_cx| {
+            store.change_worktree_paths_by_main(
+                old_main_paths,
+                remote_connection.as_ref(),
+                |paths| {
+                    for path in &added_paths {
+                        paths.add_path(path, path);
+                    }
+                    for path in &removed_paths {
+                        paths.remove_main_path(path);
+                    }
+                },
+                store_cx,
+            );
+        });
+    }
+
+    fn move_thread_paths(
+        &mut self,
+        project: &Entity<project::Project>,
+        old_paths: &WorktreePaths,
+        cx: &mut Context<Self>,
+    ) {
+        let new_paths = project.read(cx).worktree_paths(cx);
+        let old_folder_paths = old_paths.folder_path_list().clone();
+
+        let added_pairs: Vec<_> = new_paths
+            .ordered_pairs()
+            .filter(|(main, folder)| {
+                !old_paths
+                    .ordered_pairs()
+                    .any(|(old_main, old_folder)| old_main == *main && old_folder == *folder)
+            })
+            .map(|(m, f)| (m.clone(), f.clone()))
+            .collect();
+
+        let new_folder_paths = new_paths.folder_path_list();
+        let removed_folder_paths: Vec<PathBuf> = old_folder_paths
+            .paths()
+            .iter()
+            .filter(|p| !new_folder_paths.paths().contains(p))
+            .cloned()
+            .collect();
+
+        if added_pairs.is_empty() && removed_folder_paths.is_empty() {
+            return;
         }
+
+        let remote_connection = project.read(cx).remote_connection_options(cx);
+        ThreadMetadataStore::global(cx).update(cx, |store, store_cx| {
+            store.change_worktree_paths(
+                &old_folder_paths,
+                remote_connection.as_ref(),
+                |paths| {
+                    for (main_path, folder_path) in &added_pairs {
+                        paths.add_path(main_path, folder_path);
+                    }
+                    for path in &removed_folder_paths {
+                        paths.remove_folder_path(path);
+                    }
+                },
+                store_cx,
+            );
+        });
     }
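
The added/removed computation in `move_thread_paths` is a plain set difference over (main, folder) pairs: pairs present only in the new list are additions, folder paths present only in the old list are removals. A self-contained sketch of that computation, using bare `PathBuf` pair slices as a stand-in for the real `WorktreePaths` type:

```rust
use std::collections::HashSet;
use std::path::PathBuf;

// Returns (added (main, folder) pairs, removed folder paths).
fn diff_paths(
    old: &[(PathBuf, PathBuf)],
    new: &[(PathBuf, PathBuf)],
) -> (Vec<(PathBuf, PathBuf)>, Vec<PathBuf>) {
    // Pairs present in `new` but not in `old` were added.
    let added: Vec<(PathBuf, PathBuf)> = new
        .iter()
        .filter(|pair| !old.contains(*pair))
        .cloned()
        .collect();
    // Folder paths present in `old` but not in `new` were removed.
    let new_folders: HashSet<&PathBuf> = new.iter().map(|(_, folder)| folder).collect();
    let removed: Vec<PathBuf> = old
        .iter()
        .map(|(_, folder)| folder)
        .filter(|folder| !new_folders.contains(*folder))
        .cloned()
        .collect();
    (added, removed)
}

fn main() {
    let p = |s: &str| PathBuf::from(s);
    let old = vec![(p("/main"), p("/main")), (p("/main"), p("/main/sub"))];
    let new = vec![(p("/main"), p("/main")), (p("/main"), p("/main/other"))];
    let (added, removed) = diff_paths(&old, &new);
    assert_eq!(added, vec![(p("/main"), p("/main/other"))]);
    assert_eq!(removed, vec![p("/main/sub")]);
}
```

Bailing out when both lists are empty, as the diff does, avoids a `ThreadMetadataStore` write (and its persistence churn) on worktree events that didn't actually change any paths.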
 
     fn subscribe_to_agent_panel(
@@ -672,23 +782,101 @@ impl Sidebar {
         cx.subscribe_in(
             agent_panel,
             window,
-            |this, _agent_panel, event: &AgentPanelEvent, _window, cx| match event {
+            |this, agent_panel, event: &AgentPanelEvent, window, cx| match event {
                 AgentPanelEvent::ActiveViewChanged => {
-                    this.observe_draft_editor(cx);
+                    let resolved_pending_activation =
+                        this.sync_active_entry_from_panel(agent_panel, cx);
+                    if resolved_pending_activation {
+                        let active_workspace = this.active_workspace(cx);
+                        if let Some(active_workspace) = active_workspace {
+                            this.clear_empty_group_drafts(&active_workspace, cx);
+                        }
+                    }
+                    this.observe_draft_editors(cx);
                     this.update_entries(cx);
+                    this.reconcile_groups(window, cx);
                 }
-                AgentPanelEvent::ThreadFocused | AgentPanelEvent::BackgroundThreadChanged => {
+                AgentPanelEvent::ThreadFocused | AgentPanelEvent::RetainedThreadChanged => {
+                    this.sync_active_entry_from_panel(agent_panel, cx);
                     this.update_entries(cx);
+                    this.reconcile_groups(window, cx);
                 }
-                AgentPanelEvent::MessageSentOrQueued { session_id } => {
-                    this.record_thread_message_sent(session_id);
+                AgentPanelEvent::MessageSentOrQueued { thread_id } => {
+                    this.record_thread_message_sent(thread_id);
                     this.update_entries(cx);
+                    this.reconcile_groups(window, cx);
                 }
             },
         )
         .detach();
     }
 
+    fn sync_active_entry_from_active_workspace(&mut self, cx: &App) {
+        let panel = self
+            .active_workspace(cx)
+            .and_then(|ws| ws.read(cx).panel::<AgentPanel>(cx));
+        if let Some(panel) = panel {
+            self.sync_active_entry_from_panel(&panel, cx);
+        }
+    }
+
+    /// Syncs `active_entry` from the agent panel's current state.
+    /// Called from the panel's event handlers once the panel has settled
+    /// into its new view, so it can be read without racing deferred effects.
+    ///
+    /// Also resolves `pending_thread_activation` when the panel's active
+    /// thread matches it; returns true when that resolution happened.
+    fn sync_active_entry_from_panel(&mut self, agent_panel: &Entity<AgentPanel>, cx: &App) -> bool {
+        let Some(active_workspace) = self.active_workspace(cx) else {
+            return false;
+        };
+
+        // Only sync when the event comes from the active workspace's panel.
+        let is_active_panel = active_workspace
+            .read(cx)
+            .panel::<AgentPanel>(cx)
+            .is_some_and(|p| p == *agent_panel);
+        if !is_active_panel {
+            return false;
+        }
+
+        let panel = agent_panel.read(cx);
+
+        if let Some(pending_thread_id) = self.pending_thread_activation {
+            let panel_thread_id = panel
+                .active_conversation_view()
+                .map(|cv| cv.read(cx).parent_id());
+
+            if panel_thread_id == Some(pending_thread_id) {
+                let session_id = panel
+                    .active_agent_thread(cx)
+                    .map(|thread| thread.read(cx).session_id().clone());
+                self.active_entry = Some(ActiveEntry {
+                    thread_id: pending_thread_id,
+                    session_id,
+                    workspace: active_workspace,
+                });
+                self.pending_thread_activation = None;
+                return true;
+            }
+            // Pending activation not yet resolved — keep current active_entry.
+            return false;
+        }
+
+        if let Some(thread_id) = panel.active_thread_id(cx) {
+            let session_id = panel
+                .active_agent_thread(cx)
+                .map(|thread| thread.read(cx).session_id().clone());
+            self.active_entry = Some(ActiveEntry {
+                thread_id,
+                session_id,
+                workspace: active_workspace,
+            });
+        }
+
+        false
+    }
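The guard logic above can be modeled in isolation. This is a hypothetical, dependency-free sketch (the `Sidebar` and `ThreadId` names below are stand-ins, not the real GPUI entities): a stale panel event leaves both the guard and the active entry untouched, and only the event for the pending thread resolves the activation.

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
struct ThreadId(u32);

struct Sidebar {
    pending: Option<ThreadId>,
    active: Option<ThreadId>,
}

impl Sidebar {
    /// Returns true only when the panel has settled on the pending thread.
    fn sync(&mut self, panel_thread: Option<ThreadId>) -> bool {
        if let Some(pending) = self.pending {
            if panel_thread == Some(pending) {
                self.active = Some(pending);
                self.pending = None;
                return true;
            }
            // Activation still in flight: keep the current active entry.
            return false;
        }
        if let Some(id) = panel_thread {
            self.active = Some(id);
        }
        false
    }
}

fn main() {
    let mut s = Sidebar { pending: Some(ThreadId(7)), active: None };
    assert!(!s.sync(Some(ThreadId(3)))); // stale event: guard holds
    assert_eq!(s.active, None);
    assert!(s.sync(Some(ThreadId(7)))); // matching event resolves it
    assert_eq!(s.active, Some(ThreadId(7)));
    assert!(s.pending.is_none());
}
```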
+
     fn observe_docks(&mut self, workspace: &Entity<Workspace>, cx: &mut Context<Self>) {
         let docks: Vec<_> = workspace
             .read(cx)
@@ -713,24 +901,37 @@ impl Sidebar {
         }
     }
 
-    fn observe_draft_editor(&mut self, cx: &mut Context<Self>) {
-        self._draft_observation = self
-            .multi_workspace
-            .upgrade()
-            .and_then(|mw| {
-                let ws = mw.read(cx).workspace();
-                ws.read(cx).panel::<AgentPanel>(cx)
-            })
-            .and_then(|panel| {
-                let cv = panel.read(cx).active_conversation_view()?;
-                let tv = cv.read(cx).active_thread()?;
-                Some(tv.read(cx).message_editor.clone())
-            })
-            .map(|editor| {
-                cx.observe(&editor, |_this, _editor, cx| {
-                    cx.notify();
-                })
-            });
+    fn observe_draft_editors(&mut self, cx: &mut Context<Self>) {
+        let Some(multi_workspace) = self.multi_workspace.upgrade() else {
+            self._draft_observations.clear();
+            return;
+        };
+
+        // Collect conversation views up front to avoid holding a
+        // borrow on `cx` across `cx.observe` calls.
+        let conversation_views: Vec<_> = multi_workspace
+            .read(cx)
+            .workspaces()
+            .filter_map(|ws| ws.read(cx).panel::<AgentPanel>(cx))
+            .flat_map(|panel| panel.read(cx).conversation_views())
+            .collect();
+
+        let mut subscriptions = Vec::with_capacity(conversation_views.len());
+        for cv in conversation_views {
+            if let Some(thread_view) = cv.read(cx).active_thread() {
+                let editor = thread_view.read(cx).message_editor.clone();
+                subscriptions.push(cx.observe(&editor, |this, _editor, cx| {
+                    this.update_entries(cx);
+                }));
+            } else {
+                subscriptions.push(cx.observe(&cv, |this, _cv, cx| {
+                    this.observe_draft_editors(cx);
+                    this.update_entries(cx);
+                }));
+            }
+        }
+
+        self._draft_observations = subscriptions;
     }
 
     fn clean_mention_links(input: &str) -> String {
@@ -858,41 +1059,6 @@ impl Sidebar {
 
         let query = self.filter_editor.read(cx).text(cx);
 
-        // Derive active_entry from the active workspace's agent panel.
-        // A tracked draft (in `draft_threads`) is checked first via
-        // `active_draft_id`. Then we check for a thread with a session_id.
-        // If a thread is mid-load with no session_id yet, we fall back to
-        // `pending_remote_thread_activation` or keep the previous value.
-        if let Some(active_ws) = &active_workspace {
-            if let Some(panel) = active_ws.read(cx).panel::<AgentPanel>(cx) {
-                let panel = panel.read(cx);
-                if let Some(draft_id) = panel.active_draft_id() {
-                    self.active_entry = Some(ActiveEntry::Draft {
-                        id: draft_id,
-                        workspace: active_ws.clone(),
-                    });
-                } else if let Some(session_id) = panel
-                    .active_conversation_view()
-                    .and_then(|cv| cv.read(cx).parent_id(cx))
-                {
-                    if self.pending_remote_thread_activation.as_ref() == Some(&session_id) {
-                        self.pending_remote_thread_activation = None;
-                    }
-                    self.active_entry = Some(ActiveEntry::Thread {
-                        session_id,
-                        workspace: active_ws.clone(),
-                    });
-                } else if let Some(session_id) = self.pending_remote_thread_activation.clone() {
-                    self.active_entry = Some(ActiveEntry::Thread {
-                        session_id,
-                        workspace: active_ws.clone(),
-                    });
-                }
-                // else: conversation is mid-load or panel is
-                // uninitialized — keep previous active_entry.
-            }
-        }
-
         let previous = mem::take(&mut self.contents);
 
         let old_statuses: HashMap<acp::SessionId, AgentThreadStatus> = previous
@@ -900,7 +1066,8 @@ impl Sidebar {
             .iter()
             .filter_map(|entry| match entry {
                 ListEntry::Thread(thread) if thread.is_live => {
-                    Some((thread.metadata.session_id.clone(), thread.status))
+                    let sid = thread.metadata.session_id.clone()?;
+                    Some((sid, thread.status))
                 }
                 _ => None,
             })
@@ -909,7 +1076,9 @@ impl Sidebar {
         let mut entries = Vec::new();
         let mut notified_threads = previous.notified_threads;
         let mut current_session_ids: HashSet<acp::SessionId> = HashSet::new();
+        let mut current_thread_ids: HashSet<agent_ui::ThreadId> = HashSet::new();
         let mut project_header_indices: Vec<usize> = Vec::new();
+        let mut seen_thread_ids: HashSet<agent_ui::ThreadId> = HashSet::new();
 
         let has_open_projects = workspaces
             .iter()
@@ -927,11 +1096,11 @@ impl Sidebar {
             (icon, icon_from_external_svg)
         };
 
-        let groups: Vec<_> = mw.project_groups(cx).collect();
+        let groups = mw.project_groups(cx);
 
         let mut all_paths: Vec<PathBuf> = groups
             .iter()
-            .flat_map(|(key, _)| key.path_list().paths().iter().cloned())
+            .flat_map(|group| group.key.path_list().paths().iter().cloned())
             .collect();
         all_paths.sort();
         all_paths.dedup();
@@ -942,14 +1111,16 @@ impl Sidebar {
         let path_detail_map: HashMap<PathBuf, usize> =
             all_paths.into_iter().zip(path_details).collect();
 
-        for (group_key, group_workspaces) in &groups {
+        for group in &groups {
+            let group_key = &group.key;
+            let group_workspaces = &group.workspaces;
             if group_key.path_list().paths().is_empty() {
                 continue;
             }
 
             let label = group_key.display_name(&path_detail_map);
 
-            let is_collapsed = self.collapsed_groups.contains(&group_key);
+            let is_collapsed = self.is_group_collapsed(group_key, cx);
             let should_load_threads = !is_collapsed || !query.is_empty();
 
             let is_active = active_workspace
@@ -967,7 +1138,6 @@ impl Sidebar {
             let mut waiting_thread_count: usize = 0;
 
             if should_load_threads {
-                let mut seen_session_ids: HashSet<acp::SessionId> = HashSet::new();
                 let thread_store = ThreadMetadataStore::global(cx);
 
                 // Build a lookup from workspace root paths to their workspace
@@ -997,6 +1167,7 @@ impl Sidebar {
                     |row: ThreadMetadata, workspace: ThreadEntryWorkspace| -> ThreadEntry {
                         let (icon, icon_from_external_svg) = resolve_agent_icon(&row.agent_id);
                         let worktrees = worktree_info_from_thread_paths(&row.worktree_paths);
+                        let is_draft = row.is_draft();
                         ThreadEntry {
                             metadata: row,
                             icon,
@@ -1006,6 +1177,7 @@ impl Sidebar {
                             is_live: false,
                             is_background: false,
                             is_title_generating: false,
+                            is_draft,
                             highlight_positions: Vec::new(),
                             worktrees,
                             diff_stats: DiffStats::default(),
@@ -1021,7 +1193,7 @@ impl Sidebar {
                     .entries_for_main_worktree_path(group_key.path_list())
                     .cloned()
                 {
-                    if !seen_session_ids.insert(row.session_id.clone()) {
+                    if !seen_thread_ids.insert(row.thread_id) {
                         continue;
                     }
                     let workspace = resolve_workspace(&row);
@@ -1037,7 +1209,7 @@ impl Sidebar {
                     .entries_for_path(group_key.path_list())
                     .cloned()
                 {
-                    if !seen_session_ids.insert(row.session_id.clone()) {
+                    if !seen_thread_ids.insert(row.thread_id) {
                         continue;
                     }
                     let workspace = resolve_workspace(&row);
@@ -1063,7 +1235,7 @@ impl Sidebar {
                         .entries_for_path(&worktree_path_list)
                         .cloned()
                     {
-                        if !seen_session_ids.insert(row.session_id.clone()) {
+                        if !seen_thread_ids.insert(row.thread_id) {
                             continue;
                         }
                         threads.push(make_thread_entry(
@@ -1093,14 +1265,15 @@ impl Sidebar {
                 // Merge live info into threads and update notification state
                 // in a single pass.
                 for thread in &mut threads {
-                    if let Some(info) = live_info_by_session.get(&thread.metadata.session_id) {
-                        thread.apply_active_info(info);
+                    if let Some(session_id) = &thread.metadata.session_id {
+                        if let Some(info) = live_info_by_session.get(session_id) {
+                            thread.apply_active_info(info);
+                        }
                     }
 
                     let session_id = &thread.metadata.session_id;
-
                     let is_active_thread = self.active_entry.as_ref().is_some_and(|entry| {
-                        entry.is_active_thread(session_id)
+                        entry.is_active_thread(&thread.metadata.thread_id)
                             && active_workspace
                                 .as_ref()
                                 .is_some_and(|active| active == entry.workspace())
@@ -1108,26 +1281,37 @@ impl Sidebar {
 
                     if thread.status == AgentThreadStatus::Completed
                         && !is_active_thread
-                        && old_statuses.get(session_id) == Some(&AgentThreadStatus::Running)
+                        && session_id.as_ref().and_then(|sid| old_statuses.get(sid))
+                            == Some(&AgentThreadStatus::Running)
                     {
-                        notified_threads.insert(session_id.clone());
+                        notified_threads.insert(thread.metadata.thread_id);
                     }
 
                     if is_active_thread && !thread.is_background {
-                        notified_threads.remove(session_id);
+                        notified_threads.remove(&thread.metadata.thread_id);
                     }
                 }
 
                 threads.sort_by(|a, b| {
-                    let a_time = self
-                        .thread_last_message_sent_or_queued
-                        .get(&a.metadata.session_id)
+                    let a_time = a
+                        .metadata
+                        .session_id
+                        .as_ref()
+                        // A send time is only recorded once the thread has a
+                        // session; drafts fall through to created_at below.
+                        .and_then(|_| {
+                            self.thread_last_message_sent_or_queued
+                                .get(&a.metadata.thread_id)
+                        })
                         .copied()
                         .or(a.metadata.created_at)
                         .or(Some(a.metadata.updated_at));
-                    let b_time = self
-                        .thread_last_message_sent_or_queued
-                        .get(&b.metadata.session_id)
+                    let b_time = b
+                        .metadata
+                        .session_id
+                        .as_ref()
+                        .and_then(|_| {
+                            self.thread_last_message_sent_or_queued
+                                .get(&b.metadata.thread_id)
+                        })
                         .copied()
                         .or(b.metadata.created_at)
                         .or(Some(b.metadata.updated_at));
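The fallback chain above (recorded send time, then `created_at`, then `updated_at`) reduces to a plain sort key. A minimal sketch, assuming newest-first ordering; the integer timestamps and names are illustrative stand-ins, not the real `DateTime` types:

```rust
// Pick the best-available recency signal for a thread.
fn recency(last_sent: Option<u64>, created_at: Option<u64>, updated_at: u64) -> u64 {
    last_sent.or(created_at).unwrap_or(updated_at)
}

fn main() {
    let mut threads = vec![
        ("a", recency(None, Some(10), 50)),   // never sent: use created_at
        ("b", recency(Some(30), Some(5), 5)), // recorded send time wins
        ("c", recency(None, None, 20)),       // no metadata: use updated_at
    ];
    threads.sort_by(|x, y| y.1.cmp(&x.1)); // newest first
    let order: Vec<_> = threads.iter().map(|t| t.0).collect();
    assert_eq!(order, vec!["b", "c", "a"]);
}
```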
@@ -1165,7 +1349,11 @@ impl Sidebar {
 
                 let mut matched_threads: Vec<ThreadEntry> = Vec::new();
                 for mut thread in threads {
-                    let title: &str = &thread.metadata.title;
+                    let title: &str = thread
+                        .metadata
+                        .title
+                        .as_ref()
+                        .map_or(DEFAULT_THREAD_TITLE, |t| t.as_ref());
                     if let Some(positions) = fuzzy_match_positions(&query, title) {
                         thread.highlight_positions = positions;
                     }
@@ -1200,7 +1388,10 @@ impl Sidebar {
                 });
 
                 for thread in matched_threads {
-                    current_session_ids.insert(thread.metadata.session_id.clone());
+                    if let Some(sid) = thread.metadata.session_id.clone() {
+                        current_session_ids.insert(sid);
+                    }
+                    current_thread_ids.insert(thread.metadata.thread_id);
                     entries.push(thread.into());
                 }
             } else {
@@ -1219,54 +1410,34 @@ impl Sidebar {
                     continue;
                 }
 
-                // Emit DraftThread entries by reading draft IDs from
-                // each workspace's AgentPanel in this group.
                 {
-                    let mut group_draft_ids: Vec<(DraftId, Entity<Workspace>)> = Vec::new();
-                    for ws in group_workspaces {
-                        if let Some(panel) = ws.read(cx).panel::<AgentPanel>(cx) {
-                            let ids = panel.read(cx).draft_ids();
-
-                            for draft_id in ids {
-                                group_draft_ids.push((draft_id, ws.clone()));
+                    // Override titles with editor text for drafts and
+                    // threads that still have the default placeholder
+                    // title (panel considers them drafts even if they
+                    // have a session_id).
+                    for thread in &mut threads {
+                        let needs_title_override =
+                            thread.is_draft || thread.metadata.title.is_none();
+                        if needs_title_override {
+                            if let ThreadEntryWorkspace::Open(workspace) = &thread.workspace {
+                                if let Some(text) =
+                                    self.read_draft_text(thread.metadata.thread_id, workspace, cx)
+                                {
+                                    thread.metadata.title = Some(text);
+                                }
                             }
                         }
                     }
-
-                    // For empty groups with no drafts, emit a
-                    // placeholder DraftThread.
-                    if !has_threads && group_draft_ids.is_empty() {
-                        entries.push(ListEntry::DraftThread {
-                            draft_id: None,
-                            key: group_key.clone(),
-                            workspace: group_workspaces.first().cloned(),
-                            worktrees: Vec::new(),
-                        });
-                    } else {
-                        for (draft_id, ws) in &group_draft_ids {
-                            let ws_worktree_paths = ThreadWorktreePaths::from_project(
-                                ws.read(cx).project().read(cx),
-                                cx,
-                            );
-                            let worktrees = worktree_info_from_thread_paths(&ws_worktree_paths);
-                            entries.push(ListEntry::DraftThread {
-                                draft_id: Some(*draft_id),
-                                key: group_key.clone(),
-                                workspace: Some(ws.clone()),
-                                worktrees,
-                            });
-                        }
-                    }
                 }
 
                 let total = threads.len();
 
-                let extra_batches = self.expanded_groups.get(&group_key).copied().unwrap_or(0);
+                let extra_batches = self.group_extra_batches(&group_key, cx);
                 let threads_to_show =
                     DEFAULT_THREADS_SHOWN + (extra_batches * DEFAULT_THREADS_SHOWN);
                 let count = threads_to_show.min(total);
 
-                let mut promoted_threads: HashSet<acp::SessionId> = HashSet::new();
+                let mut promoted_threads: HashSet<agent_ui::ThreadId> = HashSet::new();
 
                 // Build visible entries in a single pass. Threads within
                 // the cutoff are always shown. Threads beyond it are shown
@@ -1275,23 +1446,27 @@ impl Sidebar {
                 for (index, thread) in threads.into_iter().enumerate() {
                     let is_hidden = index >= count;
 
-                    let session_id = &thread.metadata.session_id;
                     if is_hidden {
+                        let is_notified = notified_threads.contains(&thread.metadata.thread_id);
                         let is_promoted = thread.status == AgentThreadStatus::Running
                             || thread.status == AgentThreadStatus::WaitingForConfirmation
-                            || notified_threads.contains(session_id)
+                            || is_notified
                             || self.active_entry.as_ref().is_some_and(|active| {
                                 active.matches_entry(&ListEntry::Thread(thread.clone()))
                             });
                         if is_promoted {
-                            promoted_threads.insert(session_id.clone());
+                            promoted_threads.insert(thread.metadata.thread_id);
                         }
-                        if !promoted_threads.contains(session_id) {
+                        let is_in_promoted = promoted_threads.contains(&thread.metadata.thread_id);
+                        if !is_in_promoted {
                             continue;
                         }
                     }
 
-                    current_session_ids.insert(session_id.clone());
+                    if let Some(sid) = &thread.metadata.session_id {
+                        current_session_ids.insert(sid.clone());
+                    }
+                    current_thread_ids.insert(thread.metadata.thread_id);
                     entries.push(thread.into());
                 }
 

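A thread can be discovered through several lookup paths (main worktree path, folder paths, per-workspace worktrees), which is why `seen_thread_ids` is now created once before the group loop rather than per group. A minimal sketch of that hoisting, with plain integers standing in for `ThreadId`: with a per-group set, id 1 below would have been emitted twice.

```rust
use std::collections::HashSet;

// Deduplicate ids across all groups with one set that outlives the loop.
fn collect(groups: &[Vec<u32>]) -> Vec<u32> {
    let mut seen = HashSet::new();
    let mut out = Vec::new();
    for group in groups {
        for &id in group {
            // `insert` returns false when the id was already seen.
            if seen.insert(id) {
                out.push(id);
            }
        }
    }
    out
}

fn main() {
    // Thread 1 is reachable from two groups (e.g. main path + folder path).
    assert_eq!(collect(&[vec![1, 2], vec![1, 3]]), vec![1, 2, 3]);
}
```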
crates/sidebar/src/sidebar_tests.rs

@@ -2,8 +2,9 @@ use super::*;
 use acp_thread::{AcpThread, PermissionOptions, StubAgentConnection};
 use agent::ThreadStore;
 use agent_ui::{
+    ThreadId,
     test_support::{active_session_id, open_thread_with_connection, send_message},
-    thread_metadata_store::{ThreadMetadata, ThreadWorktreePaths},
+    thread_metadata_store::{ThreadMetadata, WorktreePaths},
 };
 use chrono::DateTime;
 use fs::{FakeFs, Fs};
@@ -32,20 +33,49 @@ fn init_test(cx: &mut TestAppContext) {
 
 #[track_caller]
 fn assert_active_thread(sidebar: &Sidebar, session_id: &acp::SessionId, msg: &str) {
+    let active = sidebar.active_entry.as_ref();
+    let matches = active.is_some_and(|entry| {
+        // Match by session_id directly on active_entry.
+        entry.session_id.as_ref() == Some(session_id)
+            // Or match by finding the thread in sidebar entries.
+            || sidebar.contents.entries.iter().any(|list_entry| {
+                matches!(list_entry, ListEntry::Thread(t)
+                    if t.metadata.session_id.as_ref() == Some(session_id)
+                        && entry.matches_entry(list_entry))
+            })
+    });
     assert!(
-        sidebar
-            .active_entry
-            .as_ref()
-            .is_some_and(|e| e.is_active_thread(session_id)),
-        "{msg}: expected active_entry to be Thread({session_id:?}), got {:?}",
-        sidebar.active_entry,
+        matches,
+        "{msg}: expected active_entry for session {session_id:?}, got {:?}",
+        active,
     );
 }
 
+fn is_active_session(sidebar: &Sidebar, session_id: &acp::SessionId) -> bool {
+    let thread_id = sidebar
+        .contents
+        .entries
+        .iter()
+        .find_map(|entry| match entry {
+            ListEntry::Thread(t) if t.metadata.session_id.as_ref() == Some(session_id) => {
+                Some(t.metadata.thread_id)
+            }
+            _ => None,
+        });
+    match thread_id {
+        Some(tid) => {
+            matches!(&sidebar.active_entry, Some(ActiveEntry { thread_id, .. }) if *thread_id == tid)
+        }
+        // Thread not in sidebar entries — can't confirm it's active.
+        None => false,
+    }
+}
+
 #[track_caller]
 fn assert_active_draft(sidebar: &Sidebar, workspace: &Entity<Workspace>, msg: &str) {
     assert!(
-        matches!(&sidebar.active_entry, Some(ActiveEntry::Draft { workspace: ws, .. }) if ws == workspace),
+        matches!(&sidebar.active_entry, Some(ActiveEntry { workspace: ws, .. }) if ws == workspace),
         "{msg}: expected active_entry to be Draft for workspace {:?}, got {:?}",
         workspace.entity_id(),
         sidebar.active_entry,
@@ -57,7 +87,7 @@ fn has_thread_entry(sidebar: &Sidebar, session_id: &acp::SessionId) -> bool {
         .contents
         .entries
         .iter()
-        .any(|entry| matches!(entry, ListEntry::Thread(t) if &t.metadata.session_id == session_id))
+        .any(|entry| matches!(entry, ListEntry::Thread(t) if t.metadata.session_id.as_ref() == Some(session_id)))
 }
 
 #[track_caller]
@@ -98,16 +128,22 @@ fn assert_remote_project_integration_sidebar_state(
                     "expected the only sidebar project header to be `project`"
                 );
             }
-            ListEntry::Thread(thread) if &thread.metadata.session_id == main_thread_id => {
+            ListEntry::Thread(thread)
+                if thread.metadata.session_id.as_ref() == Some(main_thread_id) =>
+            {
                 saw_main_thread = true;
             }
-            ListEntry::Thread(thread) if &thread.metadata.session_id == remote_thread_id => {
+            ListEntry::Thread(thread)
+                if thread.metadata.session_id.as_ref() == Some(remote_thread_id) =>
+            {
                 saw_remote_thread = true;
             }
+            ListEntry::Thread(thread) if thread.is_draft => {}
             ListEntry::Thread(thread) => {
-                let title = thread.metadata.title.as_ref();
+                let title = thread.metadata.display_title();
                 panic!(
-                    "unexpected sidebar thread while simulating remote project integration flicker: title=`{title}`"
+                    "unexpected sidebar thread while simulating remote project integration flicker: title=`{title}`"
                 );
             }
             ListEntry::ViewMore { .. } => {
@@ -115,7 +151,6 @@ fn assert_remote_project_integration_sidebar_state(
                     "unexpected `View More` entry while simulating remote project integration flicker"
                 );
             }
-            ListEntry::DraftThread { .. } => {}
         }
     }
 
@@ -175,7 +210,7 @@ async fn save_n_test_threads(
     for i in 0..count {
         save_thread_metadata(
             acp::SessionId::new(Arc::from(format!("thread-{}", i))),
-            format!("Thread {}", i + 1).into(),
+            Some(format!("Thread {}", i + 1).into()),
             chrono::TimeZone::with_ymd_and_hms(&Utc, 2024, 1, 1, 0, 0, i).unwrap(),
             None,
             project,
@@ -192,7 +227,7 @@ async fn save_test_thread_metadata(
 ) {
     save_thread_metadata(
         session_id.clone(),
-        "Test".into(),
+        Some("Test".into()),
         chrono::TimeZone::with_ymd_and_hms(&Utc, 2024, 1, 1, 0, 0, 0).unwrap(),
         None,
         project,
@@ -208,7 +243,7 @@ async fn save_named_thread_metadata(
 ) {
     save_thread_metadata(
         acp::SessionId::new(Arc::from(session_id)),
-        SharedString::from(title.to_string()),
+        Some(SharedString::from(title.to_string())),
         chrono::TimeZone::with_ymd_and_hms(&Utc, 2024, 1, 1, 0, 0, 0).unwrap(),
         None,
         project,
@@ -219,16 +254,23 @@ async fn save_named_thread_metadata(
 
 fn save_thread_metadata(
     session_id: acp::SessionId,
-    title: SharedString,
+    title: Option<SharedString>,
     updated_at: DateTime<Utc>,
     created_at: Option<DateTime<Utc>>,
     project: &Entity<project::Project>,
     cx: &mut TestAppContext,
 ) {
     cx.update(|cx| {
-        let worktree_paths = ThreadWorktreePaths::from_project(project.read(cx), cx);
+        let worktree_paths = project.read(cx).worktree_paths(cx);
+        let thread_id = ThreadMetadataStore::global(cx)
+            .read(cx)
+            .entries()
+            .find(|e| e.session_id.as_ref() == Some(&session_id))
+            .map(|e| e.thread_id)
+            .unwrap_or_else(ThreadId::new);
         let metadata = ThreadMetadata {
-            session_id,
+            thread_id,
+            session_id: Some(session_id),
             agent_id: agent::ZED_AGENT_ID.clone(),
             title,
             updated_at,
@@ -247,19 +289,27 @@ fn save_thread_metadata_with_main_paths(
     title: &str,
     folder_paths: PathList,
     main_worktree_paths: PathList,
+    updated_at: DateTime<Utc>,
     cx: &mut TestAppContext,
 ) {
     let session_id = acp::SessionId::new(Arc::from(session_id));
     let title = SharedString::from(title.to_string());
-    let updated_at = chrono::TimeZone::with_ymd_and_hms(&Utc, 2024, 1, 1, 0, 0, 0).unwrap();
+    let thread_id = cx.update(|cx| {
+        ThreadMetadataStore::global(cx)
+            .read(cx)
+            .entries()
+            .find(|e| e.session_id.as_ref() == Some(&session_id))
+            .map(|e| e.thread_id)
+            .unwrap_or_else(ThreadId::new)
+    });
     let metadata = ThreadMetadata {
-        session_id,
+        thread_id,
+        session_id: Some(session_id),
         agent_id: agent::ZED_AGENT_ID.clone(),
-        title,
+        title: Some(title),
         updated_at,
         created_at: None,
-        worktree_paths: ThreadWorktreePaths::from_path_lists(main_worktree_paths, folder_paths)
-            .unwrap(),
+        worktree_paths: WorktreePaths::from_path_lists(main_worktree_paths, folder_paths).unwrap(),
         archived: false,
         remote_connection: None,
     };
@@ -328,7 +378,7 @@ fn visible_entries_as_strings(
     sidebar: &Entity<Sidebar>,
     cx: &mut gpui::VisualTestContext,
 ) -> Vec<String> {
-    sidebar.read_with(cx, |sidebar, _cx| {
+    sidebar.read_with(cx, |sidebar, cx| {
         sidebar
             .contents
             .entries
@@ -347,7 +397,7 @@ fn visible_entries_as_strings(
                         highlight_positions: _,
                         ..
                     } => {
-                        let icon = if sidebar.collapsed_groups.contains(key) {
+                        let icon = if sidebar.is_group_collapsed(key, cx) {
                             ">"
                         } else {
                             "v"
@@ -355,24 +405,34 @@ fn visible_entries_as_strings(
                         format!("{} [{}]{}", icon, label, selected)
                     }
                     ListEntry::Thread(thread) => {
-                        let title = thread.metadata.title.as_ref();
-                        let live = if thread.is_live { " *" } else { "" };
-                        let status_str = match thread.status {
-                            AgentThreadStatus::Running => " (running)",
-                            AgentThreadStatus::Error => " (error)",
-                            AgentThreadStatus::WaitingForConfirmation => " (waiting)",
-                            _ => "",
-                        };
-                        let notified = if sidebar
-                            .contents
-                            .is_thread_notified(&thread.metadata.session_id)
-                        {
-                            " (!)"
-                        } else {
-                            ""
-                        };
+                        let title = thread.metadata.display_title();
                         let worktree = format_linked_worktree_chips(&thread.worktrees);
-                        format!("  {title}{worktree}{live}{status_str}{notified}{selected}")
+
+                        if thread.is_draft {
+                            let is_active = sidebar
+                                .active_entry
+                                .as_ref()
+                                .is_some_and(|e| e.matches_entry(entry));
+                            let active_marker = if is_active { " *" } else { "" };
+                            format!("  [~ Draft{worktree}]{active_marker}{selected}")
+                        } else {
+                            let live = if thread.is_live { " *" } else { "" };
+                            let status_str = match thread.status {
+                                AgentThreadStatus::Running => " (running)",
+                                AgentThreadStatus::Error => " (error)",
+                                AgentThreadStatus::WaitingForConfirmation => " (waiting)",
+                                _ => "",
+                            };
+                            let notified = if sidebar
+                                .contents
+                                .is_thread_notified(&thread.metadata.thread_id)
+                            {
+                                " (!)"
+                            } else {
+                                ""
+                            };
+                            format!("  {title}{worktree}{live}{status_str}{notified}{selected}")
+                        }
                     }
                     ListEntry::ViewMore {
                         is_fully_expanded, ..
@@ -383,15 +443,6 @@ fn visible_entries_as_strings(
                             format!("  + View More{}", selected)
                         }
                     }
-                    ListEntry::DraftThread { worktrees, .. } => {
-                        let worktree = format_linked_worktree_chips(worktrees);
-                        let is_active = sidebar
-                            .active_entry
-                            .as_ref()
-                            .is_some_and(|e| e.matches_entry(entry));
-                        let active_marker = if is_active { " *" } else { "" };
-                        format!("  [~ Draft{}]{}{}", worktree, active_marker, selected)
-                    }
                 }
             })
             .collect()
@@ -413,7 +464,6 @@ async fn test_serialization_round_trip(cx: &mut TestAppContext) {
     sidebar.update_in(cx, |sidebar, window, cx| {
         sidebar.set_width(Some(px(420.0)), cx);
         sidebar.toggle_collapse(&project_group_key, window, cx);
-        sidebar.expanded_groups.insert(project_group_key.clone(), 2);
     });
     cx.run_until_parked();
 
@@ -432,27 +482,11 @@ async fn test_serialization_round_trip(cx: &mut TestAppContext) {
     cx.run_until_parked();
 
     // Assert all serialized fields match.
-    let (width1, collapsed1, expanded1) = sidebar.read_with(cx, |s, _| {
-        (
-            s.width,
-            s.collapsed_groups.clone(),
-            s.expanded_groups.clone(),
-        )
-    });
-    let (width2, collapsed2, expanded2) = sidebar2.read_with(cx, |s, _| {
-        (
-            s.width,
-            s.collapsed_groups.clone(),
-            s.expanded_groups.clone(),
-        )
-    });
+    let width1 = sidebar.read_with(cx, |s, _| s.width);
+    let width2 = sidebar2.read_with(cx, |s, _| s.width);
 
     assert_eq!(width1, width2);
-    assert_eq!(collapsed1, collapsed2);
-    assert_eq!(expanded1, expanded2);
     assert_eq!(width1, px(420.0));
-    assert!(collapsed1.contains(&project_group_key));
-    assert_eq!(expanded1.get(&project_group_key), Some(&2));
 }
 
 #[gpui::test]
@@ -468,8 +502,6 @@ async fn test_restore_serialized_archive_view_does_not_panic(cx: &mut TestAppCon
 
     let serialized = serde_json::to_string(&SerializedSidebar {
         width: Some(400.0),
-        collapsed_groups: Vec::new(),
-        expanded_groups: Vec::new(),
         active_view: SerializedSidebarView::Archive,
     })
     .expect("serialization should succeed");
@@ -551,13 +583,13 @@ async fn test_entities_released_on_window_close(cx: &mut TestAppContext) {
 
 #[gpui::test]
 async fn test_single_workspace_no_threads(cx: &mut TestAppContext) {
-    let project = init_test_project("/my-project", cx).await;
+    let project = init_test_project_with_agent_panel("/my-project", cx).await;
     let (multi_workspace, cx) =
         cx.add_window_view(|window, cx| MultiWorkspace::test_new(project, window, cx));
-    let sidebar = setup_sidebar(&multi_workspace, cx);
+    let (sidebar, _panel) = setup_sidebar_with_agent_panel(&multi_workspace, cx);

     assert_eq!(
         visible_entries_as_strings(&sidebar, cx),
         vec!["v [my-project]", "  [~ Draft]"]
     );
 }
@@ -571,7 +603,7 @@ async fn test_single_workspace_with_saved_threads(cx: &mut TestAppContext) {
 
     save_thread_metadata(
         acp::SessionId::new(Arc::from("thread-1")),
-        "Fix crash in project panel".into(),
+        Some("Fix crash in project panel".into()),
         chrono::TimeZone::with_ymd_and_hms(&Utc, 2024, 1, 3, 0, 0, 0).unwrap(),
         None,
         &project,
@@ -580,7 +612,7 @@ async fn test_single_workspace_with_saved_threads(cx: &mut TestAppContext) {
 
     save_thread_metadata(
         acp::SessionId::new(Arc::from("thread-2")),
-        "Add inline diff view".into(),
+        Some("Add inline diff view".into()),
         chrono::TimeZone::with_ymd_and_hms(&Utc, 2024, 1, 2, 0, 0, 0).unwrap(),
         None,
         &project,
@@ -612,7 +644,7 @@ async fn test_workspace_lifecycle(cx: &mut TestAppContext) {
     // Single workspace with a thread
     save_thread_metadata(
         acp::SessionId::new(Arc::from("thread-a1")),
-        "Thread A1".into(),
+        Some("Thread A1".into()),
         chrono::TimeZone::with_ymd_and_hms(&Utc, 2024, 1, 1, 0, 0, 0).unwrap(),
         None,
         &project,
@@ -710,14 +742,7 @@ async fn test_view_more_batched_expansion(cx: &mut TestAppContext) {
 
     // Expand again by one batch
     sidebar.update_in(cx, |s, _window, cx| {
-        let current = s
-            .expanded_groups
-            .get(&project_group_key)
-            .copied()
-            .unwrap_or(0);
-        s.expanded_groups
-            .insert(project_group_key.clone(), current + 1);
-        s.update_entries(cx);
+        s.expand_thread_group(&project_group_key, cx);
     });
     cx.run_until_parked();
 
@@ -728,14 +753,7 @@ async fn test_view_more_batched_expansion(cx: &mut TestAppContext) {
 
     // Expand one more time - should show all 17 threads with Collapse button
     sidebar.update_in(cx, |s, _window, cx| {
-        let current = s
-            .expanded_groups
-            .get(&project_group_key)
-            .copied()
-            .unwrap_or(0);
-        s.expanded_groups
-            .insert(project_group_key.clone(), current + 1);
-        s.update_entries(cx);
+        s.expand_thread_group(&project_group_key, cx);
     });
     cx.run_until_parked();
 
@@ -747,8 +765,7 @@ async fn test_view_more_batched_expansion(cx: &mut TestAppContext) {
 
     // Click collapse - should go back to showing 5 threads
     sidebar.update_in(cx, |s, _window, cx| {
-        s.expanded_groups.remove(&project_group_key);
-        s.update_entries(cx);
+        s.reset_thread_group_expansion(&project_group_key, cx);
     });
     cx.run_until_parked();
 
@@ -811,8 +828,152 @@ async fn test_collapse_and_expand_group(cx: &mut TestAppContext) {
     );
 }
 
+#[gpui::test]
+async fn test_collapse_state_survives_worktree_key_change(cx: &mut TestAppContext) {
+    // When a worktree is added to a project, the project group key changes.
+    // The sidebar's collapsed/expanded state is keyed by ProjectGroupKey, so
+    // UI state must survive the key change.
+    let (_fs, project) = init_multi_project_test(&["/project-a", "/project-b"], cx).await;
+    let (multi_workspace, cx) =
+        cx.add_window_view(|window, cx| MultiWorkspace::test_new(project.clone(), window, cx));
+    let sidebar = setup_sidebar(&multi_workspace, cx);
+
+    save_n_test_threads(2, &project, cx).await;
+    sidebar.update_in(cx, |sidebar, _window, cx| sidebar.update_entries(cx));
+    cx.run_until_parked();
+
+    assert_eq!(
+        visible_entries_as_strings(&sidebar, cx),
+        vec!["v [project-a]", "  Thread 2", "  Thread 1",]
+    );
+
+    // Collapse the group.
+    let old_key = project.read_with(cx, |project, cx| project.project_group_key(cx));
+    sidebar.update_in(cx, |sidebar, window, cx| {
+        sidebar.toggle_collapse(&old_key, window, cx);
+    });
+    cx.run_until_parked();
+
+    assert_eq!(
+        visible_entries_as_strings(&sidebar, cx),
+        vec!["> [project-a]"]
+    );
+
+    // Add a second worktree — the key changes from [/project-a] to
+    // [/project-a, /project-b].
+    project
+        .update(cx, |project, cx| {
+            project.find_or_create_worktree("/project-b", true, cx)
+        })
+        .await
+        .expect("should add worktree");
+    cx.run_until_parked();
+
+    sidebar.update_in(cx, |sidebar, _window, cx| sidebar.update_entries(cx));
+    cx.run_until_parked();
+
+    // The group should still be collapsed under the new key.
+    assert_eq!(
+        visible_entries_as_strings(&sidebar, cx),
+        vec!["> [project-a, project-b]"]
+    );
+}
+
+#[gpui::test]
+async fn test_adding_folder_to_non_backed_group_migrates_threads(cx: &mut TestAppContext) {
+    use workspace::ProjectGroup;
+    // When a project group has no backing workspace (e.g. the workspace was
+    // closed but the group and its threads remain), adding a folder via
+    // `add_folders_to_project_group` should still migrate thread metadata
+    // to the new key and cause the sidebar to rerender.
+    let (_fs, project) =
+        init_multi_project_test(&["/active-project", "/orphan-a", "/orphan-b"], cx).await;
+    let (multi_workspace, cx) =
+        cx.add_window_view(|window, cx| MultiWorkspace::test_new(project.clone(), window, cx));
+    let sidebar = setup_sidebar(&multi_workspace, cx);
+
+    // Insert a standalone project group for [/orphan-a] with no backing
+    // workspace — simulating a group that persisted after its workspace
+    // was closed.
+    let group_key = ProjectGroupKey::new(None, PathList::new(&[PathBuf::from("/orphan-a")]));
+    multi_workspace.update(cx, |mw, _cx| {
+        mw.test_add_project_group(ProjectGroup {
+            key: group_key.clone(),
+            workspaces: Vec::new(),
+            expanded: true,
+            visible_thread_count: None,
+        });
+    });
+
+    // Verify the group has no backing workspaces.
+    multi_workspace.read_with(cx, |mw, cx| {
+        let group = mw
+            .project_groups(cx)
+            .into_iter()
+            .find(|g| g.key == group_key)
+            .expect("group should exist");
+        assert!(
+            group.workspaces.is_empty(),
+            "group should have no backing workspaces"
+        );
+    });
+
+    // Save threads directly into the metadata store under [/orphan-a].
+    save_thread_metadata_with_main_paths(
+        "t-1",
+        "Thread One",
+        PathList::new(&[PathBuf::from("/orphan-a")]),
+        PathList::new(&[PathBuf::from("/orphan-a")]),
+        chrono::TimeZone::with_ymd_and_hms(&Utc, 2024, 1, 1, 0, 0, 0).unwrap(),
+        cx,
+    );
+    save_thread_metadata_with_main_paths(
+        "t-2",
+        "Thread Two",
+        PathList::new(&[PathBuf::from("/orphan-a")]),
+        PathList::new(&[PathBuf::from("/orphan-a")]),
+        chrono::TimeZone::with_ymd_and_hms(&Utc, 2024, 1, 1, 0, 0, 1).unwrap(),
+        cx,
+    );
+    sidebar.update_in(cx, |sidebar, _window, cx| sidebar.update_entries(cx));
+    cx.run_until_parked();
+
+    // Verify threads show under the standalone group.
+    assert_eq!(
+        visible_entries_as_strings(&sidebar, cx),
+        vec![
+            "v [active-project]",
+            "v [orphan-a]",
+            "  Thread Two",
+            "  Thread One",
+        ]
+    );
+
+    // Add /orphan-b to the non-backed group.
+    multi_workspace.update(cx, |mw, cx| {
+        mw.add_folders_to_project_group(&group_key, vec![PathBuf::from("/orphan-b")], cx);
+    });
+    cx.run_until_parked();
+
+    sidebar.update_in(cx, |sidebar, _window, cx| sidebar.update_entries(cx));
+    cx.run_until_parked();
+
+    // Threads should now appear under the combined key.
+    assert_eq!(
+        visible_entries_as_strings(&sidebar, cx),
+        vec![
+            "v [active-project]",
+            "v [orphan-a, orphan-b]",
+            "  Thread Two",
+            "  Thread One",
+        ]
+    );
+}
+
 #[gpui::test]
 async fn test_visible_entries_as_strings(cx: &mut TestAppContext) {
+    use workspace::ProjectGroup;
+
     let project = init_test_project("/my-project", cx).await;
     let (multi_workspace, cx) =
         cx.add_window_view(|window, cx| MultiWorkspace::test_new(project, window, cx));
@@ -822,16 +983,23 @@ async fn test_visible_entries_as_strings(cx: &mut TestAppContext) {
     let expanded_path = PathList::new(&[std::path::PathBuf::from("/expanded")]);
     let collapsed_path = PathList::new(&[std::path::PathBuf::from("/collapsed")]);
 
+    // Set the collapsed group state through multi_workspace
+    multi_workspace.update(cx, |mw, _cx| {
+        mw.test_add_project_group(ProjectGroup {
+            key: ProjectGroupKey::new(None, collapsed_path.clone()),
+            workspaces: Vec::new(),
+            expanded: false,
+            visible_thread_count: None,
+        });
+    });
+
     sidebar.update_in(cx, |s, _window, _cx| {
-        s.collapsed_groups
-            .insert(project::ProjectGroupKey::new(None, collapsed_path.clone()));
-        s.contents
-            .notified_threads
-            .insert(acp::SessionId::new(Arc::from("t-5")));
+        let notified_thread_id = ThreadId::new();
+        s.contents.notified_threads.insert(notified_thread_id);
         s.contents.entries = vec![
             // Expanded project header
             ListEntry::ProjectHeader {
-                key: project::ProjectGroupKey::new(None, expanded_path.clone()),
+                key: ProjectGroupKey::new(None, expanded_path.clone()),
                 label: "expanded-project".into(),
                 highlight_positions: Vec::new(),
                 has_running_threads: false,
@@ -841,10 +1009,11 @@ async fn test_visible_entries_as_strings(cx: &mut TestAppContext) {
             },
             ListEntry::Thread(ThreadEntry {
                 metadata: ThreadMetadata {
-                    session_id: acp::SessionId::new(Arc::from("t-1")),
+                    thread_id: ThreadId::new(),
+                    session_id: Some(acp::SessionId::new(Arc::from("t-1"))),
                     agent_id: AgentId::new("zed-agent"),
-                    worktree_paths: ThreadWorktreePaths::default(),
-                    title: "Completed thread".into(),
+                    worktree_paths: WorktreePaths::default(),
+                    title: Some("Completed thread".into()),
                     updated_at: Utc::now(),
                     created_at: Some(Utc::now()),
                     archived: false,
@@ -857,6 +1026,7 @@ async fn test_visible_entries_as_strings(cx: &mut TestAppContext) {
                 is_live: false,
                 is_background: false,
                 is_title_generating: false,
+                is_draft: false,
                 highlight_positions: Vec::new(),
                 worktrees: Vec::new(),
                 diff_stats: DiffStats::default(),
@@ -864,10 +1034,11 @@ async fn test_visible_entries_as_strings(cx: &mut TestAppContext) {
             // Active thread with Running status
             ListEntry::Thread(ThreadEntry {
                 metadata: ThreadMetadata {
-                    session_id: acp::SessionId::new(Arc::from("t-2")),
+                    thread_id: ThreadId::new(),
+                    session_id: Some(acp::SessionId::new(Arc::from("t-2"))),
                     agent_id: AgentId::new("zed-agent"),
-                    worktree_paths: ThreadWorktreePaths::default(),
-                    title: "Running thread".into(),
+                    worktree_paths: WorktreePaths::default(),
+                    title: Some("Running thread".into()),
                     updated_at: Utc::now(),
                     created_at: Some(Utc::now()),
                     archived: false,
@@ -880,6 +1051,7 @@ async fn test_visible_entries_as_strings(cx: &mut TestAppContext) {
                 is_live: true,
                 is_background: false,
                 is_title_generating: false,
+                is_draft: false,
                 highlight_positions: Vec::new(),
                 worktrees: Vec::new(),
                 diff_stats: DiffStats::default(),
@@ -887,10 +1059,11 @@ async fn test_visible_entries_as_strings(cx: &mut TestAppContext) {
             // Active thread with Error status
             ListEntry::Thread(ThreadEntry {
                 metadata: ThreadMetadata {
-                    session_id: acp::SessionId::new(Arc::from("t-3")),
+                    thread_id: ThreadId::new(),
+                    session_id: Some(acp::SessionId::new(Arc::from("t-3"))),
                     agent_id: AgentId::new("zed-agent"),
-                    worktree_paths: ThreadWorktreePaths::default(),
-                    title: "Error thread".into(),
+                    worktree_paths: WorktreePaths::default(),
+                    title: Some("Error thread".into()),
                     updated_at: Utc::now(),
                     created_at: Some(Utc::now()),
                     archived: false,
@@ -903,6 +1076,7 @@ async fn test_visible_entries_as_strings(cx: &mut TestAppContext) {
                 is_live: true,
                 is_background: false,
                 is_title_generating: false,
+                is_draft: false,
                 highlight_positions: Vec::new(),
                 worktrees: Vec::new(),
                 diff_stats: DiffStats::default(),
@@ -911,10 +1085,11 @@ async fn test_visible_entries_as_strings(cx: &mut TestAppContext) {
             // remote_connection: None,
             ListEntry::Thread(ThreadEntry {
                 metadata: ThreadMetadata {
-                    session_id: acp::SessionId::new(Arc::from("t-4")),
+                    thread_id: ThreadId::new(),
+                    session_id: Some(acp::SessionId::new(Arc::from("t-4"))),
                     agent_id: AgentId::new("zed-agent"),
-                    worktree_paths: ThreadWorktreePaths::default(),
-                    title: "Waiting thread".into(),
+                    worktree_paths: WorktreePaths::default(),
+                    title: Some("Waiting thread".into()),
                     updated_at: Utc::now(),
                     created_at: Some(Utc::now()),
                     archived: false,
@@ -927,6 +1102,7 @@ async fn test_visible_entries_as_strings(cx: &mut TestAppContext) {
                 is_live: false,
                 is_background: false,
                 is_title_generating: false,
+                is_draft: false,
                 highlight_positions: Vec::new(),
                 worktrees: Vec::new(),
                 diff_stats: DiffStats::default(),
@@ -935,10 +1111,11 @@ async fn test_visible_entries_as_strings(cx: &mut TestAppContext) {
             // remote_connection: None,
             ListEntry::Thread(ThreadEntry {
                 metadata: ThreadMetadata {
-                    session_id: acp::SessionId::new(Arc::from("t-5")),
+                    thread_id: notified_thread_id,
+                    session_id: Some(acp::SessionId::new(Arc::from("t-5"))),
                     agent_id: AgentId::new("zed-agent"),
-                    worktree_paths: ThreadWorktreePaths::default(),
-                    title: "Notified thread".into(),
+                    worktree_paths: WorktreePaths::default(),
+                    title: Some("Notified thread".into()),
                     updated_at: Utc::now(),
                     created_at: Some(Utc::now()),
                     archived: false,
@@ -951,18 +1128,19 @@ async fn test_visible_entries_as_strings(cx: &mut TestAppContext) {
                 is_live: true,
                 is_background: true,
                 is_title_generating: false,
+                is_draft: false,
                 highlight_positions: Vec::new(),
                 worktrees: Vec::new(),
                 diff_stats: DiffStats::default(),
             }),
             // View More entry
             ListEntry::ViewMore {
-                key: project::ProjectGroupKey::new(None, expanded_path.clone()),
+                key: ProjectGroupKey::new(None, expanded_path.clone()),
                 is_fully_expanded: false,
             },
             // Collapsed project header
             ListEntry::ProjectHeader {
-                key: project::ProjectGroupKey::new(None, collapsed_path.clone()),
+                key: ProjectGroupKey::new(None, collapsed_path.clone()),
                 label: "collapsed-project".into(),
                 highlight_positions: Vec::new(),
                 has_running_threads: false,
@@ -1313,10 +1491,10 @@ async fn test_keyboard_collapse_from_child_selects_parent(cx: &mut TestAppContex
 
 #[gpui::test]
 async fn test_keyboard_navigation_on_empty_list(cx: &mut TestAppContext) {
-    let project = init_test_project("/empty-project", cx).await;
+    let project = init_test_project_with_agent_panel("/empty-project", cx).await;
     let (multi_workspace, cx) =
         cx.add_window_view(|window, cx| MultiWorkspace::test_new(project, window, cx));
-    let sidebar = setup_sidebar(&multi_workspace, cx);
+    let (sidebar, _panel) = setup_sidebar_with_agent_panel(&multi_workspace, cx);
 
     // An empty project has the header and an auto-created draft.
     assert_eq!(
@@ -1383,6 +1561,7 @@ async fn init_test_project_with_agent_panel(
 ) -> Entity<project::Project> {
     agent_ui::test_support::init_test(cx);
     cx.update(|cx| {
+        cx.set_global(agent_ui::MaxIdleRetainedThreads(1));
         ThreadStore::init_global(cx);
         ThreadMetadataStore::init_global(cx);
         language_model::LanguageModelRegistry::test(cx);
@@ -1497,7 +1676,7 @@ async fn test_subagent_permission_request_marks_parent_sidebar_thread_waiting(
     let subagent_thread = panel.read_with(cx, |panel, cx| {
         panel
             .active_conversation_view()
-            .and_then(|conversation| conversation.read(cx).thread_view(&subagent_session_id))
+            .and_then(|conversation| conversation.read(cx).thread_view(&subagent_session_id, cx))
             .map(|thread_view| thread_view.read(cx).thread.clone())
             .expect("Expected subagent thread to be loaded into the conversation")
     });
@@ -1509,7 +1688,9 @@ async fn test_subagent_permission_request_marks_parent_sidebar_thread_waiting(
             .entries
             .iter()
             .find_map(|entry| match entry {
-                ListEntry::Thread(thread) if thread.metadata.session_id == parent_session_id => {
+                ListEntry::Thread(thread)
+                    if thread.metadata.session_id.as_ref() == Some(&parent_session_id) =>
+                {
                     Some(thread.status)
                 }
                 _ => None,
@@ -1601,7 +1782,7 @@ async fn test_search_narrows_visible_threads_to_matches(cx: &mut TestAppContext)
     ] {
         save_thread_metadata(
             acp::SessionId::new(Arc::from(id)),
-            title.into(),
+            Some(title.into()),
             chrono::TimeZone::with_ymd_and_hms(&Utc, 2024, 1, 1, hour, 0, 0).unwrap(),
             None,
             &project,
@@ -1652,7 +1833,7 @@ async fn test_search_matches_regardless_of_case(cx: &mut TestAppContext) {
 
     save_thread_metadata(
         acp::SessionId::new(Arc::from("thread-1")),
-        "Fix Crash In Project Panel".into(),
+        Some("Fix Crash In Project Panel".into()),
         chrono::TimeZone::with_ymd_and_hms(&Utc, 2024, 1, 1, 0, 0, 0).unwrap(),
         None,
         &project,
@@ -1695,7 +1876,7 @@ async fn test_escape_clears_search_and_restores_full_list(cx: &mut TestAppContex
     for (id, title, hour) in [("t-1", "Alpha thread", 2), ("t-2", "Beta thread", 1)] {
         save_thread_metadata(
             acp::SessionId::new(Arc::from(id)),
-            title.into(),
+            Some(title.into()),
             chrono::TimeZone::with_ymd_and_hms(&Utc, 2024, 1, 1, hour, 0, 0).unwrap(),
             None,
             &project,
@@ -1755,7 +1936,7 @@ async fn test_search_only_shows_workspace_headers_with_matches(cx: &mut TestAppC
     ] {
         save_thread_metadata(
             acp::SessionId::new(Arc::from(id)),
-            title.into(),
+            Some(title.into()),
             chrono::TimeZone::with_ymd_and_hms(&Utc, 2024, 1, 1, hour, 0, 0).unwrap(),
             None,
             &project_a,
@@ -1779,7 +1960,7 @@ async fn test_search_only_shows_workspace_headers_with_matches(cx: &mut TestAppC
     ] {
         save_thread_metadata(
             acp::SessionId::new(Arc::from(id)),
-            title.into(),
+            Some(title.into()),
             chrono::TimeZone::with_ymd_and_hms(&Utc, 2024, 1, 1, hour, 0, 0).unwrap(),
             None,
             &project_b,
@@ -1843,7 +2024,7 @@ async fn test_search_matches_workspace_name(cx: &mut TestAppContext) {
     ] {
         save_thread_metadata(
             acp::SessionId::new(Arc::from(id)),
-            title.into(),
+            Some(title.into()),
             chrono::TimeZone::with_ymd_and_hms(&Utc, 2024, 1, 1, hour, 0, 0).unwrap(),
             None,
             &project_a,
@@ -1867,7 +2048,7 @@ async fn test_search_matches_workspace_name(cx: &mut TestAppContext) {
     ] {
         save_thread_metadata(
             acp::SessionId::new(Arc::from(id)),
-            title.into(),
+            Some(title.into()),
             chrono::TimeZone::with_ymd_and_hms(&Utc, 2024, 1, 1, hour, 0, 0).unwrap(),
             None,
             &project_b,
@@ -1960,7 +2141,7 @@ async fn test_search_finds_threads_hidden_behind_view_more(cx: &mut TestAppConte
         };
         save_thread_metadata(
             acp::SessionId::new(Arc::from(format!("thread-{}", i))),
-            title.into(),
+            Some(title.into()),
             chrono::TimeZone::with_ymd_and_hms(&Utc, 2024, 1, 1, 0, 0, i).unwrap(),
             None,
             &project,
@@ -2006,7 +2187,7 @@ async fn test_search_finds_threads_inside_collapsed_groups(cx: &mut TestAppConte
 
     save_thread_metadata(
         acp::SessionId::new(Arc::from("thread-1")),
-        "Important thread".into(),
+        Some("Important thread".into()),
         chrono::TimeZone::with_ymd_and_hms(&Utc, 2024, 1, 1, 0, 0, 0).unwrap(),
         None,
         &project,
@@ -2057,7 +2238,7 @@ async fn test_search_then_keyboard_navigate_and_confirm(cx: &mut TestAppContext)
     ] {
         save_thread_metadata(
             acp::SessionId::new(Arc::from(id)),
-            title.into(),
+            Some(title.into()),
             chrono::TimeZone::with_ymd_and_hms(&Utc, 2024, 1, 1, hour, 0, 0).unwrap(),
             None,
             &project,
@@ -2127,7 +2308,7 @@ async fn test_confirm_on_historical_thread_activates_workspace(cx: &mut TestAppC
 
     save_thread_metadata(
         acp::SessionId::new(Arc::from("hist-1")),
-        "Historical Thread".into(),
+        Some("Historical Thread".into()),
         chrono::TimeZone::with_ymd_and_hms(&Utc, 2024, 6, 1, 0, 0, 0).unwrap(),
         None,
         &project,
@@ -2173,6 +2354,148 @@ async fn test_confirm_on_historical_thread_activates_workspace(cx: &mut TestAppC
     );
 }
 
+#[gpui::test]
+async fn test_confirm_on_historical_thread_in_new_project_group_opens_real_thread(
+    cx: &mut TestAppContext,
+) {
+    use workspace::ProjectGroup;
+
+    agent_ui::test_support::init_test(cx);
+    cx.update(|cx| {
+        cx.set_global(agent_ui::MaxIdleRetainedThreads(1));
+        ThreadStore::init_global(cx);
+        ThreadMetadataStore::init_global(cx);
+        language_model::LanguageModelRegistry::test(cx);
+        prompt_store::init(cx);
+    });
+
+    let fs = FakeFs::new(cx.executor());
+    fs.insert_tree("/project-a", serde_json::json!({ "src": {} }))
+        .await;
+    fs.insert_tree("/project-b", serde_json::json!({ "src": {} }))
+        .await;
+    cx.update(|cx| <dyn fs::Fs>::set_global(fs.clone(), cx));
+
+    let project_a = project::Project::test(fs.clone(), ["/project-a".as_ref()], cx).await;
+    let project_b = project::Project::test(fs.clone(), ["/project-b".as_ref()], cx).await;
+
+    let (multi_workspace, cx) =
+        cx.add_window_view(|window, cx| MultiWorkspace::test_new(project_a.clone(), window, cx));
+    let sidebar = setup_sidebar(&multi_workspace, cx);
+
+    let project_b_key = project_b.read_with(cx, |project, cx| project.project_group_key(cx));
+    multi_workspace.update(cx, |mw, _cx| {
+        mw.test_add_project_group(ProjectGroup {
+            key: project_b_key.clone(),
+            workspaces: Vec::new(),
+            expanded: true,
+            visible_thread_count: None,
+        });
+    });
+
+    let session_id = acp::SessionId::new(Arc::from("historical-new-project-group"));
+    save_thread_metadata(
+        session_id.clone(),
+        Some("Historical Thread in New Group".into()),
+        chrono::TimeZone::with_ymd_and_hms(&Utc, 2024, 6, 1, 0, 0, 0).unwrap(),
+        None,
+        &project_b,
+        cx,
+    );
+    cx.run_until_parked();
+
+    multi_workspace.update_in(cx, |_, _window, cx| cx.notify());
+    cx.run_until_parked();
+
+    let entries_before = visible_entries_as_strings(&sidebar, cx);
+    assert_eq!(
+        entries_before,
+        vec![
+            "v [project-a]",
+            "v [project-b]",
+            "  Historical Thread in New Group",
+        ],
+        "expected the closed project group to show the historical thread before first open"
+    );
+
+    assert_eq!(
+        multi_workspace.read_with(cx, |mw, _| mw.workspaces().count()),
+        1,
+        "should start without an open workspace for the new project group"
+    );
+
+    sidebar.update_in(cx, |sidebar, window, cx| {
+        sidebar.selection = Some(2);
+        sidebar.confirm(&Confirm, window, cx);
+    });
+    cx.run_until_parked();
+    cx.run_until_parked();
+    cx.run_until_parked();
+
+    assert_eq!(
+        multi_workspace.read_with(cx, |mw, _| mw.workspaces().count()),
+        2,
+        "confirming the historical thread should open a workspace for the new project group"
+    );
+
+    let workspace_b = multi_workspace.read_with(cx, |mw, cx| {
+        mw.workspaces()
+            .find(|workspace| {
+                PathList::new(&workspace.read(cx).root_paths(cx))
+                    == project_b_key.path_list().clone()
+            })
+            .cloned()
+            .expect("expected workspace for project-b after opening the historical thread")
+    });
+
+    assert_eq!(
+        multi_workspace.read_with(cx, |mw, _| mw.workspace().clone()),
+        workspace_b,
+        "opening the historical thread should activate the new project's workspace"
+    );
+
+    let panel = workspace_b.read_with(cx, |workspace, cx| {
+        workspace
+            .panel::<AgentPanel>(cx)
+            .expect("expected first-open activation to bootstrap the agent panel")
+    });
+
+    let expected_thread_id = cx.update(|_, cx| {
+        ThreadMetadataStore::global(cx)
+            .read(cx)
+            .entries()
+            .find(|e| e.session_id.as_ref() == Some(&session_id))
+            .map(|e| e.thread_id)
+            .expect("metadata should still map session id to thread id")
+    });
+
+    assert_eq!(
+        panel.read_with(cx, |panel, cx| panel.active_thread_id(cx)),
+        Some(expected_thread_id),
+        "expected the agent panel to activate the real historical thread rather than a draft"
+    );
+
+    let entries_after = visible_entries_as_strings(&sidebar, cx);
+    let matching_rows: Vec<_> = entries_after
+        .iter()
+        .filter(|entry| entry.contains("Historical Thread in New Group") || entry.contains("Draft"))
+        .cloned()
+        .collect();
+    assert_eq!(
+        matching_rows.len(),
+        1,
+        "expected only one matching row after first open into a new project group, got entries: {entries_after:?}"
+    );
+    assert!(
+        matching_rows[0].contains("Historical Thread in New Group"),
+        "expected the surviving row to be the real historical thread, got entries: {entries_after:?}"
+    );
+    assert!(
+        !matching_rows[0].contains("Draft"),
+        "expected no draft row after first open into a new project group, got entries: {entries_after:?}"
+    );
+}
+
 #[gpui::test]
 async fn test_click_clears_selection_and_focus_in_restores_it(cx: &mut TestAppContext) {
     let project = init_test_project("/my-project", cx).await;

crates/title_bar/src/title_bar.rs 🔗

@@ -715,7 +715,7 @@ impl TitleBar {
             .multi_workspace
             .as_ref()
             .and_then(|mw| mw.upgrade())
-            .map(|mw| mw.read(cx).project_group_keys().cloned().collect())
+            .map(|mw| mw.read(cx).project_group_keys())
             .unwrap_or_default();
 
         PopoverMenu::new("recent-projects-menu")
@@ -772,7 +772,7 @@ impl TitleBar {
             .multi_workspace
             .as_ref()
             .and_then(|mw| mw.upgrade())
-            .map(|mw| mw.read(cx).project_group_keys().cloned().collect())
+            .map(|mw| mw.read(cx).project_group_keys())
             .unwrap_or_default();
 
         PopoverMenu::new("sidebar-title-recent-projects-menu")

crates/workspace/src/multi_workspace.rs 🔗

@@ -5,11 +5,11 @@ use gpui::{
     ManagedView, MouseButton, Pixels, Render, Subscription, Task, Tiling, Window, WindowId,
     actions, deferred, px,
 };
-use project::{DirectoryLister, DisableAiSettings, Project, ProjectGroupKey};
+pub use project::ProjectGroupKey;
+use project::{DirectoryLister, DisableAiSettings, Project};
 use remote::RemoteConnectionOptions;
 use settings::Settings;
 pub use settings::SidebarSide;
-use std::collections::{HashMap, HashSet};
 use std::future::Future;
 use std::path::Path;
 use std::path::PathBuf;
@@ -101,13 +101,9 @@ pub enum MultiWorkspaceEvent {
     ActiveWorkspaceChanged,
     WorkspaceAdded(Entity<Workspace>),
     WorkspaceRemoved(EntityId),
-    WorktreePathAdded {
-        old_main_paths: PathList,
-        added_path: PathBuf,
-    },
-    WorktreePathRemoved {
-        old_main_paths: PathList,
-        removed_path: PathBuf,
+    ProjectGroupKeyUpdated {
+        old_key: ProjectGroupKey,
+        new_key: ProjectGroupKey,
     },
 }
 
@@ -265,52 +261,32 @@ impl<T: Sidebar> SidebarHandle for Entity<T> {
     }
 }
 
-/// Tracks which workspace the user is currently looking at.
-///
-/// `Persistent` workspaces live in the `workspaces` vec and are shown in the
-/// sidebar. `Transient` workspaces exist outside the vec and are discarded
-/// when the user switches away.
-enum ActiveWorkspace {
-    /// A persistent workspace, identified by index into the `workspaces` vec.
-    Persistent(usize),
-    /// A workspace not in the `workspaces` vec that will be discarded on
-    /// switch or promoted to persistent when the sidebar is opened.
-    Transient(Entity<Workspace>),
+#[derive(Clone)]
+pub struct ProjectGroup {
+    pub key: ProjectGroupKey,
+    pub workspaces: Vec<Entity<Workspace>>,
+    pub expanded: bool,
+    pub visible_thread_count: Option<usize>,
 }
 
-impl ActiveWorkspace {
-    fn transient_workspace(&self) -> Option<&Entity<Workspace>> {
-        match self {
-            Self::Transient(workspace) => Some(workspace),
-            Self::Persistent(_) => None,
-        }
-    }
-
-    /// Sets the active workspace to transient, returning the previous
-    /// transient workspace (if any).
-    fn set_transient(&mut self, workspace: Entity<Workspace>) -> Option<Entity<Workspace>> {
-        match std::mem::replace(self, Self::Transient(workspace)) {
-            Self::Transient(old) => Some(old),
-            Self::Persistent(_) => None,
-        }
-    }
+pub struct SerializedProjectGroupState {
+    pub key: ProjectGroupKey,
+    pub expanded: bool,
+    pub visible_thread_count: Option<usize>,
+}
 
-    /// Sets the active workspace to persistent at the given index,
-    /// returning the previous transient workspace (if any).
-    fn set_persistent(&mut self, index: usize) -> Option<Entity<Workspace>> {
-        match std::mem::replace(self, Self::Persistent(index)) {
-            Self::Transient(workspace) => Some(workspace),
-            Self::Persistent(_) => None,
-        }
-    }
+#[derive(Clone)]
+pub struct ProjectGroupState {
+    pub key: ProjectGroupKey,
+    pub expanded: bool,
+    pub visible_thread_count: Option<usize>,
 }
 
 pub struct MultiWorkspace {
     window_id: WindowId,
-    workspaces: Vec<Entity<Workspace>>,
-    active_workspace: ActiveWorkspace,
-    project_group_keys: Vec<ProjectGroupKey>,
-    workspace_group_keys: HashMap<EntityId, ProjectGroupKey>,
+    retained_workspaces: Vec<Entity<Workspace>>,
+    project_groups: Vec<ProjectGroupState>,
+    active_workspace: Entity<Workspace>,
     sidebar: Option<Box<dyn SidebarHandle>>,
     sidebar_open: bool,
     sidebar_overlay: Option<AnyView>,
@@ -362,10 +338,9 @@ impl MultiWorkspace {
         });
         Self {
             window_id: window.window_handle().window_id(),
-            project_group_keys: Vec::new(),
-            workspace_group_keys: HashMap::default(),
-            workspaces: Vec::new(),
-            active_workspace: ActiveWorkspace::Transient(workspace),
+            retained_workspaces: Vec::new(),
+            project_groups: Vec::new(),
+            active_workspace: workspace,
             sidebar: None,
             sidebar_open: false,
             sidebar_overlay: None,
@@ -482,13 +457,9 @@ impl MultiWorkspace {
 
     pub fn open_sidebar(&mut self, cx: &mut Context<Self>) {
         self.sidebar_open = true;
-        if let ActiveWorkspace::Transient(workspace) = &self.active_workspace {
-            let workspace = workspace.clone();
-            let index = self.promote_transient(workspace, cx);
-            self.active_workspace = ActiveWorkspace::Persistent(index);
-        }
+        self.retain_active_workspace(cx);
         let sidebar_focus_handle = self.sidebar.as_ref().map(|s| s.focus_handle(cx));
-        for workspace in self.workspaces.iter() {
+        for workspace in self.retained_workspaces.clone() {
             workspace.update(cx, |workspace, _cx| {
                 workspace.set_sidebar_focus_handle(sidebar_focus_handle.clone());
             });
@@ -499,7 +470,7 @@ impl MultiWorkspace {
 
     pub fn close_sidebar(&mut self, window: &mut Window, cx: &mut Context<Self>) {
         self.sidebar_open = false;
-        for workspace in self.workspaces.iter() {
+        for workspace in self.retained_workspaces.clone() {
             workspace.update(cx, |workspace, _cx| {
                 workspace.set_sidebar_focus_handle(None);
             });
@@ -567,11 +538,16 @@ impl MultiWorkspace {
         cx.subscribe_in(&project, window, {
             let workspace = workspace.downgrade();
             move |this, _project, event, _window, cx| match event {
-                project::Event::WorktreeAdded(_)
-                | project::Event::WorktreeRemoved(_)
-                | project::Event::WorktreeUpdatedRootRepoCommonDir(_) => {
+                project::Event::WorktreePathsChanged { old_worktree_paths } => {
                     if let Some(workspace) = workspace.upgrade() {
-                        this.handle_workspace_key_change(&workspace, cx);
+                        let host = workspace
+                            .read(cx)
+                            .project()
+                            .read(cx)
+                            .remote_connection_options(cx);
+                        let old_key =
+                            ProjectGroupKey::from_worktree_paths(old_worktree_paths, host);
+                        this.handle_project_group_key_change(&workspace, &old_key, cx);
                     }
                 }
                 _ => {}
@@ -587,142 +563,157 @@ impl MultiWorkspace {
         .detach();
     }
 
-    fn handle_workspace_key_change(
+    fn handle_project_group_key_change(
         &mut self,
         workspace: &Entity<Workspace>,
+        old_key: &ProjectGroupKey,
         cx: &mut Context<Self>,
     ) {
-        let workspace_id = workspace.entity_id();
-        let old_key = self.project_group_key_for_workspace(workspace, cx);
-        let new_key = workspace.read(cx).project_group_key(cx);
+        if !self.is_workspace_retained(workspace) {
+            return;
+        }
 
-        if new_key.path_list().paths().is_empty() || old_key == new_key {
+        let new_key = workspace.read(cx).project_group_key(cx);
+        if new_key.path_list().paths().is_empty() {
             return;
         }
 
-        let active_workspace = self.workspace().clone();
+        // Re-key the group without emitting ProjectGroupKeyUpdated —
+        // the Project already emitted WorktreePathsChanged which the
+        // sidebar handles for thread migration.
+        self.rekey_project_group(old_key, &new_key, cx);
+        self.serialize(cx);
+        cx.notify();
+    }
 
-        self.set_workspace_group_key(workspace, new_key.clone());
+    pub fn is_workspace_retained(&self, workspace: &Entity<Workspace>) -> bool {
+        self.retained_workspaces
+            .iter()
+            .any(|retained| retained == workspace)
+    }
 
-        let changed_root_paths = workspace.read(cx).root_paths(cx);
-        let old_paths = old_key.path_list().paths();
-        let new_paths = new_key.path_list().paths();
+    pub fn active_workspace_is_retained(&self) -> bool {
+        self.is_workspace_retained(&self.active_workspace)
+    }
 
-        // Remove workspaces that already had the new key and have the same
-        // root paths (true duplicates that this workspace is replacing).
-        //
-        // NOTE: These are dropped without prompting for unsaved changes because
-        // the user explicitly added a folder that makes this workspace
-        // identical to the duplicate — they are intentionally overwriting it.
-        let duplicate_workspaces: Vec<Entity<Workspace>> = self
-            .workspaces
-            .iter()
-            .filter(|ws| {
-                ws.entity_id() != workspace_id
-                    && self.project_group_key_for_workspace(ws, cx) == new_key
-                    && ws.read(cx).root_paths(cx) == changed_root_paths
-            })
-            .cloned()
-            .collect();
+    pub fn retained_workspaces(&self) -> &[Entity<Workspace>] {
+        &self.retained_workspaces
+    }
 
-        if duplicate_workspaces.contains(&active_workspace) {
-            // The active workspace is among the duplicates — drop the
-            // incoming workspace instead so the user stays where they are.
-            self.detach_workspace(workspace, cx);
-            self.workspaces.retain(|w| w != workspace);
-        } else {
-            for ws in &duplicate_workspaces {
-                self.detach_workspace(ws, cx);
-                self.workspaces.retain(|w| w != ws);
-            }
+    /// Ensures a project group exists for `key`, creating one if needed.
+    fn ensure_project_group_state(&mut self, key: ProjectGroupKey) {
+        if key.path_list().paths().is_empty() {
+            return;
         }
 
-        // Propagate folder adds/removes to linked worktree siblings
-        // (different root paths, same old key) so they stay in the group.
-        let group_workspaces: Vec<Entity<Workspace>> = self
-            .workspaces
-            .iter()
-            .filter(|ws| {
-                ws.entity_id() != workspace_id
-                    && self.project_group_key_for_workspace(ws, cx) == old_key
-            })
-            .cloned()
-            .collect();
+        if self.project_groups.iter().any(|group| group.key == key) {
+            return;
+        }
 
-        for workspace in &group_workspaces {
-            // Pre-set this to stop later WorktreeAdded events from triggering
-            self.set_workspace_group_key(&workspace, new_key.clone());
+        self.project_groups.insert(
+            0,
+            ProjectGroupState {
+                key,
+                expanded: true,
+                visible_thread_count: None,
+            },
+        );
+    }
 
-            let project = workspace.read(cx).project().clone();
+    /// Re-keys a project group from `old_key` to `new_key`, handling
+    /// collisions. When two groups collide, the group containing the
+    /// active workspace always wins; otherwise the old key's state is
+    /// preserved, since it represents the group the user or system
+    /// just acted on. The losing group is removed, and the winner is
+    /// re-keyed in place to preserve sidebar order.
+    fn rekey_project_group(
+        &mut self,
+        old_key: &ProjectGroupKey,
+        new_key: &ProjectGroupKey,
+        cx: &App,
+    ) {
+        if old_key == new_key {
+            return;
+        }
 
-            for added_path in new_paths.iter().filter(|p| !old_paths.contains(p)) {
-                project
-                    .update(cx, |project, cx| {
-                        project.find_or_create_worktree(added_path, true, cx)
-                    })
-                    .detach_and_log_err(cx);
-            }
+        if new_key.path_list().paths().is_empty() {
+            return;
+        }
 
-            for removed_path in old_paths.iter().filter(|p| !new_paths.contains(p)) {
-                project.update(cx, |project, cx| {
-                    project.remove_worktree_for_main_worktree_path(removed_path, cx);
-                });
-            }
+        let old_key_exists = self.project_groups.iter().any(|g| g.key == *old_key);
+        let new_key_exists = self.project_groups.iter().any(|g| g.key == *new_key);
+
+        if !old_key_exists {
+            self.ensure_project_group_state(new_key.clone());
+            return;
         }
 
-        // Restore the active workspace after removals may have shifted
-        // the index. If the previously active workspace was removed,
-        // fall back to the workspace whose key just changed.
-        if let ActiveWorkspace::Persistent(_) = &self.active_workspace {
-            let target = if self.workspaces.contains(&active_workspace) {
-                &active_workspace
+        if new_key_exists {
+            let active_key = self.active_workspace.read(cx).project_group_key(cx);
+            if active_key == *new_key {
+                self.project_groups.retain(|g| g.key != *old_key);
             } else {
-                workspace
-            };
-            if let Some(new_index) = self.workspaces.iter().position(|ws| ws == target) {
-                self.active_workspace = ActiveWorkspace::Persistent(new_index);
+                self.project_groups.retain(|g| g.key != *new_key);
+                if let Some(group) = self.project_groups.iter_mut().find(|g| g.key == *old_key) {
+                    group.key = new_key.clone();
+                }
+            }
+        } else {
+            if let Some(group) = self.project_groups.iter_mut().find(|g| g.key == *old_key) {
+                group.key = new_key.clone();
             }
         }
+    }
 
-        self.remove_stale_project_group_keys(cx);
+    /// Re-keys a project group and emits `ProjectGroupKeyUpdated` so
+    /// the sidebar can migrate thread metadata. Used for direct group
+    /// manipulation (add/remove folder) where no Project event fires.
+    fn update_project_group_key(
+        &mut self,
+        old_key: &ProjectGroupKey,
+        new_key: &ProjectGroupKey,
+        cx: &mut Context<Self>,
+    ) {
+        self.rekey_project_group(old_key, new_key, cx);
 
-        let old_main_paths = old_key.path_list().clone();
-        for added_path in new_paths.iter().filter(|p| !old_paths.contains(p)) {
-            cx.emit(MultiWorkspaceEvent::WorktreePathAdded {
-                old_main_paths: old_main_paths.clone(),
-                added_path: added_path.clone(),
-            });
-        }
-        for removed_path in old_paths.iter().filter(|p| !new_paths.contains(p)) {
-            cx.emit(MultiWorkspaceEvent::WorktreePathRemoved {
-                old_main_paths: old_main_paths.clone(),
-                removed_path: removed_path.clone(),
+        if old_key != new_key && !new_key.path_list().paths().is_empty() {
+            cx.emit(MultiWorkspaceEvent::ProjectGroupKeyUpdated {
+                old_key: old_key.clone(),
+                new_key: new_key.clone(),
             });
         }
-
-        self.serialize(cx);
-        cx.notify();
     }
 
-    fn add_project_group_key(&mut self, project_group_key: ProjectGroupKey) {
-        if project_group_key.path_list().paths().is_empty() {
-            return;
-        }
-        if self.project_group_keys.contains(&project_group_key) {
+    pub(crate) fn retain_workspace(
+        &mut self,
+        workspace: Entity<Workspace>,
+        key: ProjectGroupKey,
+        cx: &mut Context<Self>,
+    ) {
+        self.ensure_project_group_state(key);
+        if self.is_workspace_retained(&workspace) {
             return;
         }
-        // Store newest first so the vec is in "most recently added"
-        self.project_group_keys.insert(0, project_group_key);
+
+        self.retained_workspaces.push(workspace.clone());
+        cx.emit(MultiWorkspaceEvent::WorkspaceAdded(workspace));
     }
 
-    pub(crate) fn set_workspace_group_key(
+    fn register_workspace(
         &mut self,
         workspace: &Entity<Workspace>,
-        project_group_key: ProjectGroupKey,
+        window: &Window,
+        cx: &mut Context<Self>,
     ) {
-        self.workspace_group_keys
-            .insert(workspace.entity_id(), project_group_key.clone());
-        self.add_project_group_key(project_group_key);
+        Self::subscribe_to_workspace(workspace, window, cx);
+        self.sync_sidebar_to_workspace(workspace, cx);
+        let weak_self = cx.weak_entity();
+        workspace.update(cx, |workspace, cx| {
+            workspace.set_multi_workspace(weak_self, cx);
+        });
     }
 
     pub fn project_group_key_for_workspace(
@@ -730,92 +721,139 @@ impl MultiWorkspace {
         workspace: &Entity<Workspace>,
         cx: &App,
     ) -> ProjectGroupKey {
-        self.workspace_group_keys
-            .get(&workspace.entity_id())
-            .cloned()
-            .unwrap_or_else(|| workspace.read(cx).project_group_key(cx))
-    }
-
-    fn remove_stale_project_group_keys(&mut self, cx: &App) {
-        let workspace_keys: HashSet<ProjectGroupKey> = self
-            .workspaces
-            .iter()
-            .map(|workspace| self.project_group_key_for_workspace(workspace, cx))
-            .collect();
-        self.project_group_keys
-            .retain(|key| workspace_keys.contains(key));
+        workspace.read(cx).project_group_key(cx)
     }
 
-    pub fn restore_project_group_keys(&mut self, keys: Vec<ProjectGroupKey>) {
-        let mut restored: Vec<ProjectGroupKey> = Vec::with_capacity(keys.len());
-        for key in keys {
+    pub fn restore_project_groups(
+        &mut self,
+        groups: Vec<SerializedProjectGroupState>,
+        _cx: &mut Context<Self>,
+    ) {
+        let mut restored: Vec<ProjectGroupState> = Vec::new();
+        for SerializedProjectGroupState {
+            key,
+            expanded,
+            visible_thread_count,
+        } in groups
+        {
             if key.path_list().paths().is_empty() {
                 continue;
             }
-            if !restored.contains(&key) {
-                restored.push(key);
+            if restored.iter().any(|group| group.key == key) {
+                continue;
             }
+            restored.push(ProjectGroupState {
+                key,
+                expanded,
+                visible_thread_count,
+            });
         }
-        for existing_key in &self.project_group_keys {
-            if !restored.contains(existing_key) {
-                restored.push(existing_key.clone());
+        for existing in std::mem::take(&mut self.project_groups) {
+            if !restored.iter().any(|group| group.key == existing.key) {
+                restored.push(existing);
             }
         }
-        self.project_group_keys = restored;
+        self.project_groups = restored;
     }
 
-    pub fn project_group_keys(&self) -> impl Iterator<Item = &ProjectGroupKey> {
-        self.project_group_keys.iter()
+    pub fn project_group_keys(&self) -> Vec<ProjectGroupKey> {
+        self.project_groups
+            .iter()
+            .map(|group| group.key.clone())
+            .collect()
     }
 
-    /// Returns the project groups, ordered by most recently added.
-    pub fn project_groups(
-        &self,
-        cx: &App,
-    ) -> impl Iterator<Item = (ProjectGroupKey, Vec<Entity<Workspace>>)> {
-        let mut groups = self
-            .project_group_keys
+    fn derived_project_groups(&self, cx: &App) -> Vec<ProjectGroup> {
+        self.project_groups
             .iter()
-            .map(|key| (key.clone(), Vec::new()))
-            .collect::<Vec<_>>();
-        for workspace in &self.workspaces {
-            let key = self.project_group_key_for_workspace(workspace, cx);
-            if let Some((_, workspaces)) = groups.iter_mut().find(|(k, _)| k == &key) {
-                workspaces.push(workspace.clone());
-            }
+            .map(|group| ProjectGroup {
+                key: group.key.clone(),
+                workspaces: self
+                    .retained_workspaces
+                    .iter()
+                    .filter(|workspace| workspace.read(cx).project_group_key(cx) == group.key)
+                    .cloned()
+                    .collect(),
+                expanded: group.expanded,
+                visible_thread_count: group.visible_thread_count,
+            })
+            .collect()
+    }
+
+    pub fn project_groups(&self, cx: &App) -> Vec<ProjectGroup> {
+        self.derived_project_groups(cx)
+    }
+
+    pub fn group_state_by_key(&self, key: &ProjectGroupKey) -> Option<&ProjectGroupState> {
+        self.project_groups.iter().find(|group| group.key == *key)
+    }
+
+    pub fn group_state_by_key_mut(
+        &mut self,
+        key: &ProjectGroupKey,
+    ) -> Option<&mut ProjectGroupState> {
+        self.project_groups
+            .iter_mut()
+            .find(|group| group.key == *key)
+    }
+
+    pub fn set_all_groups_expanded(&mut self, expanded: bool) {
+        for group in &mut self.project_groups {
+            group.expanded = expanded;
+        }
+    }
+
+    pub fn set_all_groups_visible_thread_count(&mut self, count: Option<usize>) {
+        for group in &mut self.project_groups {
+            group.visible_thread_count = count;
         }
-        groups.into_iter()
     }
 
     pub fn workspaces_for_project_group(
         &self,
-        project_group_key: &ProjectGroupKey,
+        key: &ProjectGroupKey,
         cx: &App,
-    ) -> impl Iterator<Item = &Entity<Workspace>> {
-        self.workspaces.iter().filter(move |workspace| {
-            self.project_group_key_for_workspace(workspace, cx) == *project_group_key
+    ) -> Option<Vec<Entity<Workspace>>> {
+        let has_group = self.project_groups.iter().any(|group| group.key == *key)
+            || self
+                .retained_workspaces
+                .iter()
+                .any(|workspace| workspace.read(cx).project_group_key(cx) == *key);
+
+        has_group.then(|| {
+            self.retained_workspaces
+                .iter()
+                .filter(|workspace| workspace.read(cx).project_group_key(cx) == *key)
+                .cloned()
+                .collect()
         })
     }
 
     pub fn remove_folder_from_project_group(
         &mut self,
-        project_group_key: &ProjectGroupKey,
+        group_key: &ProjectGroupKey,
         path: &Path,
         cx: &mut Context<Self>,
     ) {
-        let new_path_list = project_group_key.path_list().without_path(path);
+        let workspaces = self
+            .workspaces_for_project_group(group_key, cx)
+            .unwrap_or_default();
+
+        let Some(group) = self
+            .project_groups
+            .iter()
+            .find(|group| group.key == *group_key)
+        else {
+            return;
+        };
+
+        let new_path_list = group.key.path_list().without_path(path);
         if new_path_list.is_empty() {
             return;
         }
 
-        let new_key = ProjectGroupKey::new(project_group_key.host(), new_path_list);
-
-        let workspaces: Vec<_> = self
-            .workspaces_for_project_group(project_group_key, cx)
-            .cloned()
-            .collect();
-
-        self.add_project_group_key(new_key);
+        let new_key = ProjectGroupKey::new(group.key.host(), new_path_list);
+        self.update_project_group_key(group_key, &new_key, cx);
 
         for workspace in workspaces {
             let project = workspace.read(cx).project().clone();
@@ -830,7 +868,7 @@ impl MultiWorkspace {
 
     pub fn prompt_to_add_folders_to_project_group(
         &mut self,
-        key: &ProjectGroupKey,
+        group_key: ProjectGroupKey,
         window: &mut Window,
         cx: &mut Context<Self>,
     ) {
@@ -848,12 +886,11 @@ impl MultiWorkspace {
             )
         });
 
-        let key = key.clone();
         cx.spawn_in(window, async move |this, cx| {
             if let Some(new_paths) = paths.await.ok().flatten() {
                 if !new_paths.is_empty() {
                     this.update(cx, |multi_workspace, cx| {
-                        multi_workspace.add_folders_to_project_group(&key, new_paths, cx);
+                        multi_workspace.add_folders_to_project_group(&group_key, new_paths, cx);
                     })?;
                 }
             }
@@ -864,21 +901,38 @@ impl MultiWorkspace {
 
     pub fn add_folders_to_project_group(
         &mut self,
-        project_group_key: &ProjectGroupKey,
+        group_key: &ProjectGroupKey,
         new_paths: Vec<PathBuf>,
         cx: &mut Context<Self>,
     ) {
-        let mut all_paths: Vec<PathBuf> = project_group_key.path_list().paths().to_vec();
-        all_paths.extend(new_paths.iter().cloned());
-        let new_path_list = PathList::new(&all_paths);
-        let new_key = ProjectGroupKey::new(project_group_key.host(), new_path_list);
+        let workspaces = self
+            .workspaces_for_project_group(group_key, cx)
+            .unwrap_or_default();
 
-        let workspaces: Vec<_> = self
-            .workspaces_for_project_group(project_group_key, cx)
-            .cloned()
+        let Some(group) = self
+            .project_groups
+            .iter()
+            .find(|group| group.key == *group_key)
+        else {
+            return;
+        };
+
+        let existing_paths = group.key.path_list().paths();
+        let new_paths: Vec<PathBuf> = new_paths
+            .into_iter()
+            .filter(|p| !existing_paths.contains(p))
             .collect();
 
-        self.add_project_group_key(new_key);
+        if new_paths.is_empty() {
+            return;
+        }
+
+        let mut all_paths: Vec<PathBuf> = existing_paths.to_vec();
+        all_paths.extend(new_paths.iter().cloned());
+        let new_path_list = PathList::new(&all_paths);
+        let new_key = ProjectGroupKey::new(group.key.host(), new_path_list);
+
+        self.update_project_group_key(group_key, &new_key, cx);
 
         for workspace in workspaces {
             let project = workspace.read(cx).project().clone();
@@ -897,31 +951,28 @@ impl MultiWorkspace {
 
     pub fn remove_project_group(
         &mut self,
-        key: &ProjectGroupKey,
+        group_key: &ProjectGroupKey,
         window: &mut Window,
         cx: &mut Context<Self>,
     ) -> Task<Result<bool>> {
-        let workspaces: Vec<_> = self
-            .workspaces_for_project_group(key, cx)
-            .cloned()
-            .collect();
-
-        // Compute the neighbor while the key is still in the list.
-        let neighbor_key = {
-            let pos = self.project_group_keys.iter().position(|k| k == key);
-            pos.and_then(|pos| {
-                // Keys are in display order, so pos+1 is below
-                // and pos-1 is above. Try below first.
-                self.project_group_keys.get(pos + 1).or_else(|| {
-                    pos.checked_sub(1)
-                        .and_then(|i| self.project_group_keys.get(i))
-                })
-            })
-            .cloned()
-        };
+        let pos = self
+            .project_groups
+            .iter()
+            .position(|group| group.key == *group_key);
+        let workspaces = self
+            .workspaces_for_project_group(group_key, cx)
+            .unwrap_or_default();
+
+        // Compute the neighbor while the group is still in the list.
+        let neighbor_key = pos.and_then(|pos| {
+            self.project_groups
+                .get(pos + 1)
+                .or_else(|| pos.checked_sub(1).and_then(|i| self.project_groups.get(i)))
+                .map(|group| group.key.clone())
+        });
 
-        // Now remove the key.
-        self.project_group_keys.retain(|k| k != key);
+        // Now remove the group.
+        self.project_groups.retain(|group| group.key != *group_key);
 
         self.remove(
             workspaces,
@@ -962,14 +1013,17 @@ impl MultiWorkspace {
         host: Option<&RemoteConnectionOptions>,
         cx: &App,
     ) -> Option<Entity<Workspace>> {
-        self.workspaces
-            .iter()
-            .find(|ws| {
-                let key = ws.read(cx).project_group_key(cx);
-                key.host().as_ref() == host
-                    && PathList::new(&ws.read(cx).root_paths(cx)) == *path_list
-            })
-            .cloned()
+        for workspace in self.workspaces() {
+            let root_paths = PathList::new(&workspace.read(cx).root_paths(cx));
+            let key = workspace.read(cx).project_group_key(cx);
+            let host_matches = key.host().as_ref() == host;
+            let paths_match = root_paths == *path_list;
+            if host_matches && paths_match {
+                return Some(workspace.clone());
+            }
+        }
+
+        None
     }
 
     /// Finds an existing workspace whose paths match, or creates a new one.
@@ -1068,12 +1122,6 @@ impl MultiWorkspace {
             return Task::ready(Ok(workspace));
         }
 
-        if let Some(transient) = self.active_workspace.transient_workspace() {
-            if transient.read(cx).project_group_key(cx).path_list() == &path_list {
-                return Task::ready(Ok(transient.clone()));
-            }
-        }
-
         let paths = path_list.paths().to_vec();
         let app_state = self.workspace().read(cx).app_state().clone();
         let requesting_window = window.window_handle().downcast::<MultiWorkspace>();
@@ -1097,16 +1145,14 @@ impl MultiWorkspace {
     }
 
     pub fn workspace(&self) -> &Entity<Workspace> {
-        match &self.active_workspace {
-            ActiveWorkspace::Persistent(index) => &self.workspaces[*index],
-            ActiveWorkspace::Transient(workspace) => workspace,
-        }
+        &self.active_workspace
     }
 
     pub fn workspaces(&self) -> impl Iterator<Item = &Entity<Workspace>> {
-        self.workspaces
+        let active_is_retained = self.is_workspace_retained(&self.active_workspace);
+        self.retained_workspaces
             .iter()
-            .chain(self.active_workspace.transient_workspace())
+            .chain(std::iter::once(&self.active_workspace).filter(move |_| !active_is_retained))
     }
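The rewritten `workspaces()` iterator above is the heart of the retained/transient split: every retained workspace is yielded, plus the active one exactly once when it is not itself retained. A minimal standalone sketch, using plain integers as stand-in workspace handles (the real code uses gpui `Entity<Workspace>`):

```rust
// Toy model of the retained-workspaces iteration. `retained` and `active`
// stand in for `retained_workspaces` and `active_workspace`.
struct MultiWorkspace {
    retained: Vec<u32>,
    active: u32,
}

impl MultiWorkspace {
    // Mirrors `workspaces()`: all retained workspaces, chained with the
    // active workspace only when it is transient (not retained), so a
    // retained active workspace is never yielded twice.
    fn workspaces(&self) -> impl Iterator<Item = &u32> {
        let active_is_retained = self.retained.contains(&self.active);
        self.retained
            .iter()
            .chain(std::iter::once(&self.active).filter(move |_| !active_is_retained))
    }
}

fn main() {
    // Transient active workspace: included once, after the retained ones.
    let mw = MultiWorkspace { retained: vec![1, 2], active: 3 };
    assert_eq!(mw.workspaces().count(), 3);

    // Retained active workspace: not double-counted.
    let mw = MultiWorkspace { retained: vec![1, 2], active: 2 };
    assert_eq!(mw.workspaces().count(), 2);
}
```

The `filter(move |_| …)` on a one-element `once` iterator keeps the return type a single `impl Iterator` without boxing, matching the shape of the diff.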
 
     /// Adds a workspace to this window as persistent without changing which
@@ -1114,7 +1160,17 @@ impl MultiWorkspace {
     /// persistent list regardless of sidebar state — it's used for system-
     /// initiated additions like deserialization and worktree discovery.
     pub fn add(&mut self, workspace: Entity<Workspace>, window: &Window, cx: &mut Context<Self>) {
-        self.insert_workspace(workspace, window, cx);
+        if self.is_workspace_retained(&workspace) {
+            return;
+        }
+
+        if workspace != self.active_workspace {
+            self.register_workspace(&workspace, window, cx);
+        }
+
+        let key = workspace.read(cx).project_group_key(cx);
+        self.retain_workspace(workspace, key, cx);
+        cx.notify();
     }
 
     /// Ensures the workspace is in the multiworkspace and makes it the active one.
@@ -1124,41 +1180,30 @@ impl MultiWorkspace {
         window: &mut Window,
         cx: &mut Context<Self>,
     ) {
-        // Re-activating the current workspace is a no-op.
         if self.workspace() == &workspace {
             self.focus_active_workspace(window, cx);
             return;
         }
 
-        // Resolve where we're going.
-        let new_index = if let Some(index) = self.workspaces.iter().position(|w| *w == workspace) {
-            Some(index)
-        } else if self.sidebar_open {
-            Some(self.insert_workspace(workspace.clone(), &*window, cx))
-        } else {
-            None
-        };
+        let old_active_workspace = self.active_workspace.clone();
+        let old_active_was_retained = self.active_workspace_is_retained();
+        let workspace_was_retained = self.is_workspace_retained(&workspace);
 
-        // Transition the active workspace.
-        if let Some(index) = new_index {
-            if let Some(old) = self.active_workspace.set_persistent(index) {
-                if self.sidebar_open {
-                    self.promote_transient(old, cx);
-                } else {
-                    self.detach_workspace(&old, cx);
-                }
-            }
-        } else {
-            Self::subscribe_to_workspace(&workspace, window, cx);
-            let weak_self = cx.weak_entity();
-            workspace.update(cx, |workspace, cx| {
-                workspace.set_multi_workspace(weak_self, cx);
-            });
-            if let Some(old) = self.active_workspace.set_transient(workspace) {
-                self.detach_workspace(&old, cx);
+        if !workspace_was_retained {
+            self.register_workspace(&workspace, window, cx);
+
+            if self.sidebar_open {
+                let key = workspace.read(cx).project_group_key(cx);
+                self.retain_workspace(workspace.clone(), key, cx);
             }
         }
 
+        self.active_workspace = workspace;
+
+        if !self.sidebar_open && !old_active_was_retained {
+            self.detach_workspace(&old_active_workspace, cx);
+        }
+
         cx.emit(MultiWorkspaceEvent::ActiveWorkspaceChanged);
         self.serialize(cx);
         self.focus_active_workspace(window, cx);
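The new `activate` body reduces to a small state machine: retain the incoming workspace when the sidebar is open, then detach the outgoing workspace only if it was transient and the sidebar is closed. A self-contained sketch of that logic, with integers standing in for workspace entities and a `detached` list standing in for `detach_workspace` side effects (both names are illustrative, not from the PR):

```rust
#[derive(Default)]
struct MultiWorkspace {
    retained: Vec<u32>,
    active: u32,
    sidebar_open: bool,
    detached: Vec<u32>, // records which workspaces were detached
}

impl MultiWorkspace {
    fn is_retained(&self, ws: u32) -> bool {
        self.retained.contains(&ws)
    }

    // Mirrors the rewritten `activate`: capture the old active workspace's
    // retained status *before* switching, retain the new workspace only
    // when the sidebar is open, and detach the old one only when it was
    // transient with the sidebar closed.
    fn activate(&mut self, ws: u32) {
        if self.active == ws {
            return;
        }
        let old = self.active;
        let old_was_retained = self.is_retained(old);
        if !self.is_retained(ws) && self.sidebar_open {
            self.retained.push(ws);
        }
        self.active = ws;
        if !self.sidebar_open && !old_was_retained {
            self.detached.push(old);
        }
    }
}

fn main() {
    // Sidebar closed: switching away detaches the old transient workspace.
    let mut mw = MultiWorkspace { active: 1, ..Default::default() };
    mw.activate(2);
    assert_eq!(mw.detached, vec![1]);

    // Sidebar open: the new workspace is retained, nothing is detached.
    let mut mw = MultiWorkspace { active: 1, sidebar_open: true, ..Default::default() };
    mw.activate(2);
    assert!(mw.is_retained(2));
    assert!(mw.detached.is_empty());
}
```

Reading `old_was_retained` before reassigning `self.active` is the load-bearing ordering, matching `old_active_was_retained` in the diff.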
@@ -1169,77 +1214,42 @@ impl MultiWorkspace {
     /// transient, so it is retained across workspace switches even when
     /// the sidebar is closed. No-op if the workspace is already persistent.
     pub fn retain_active_workspace(&mut self, cx: &mut Context<Self>) {
-        if let ActiveWorkspace::Transient(workspace) = &self.active_workspace {
-            let workspace = workspace.clone();
-            let index = self.promote_transient(workspace, cx);
-            self.active_workspace = ActiveWorkspace::Persistent(index);
-            self.serialize(cx);
-            cx.notify();
+        let workspace = self.active_workspace.clone();
+        if self.is_workspace_retained(&workspace) {
+            return;
         }
-    }
 
-    /// Promotes a former transient workspace into the persistent list.
-    /// Returns the index of the newly inserted workspace.
-    fn promote_transient(&mut self, workspace: Entity<Workspace>, cx: &mut Context<Self>) -> usize {
-        let project_group_key = self.project_group_key_for_workspace(&workspace, cx);
-        self.set_workspace_group_key(&workspace, project_group_key);
-        self.workspaces.push(workspace.clone());
-        cx.emit(MultiWorkspaceEvent::WorkspaceAdded(workspace));
-        self.workspaces.len() - 1
+        let key = workspace.read(cx).project_group_key(cx);
+        self.retain_workspace(workspace, key, cx);
+        self.serialize(cx);
+        cx.notify();
     }
 
-    /// Collapses to a single transient workspace, discarding all persistent
-    /// workspaces. Used when multi-workspace is disabled (e.g. disable_ai).
+    /// Collapses to a single workspace, discarding all groups.
+    /// Used when multi-workspace is disabled (e.g. disable_ai).
     fn collapse_to_single_workspace(&mut self, window: &mut Window, cx: &mut Context<Self>) {
         if self.sidebar_open {
             self.close_sidebar(window, cx);
         }
-        let active = self.workspace().clone();
-        for workspace in std::mem::take(&mut self.workspaces) {
-            if workspace != active {
+
+        let active_workspace = self.active_workspace.clone();
+        for workspace in self.retained_workspaces.clone() {
+            if workspace != active_workspace {
                 self.detach_workspace(&workspace, cx);
             }
         }
-        self.project_group_keys.clear();
-        self.workspace_group_keys.clear();
-        self.active_workspace = ActiveWorkspace::Transient(active);
-        cx.notify();
-    }
 
-    /// Inserts a workspace into the list if not already present. Returns the
-    /// index of the workspace (existing or newly inserted). Does not change
-    /// the active workspace index.
-    fn insert_workspace(
-        &mut self,
-        workspace: Entity<Workspace>,
-        window: &Window,
-        cx: &mut Context<Self>,
-    ) -> usize {
-        if let Some(index) = self.workspaces.iter().position(|w| *w == workspace) {
-            index
-        } else {
-            let project_group_key = self.project_group_key_for_workspace(&workspace, cx);
-
-            Self::subscribe_to_workspace(&workspace, window, cx);
-            self.sync_sidebar_to_workspace(&workspace, cx);
-            let weak_self = cx.weak_entity();
-            workspace.update(cx, |workspace, cx| {
-                workspace.set_multi_workspace(weak_self, cx);
-            });
-
-            self.set_workspace_group_key(&workspace, project_group_key);
-            self.workspaces.push(workspace.clone());
-            cx.emit(MultiWorkspaceEvent::WorkspaceAdded(workspace));
-            cx.notify();
-            self.workspaces.len() - 1
-        }
+        self.retained_workspaces.clear();
+        self.project_groups.clear();
+        cx.notify();
     }
 
     /// Detaches a workspace: clears session state, DB binding, cached
     /// group key, and emits `WorkspaceRemoved`. The DB row is preserved
     /// so the workspace still appears in the recent-projects list.
     fn detach_workspace(&mut self, workspace: &Entity<Workspace>, cx: &mut Context<Self>) {
-        self.workspace_group_keys.remove(&workspace.entity_id());
+        self.retained_workspaces
+            .retain(|retained| retained != workspace);
         cx.emit(MultiWorkspaceEvent::WorkspaceRemoved(workspace.entity_id()));
         workspace.update(cx, |workspace, _cx| {
             workspace.session_id.take();
@@ -1268,16 +1278,22 @@ impl MultiWorkspace {
         }
     }
 
-    pub(crate) fn serialize(&mut self, cx: &mut Context<Self>) {
+    pub fn serialize(&mut self, cx: &mut Context<Self>) {
         self._serialize_task = Some(cx.spawn(async move |this, cx| {
             let Some((window_id, state)) = this
                 .read_with(cx, |this, cx| {
                     let state = MultiWorkspaceState {
                         active_workspace_id: this.workspace().read(cx).database_id(),
-                        project_group_keys: this
-                            .project_group_keys()
-                            .cloned()
-                            .map(Into::into)
+                        project_groups: this
+                            .project_groups
+                            .iter()
+                            .map(|group| {
+                                crate::persistence::model::SerializedProjectGroup::from_group(
+                                    &group.key,
+                                    group.expanded,
+                                    group.visible_thread_count,
+                                )
+                            })
                             .collect::<Vec<_>>(),
                         sidebar_open: this.sidebar_open,
                         sidebar_state: this.sidebar.as_ref().and_then(|s| s.serialized_state(cx)),
@@ -1415,42 +1431,31 @@ impl MultiWorkspace {
     }
 
     #[cfg(any(test, feature = "test-support"))]
-    pub fn assert_project_group_key_integrity(&self, cx: &App) -> anyhow::Result<()> {
-        let stored_keys: HashSet<&ProjectGroupKey> = self.project_group_keys().collect();
-
-        let workspace_group_keys: HashSet<&ProjectGroupKey> =
-            self.workspace_group_keys.values().collect();
-        let extra_keys = &workspace_group_keys - &stored_keys;
-        anyhow::ensure!(
-            extra_keys.is_empty(),
-            "workspace_group_keys values not in project_group_keys: {:?}",
-            extra_keys,
-        );
+    pub fn test_expand_all_groups(&mut self) {
+        self.set_all_groups_expanded(true);
+        self.set_all_groups_visible_thread_count(Some(10_000));
+    }
 
-        let cached_ids: HashSet<EntityId> = self.workspace_group_keys.keys().copied().collect();
-        let workspace_ids: HashSet<EntityId> =
-            self.workspaces.iter().map(|ws| ws.entity_id()).collect();
-        anyhow::ensure!(
-            cached_ids == workspace_ids,
-            "workspace_group_keys entity IDs don't match workspaces.\n\
-             only in cache: {:?}\n\
-             only in workspaces: {:?}",
-            &cached_ids - &workspace_ids,
-            &workspace_ids - &cached_ids,
-        );
+    #[cfg(any(test, feature = "test-support"))]
+    pub fn assert_project_group_key_integrity(&self, cx: &App) -> anyhow::Result<()> {
+        let mut retained_ids: collections::HashSet<EntityId> = Default::default();
+        for workspace in &self.retained_workspaces {
+            anyhow::ensure!(
+                retained_ids.insert(workspace.entity_id()),
+                "workspace {:?} is retained more than once",
+                workspace.entity_id(),
+            );
 
-        for workspace in self.workspaces() {
             let live_key = workspace.read(cx).project_group_key(cx);
-            let cached_key = &self.workspace_group_keys[&workspace.entity_id()];
             anyhow::ensure!(
-                *cached_key == live_key,
-                "workspace {:?} has live key {:?} but cached key {:?}",
+                self.project_groups
+                    .iter()
+                    .any(|group| group.key == live_key),
+                "workspace {:?} has live key {:?} but no project-group metadata",
                 workspace.entity_id(),
                 live_key,
-                cached_key,
             );
         }
-
         Ok(())
     }
 

crates/workspace/src/multi_workspace_tests.rs

@@ -1,9 +1,10 @@
 use std::path::PathBuf;
 
 use super::*;
+use client::proto;
 use fs::FakeFs;
 use gpui::TestAppContext;
-use project::{DisableAiSettings, ProjectGroupKey};
+use project::DisableAiSettings;
 use serde_json::json;
 use settings::SettingsStore;
 use util::path;
@@ -105,9 +106,9 @@ async fn test_project_group_keys_initial(cx: &mut TestAppContext) {
     });
 
     multi_workspace.read_with(cx, |mw, _cx| {
-        let keys: Vec<&ProjectGroupKey> = mw.project_group_keys().collect();
+        let keys: Vec<ProjectGroupKey> = mw.project_group_keys();
         assert_eq!(keys.len(), 1, "should have exactly one key on creation");
-        assert_eq!(*keys[0], expected_key);
+        assert_eq!(keys[0], expected_key);
     });
 }
 
@@ -135,7 +136,7 @@ async fn test_project_group_keys_add_workspace(cx: &mut TestAppContext) {
     });
 
     multi_workspace.read_with(cx, |mw, _cx| {
-        assert_eq!(mw.project_group_keys().count(), 1);
+        assert_eq!(mw.project_group_keys().len(), 1);
     });
 
     // Adding a workspace with a different project root adds a new key.
@@ -144,14 +145,14 @@ async fn test_project_group_keys_add_workspace(cx: &mut TestAppContext) {
     });
 
     multi_workspace.read_with(cx, |mw, _cx| {
-        let keys: Vec<&ProjectGroupKey> = mw.project_group_keys().collect();
+        let keys: Vec<ProjectGroupKey> = mw.project_group_keys();
         assert_eq!(
             keys.len(),
             2,
             "should have two keys after adding a second workspace"
         );
-        assert_eq!(*keys[0], key_b);
-        assert_eq!(*keys[1], key_a);
+        assert_eq!(keys[0], key_b);
+        assert_eq!(keys[1], key_a);
     });
 }
 
@@ -225,7 +226,7 @@ async fn test_project_group_keys_duplicate_not_added(cx: &mut TestAppContext) {
     });
 
     multi_workspace.read_with(cx, |mw, _cx| {
-        let keys: Vec<&ProjectGroupKey> = mw.project_group_keys().collect();
+        let keys: Vec<ProjectGroupKey> = mw.project_group_keys();
         assert_eq!(
             keys.len(),
             1,
@@ -233,3 +234,365 @@ async fn test_project_group_keys_duplicate_not_added(cx: &mut TestAppContext) {
         );
     });
 }
+
+#[gpui::test]
+async fn test_groups_with_same_paths_merge(cx: &mut TestAppContext) {
+    init_test(cx);
+    let fs = FakeFs::new(cx.executor());
+    fs.insert_tree("/a", json!({ "file.txt": "" })).await;
+    fs.insert_tree("/b", json!({ "file.txt": "" })).await;
+    let project_a = Project::test(fs.clone(), ["/a".as_ref()], cx).await;
+    let project_b = Project::test(fs.clone(), ["/b".as_ref()], cx).await;
+
+    let (multi_workspace, cx) =
+        cx.add_window_view(|window, cx| MultiWorkspace::test_new(project_a, window, cx));
+
+    // Open the sidebar so workspaces get grouped.
+    multi_workspace.update(cx, |mw, cx| {
+        mw.open_sidebar(cx);
+    });
+    cx.run_until_parked();
+
+    // Add a second workspace, creating group_b with path [/b].
+    let group_a_key = multi_workspace.update_in(cx, |mw, window, cx| {
+        let group_a_key = mw.project_groups(cx)[0].key.clone();
+        mw.test_add_workspace(project_b, window, cx);
+        group_a_key
+    });
+    cx.run_until_parked();
+
+    // Now add /b to group_a so it has [/a, /b].
+    multi_workspace.update(cx, |mw, cx| {
+        mw.add_folders_to_project_group(&group_a_key, vec!["/b".into()], cx);
+    });
+    cx.run_until_parked();
+
+    // Verify we have two groups.
+    multi_workspace.read_with(cx, |mw, cx| {
+        assert_eq!(
+            mw.project_groups(cx).len(),
+            2,
+            "should have two groups before the merge"
+        );
+    });
+
+    // After adding /b, group_a's key changed. Get the updated key.
+    let group_a_key_updated = multi_workspace.read_with(cx, |mw, cx| {
+        mw.project_groups(cx)
+            .iter()
+            .find(|g| g.key.path_list().paths().contains(&PathBuf::from("/a")))
+            .unwrap()
+            .key
+            .clone()
+    });
+
+    // Remove /a from group_a, making its key [/b] — same as group_b.
+    multi_workspace.update(cx, |mw, cx| {
+        mw.remove_folder_from_project_group(&group_a_key_updated, Path::new("/a"), cx);
+    });
+    cx.run_until_parked();
+
+    // The two groups now have identical keys [/b] and should have been merged.
+    multi_workspace.read_with(cx, |mw, cx| {
+        assert_eq!(
+            mw.project_groups(cx).len(),
+            1,
+            "groups with identical paths should be merged into one"
+        );
+    });
+}
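The merge behavior this test exercises — two groups collapsing when a path edit leaves them with identical keys — can be sketched with sets of path strings as stand-in group keys (a simplification; real keys also carry host information):

```rust
use std::collections::BTreeSet;

// Toy project group keyed by its set of root paths.
#[derive(Clone, Debug, PartialEq)]
struct Group {
    paths: BTreeSet<String>,
}

// Remove a path from one group, then merge any groups whose keys
// became identical, keeping the first occurrence of each key.
fn remove_path(groups: &mut Vec<Group>, index: usize, path: &str) {
    groups[index].paths.remove(path);
    let key = groups[index].paths.clone();
    let mut seen = false;
    groups.retain(|g| {
        if g.paths == key {
            if seen {
                return false; // duplicate key: merged away
            }
            seen = true;
        }
        true
    });
}

fn main() {
    let mut groups = vec![
        Group { paths: ["/a".to_string(), "/b".to_string()].into_iter().collect() },
        Group { paths: ["/b".to_string()].into_iter().collect() },
    ];
    // Removing /a leaves group 0 with key ["/b"], identical to group 1.
    remove_path(&mut groups, 0, "/a");
    assert_eq!(groups.len(), 1);
}
```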
+
+#[gpui::test]
+async fn test_adding_worktree_updates_project_group_key(cx: &mut TestAppContext) {
+    init_test(cx);
+    let fs = FakeFs::new(cx.executor());
+    fs.insert_tree("/root_a", json!({ "file.txt": "" })).await;
+    fs.insert_tree("/root_b", json!({ "other.txt": "" })).await;
+    let project = Project::test(fs.clone(), ["/root_a".as_ref()], cx).await;
+
+    let initial_key = project.read_with(cx, |p, cx| p.project_group_key(cx));
+
+    let (multi_workspace, cx) =
+        cx.add_window_view(|window, cx| MultiWorkspace::test_new(project.clone(), window, cx));
+
+    // Open sidebar to retain the workspace and create the initial group.
+    multi_workspace.update(cx, |mw, cx| {
+        mw.open_sidebar(cx);
+    });
+    cx.run_until_parked();
+
+    multi_workspace.read_with(cx, |mw, _cx| {
+        let keys = mw.project_group_keys();
+        assert_eq!(keys.len(), 1);
+        assert_eq!(keys[0], initial_key);
+    });
+
+    // Add a second worktree to the project. This triggers WorktreeAdded →
+    // handle_workspace_key_change, which should update the group key.
+    project
+        .update(cx, |project, cx| {
+            project.find_or_create_worktree("/root_b", true, cx)
+        })
+        .await
+        .expect("adding worktree should succeed");
+    cx.run_until_parked();
+
+    let updated_key = project.read_with(cx, |p, cx| p.project_group_key(cx));
+    assert_ne!(
+        initial_key, updated_key,
+        "adding a worktree should change the project group key"
+    );
+
+    multi_workspace.read_with(cx, |mw, _cx| {
+        let keys = mw.project_group_keys();
+        assert!(
+            keys.contains(&updated_key),
+            "should contain the updated key; got {keys:?}"
+        );
+    });
+}
+
+#[gpui::test]
+async fn test_find_or_create_local_workspace_reuses_active_workspace_when_sidebar_closed(
+    cx: &mut TestAppContext,
+) {
+    init_test(cx);
+    let fs = FakeFs::new(cx.executor());
+    fs.insert_tree("/root_a", json!({ "file.txt": "" })).await;
+    let project = Project::test(fs, ["/root_a".as_ref()], cx).await;
+
+    let (multi_workspace, cx) =
+        cx.add_window_view(|window, cx| MultiWorkspace::test_new(project, window, cx));
+
+    let active_workspace = multi_workspace.read_with(cx, |mw, cx| {
+        assert!(
+            mw.project_groups(cx).is_empty(),
+            "sidebar-closed setup should start with no retained project groups"
+        );
+        mw.workspace().clone()
+    });
+    let active_workspace_id = active_workspace.entity_id();
+
+    let workspace = multi_workspace
+        .update_in(cx, |mw, window, cx| {
+            mw.find_or_create_local_workspace(
+                PathList::new(&[PathBuf::from("/root_a")]),
+                window,
+                cx,
+            )
+        })
+        .await
+        .expect("reopening the same local workspace should succeed");
+
+    assert_eq!(
+        workspace.entity_id(),
+        active_workspace_id,
+        "should reuse the current active workspace when the sidebar is closed"
+    );
+
+    multi_workspace.read_with(cx, |mw, _cx| {
+        assert_eq!(
+            mw.workspace().entity_id(),
+            active_workspace_id,
+            "active workspace should remain unchanged after reopening the same path"
+        );
+        assert_eq!(
+            mw.workspaces().count(),
+            1,
+            "reusing the active workspace should not create a second open workspace"
+        );
+    });
+}
+
+#[gpui::test]
+async fn test_find_or_create_local_workspace_reuses_active_workspace_after_sidebar_open(
+    cx: &mut TestAppContext,
+) {
+    init_test(cx);
+    let fs = FakeFs::new(cx.executor());
+    fs.insert_tree("/root_a", json!({ "file.txt": "" })).await;
+    let project = Project::test(fs, ["/root_a".as_ref()], cx).await;
+
+    let (multi_workspace, cx) =
+        cx.add_window_view(|window, cx| MultiWorkspace::test_new(project, window, cx));
+
+    multi_workspace.update(cx, |mw, cx| {
+        mw.open_sidebar(cx);
+    });
+    cx.run_until_parked();
+
+    let active_workspace = multi_workspace.read_with(cx, |mw, cx| {
+        assert_eq!(
+            mw.project_groups(cx).len(),
+            1,
+            "opening the sidebar should retain the active workspace in a project group"
+        );
+        mw.workspace().clone()
+    });
+    let active_workspace_id = active_workspace.entity_id();
+
+    let workspace = multi_workspace
+        .update_in(cx, |mw, window, cx| {
+            mw.find_or_create_local_workspace(
+                PathList::new(&[PathBuf::from("/root_a")]),
+                window,
+                cx,
+            )
+        })
+        .await
+        .expect("reopening the same retained local workspace should succeed");
+
+    assert_eq!(
+        workspace.entity_id(),
+        active_workspace_id,
+        "should reuse the retained active workspace after the sidebar is opened"
+    );
+
+    multi_workspace.read_with(cx, |mw, _cx| {
+        assert_eq!(
+            mw.workspaces().count(),
+            1,
+            "reopening the same retained workspace should not create another workspace"
+        );
+    });
+}
+
+#[gpui::test]
+async fn test_switching_projects_with_sidebar_closed_detaches_old_active_workspace(
+    cx: &mut TestAppContext,
+) {
+    init_test(cx);
+    let fs = FakeFs::new(cx.executor());
+    fs.insert_tree("/root_a", json!({ "file_a.txt": "" })).await;
+    fs.insert_tree("/root_b", json!({ "file_b.txt": "" })).await;
+    let project_a = Project::test(fs.clone(), ["/root_a".as_ref()], cx).await;
+    let project_b = Project::test(fs, ["/root_b".as_ref()], cx).await;
+
+    let (multi_workspace, cx) =
+        cx.add_window_view(|window, cx| MultiWorkspace::test_new(project_a, window, cx));
+
+    let workspace_a = multi_workspace.read_with(cx, |mw, cx| {
+        assert!(
+            mw.project_groups(cx).is_empty(),
+            "sidebar-closed setup should start with no retained project groups"
+        );
+        mw.workspace().clone()
+    });
+    assert!(
+        workspace_a.read_with(cx, |workspace, _cx| workspace.session_id().is_some()),
+        "initial active workspace should start attached to the session"
+    );
+
+    let workspace_b = multi_workspace.update_in(cx, |mw, window, cx| {
+        mw.test_add_workspace(project_b, window, cx)
+    });
+    cx.run_until_parked();
+
+    multi_workspace.read_with(cx, |mw, _cx| {
+        assert_eq!(
+            mw.workspace().entity_id(),
+            workspace_b.entity_id(),
+            "the new workspace should become active"
+        );
+        assert_eq!(
+            mw.workspaces().count(),
+            1,
+            "only the new active workspace should remain open after switching with the sidebar closed"
+        );
+    });
+
+    assert!(
+        workspace_a.read_with(cx, |workspace, _cx| workspace.session_id().is_none()),
+        "the previous active workspace should be detached when switching away with the sidebar closed"
+    );
+}
+
+#[gpui::test]
+async fn test_remote_worktree_without_git_updates_project_group(cx: &mut TestAppContext) {
+    init_test(cx);
+    let fs = FakeFs::new(cx.executor());
+    fs.insert_tree("/local", json!({ "file.txt": "" })).await;
+    let project = Project::test(fs.clone(), ["/local".as_ref()], cx).await;
+
+    let (multi_workspace, cx) =
+        cx.add_window_view(|window, cx| MultiWorkspace::test_new(project.clone(), window, cx));
+
+    multi_workspace.update(cx, |mw, cx| {
+        mw.open_sidebar(cx);
+    });
+    cx.run_until_parked();
+
+    let initial_key = project.read_with(cx, |p, cx| p.project_group_key(cx));
+    multi_workspace.read_with(cx, |mw, _cx| {
+        let keys = mw.project_group_keys();
+        assert_eq!(keys.len(), 1);
+        assert_eq!(keys[0], initial_key);
+    });
+
+    // Add a remote worktree without git repo info.
+    let remote_worktree = project.update(cx, |project, cx| {
+        project.add_test_remote_worktree("/remote/project", cx)
+    });
+    cx.run_until_parked();
+
+    // The remote worktree has no entries yet, so project_group_key should
+    // still exclude it.
+    let key_after_add = project.read_with(cx, |p, cx| p.project_group_key(cx));
+    assert_eq!(
+        key_after_add, initial_key,
+        "remote worktree without entries should not affect the group key"
+    );
+
+    // Send an UpdateWorktree to the remote worktree with entries but no repo.
+    // This triggers UpdatedRootRepoCommonDir on the first update (the fix),
+    // which propagates through WorktreeStore → Project → MultiWorkspace.
+    let worktree_id = remote_worktree.read_with(cx, |wt, _| wt.id().to_proto());
+    remote_worktree.update(cx, |worktree, _cx| {
+        worktree
+            .as_remote()
+            .unwrap()
+            .update_from_remote(proto::UpdateWorktree {
+                project_id: 0,
+                worktree_id,
+                abs_path: "/remote/project".to_string(),
+                root_name: "project".to_string(),
+                updated_entries: vec![proto::Entry {
+                    id: 1,
+                    is_dir: true,
+                    path: "".to_string(),
+                    inode: 1,
+                    mtime: Some(proto::Timestamp {
+                        seconds: 0,
+                        nanos: 0,
+                    }),
+                    is_ignored: false,
+                    is_hidden: false,
+                    is_external: false,
+                    is_fifo: false,
+                    size: None,
+                    canonical_path: None,
+                }],
+                removed_entries: vec![],
+                scan_id: 1,
+                is_last_update: true,
+                updated_repositories: vec![],
+                removed_repositories: vec![],
+                root_repo_common_dir: None,
+            });
+    });
+    cx.run_until_parked();
+
+    let updated_key = project.read_with(cx, |p, cx| p.project_group_key(cx));
+    assert_ne!(
+        initial_key, updated_key,
+        "adding a remote worktree should change the project group key"
+    );
+
+    multi_workspace.read_with(cx, |mw, _cx| {
+        let keys = mw.project_group_keys();
+        assert!(
+            keys.contains(&updated_key),
+            "should contain the updated key; got {keys:?}"
+        );
+    });
+}

crates/workspace/src/persistence.rs

@@ -2495,6 +2495,7 @@ pub fn delete_unloaded_items(
 #[cfg(test)]
 mod tests {
     use super::*;
+    use crate::ProjectGroupKey;
     use crate::{
         multi_workspace::MultiWorkspace,
         persistence::{
@@ -2508,7 +2509,7 @@ mod tests {
     use feature_flags::FeatureFlagAppExt;
     use gpui::AppContext as _;
     use pretty_assertions::assert_eq;
-    use project::{Project, ProjectGroupKey};
+    use project::Project;
     use remote::SshConnectionOptions;
     use serde_json::json;
     use std::{thread, time::Duration};
@@ -2567,10 +2568,14 @@ mod tests {
              the newly activated workspace's database id"
         );
 
-        // --- Remove the first workspace (index 0, which is not the active one) ---
-        multi_workspace.update_in(cx, |mw, window, cx| {
-            let ws = mw.workspaces().nth(0).unwrap().clone();
-            mw.remove([ws], |_, _, _| unreachable!(), window, cx)
+        // --- Remove the non-active workspace ---
+        multi_workspace.update_in(cx, |mw, window, cx| {
+            let active = mw.workspace().clone();
+            let ws = mw
+                .workspaces()
+                .find(|ws| *ws != &active)
+                .expect("should have a non-active workspace");
+            mw.remove([ws.clone()], |_, _, _| unreachable!(), window, cx)
                 .detach_and_log_err(cx);
         });
 
@@ -4007,7 +4012,7 @@ mod tests {
             window_10,
             MultiWorkspaceState {
                 active_workspace_id: Some(WorkspaceId(2)),
-                project_group_keys: vec![],
+                project_groups: vec![],
                 sidebar_open: true,
                 sidebar_state: None,
             },
@@ -4019,7 +4024,7 @@ mod tests {
             window_20,
             MultiWorkspaceState {
                 active_workspace_id: Some(WorkspaceId(3)),
-                project_group_keys: vec![],
+                project_groups: vec![],
                 sidebar_open: false,
                 sidebar_state: None,
             },
@@ -4771,38 +4776,8 @@ mod tests {
             mw.test_add_workspace(project_1_linked_worktree.clone(), window, cx)
         });
 
-        // Assign database IDs and set up session bindings so serialization
-        // writes real rows.
-        multi_workspace.update_in(cx, |mw, _, cx| {
-            for workspace in mw.workspaces() {
-                workspace.update(cx, |ws, _cx| {
-                    ws.set_random_database_id();
-                });
-            }
-        });
-
-        // Flush serialization for each individual workspace (writes to SQLite)
-        // and for the MultiWorkspace (writes to KVP).
-        let tasks = multi_workspace.update_in(cx, |mw, window, cx| {
-            let session_id = mw.workspace().read(cx).session_id();
-            let window_id_u64 = window.window_handle().window_id().as_u64();
-
-            let mut tasks: Vec<Task<()>> = Vec::new();
-            for workspace in mw.workspaces() {
-                tasks.push(workspace.update(cx, |ws, cx| ws.flush_serialization(window, cx)));
-                if let Some(db_id) = workspace.read(cx).database_id() {
-                    let db = WorkspaceDb::global(cx);
-                    let session_id = session_id.clone();
-                    tasks.push(cx.background_spawn(async move {
-                        db.set_session_binding(db_id, session_id, Some(window_id_u64))
-                            .await
-                            .log_err();
-                    }));
-                }
-            }
-            mw.serialize(cx);
-            tasks
-        });
+        let tasks =
+            multi_workspace.update_in(cx, |mw, window, cx| mw.flush_all_serialization(window, cx));
         cx.run_until_parked();
         for task in tasks {
             task.await;
@@ -4843,13 +4818,13 @@ mod tests {
             serialized.active_workspace.workspace_id,
             active_db_id.unwrap(),
         );
-        assert_eq!(serialized.state.project_group_keys.len(), 2,);
+        assert_eq!(serialized.state.project_groups.len(), 2);
 
         // Verify the serialized project group keys round-trip back to the
         // originals.
         let restored_keys: Vec<ProjectGroupKey> = serialized
             .state
-            .project_group_keys
+            .project_groups
             .iter()
             .cloned()
             .map(Into::into)
@@ -4882,9 +4857,7 @@ mod tests {
 
         // The restored window should have the same project group keys.
         let restored_keys: Vec<ProjectGroupKey> = restored_handle
-            .read_with(cx, |mw: &MultiWorkspace, _cx| {
-                mw.project_group_keys().cloned().collect()
-            })
+            .read_with(cx, |mw: &MultiWorkspace, _cx| mw.project_group_keys())
             .unwrap();
         assert_eq!(
             restored_keys, expected_keys,
@@ -4928,7 +4901,7 @@ mod tests {
         let project_c = Project::test(fs.clone(), [dir_c.as_path()], cx).await;
 
         // Create a multi-workspace with project A, then add B and C.
-        // project_group_keys stores newest first: [C, B, A].
+        // project_groups stores newest first: [C, B, A].
         // Sidebar displays in the same order: C (top), B (middle), A (bottom).
         let (multi_workspace, cx) = cx
             .add_window_view(|window, cx| MultiWorkspace::test_new(project_a.clone(), window, cx));
@@ -4976,7 +4949,7 @@ mod tests {
         // Activate workspace A (the bottom) so removing it tests the
         // "fall back upward" path.
         let workspace_a =
-            multi_workspace.read_with(cx, |mw, _| mw.workspaces().next().cloned().unwrap());
+            multi_workspace.read_with(cx, |mw, _cx| mw.workspaces().next().unwrap().clone());
         multi_workspace.update_in(cx, |mw, window, cx| {
             mw.activate(workspace_a.clone(), window, cx);
         });

crates/workspace/src/persistence/model.rs 🔗

@@ -1,7 +1,7 @@
 use super::{SerializedAxis, SerializedWindowBounds};
 use crate::{
     Member, Pane, PaneAxis, SerializableItemRegistry, Workspace, WorkspaceId, item::ItemHandle,
-    path_list::PathList,
+    multi_workspace::SerializedProjectGroupState, path_list::PathList,
 };
 use anyhow::{Context, Result};
 use async_recursion::async_recursion;
@@ -12,8 +12,9 @@ use db::sqlez::{
 };
 use gpui::{AsyncWindowContext, Entity, WeakEntity, WindowId};
 
+use crate::ProjectGroupKey;
 use language::{Toolchain, ToolchainScope};
-use project::{Project, ProjectGroupKey, debugger::breakpoint_store::SourceBreakpoint};
+use project::{Project, debugger::breakpoint_store::SourceBreakpoint};
 use remote::RemoteConnectionOptions;
 use serde::{Deserialize, Serialize};
 use std::{
@@ -60,31 +61,53 @@ pub struct SessionWorkspace {
 }
 
 #[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
-pub struct SerializedProjectGroupKey {
+pub struct SerializedProjectGroup {
     pub path_list: SerializedPathList,
     pub(crate) location: SerializedWorkspaceLocation,
+    #[serde(default = "default_expanded")]
+    pub expanded: bool,
+    #[serde(default)]
+    pub visible_thread_count: Option<usize>,
+}
+
+fn default_expanded() -> bool {
+    true
 }
 
-impl From<ProjectGroupKey> for SerializedProjectGroupKey {
-    fn from(value: ProjectGroupKey) -> Self {
-        SerializedProjectGroupKey {
-            path_list: value.path_list().serialize(),
-            location: match value.host() {
+impl SerializedProjectGroup {
+    pub fn from_group(
+        key: &ProjectGroupKey,
+        expanded: bool,
+        visible_thread_count: Option<usize>,
+    ) -> Self {
+        Self {
+            path_list: key.path_list().serialize(),
+            location: match key.host() {
                 Some(host) => SerializedWorkspaceLocation::Remote(host),
                 None => SerializedWorkspaceLocation::Local,
             },
+            expanded,
+            visible_thread_count,
         }
     }
-}
 
-impl From<SerializedProjectGroupKey> for ProjectGroupKey {
-    fn from(value: SerializedProjectGroupKey) -> Self {
-        let path_list = PathList::deserialize(&value.path_list);
-        let host = match value.location {
+    pub fn into_restored_state(self) -> SerializedProjectGroupState {
+        let path_list = PathList::deserialize(&self.path_list);
+        let host = match self.location {
             SerializedWorkspaceLocation::Local => None,
             SerializedWorkspaceLocation::Remote(opts) => Some(opts),
         };
-        ProjectGroupKey::new(host, path_list)
+        SerializedProjectGroupState {
+            key: ProjectGroupKey::new(host, path_list),
+            expanded: self.expanded,
+            visible_thread_count: self.visible_thread_count,
+        }
+    }
+}
+
+impl From<SerializedProjectGroup> for ProjectGroupKey {
+    fn from(value: SerializedProjectGroup) -> Self {
+        value.into_restored_state().key
     }
 }
 
@@ -93,7 +116,8 @@ impl From<SerializedProjectGroupKey> for ProjectGroupKey {
 pub struct MultiWorkspaceState {
     pub active_workspace_id: Option<WorkspaceId>,
     pub sidebar_open: bool,
-    pub project_group_keys: Vec<SerializedProjectGroupKey>,
+    #[serde(alias = "project_group_keys")]
+    pub project_groups: Vec<SerializedProjectGroup>,
     #[serde(default)]
     pub sidebar_state: Option<String>,
 }
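The `#[serde(alias = "project_group_keys")]` plus field defaults above let old `SerializedProjectGroupKey` records deserialize into the new shape. A minimal stdlib-only sketch of that fallback behavior, using a hypothetical simplified type (no serde derive; with serde, `#[serde(default = "default_expanded")]` and `#[serde(default)]` produce exactly this when the fields are absent):

```rust
// Hypothetical, simplified stand-in for the serde-derived type.
#[derive(Debug, Clone, PartialEq)]
struct SerializedProjectGroup {
    path_list: String, // placeholder for SerializedPathList
    expanded: bool,
    visible_thread_count: Option<usize>,
}

impl SerializedProjectGroup {
    // Mirrors deserializing a legacy record that only carried the key:
    // missing fields take their serde defaults.
    fn from_legacy_key(path_list: String) -> Self {
        Self {
            path_list,
            expanded: true,             // default_expanded()
            visible_thread_count: None, // #[serde(default)]
        }
    }
}

fn main() {
    let restored = SerializedProjectGroup::from_legacy_key("/root_a".into());
    // Old records restore as expanded groups with no thread-count override.
    assert!(restored.expanded);
    assert!(restored.visible_thread_count.is_none());
}
```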

crates/workspace/src/workspace.rs 🔗

@@ -33,8 +33,9 @@ pub use dock::Panel;
 pub use multi_workspace::{
     CloseWorkspaceSidebar, DraggedSidebar, FocusWorkspaceSidebar, MultiWorkspace,
     MultiWorkspaceEvent, NewThread, NextProject, NextThread, PreviousProject, PreviousThread,
-    ShowFewerThreads, ShowMoreThreads, Sidebar, SidebarEvent, SidebarHandle, SidebarRenderState,
-    SidebarSide, ToggleWorkspaceSidebar, sidebar_side_context_menu,
+    ProjectGroup, ProjectGroupKey, SerializedProjectGroupState, ShowFewerThreads, ShowMoreThreads,
+    Sidebar, SidebarEvent, SidebarHandle, SidebarRenderState, SidebarSide, ToggleWorkspaceSidebar,
+    sidebar_side_context_menu,
 };
 pub use path_list::{PathList, SerializedPathList};
 pub use toast_layer::{ToastAction, ToastLayer, ToastView};
@@ -86,14 +87,14 @@ pub use persistence::{
     WorkspaceDb, delete_unloaded_items,
     model::{
         DockStructure, ItemId, MultiWorkspaceState, SerializedMultiWorkspace,
-        SerializedProjectGroupKey, SerializedWorkspaceLocation, SessionWorkspace,
+        SerializedProjectGroup, SerializedWorkspaceLocation, SessionWorkspace,
     },
     read_serialized_multi_workspaces, resolve_worktree_workspaces,
 };
 use postage::stream::Stream;
 use project::{
-    DirectoryLister, Project, ProjectEntryId, ProjectGroupKey, ProjectPath, ResolvedPath, Worktree,
-    WorktreeId, WorktreeSettings,
+    DirectoryLister, Project, ProjectEntryId, ProjectPath, ResolvedPath, Worktree, WorktreeId,
+    WorktreeSettings,
     debugger::{breakpoint_store::BreakpointStoreEvent, session::ThreadStatus},
     project_settings::ProjectSettings,
     toolchain_store::ToolchainStoreEvent,
@@ -215,6 +216,16 @@ pub trait DebuggerProvider {
     fn active_thread_state(&self, cx: &App) -> Option<ThreadStatus>;
 }
 
+pub trait WorkspaceSidebarDelegate: Send + Sync {
+    fn reconcile_group(
+        &self,
+        workspace: &mut Workspace,
+        group_key: &ProjectGroupKey,
+        window: &mut Window,
+        cx: &mut Context<Workspace>,
+    ) -> bool;
+}
+
 /// Opens a file or directory.
 #[derive(Clone, PartialEq, Deserialize, JsonSchema, Action)]
 #[action(namespace = workspace)]
@@ -1371,6 +1382,7 @@ pub struct Workspace {
     _panels_task: Option<Task<Result<()>>>,
     sidebar_focus_handle: Option<FocusHandle>,
     multi_workspace: Option<WeakEntity<MultiWorkspace>>,
+    sidebar_delegate: Option<Arc<dyn WorkspaceSidebarDelegate>>,
 }
 
 impl EventEmitter<Event> for Workspace {}
@@ -1799,6 +1811,7 @@ impl Workspace {
             removing: false,
             sidebar_focus_handle: None,
             multi_workspace,
+            sidebar_delegate: None,
             open_in_dev_container: false,
             _dev_container_task: None,
         }
@@ -2459,6 +2472,14 @@ impl Workspace {
         self.multi_workspace = Some(multi_workspace);
     }
 
+    pub fn set_sidebar_delegate(&mut self, delegate: Arc<dyn WorkspaceSidebarDelegate>) {
+        self.sidebar_delegate = Some(delegate);
+    }
+
+    pub fn sidebar_delegate(&self) -> Option<Arc<dyn WorkspaceSidebarDelegate>> {
+        self.sidebar_delegate.clone()
+    }
+
     pub fn app_state(&self) -> &Arc<AppState> {
         &self.app_state
     }
@@ -2847,6 +2868,9 @@ impl Workspace {
         window: &mut Window,
         cx: &mut Context<Self>,
     ) -> oneshot::Receiver<Option<Vec<PathBuf>>> {
+        // TODO: If `on_prompt_for_open_path` is set, we should always use it
+        // rather than gating on `use_system_path_prompts`. This would let tests
+        // inject a mock without also having to disable the setting.
         if !lister.is_local(cx) || !WorkspaceSettings::get_global(cx).use_system_path_prompts {
             let prompt = self.on_prompt_for_open_path.take().unwrap();
             let rx = prompt(self, lister, window, cx);
@@ -8726,7 +8750,7 @@ pub async fn restore_multiworkspace(
             log::error!("Failed to restore active workspace: {err:#}");
 
             let mut fallback_handle = None;
-            for key in &state.project_group_keys {
+            for key in &state.project_groups {
                 let key: ProjectGroupKey = key.clone().into();
                 let paths = key.path_list().paths().to_vec();
                 match cx
@@ -8776,20 +8800,21 @@ pub async fn apply_restored_multiworkspace_state(
 ) {
     let MultiWorkspaceState {
         sidebar_open,
-        project_group_keys,
+        project_groups,
         sidebar_state,
         ..
     } = state;
 
-    if !project_group_keys.is_empty() {
+    if !project_groups.is_empty() {
         // Resolve linked worktree paths to their main repo paths so
         // stale keys from previous sessions get normalized and deduped.
-        let mut resolved_keys: Vec<ProjectGroupKey> = Vec::new();
-        for key in project_group_keys
-            .iter()
-            .cloned()
-            .map(ProjectGroupKey::from)
-        {
+        let mut resolved_groups: Vec<SerializedProjectGroupState> = Vec::new();
+        for serialized in project_groups.iter().cloned() {
+            let SerializedProjectGroupState {
+                key,
+                expanded,
+                visible_thread_count,
+            } = serialized.into_restored_state();
             if key.path_list().paths().is_empty() {
                 continue;
             }
@@ -8806,14 +8831,18 @@ pub async fn apply_restored_multiworkspace_state(
                 }
             }
             let resolved = ProjectGroupKey::new(key.host(), PathList::new(&resolved_paths));
-            if !resolved_keys.contains(&resolved) {
-                resolved_keys.push(resolved);
+            if !resolved_groups.iter().any(|g| g.key == resolved) {
+                resolved_groups.push(SerializedProjectGroupState {
+                    key: resolved,
+                    expanded,
+                    visible_thread_count,
+                });
             }
         }
 
         window_handle
-            .update(cx, |multi_workspace, _window, _cx| {
-                multi_workspace.restore_project_group_keys(resolved_keys);
+            .update(cx, |multi_workspace, _window, cx| {
+                multi_workspace.restore_project_groups(resolved_groups, cx);
             })
             .ok();
     }
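The dedup in the hunk above keeps the first occurrence of each resolved key, so a group's `expanded`/`visible_thread_count` state travels with whichever serialized entry resolved to that key first. A stdlib-only sketch of that first-wins dedup, with a hypothetical `Group` type standing in for `SerializedProjectGroupState`:

```rust
// Hypothetical stand-in for SerializedProjectGroupState.
#[derive(Debug, Clone, PartialEq)]
struct Group {
    key: String, // stand-in for ProjectGroupKey
    expanded: bool,
}

// First occurrence of each key wins, matching the
// `!resolved_groups.iter().any(|g| g.key == resolved)` guard.
fn dedup_first_wins(groups: Vec<Group>) -> Vec<Group> {
    let mut out: Vec<Group> = Vec::new();
    for group in groups {
        if !out.iter().any(|g| g.key == group.key) {
            out.push(group);
        }
    }
    out
}

fn main() {
    let groups = vec![
        Group { key: "a".into(), expanded: true },
        Group { key: "b".into(), expanded: false },
        Group { key: "a".into(), expanded: false }, // duplicate, dropped
    ];
    let deduped = dedup_first_wins(groups);
    assert_eq!(deduped.len(), 2);
    assert!(deduped[0].expanded); // the first "a" entry's state survives
}
```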
@@ -9885,7 +9914,7 @@ async fn open_remote_project_inner(
         });
 
         if let Some(project_group_key) = provisional_project_group_key.clone() {
-            multi_workspace.set_workspace_group_key(&new_workspace, project_group_key);
+            multi_workspace.retain_workspace(new_workspace.clone(), project_group_key, cx);
         }
         multi_workspace.activate(new_workspace.clone(), window, cx);
         new_workspace
@@ -9957,14 +9986,13 @@ pub fn join_in_room_project(
                 .and_then(|window_handle| {
                     window_handle
                         .update(cx, |multi_workspace, _window, cx| {
-                            for workspace in multi_workspace.workspaces() {
-                                if workspace.read(cx).project().read(cx).remote_id()
-                                    == Some(project_id)
-                                {
-                                    return Some((window_handle, workspace.clone()));
-                                }
-                            }
-                            None
+                            multi_workspace
+                                .workspaces()
+                                .find(|workspace| {
+                                    workspace.read(cx).project().read(cx).remote_id()
+                                        == Some(project_id)
+                                })
+                                .map(|workspace| (window_handle, workspace.clone()))
                         })
                         .unwrap_or(None)
                 })
@@ -10915,8 +10943,7 @@ mod tests {
         // Activate workspace A
         multi_workspace_handle
             .update(cx, |mw, window, cx| {
-                let workspace = mw.workspaces().next().unwrap().clone();
-                mw.activate(workspace, window, cx);
+                mw.activate(workspace_a.clone(), window, cx);
             })
             .unwrap();
 
@@ -11039,7 +11066,7 @@ mod tests {
         assert!(!removed, "removal should have been cancelled");
 
         multi_workspace_handle
-            .read_with(cx, |mw, _| {
+            .read_with(cx, |mw, _cx| {
                 assert_eq!(
                     mw.workspace(),
                     &workspace_b,
@@ -11067,7 +11094,7 @@ mod tests {
 
         // Should be back on workspace A, and B should be gone.
         multi_workspace_handle
-            .read_with(cx, |mw, _| {
+            .read_with(cx, |mw, _cx| {
                 assert_eq!(
                     mw.workspace(),
                     &workspace_a,

crates/worktree/src/worktree.rs 🔗

@@ -165,6 +165,7 @@ pub struct RemoteWorktree {
     replica_id: ReplicaId,
     visible: bool,
     disconnected: bool,
+    received_initial_update: bool,
 }
 
 #[derive(Clone)]
@@ -370,7 +371,9 @@ struct UpdateObservationState {
 pub enum Event {
     UpdatedEntries(UpdatedEntriesSet),
     UpdatedGitRepositories(UpdatedGitRepositoriesSet),
-    UpdatedRootRepoCommonDir,
+    UpdatedRootRepoCommonDir {
+        old: Option<Arc<SanitizedPath>>,
+    },
     DeletedEntry(ProjectEntryId),
     /// The worktree root itself has been deleted (for single-file worktrees)
     Deleted,
@@ -550,6 +553,7 @@ impl Worktree {
                 snapshot_subscriptions: Default::default(),
                 visible: worktree.visible,
                 disconnected: false,
+                received_initial_update: false,
             };
 
             // Apply updates to a separate snapshot in a background task, then
@@ -574,9 +578,16 @@ impl Worktree {
             cx.spawn(async move |this, cx| {
                 while (snapshot_updated_rx.recv().await).is_some() {
                     this.update(cx, |this, cx| {
-                        let mut entries_changed = false;
                         let this = this.as_remote_mut().unwrap();
+
+                        // The watch channel delivers an initial signal before
+                        // any real updates arrive. Skip these spurious wakeups.
+                        if this.background_snapshot.lock().1.is_empty() {
+                            return;
+                        }
+
                         let old_root_repo_common_dir = this.snapshot.root_repo_common_dir.clone();
+                        let mut entries_changed = false;
                         {
                             let mut lock = this.background_snapshot.lock();
                             this.snapshot = lock.0.clone();
@@ -592,8 +603,14 @@ impl Worktree {
                         if entries_changed {
                             cx.emit(Event::UpdatedEntries(Arc::default()));
                         }
-                        if this.snapshot.root_repo_common_dir != old_root_repo_common_dir {
-                            cx.emit(Event::UpdatedRootRepoCommonDir);
+                        let is_first_update = !this.received_initial_update;
+                        this.received_initial_update = true;
+                        if this.snapshot.root_repo_common_dir != old_root_repo_common_dir
+                            || (is_first_update && this.snapshot.root_repo_common_dir.is_none())
+                        {
+                            cx.emit(Event::UpdatedRootRepoCommonDir {
+                                old: old_root_repo_common_dir,
+                            });
                         }
                         cx.notify();
                         while let Some((scan_id, _)) = this.snapshot_subscriptions.front() {
@@ -1221,8 +1238,9 @@ impl LocalWorktree {
             .local_repo_for_work_directory_path(RelPath::empty())
             .map(|repo| SanitizedPath::from_arc(repo.common_dir_abs_path.clone()));
 
-        let root_repo_common_dir_changed =
-            self.snapshot.root_repo_common_dir != new_snapshot.root_repo_common_dir;
+        let old_root_repo_common_dir = (self.snapshot.root_repo_common_dir
+            != new_snapshot.root_repo_common_dir)
+            .then(|| self.snapshot.root_repo_common_dir.clone());
         self.snapshot = new_snapshot;
 
         if let Some(share) = self.update_observer.as_mut() {
@@ -1238,8 +1256,8 @@ impl LocalWorktree {
         if !repo_changes.is_empty() {
             cx.emit(Event::UpdatedGitRepositories(repo_changes));
         }
-        if root_repo_common_dir_changed {
-            cx.emit(Event::UpdatedRootRepoCommonDir);
+        if let Some(old) = old_root_repo_common_dir {
+            cx.emit(Event::UpdatedRootRepoCommonDir { old });
         }
 
         while let Some((scan_id, _)) = self.snapshot_subscriptions.front() {
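The `received_initial_update` logic can be reduced to a small state machine: emit when the tracked value changes, or on the very first real update even when it settled to `None`, so listeners learn that repo state is now known rather than merely unreported. A stdlib-only sketch with a hypothetical `Tracker` type (plain `Option<String>` standing in for the `Option<Arc<SanitizedPath>>` common dir):

```rust
// Hypothetical reduction of the remote-worktree update logic: track
// root_repo_common_dir and decide whether an
// UpdatedRootRepoCommonDir-style event should fire.
struct Tracker {
    current: Option<String>,
    received_initial_update: bool,
}

impl Tracker {
    fn new() -> Self {
        Self { current: None, received_initial_update: false }
    }

    // Returns Some(old_value) when the event should be emitted.
    fn apply(&mut self, new: Option<String>) -> Option<Option<String>> {
        let old = std::mem::replace(&mut self.current, new);
        let is_first_update = !self.received_initial_update;
        self.received_initial_update = true;
        if self.current != old || (is_first_update && self.current.is_none()) {
            Some(old)
        } else {
            None
        }
    }
}

fn main() {
    let mut tracker = Tracker::new();
    // First update with no repo: still fires, so state is known-settled.
    assert_eq!(tracker.apply(None), Some(None));
    // Same value again: no event.
    assert_eq!(tracker.apply(None), None);
    // Repo info arrives (None -> Some): fires, carrying the old value.
    assert_eq!(tracker.apply(Some(".git".into())), Some(None));
}
```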

crates/worktree/tests/integration/main.rs 🔗

@@ -9,6 +9,7 @@ use parking_lot::Mutex;
 use postage::stream::Stream;
 use pretty_assertions::assert_eq;
 use rand::prelude::*;
+use rpc::{AnyProtoClient, NoopProtoClient, proto};
 use worktree::{Entry, EntryKind, Event, PathChange, Worktree, WorktreeModelHandle};
 
 use serde_json::json;
@@ -2807,7 +2808,7 @@ async fn test_root_repo_common_dir(executor: BackgroundExecutor, cx: &mut TestAp
         let event_count = event_count.clone();
         |_, cx| {
             cx.subscribe(&cx.entity(), move |_, _, event, _| {
-                if matches!(event, Event::UpdatedRootRepoCommonDir) {
+                if matches!(event, Event::UpdatedRootRepoCommonDir { .. }) {
                     event_count.set(event_count.get() + 1);
                 }
             })
@@ -3345,3 +3346,193 @@ async fn test_single_file_worktree_deleted(cx: &mut TestAppContext) {
         "Should receive Deleted event when single-file worktree root is deleted"
     );
 }
+
+#[gpui::test]
+async fn test_remote_worktree_without_git_emits_root_repo_event_after_first_update(
+    cx: &mut TestAppContext,
+) {
+    cx.update(|cx| {
+        let store = SettingsStore::test(cx);
+        cx.set_global(store);
+    });
+
+    let client = AnyProtoClient::new(NoopProtoClient::new());
+
+    let worktree = cx.update(|cx| {
+        Worktree::remote(
+            1,
+            clock::ReplicaId::new(1),
+            proto::WorktreeMetadata {
+                id: 1,
+                root_name: "project".to_string(),
+                visible: true,
+                abs_path: "/home/user/project".to_string(),
+                root_repo_common_dir: None,
+            },
+            client,
+            PathStyle::Posix,
+            cx,
+        )
+    });
+
+    let events: Arc<std::sync::Mutex<Vec<&'static str>>> =
+        Arc::new(std::sync::Mutex::new(Vec::new()));
+    let events_clone = events.clone();
+    cx.update(|cx| {
+        cx.subscribe(&worktree, move |_, event, _cx| {
+            if matches!(event, Event::UpdatedRootRepoCommonDir { .. }) {
+                events_clone
+                    .lock()
+                    .unwrap()
+                    .push("UpdatedRootRepoCommonDir");
+            }
+            if matches!(event, Event::UpdatedEntries(_)) {
+                events_clone.lock().unwrap().push("UpdatedEntries");
+            }
+        })
+        .detach();
+    });
+
+    // Send an update with entries but no repo info (plain directory).
+    worktree.update(cx, |worktree, _cx| {
+        worktree
+            .as_remote()
+            .unwrap()
+            .update_from_remote(proto::UpdateWorktree {
+                project_id: 1,
+                worktree_id: 1,
+                abs_path: "/home/user/project".to_string(),
+                root_name: "project".to_string(),
+                updated_entries: vec![proto::Entry {
+                    id: 1,
+                    is_dir: true,
+                    path: "".to_string(),
+                    inode: 1,
+                    mtime: Some(proto::Timestamp {
+                        seconds: 0,
+                        nanos: 0,
+                    }),
+                    is_ignored: false,
+                    is_hidden: false,
+                    is_external: false,
+                    is_fifo: false,
+                    size: None,
+                    canonical_path: None,
+                }],
+                removed_entries: vec![],
+                scan_id: 1,
+                is_last_update: true,
+                updated_repositories: vec![],
+                removed_repositories: vec![],
+                root_repo_common_dir: None,
+            });
+    });
+
+    cx.run_until_parked();
+
+    let fired = events.lock().unwrap();
+    assert!(
+        fired.contains(&"UpdatedEntries"),
+        "UpdatedEntries should fire after remote update"
+    );
+    assert!(
+        fired.contains(&"UpdatedRootRepoCommonDir"),
+        "UpdatedRootRepoCommonDir should fire after first remote update even when \
+         root_repo_common_dir is None, to signal that repo state is now known"
+    );
+}
+
+#[gpui::test]
+async fn test_remote_worktree_with_git_emits_root_repo_event_when_repo_info_arrives(
+    cx: &mut TestAppContext,
+) {
+    cx.update(|cx| {
+        let store = SettingsStore::test(cx);
+        cx.set_global(store);
+    });
+
+    let client = AnyProtoClient::new(NoopProtoClient::new());
+
+    let worktree = cx.update(|cx| {
+        Worktree::remote(
+            1,
+            clock::ReplicaId::new(1),
+            proto::WorktreeMetadata {
+                id: 1,
+                root_name: "project".to_string(),
+                visible: true,
+                abs_path: "/home/user/project".to_string(),
+                root_repo_common_dir: None,
+            },
+            client,
+            PathStyle::Posix,
+            cx,
+        )
+    });
+
+    let events: Arc<std::sync::Mutex<Vec<&'static str>>> =
+        Arc::new(std::sync::Mutex::new(Vec::new()));
+    let events_clone = events.clone();
+    cx.update(|cx| {
+        cx.subscribe(&worktree, move |_, event, _cx| {
+            if matches!(event, Event::UpdatedRootRepoCommonDir { .. }) {
+                events_clone
+                    .lock()
+                    .unwrap()
+                    .push("UpdatedRootRepoCommonDir");
+            }
+        })
+        .detach();
+    });
+
+    // Send an update where repo info arrives (None -> Some).
+    worktree.update(cx, |worktree, _cx| {
+        worktree
+            .as_remote()
+            .unwrap()
+            .update_from_remote(proto::UpdateWorktree {
+                project_id: 1,
+                worktree_id: 1,
+                abs_path: "/home/user/project".to_string(),
+                root_name: "project".to_string(),
+                updated_entries: vec![proto::Entry {
+                    id: 1,
+                    is_dir: true,
+                    path: "".to_string(),
+                    inode: 1,
+                    mtime: Some(proto::Timestamp {
+                        seconds: 0,
+                        nanos: 0,
+                    }),
+                    is_ignored: false,
+                    is_hidden: false,
+                    is_external: false,
+                    is_fifo: false,
+                    size: None,
+                    canonical_path: None,
+                }],
+                removed_entries: vec![],
+                scan_id: 1,
+                is_last_update: true,
+                updated_repositories: vec![],
+                removed_repositories: vec![],
+                root_repo_common_dir: Some("/home/user/project/.git".to_string()),
+            });
+    });
+
+    cx.run_until_parked();
+
+    let fired = events.lock().unwrap();
+    assert!(
+        fired.contains(&"UpdatedRootRepoCommonDir"),
+        "UpdatedRootRepoCommonDir should fire when repo info arrives (None -> Some)"
+    );
+    assert_eq!(
+        fired
+            .iter()
+            .filter(|e| **e == "UpdatedRootRepoCommonDir")
+            .count(),
+        1,
+        "should fire exactly once, not duplicate"
+    );
+}

crates/zed/src/zed.rs 🔗

@@ -1508,7 +1508,7 @@ fn quit(_: &Quit, cx: &mut App) {
         for window in &workspace_windows {
             let window = *window;
             let workspaces = window
-                .update(cx, |multi_workspace, _, _| {
+                .update(cx, |multi_workspace, _, _cx| {
                     multi_workspace.workspaces().cloned().collect::<Vec<_>>()
                 })
                 .log_err();
@@ -6130,10 +6130,9 @@ mod tests {
     #[gpui::test]
     async fn test_multi_workspace_session_restore(cx: &mut TestAppContext) {
         use collections::HashMap;
-        use project::ProjectGroupKey;
         use session::Session;
         use util::path_list::PathList;
-        use workspace::{OpenMode, Workspace, WorkspaceId};
+        use workspace::{OpenMode, ProjectGroupKey, Workspace, WorkspaceId};
 
         let app_state = init_test(cx);
 
@@ -6324,7 +6323,7 @@ mod tests {
         restored_a
             .read_with(cx, |mw, _| {
                 assert_eq!(
-                    mw.project_group_keys().cloned().collect::<Vec<_>>(),
+                    mw.project_group_keys(),
                     vec![
                         ProjectGroupKey::new(None, PathList::new(&[dir2])),
                         ProjectGroupKey::new(None, PathList::new(&[dir1])),
@@ -6338,11 +6337,168 @@ mod tests {
         restored_b
             .read_with(cx, |mw, _| {
                 assert_eq!(
-                    mw.project_group_keys().cloned().collect::<Vec<_>>(),
+                    mw.project_group_keys(),
                     vec![ProjectGroupKey::new(None, PathList::new(&[dir3]))]
                 );
                 assert_eq!(mw.workspaces().count(), 1);
             })
             .unwrap();
     }
+
+    #[gpui::test]
+    async fn test_restored_project_groups_survive_workspace_key_change(cx: &mut TestAppContext) {
+        use session::Session;
+        use util::path_list::PathList;
+        use workspace::{OpenMode, ProjectGroupKey};
+
+        let app_state = init_test(cx);
+
+        let fs = app_state.fs.clone();
+        let fake_fs = fs.as_fake();
+        fake_fs
+            .insert_tree(path!("/root_a"), json!({ "file.txt": "" }))
+            .await;
+        fake_fs
+            .insert_tree(path!("/root_b"), json!({ "file.txt": "" }))
+            .await;
+        fake_fs
+            .insert_tree(path!("/root_c"), json!({ "file.txt": "" }))
+            .await;
+        fake_fs
+            .insert_tree(path!("/root_d"), json!({ "other.txt": "" }))
+            .await;
+
+        let session_id = cx.read(|cx| app_state.session.read(cx).id().to_owned());
+
+        // --- Phase 1: Build a multi-workspace with 3 project groups ---
+
+        let workspace::OpenResult { window, .. } = cx
+            .update(|cx| {
+                workspace::Workspace::new_local(
+                    vec![path!("/root_a").into()],
+                    app_state.clone(),
+                    None,
+                    None,
+                    None,
+                    OpenMode::Activate,
+                    cx,
+                )
+            })
+            .await
+            .expect("failed to open workspace");
+
+        window.update(cx, |mw, _, cx| mw.open_sidebar(cx)).unwrap();
+
+        window
+            .update(cx, |mw, window, cx| {
+                mw.open_project(vec![path!("/root_b").into()], OpenMode::Add, window, cx)
+            })
+            .unwrap()
+            .await
+            .expect("failed to add root_b");
+
+        window
+            .update(cx, |mw, window, cx| {
+                mw.open_project(vec![path!("/root_c").into()], OpenMode::Add, window, cx)
+            })
+            .unwrap()
+            .await
+            .expect("failed to add root_c");
+        cx.run_until_parked();
+
+        let key_b = ProjectGroupKey::new(None, PathList::new(&[path!("/root_b")]));
+        let key_c = ProjectGroupKey::new(None, PathList::new(&[path!("/root_c")]));
+
+        // Make root_a the active workspace so it's the one eagerly restored.
+        window
+            .update(cx, |mw, window, cx| {
+                let workspace_a = mw
+                    .workspaces()
+                    .find(|ws| {
+                        ws.read(cx)
+                            .root_paths(cx)
+                            .iter()
+                            .any(|p| p.as_ref() == Path::new(path!("/root_a")))
+                    })
+                    .expect("workspace_a should exist")
+                    .clone();
+                mw.activate(workspace_a, window, cx);
+            })
+            .unwrap();
+        cx.run_until_parked();
+
+        // --- Phase 2: Serialize, close, and restore ---
+
+        flush_workspace_serialization(&window, cx).await;
+        cx.run_until_parked();
+
+        window
+            .update(cx, |_, window, _| window.remove_window())
+            .unwrap();
+        cx.run_until_parked();
+
+        cx.update(|cx| {
+            app_state.session.update(cx, |app_session, _cx| {
+                app_session
+                    .replace_session_for_test(Session::test_with_old_session(session_id.clone()));
+            });
+        });
+
+        let mut async_cx = cx.to_async();
+        crate::restore_or_create_workspace(app_state.clone(), &mut async_cx)
+            .await
+            .expect("failed to restore workspace");
+        cx.run_until_parked();
+
+        let restored_windows: Vec<WindowHandle<MultiWorkspace>> = cx.read(|cx| {
+            cx.windows()
+                .into_iter()
+                .filter_map(|w| w.downcast::<MultiWorkspace>())
+                .collect()
+        });
+        assert_eq!(restored_windows.len(), 1);
+        let restored = &restored_windows[0];
+
+        // Verify the restored window has all 3 project groups.
+        restored
+            .read_with(cx, |mw, _cx| {
+                let keys = mw.project_group_keys();
+                assert_eq!(
+                    keys.len(),
+                    3,
+                    "restored window should have 3 groups; got {keys:?}"
+                );
+                assert!(keys.contains(&key_b), "should contain key_b");
+                assert!(keys.contains(&key_c), "should contain key_c");
+            })
+            .unwrap();
+
+        // --- Phase 3: Trigger a workspace key change and verify survival ---
+
+        let active_project = restored
+            .read_with(cx, |mw, cx| mw.workspace().read(cx).project().clone())
+            .unwrap();
+
+        active_project
+            .update(cx, |project, cx| {
+                project.find_or_create_worktree(path!("/root_d"), true, cx)
+            })
+            .await
+            .expect("adding worktree should succeed");
+        cx.run_until_parked();
+
+        restored
+            .read_with(cx, |mw, _cx| {
+                let keys = mw.project_group_keys();
+                assert!(
+                    keys.contains(&key_b),
+                    "restored group key_b should survive a workspace key change; got {keys:?}"
+                );
+                assert!(
+                    keys.contains(&key_c),
+                    "restored group key_c should survive a workspace key change; got {keys:?}"
+                );
+            })
+            .unwrap();
+    }
 }