Add the ability to edit remote directories over SSH (#14530)

Created by Max Brunsfeld and Piotr Osiewicz

This is a first step towards allowing you to edit remote projects
directly over SSH. We'll start with a fairly bare-bones feature set and
add further capabilities incrementally.
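
At a high level, the client opens an `SshSession`, which uploads and launches the `zed-remote-server` binary on the host and talks to it over stdio, and then builds a `Project` backed by that session; on the host, a `HeadlessProject` serves worktree and buffer state back over the same connection. The sketch below shows roughly how the client-side pieces fit together. It is an illustrative helper built on the new `SshSession::client` and `Project::ssh` APIs, not code from this PR, and the dependency parameters are stand-ins for however the caller obtains them:

```rust
use std::sync::Arc;

use anyhow::Result;
use gpui::{AsyncAppContext, Model};
use remote::{SshClientDelegate, SshSession};

// Hypothetical helper (not part of this PR): connect to a host over SSH and
// open a remote directory as a worktree.
async fn open_remote_project(
    user: String,
    host: String,
    port: u16,
    delegate: Arc<dyn SshClientDelegate>,
    client: Arc<client::Client>,
    node_runtime: Arc<dyn node_runtime::NodeRuntime>,
    user_store: Model<client::UserStore>,
    languages: Arc<language::LanguageRegistry>,
    fs: Arc<dyn fs::Fs>,
    cx: &mut AsyncAppContext,
) -> Result<Model<project::Project>> {
    // Uploads/starts zed-remote-server on the host (if needed) and speaks the
    // length-prefixed protobuf protocol over the child process's stdio.
    let session = SshSession::client(user, host, port, delegate, cx).await?;

    // A local Project backed by the SSH session; worktree and buffer traffic
    // is forwarded to the HeadlessProject running on the host.
    let project = cx.update(|cx| {
        project::Project::ssh(session, client, node_runtime, user_store, languages, fs, cx)
    })?;

    // Ask the server to add a directory as a worktree (illustrative path).
    let (_worktree, _) = project
        .update(cx, |project, cx| {
            project.find_or_create_worktree("/path/on/the/remote/host", true, cx)
        })?
        .await?;

    Ok(project)
}
```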

### Todo

Distribution
* [x] Build nightly releases of `zed-remote-server` binaries
    * [x] Linux (ARM + x86)
    * [x] macOS (ARM + x86)
* [x] Build stable + preview releases of `zed-remote-server`
* [x] Download and cache remote server binaries as needed when opening an SSH project
* [x] Ensure the server has the latest version of the binary


Auth
* [x] Allow specifying a password at the command line
* [x] Auth via SSH keys
* [x] UI password prompt

Features
* [x] Upload the remote server binary to the server automatically
* [x] Opening directories
* [x] Tracking file system updates
* [x] Opening, editing, and saving buffers
* [ ] File operations (rename, delete, create)
* [ ] Git diffs
* [ ] Project search

Release Notes:

- N/A

---------

Co-authored-by: Piotr Osiewicz <24362066+osiewicz@users.noreply.github.com>

Change summary

.github/workflows/ci.yml                                      |  10 
.github/workflows/release_nightly.yml                         |   4 
Cargo.lock                                                    |  50 
Cargo.toml                                                    |   6 
crates/auto_update/Cargo.toml                                 |   1 
crates/auto_update/src/auto_update.rs                         |  93 
crates/cli/src/main.rs                                        |   1 
crates/collab/Cargo.toml                                      |   2 
crates/collab/src/tests.rs                                    |   1 
crates/collab/src/tests/remote_editing_collaboration_tests.rs | 102 
crates/collab/src/tests/test_server.rs                        |  25 
crates/copilot/src/copilot.rs                                 |   2 
crates/editor/src/editor.rs                                   |  11 
crates/fs/src/fs.rs                                           |  10 
crates/gpui/src/app.rs                                        |  13 
crates/gpui/src/platform.rs                                   |  13 
crates/gpui/src/platform/mac/events.rs                        |  26 
crates/gpui/src/platform/mac/platform.rs                      |  19 
crates/language/src/buffer.rs                                 |   8 
crates/language/src/buffer_tests.rs                           |   6 
crates/lsp/Cargo.toml                                         |   4 
crates/multi_buffer/src/multi_buffer.rs                       |   2 
crates/paths/src/paths.rs                                     |  18 
crates/project/Cargo.toml                                     |   1 
crates/project/src/buffer_store.rs                            |   2 
crates/project/src/project.rs                                 | 103 
crates/proto/proto/zed.proto                                  |  14 
crates/proto/src/macros.rs                                    |   2 
crates/proto/src/proto.rs                                     |   7 
crates/remote/Cargo.toml                                      |  36 
crates/remote/LICENSE-GPL                                     |   1 
crates/remote/src/protocol.rs                                 |  51 
crates/remote/src/remote.rs                                   |   4 
crates/remote/src/ssh_session.rs                              | 643 +++++
crates/remote_server/Cargo.toml                               |  51 
crates/remote_server/LICENSE-GPL                              |   1 
crates/remote_server/build.rs                                 |  10 
crates/remote_server/src/headless_project.rs                  | 166 +
crates/remote_server/src/main.rs                              |  78 
crates/remote_server/src/remote_editing_tests.rs              | 134 +
crates/remote_server/src/remote_server.rs                     |   6 
crates/worktree/src/worktree.rs                               | 282 +
crates/zed/Cargo.toml                                         |   2 
crates/zed/src/main.rs                                        |  22 
crates/zed/src/zed.rs                                         |   1 
crates/zed/src/zed/open_listener.rs                           | 207 +
crates/zed/src/zed/password_prompt.rs                         |  69 
script/bundle-linux                                           |   8 
script/bundle-mac                                             | 104 
script/upload-nightly                                         |  12 
50 files changed, 2,194 insertions(+), 250 deletions(-)

Detailed changes

.github/workflows/ci.yml 🔗

@@ -251,6 +251,8 @@ jobs:
           draft: true
           prerelease: ${{ env.RELEASE_CHANNEL == 'preview' }}
           files: |
+            target/zed-remote-server-mac-x86_64.gz
+            target/zed-remote-server-mac-aarch64.gz
             target/aarch64-apple-darwin/release/Zed-aarch64.dmg
             target/x86_64-apple-darwin/release/Zed-x86_64.dmg
             target/release/Zed.dmg
@@ -322,7 +324,9 @@ jobs:
         with:
           draft: true
           prerelease: ${{ env.RELEASE_CHANNEL == 'preview' }}
-          files: target/release/zed-linux-x86_64.tar.gz
+          files: |
+            target/zed-remote-server-linux-x86_64.gz
+            target/release/zed-linux-x86_64.tar.gz
           body: ""
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -405,7 +409,9 @@ jobs:
         with:
           draft: true
           prerelease: ${{ env.RELEASE_CHANNEL == 'preview' }}
-          files: target/release/zed-linux-aarch64.tar.gz
+          files: |
+            target/zed-remote-server-linux-aarch64.gz
+            target/release/zed-linux-aarch64.tar.gz
           body: ""
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

.github/workflows/release_nightly.yml 🔗

@@ -33,6 +33,7 @@ jobs:
 
       - name: Run clippy
         run: ./script/clippy
+
   tests:
     timeout-minutes: 60
     name: Run tests
@@ -90,6 +91,9 @@ jobs:
       - name: Create macOS app bundle
         run: script/bundle-mac
 
+      - name: Build macOS remote server binaries
+        run: script/build-remote-server x86_64 aarch64
+
       - name: Upload Zed Nightly
         run: script/upload-nightly macos
 

Cargo.lock 🔗

@@ -984,6 +984,7 @@ dependencies = [
  "log",
  "markdown_preview",
  "menu",
+ "paths",
  "release_channel",
  "schemars",
  "serde",
@@ -2526,6 +2527,8 @@ dependencies = [
  "rand 0.8.5",
  "recent_projects",
  "release_channel",
+ "remote",
+ "remote_server",
  "reqwest",
  "rpc",
  "rustc-demangle",
@@ -8076,6 +8079,7 @@ dependencies = [
  "rand 0.8.5",
  "regex",
  "release_channel",
+ "remote",
  "rpc",
  "schemars",
  "serde",
@@ -8673,6 +8677,50 @@ dependencies = [
  "once_cell",
 ]
 
+[[package]]
+name = "remote"
+version = "0.1.0"
+dependencies = [
+ "anyhow",
+ "collections",
+ "fs",
+ "futures 0.3.28",
+ "gpui",
+ "log",
+ "parking_lot",
+ "prost",
+ "rpc",
+ "smol",
+ "tempfile",
+ "util",
+]
+
+[[package]]
+name = "remote_server"
+version = "0.1.0"
+dependencies = [
+ "anyhow",
+ "cargo_toml",
+ "client",
+ "clock",
+ "env_logger",
+ "fs",
+ "futures 0.3.28",
+ "gpui",
+ "http 0.1.0",
+ "language",
+ "log",
+ "node_runtime",
+ "project",
+ "remote",
+ "rpc",
+ "serde_json",
+ "settings",
+ "smol",
+ "toml 0.8.10",
+ "worktree",
+]
+
 [[package]]
 name = "rend"
 version = "0.4.0"
@@ -13700,6 +13748,7 @@ dependencies = [
  "quick_action_bar",
  "recent_projects",
  "release_channel",
+ "remote",
  "repl",
  "rope",
  "search",
@@ -13720,6 +13769,7 @@ dependencies = [
  "tree-sitter-markdown",
  "tree-sitter-rust",
  "ui",
+ "url",
  "urlencoding",
  "util",
  "uuid",

Cargo.toml 🔗

@@ -79,6 +79,8 @@ members = [
     "crates/refineable",
     "crates/refineable/derive_refineable",
     "crates/release_channel",
+    "crates/remote",
+    "crates/remote_server",
     "crates/repl",
     "crates/rich_text",
     "crates/rope",
@@ -160,6 +162,7 @@ assets = { path = "crates/assets" }
 assistant = { path = "crates/assistant" }
 assistant_slash_command = { path = "crates/assistant_slash_command" }
 assistant_tooling = { path = "crates/assistant_tooling" }
+async-pipe = { git = "https://github.com/zed-industries/async-pipe-rs", rev = "82d00a04211cf4e1236029aa03e6b6ce2a74c553" }
 audio = { path = "crates/audio" }
 auto_update = { path = "crates/auto_update" }
 breadcrumbs = { path = "crates/breadcrumbs" }
@@ -231,6 +234,8 @@ proto = { path = "crates/proto" }
 quick_action_bar = { path = "crates/quick_action_bar" }
 recent_projects = { path = "crates/recent_projects" }
 release_channel = { path = "crates/release_channel" }
+remote = { path = "crates/remote" }
+remote_server = { path = "crates/remote_server" }
 repl = { path = "crates/repl" }
 rich_text = { path = "crates/rich_text" }
 rope = { path = "crates/rope" }
@@ -381,6 +386,7 @@ time = { version = "0.3", features = [
     "serde-well-known",
     "formatting",
 ] }
+tiny_http = "0.8"
 toml = "0.8"
 tokio = { version = "1", features = ["full"] }
 tower-http = "0.4.4"

crates/auto_update/Cargo.toml 🔗

@@ -23,6 +23,7 @@ isahc.workspace = true
 log.workspace = true
 markdown_preview.workspace = true
 menu.workspace = true
+paths.workspace = true
 release_channel.workspace = true
 schemars.workspace = true
 serde.workspace = true

crates/auto_update/src/auto_update.rs 🔗

@@ -359,7 +359,6 @@ impl AutoUpdater {
             return;
         }
 
-        self.status = AutoUpdateStatus::Checking;
         cx.notify();
 
         self.pending_poll = Some(cx.spawn(|this, mut cx| async move {
@@ -385,20 +384,51 @@ impl AutoUpdater {
         cx.notify();
     }
 
-    async fn update(this: Model<Self>, mut cx: AsyncAppContext) -> Result<()> {
-        let (client, current_version) = this.read_with(&cx, |this, _| {
-            (this.http_client.clone(), this.current_version)
-        })?;
+    pub async fn get_latest_remote_server_release(
+        os: &str,
+        arch: &str,
+        mut release_channel: ReleaseChannel,
+        cx: &mut AsyncAppContext,
+    ) -> Result<PathBuf> {
+        let this = cx.update(|cx| {
+            cx.default_global::<GlobalAutoUpdate>()
+                .0
+                .clone()
+                .ok_or_else(|| anyhow!("auto-update not initialized"))
+        })??;
 
-        let asset = match OS {
-            "linux" => format!("zed-linux-{}.tar.gz", ARCH),
-            "macos" => "Zed.dmg".into(),
-            _ => return Err(anyhow!("auto-update not supported for OS {:?}", OS)),
-        };
+        if release_channel == ReleaseChannel::Dev {
+            release_channel = ReleaseChannel::Nightly;
+        }
 
+        let release = Self::get_latest_release(&this, "zed-remote-server", os, arch, cx).await?;
+
+        let servers_dir = paths::remote_servers_dir();
+        let channel_dir = servers_dir.join(release_channel.dev_name());
+        let platform_dir = channel_dir.join(format!("{}-{}", os, arch));
+        let version_path = platform_dir.join(format!("{}.gz", release.version));
+        smol::fs::create_dir_all(&platform_dir).await.ok();
+
+        let client = this.read_with(cx, |this, _| this.http_client.clone())?;
+        if smol::fs::metadata(&version_path).await.is_err() {
+            log::info!("downloading zed-remote-server {os} {arch}");
+            download_remote_server_binary(&version_path, release, client, cx).await?;
+        }
+
+        Ok(version_path)
+    }
+
+    async fn get_latest_release(
+        this: &Model<Self>,
+        asset: &str,
+        os: &str,
+        arch: &str,
+        cx: &mut AsyncAppContext,
+    ) -> Result<JsonRelease> {
+        let client = this.read_with(cx, |this, _| this.http_client.clone())?;
         let mut url_string = client.build_url(&format!(
             "/api/releases/latest?asset={}&os={}&arch={}",
-            asset, OS, ARCH
+            asset, os, arch
         ));
         cx.update(|cx| {
             if let Some(param) = ReleaseChannel::try_global(cx)
@@ -418,8 +448,17 @@ impl AutoUpdater {
             .await
             .context("error reading release")?;
 
-        let release: JsonRelease =
-            serde_json::from_slice(body.as_slice()).context("error deserializing release")?;
+        serde_json::from_slice(body.as_slice()).context("error deserializing release")
+    }
+
+    async fn update(this: Model<Self>, mut cx: AsyncAppContext) -> Result<()> {
+        let (client, current_version) = this.update(&mut cx, |this, cx| {
+            this.status = AutoUpdateStatus::Checking;
+            cx.notify();
+            (this.http_client.clone(), this.current_version)
+        })?;
+
+        let release = Self::get_latest_release(&this, "zed", OS, ARCH, &mut cx).await?;
 
         let should_download = match *RELEASE_CHANNEL {
             ReleaseChannel::Nightly => cx
@@ -446,7 +485,7 @@ impl AutoUpdater {
         let temp_dir = tempfile::Builder::new()
             .prefix("zed-auto-update")
             .tempdir()?;
-        let downloaded_asset = download_release(&temp_dir, release, &asset, client, &cx).await?;
+        let downloaded_asset = download_release(&temp_dir, release, "zed", client, &cx).await?;
 
         this.update(&mut cx, |this, cx| {
             this.status = AutoUpdateStatus::Installing;
@@ -500,6 +539,32 @@ impl AutoUpdater {
     }
 }
 
+async fn download_remote_server_binary(
+    target_path: &PathBuf,
+    release: JsonRelease,
+    client: Arc<HttpClientWithUrl>,
+    cx: &AsyncAppContext,
+) -> Result<()> {
+    let mut target_file = File::create(&target_path).await?;
+    let (installation_id, release_channel, telemetry) = cx.update(|cx| {
+        let installation_id = Client::global(cx).telemetry().installation_id();
+        let release_channel =
+            ReleaseChannel::try_global(cx).map(|release_channel| release_channel.display_name());
+        let telemetry = TelemetrySettings::get_global(cx).metrics;
+
+        (installation_id, release_channel, telemetry)
+    })?;
+    let request_body = AsyncBody::from(serde_json::to_string(&UpdateRequestBody {
+        installation_id,
+        release_channel,
+        telemetry,
+    })?);
+
+    let mut response = client.get(&release.url, request_body, true).await?;
+    smol::io::copy(response.body_mut(), &mut target_file).await?;
+    Ok(())
+}
+
 async fn download_release(
     temp_dir: &tempfile::TempDir,
     release: JsonRelease,
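
For reference, downloaded server binaries are cached under `paths::remote_servers_dir()`, one gzipped archive per release channel, platform, and version. A small sketch of that layout (the example values in the comments are hypothetical):

```rust
use std::path::PathBuf;

// Illustrative only: the cache layout written by
// get_latest_remote_server_release, one gzipped archive per
// channel / platform / version under paths::remote_servers_dir().
fn cached_remote_server_path(channel: &str, os: &str, arch: &str, version: &str) -> PathBuf {
    paths::remote_servers_dir()
        .join(channel)                    // e.g. "nightly"
        .join(format!("{os}-{arch}"))     // e.g. "macos-aarch64"
        .join(format!("{version}.gz"))    // e.g. "0.147.0.gz"
}
```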

crates/cli/src/main.rs 🔗

@@ -129,6 +129,7 @@ fn main() -> Result<()> {
             || path.starts_with("http://")
             || path.starts_with("https://")
             || path.starts_with("file://")
+            || path.starts_with("ssh://")
         {
             urls.push(path.to_string());
         } else {

crates/collab/Cargo.toml 🔗

@@ -99,6 +99,8 @@ pretty_assertions.workspace = true
 project = { workspace = true, features = ["test-support"] }
 recent_projects = { workspace = true }
 release_channel.workspace = true
+remote = { workspace = true, features = ["test-support"] }
+remote_server.workspace = true
 dev_server_projects.workspace = true
 rpc = { workspace = true, features = ["test-support"] }
 sea-orm = { version = "0.12.x", features = ["sqlx-sqlite"] }

crates/collab/src/tests.rs 🔗

@@ -16,6 +16,7 @@ mod notification_tests;
 mod random_channel_buffer_tests;
 mod random_project_collaboration_tests;
 mod randomized_test_helpers;
+mod remote_editing_collaboration_tests;
 mod test_server;
 
 use language::{tree_sitter_rust, Language, LanguageConfig, LanguageMatcher};

crates/collab/src/tests/remote_editing_collaboration_tests.rs 🔗

@@ -0,0 +1,102 @@
+use crate::tests::TestServer;
+use call::ActiveCall;
+use fs::{FakeFs, Fs as _};
+use gpui::{Context as _, TestAppContext};
+use remote::SshSession;
+use remote_server::HeadlessProject;
+use serde_json::json;
+use std::{path::Path, sync::Arc};
+
+#[gpui::test]
+async fn test_sharing_an_ssh_remote_project(
+    cx_a: &mut TestAppContext,
+    cx_b: &mut TestAppContext,
+    server_cx: &mut TestAppContext,
+) {
+    let executor = cx_a.executor();
+    let mut server = TestServer::start(executor.clone()).await;
+    let client_a = server.create_client(cx_a, "user_a").await;
+    let client_b = server.create_client(cx_b, "user_b").await;
+    server
+        .create_room(&mut [(&client_a, cx_a), (&client_b, cx_b)])
+        .await;
+
+    // Set up project on remote FS
+    let (client_ssh, server_ssh) = SshSession::fake(cx_a, server_cx);
+    let remote_fs = FakeFs::new(server_cx.executor());
+    remote_fs
+        .insert_tree(
+            "/code",
+            json!({
+                "project1": {
+                    "README.md": "# project 1",
+                    "src": {
+                        "lib.rs": "fn one() -> usize { 1 }"
+                    }
+                },
+                "project2": {
+                    "README.md": "# project 2",
+                },
+            }),
+        )
+        .await;
+
+    // User A connects to the remote project via SSH.
+    server_cx.update(HeadlessProject::init);
+    let _headless_project =
+        server_cx.new_model(|cx| HeadlessProject::new(server_ssh, remote_fs.clone(), cx));
+
+    let (project_a, worktree_id) = client_a
+        .build_ssh_project("/code/project1", client_ssh, cx_a)
+        .await;
+
+    // User A shares the remote project.
+    let active_call_a = cx_a.read(ActiveCall::global);
+    let project_id = active_call_a
+        .update(cx_a, |call, cx| call.share_project(project_a.clone(), cx))
+        .await
+        .unwrap();
+
+    // User B joins the project.
+    let project_b = client_b.build_dev_server_project(project_id, cx_b).await;
+    let worktree_b = project_b
+        .update(cx_b, |project, cx| project.worktree_for_id(worktree_id, cx))
+        .unwrap();
+
+    executor.run_until_parked();
+    worktree_b.update(cx_b, |worktree, _cx| {
+        assert_eq!(
+            worktree.paths().map(Arc::as_ref).collect::<Vec<_>>(),
+            vec![
+                Path::new("README.md"),
+                Path::new("src"),
+                Path::new("src/lib.rs"),
+            ]
+        );
+    });
+
+    // User B can open buffers in the remote project.
+    let buffer_b = project_b
+        .update(cx_b, |project, cx| {
+            project.open_buffer((worktree_id, "src/lib.rs"), cx)
+        })
+        .await
+        .unwrap();
+    buffer_b.update(cx_b, |buffer, cx| {
+        assert_eq!(buffer.text(), "fn one() -> usize { 1 }");
+        let ix = buffer.text().find('1').unwrap();
+        buffer.edit([(ix..ix + 1, "100")], None, cx);
+    });
+
+    project_b
+        .update(cx_b, |project, cx| project.save_buffer(buffer_b, cx))
+        .await
+        .unwrap();
+    assert_eq!(
+        remote_fs
+            .load("/code/project1/src/lib.rs".as_ref())
+            .await
+            .unwrap(),
+        "fn one() -> usize { 100 }"
+    );
+}

crates/collab/src/tests/test_server.rs 🔗

@@ -25,6 +25,7 @@ use node_runtime::FakeNodeRuntime;
 use notifications::NotificationStore;
 use parking_lot::Mutex;
 use project::{Project, WorktreeId};
+use remote::SshSession;
 use rpc::{
     proto::{self, ChannelRole},
     RECEIVE_TIMEOUT,
@@ -814,6 +815,30 @@ impl TestClient {
         (project, worktree.read_with(cx, |tree, _| tree.id()))
     }
 
+    pub async fn build_ssh_project(
+        &self,
+        root_path: impl AsRef<Path>,
+        ssh: Arc<SshSession>,
+        cx: &mut TestAppContext,
+    ) -> (Model<Project>, WorktreeId) {
+        let project = cx.update(|cx| {
+            Project::ssh(
+                ssh,
+                self.client().clone(),
+                self.app_state.node_runtime.clone(),
+                self.app_state.user_store.clone(),
+                self.app_state.languages.clone(),
+                self.app_state.fs.clone(),
+                cx,
+            )
+        });
+        let (worktree, _) = project
+            .update(cx, |p, cx| p.find_or_create_worktree(root_path, true, cx))
+            .await
+            .unwrap();
+        (project, worktree.read_with(cx, |tree, _| tree.id()))
+    }
+
     pub async fn build_test_project(&self, cx: &mut TestAppContext) -> Model<Project> {
         self.fs()
             .insert_tree(

crates/copilot/src/copilot.rs 🔗

@@ -1236,7 +1236,7 @@ mod tests {
             unimplemented!()
         }
 
-        fn to_proto(&self) -> rpc::proto::File {
+        fn to_proto(&self, _: &AppContext) -> rpc::proto::File {
             unimplemented!()
         }
 

crates/editor/src/editor.rs 🔗

@@ -488,6 +488,7 @@ pub struct Editor {
     mode: EditorMode,
     show_breadcrumbs: bool,
     show_gutter: bool,
+    redact_all: bool,
     show_line_numbers: Option<bool>,
     show_git_diff_gutter: Option<bool>,
     show_code_actions: Option<bool>,
@@ -1822,6 +1823,7 @@ impl Editor {
             show_code_actions: None,
             show_runnables: None,
             show_wrap_guides: None,
+            redact_all: false,
             show_indent_guides,
             placeholder_text: None,
             highlight_order: 0,
@@ -10414,6 +10416,11 @@ impl Editor {
         cx.notify();
     }
 
+    pub fn set_redact_all(&mut self, redact_all: bool, cx: &mut ViewContext<Self>) {
+        self.redact_all = redact_all;
+        cx.notify();
+    }
+
     pub fn set_show_wrap_guides(&mut self, show_wrap_guides: bool, cx: &mut ViewContext<Self>) {
         self.show_wrap_guides = Some(show_wrap_guides);
         cx.notify();
@@ -11108,6 +11115,10 @@ impl Editor {
         display_snapshot: &DisplaySnapshot,
         cx: &WindowContext,
     ) -> Vec<Range<DisplayPoint>> {
+        if self.redact_all {
+            return vec![DisplayPoint::zero()..display_snapshot.max_point()];
+        }
+
         display_snapshot
             .buffer_snapshot
             .redacted_ranges(search_range, |file| {
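
The new `redact_all` flag lets an entire editor be masked, which is what a password prompt wants. A hypothetical sketch of such usage (the helper name and view setup are illustrative; the real implementation lives in `crates/zed/src/zed/password_prompt.rs`):

```rust
use editor::Editor;
use gpui::{View, WindowContext};

// Hypothetical: build a single-line editor whose entire contents are
// redacted, as a password prompt would want.
fn build_password_editor(cx: &mut WindowContext) -> View<Editor> {
    cx.new_view(|cx| {
        let mut editor = Editor::single_line(cx);
        editor.set_redact_all(true, cx);
        editor
    })
}
```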

crates/fs/src/fs.rs 🔗

@@ -95,8 +95,11 @@ pub trait Fs: Send + Sync {
     fn open_repo(&self, abs_dot_git: &Path) -> Option<Arc<dyn GitRepository>>;
     fn is_fake(&self) -> bool;
     async fn is_case_sensitive(&self) -> Result<bool>;
+
     #[cfg(any(test, feature = "test-support"))]
-    fn as_fake(&self) -> &FakeFs;
+    fn as_fake(&self) -> &FakeFs {
+        panic!("called as_fake on a real fs");
+    }
 }
 
 #[derive(Copy, Clone, Default)]
@@ -602,11 +605,6 @@ impl Fs for RealFs {
         temp_dir.close()?;
         case_sensitive
     }
-
-    #[cfg(any(test, feature = "test-support"))]
-    fn as_fake(&self) -> &FakeFs {
-        panic!("called `RealFs::as_fake`")
-    }
 }
 
 #[cfg(not(target_os = "linux"))]

crates/gpui/src/app.rs 🔗

@@ -112,7 +112,18 @@ impl App {
         log::info!("GPUI was compiled in test mode");
 
         Self(AppContext::new(
-            current_platform(),
+            current_platform(false),
+            Arc::new(()),
+            http::client(None),
+        ))
+    }
+
+    /// Build an app in headless mode. This prevents opening windows,
+    /// but makes it possible to run an application in an context like
+    /// SSH, where GUI applications are not allowed.
+    pub fn headless() -> Self {
+        Self(AppContext::new(
+            current_platform(true),
             Arc::new(()),
             http::client(None),
         ))
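
`App::headless()` lets the remote server drive GPUI's executors on a host with no display. A minimal hypothetical sketch of a headless binary (the closure body is a placeholder, not the actual `remote_server` main):

```rust
// Minimal hypothetical headless entry point: no windows are opened, so this
// can run on a remote host over SSH where no display server is available.
fn main() {
    gpui::App::headless().run(|cx| {
        // Construct the headless project and message loop here
        // (see crates/remote_server/src/main.rs for the real version).
        let _ = cx;
    });
}
```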

crates/gpui/src/platform.rs 🔗

@@ -64,11 +64,16 @@ pub(crate) use test::*;
 pub(crate) use windows::*;
 
 #[cfg(target_os = "macos")]
-pub(crate) fn current_platform() -> Rc<dyn Platform> {
-    Rc::new(MacPlatform::new())
+pub(crate) fn current_platform(headless: bool) -> Rc<dyn Platform> {
+    Rc::new(MacPlatform::new(headless))
 }
+
 #[cfg(target_os = "linux")]
-pub(crate) fn current_platform() -> Rc<dyn Platform> {
+pub(crate) fn current_platform(headless: bool) -> Rc<dyn Platform> {
+    if headless {
+        return Rc::new(HeadlessClient::new());
+    }
+
     match guess_compositor() {
         "Wayland" => Rc::new(WaylandClient::new()),
         "X11" => Rc::new(X11Client::new()),
@@ -99,7 +104,7 @@ pub fn guess_compositor() -> &'static str {
 
 // todo("windows")
 #[cfg(target_os = "windows")]
-pub(crate) fn current_platform() -> Rc<dyn Platform> {
+pub(crate) fn current_platform(_headless: bool) -> Rc<dyn Platform> {
     Rc::new(WindowsPlatform::new())
 }
 

crates/gpui/src/platform/mac/events.rs 🔗

@@ -12,10 +12,9 @@ use core_graphics::{
     event::{CGEvent, CGEventFlags, CGKeyCode},
     event_source::{CGEventSource, CGEventSourceStateID},
 };
-use ctor::ctor;
 use metal::foreign_types::ForeignType as _;
 use objc::{class, msg_send, sel, sel_impl};
-use std::{borrow::Cow, mem, ptr};
+use std::{borrow::Cow, mem, ptr, sync::Once};
 
 const BACKSPACE_KEY: u16 = 0x7f;
 const SPACE_KEY: u16 = b' ' as u16;
@@ -25,13 +24,22 @@ const ESCAPE_KEY: u16 = 0x1b;
 const TAB_KEY: u16 = 0x09;
 const SHIFT_TAB_KEY: u16 = 0x19;
 
-static mut EVENT_SOURCE: core_graphics::sys::CGEventSourceRef = ptr::null_mut();
+fn synthesize_keyboard_event(code: CGKeyCode) -> CGEvent {
+    static mut EVENT_SOURCE: core_graphics::sys::CGEventSourceRef = ptr::null_mut();
+    static INIT_EVENT_SOURCE: Once = Once::new();
 
-#[ctor]
-unsafe fn build_event_source() {
-    let source = CGEventSource::new(CGEventSourceStateID::Private).unwrap();
-    EVENT_SOURCE = source.as_ptr();
+    INIT_EVENT_SOURCE.call_once(|| {
+        let source = CGEventSource::new(CGEventSourceStateID::Private).unwrap();
+        unsafe {
+            EVENT_SOURCE = source.as_ptr();
+        };
+        mem::forget(source);
+    });
+
+    let source = unsafe { core_graphics::event_source::CGEventSource::from_ptr(EVENT_SOURCE) };
+    let event = CGEvent::new_keyboard_event(source.clone(), code, true).unwrap();
     mem::forget(source);
+    event
 }
 
 pub fn key_to_native(key: &str) -> Cow<str> {
@@ -335,9 +343,7 @@ fn chars_for_modified_key(code: CGKeyCode, cmd: bool, shift: bool) -> String {
     // always returns an empty string with certain keyboards, e.g. Japanese. Synthesizing
     // an event with the given flags instead lets us access `characters`, which always
     // returns a valid string.
-    let source = unsafe { core_graphics::event_source::CGEventSource::from_ptr(EVENT_SOURCE) };
-    let event = CGEvent::new_keyboard_event(source.clone(), code, true).unwrap();
-    mem::forget(source);
+    let event = synthesize_keyboard_event(code);
 
     let mut flags = CGEventFlags::empty();
     if cmd {

crates/gpui/src/platform/mac/platform.rs 🔗

@@ -24,6 +24,7 @@ use core_foundation::{
     boolean::CFBoolean,
     data::CFData,
     dictionary::{CFDictionary, CFDictionaryRef, CFMutableDictionary},
+    runloop::CFRunLoopRun,
     string::{CFString, CFStringRef},
 };
 use ctor::ctor;
@@ -139,6 +140,7 @@ pub(crate) struct MacPlatformState {
     foreground_executor: ForegroundExecutor,
     text_system: Arc<MacTextSystem>,
     renderer_context: renderer::Context,
+    headless: bool,
     pasteboard: id,
     text_hash_pasteboard_type: id,
     metadata_pasteboard_type: id,
@@ -155,15 +157,16 @@ pub(crate) struct MacPlatformState {
 
 impl Default for MacPlatform {
     fn default() -> Self {
-        Self::new()
+        Self::new(false)
     }
 }
 
 impl MacPlatform {
-    pub(crate) fn new() -> Self {
+    pub(crate) fn new(headless: bool) -> Self {
         let dispatcher = Arc::new(MacDispatcher::new());
         Self(Mutex::new(MacPlatformState {
             background_executor: BackgroundExecutor::new(dispatcher.clone()),
+            headless,
             foreground_executor: ForegroundExecutor::new(dispatcher),
             text_system: Arc::new(MacTextSystem::new()),
             renderer_context: renderer::Context::default(),
@@ -394,7 +397,15 @@ impl Platform for MacPlatform {
     }
 
     fn run(&self, on_finish_launching: Box<dyn FnOnce()>) {
-        self.0.lock().finish_launching = Some(on_finish_launching);
+        let mut state = self.0.lock();
+        if state.headless {
+            drop(state);
+            on_finish_launching();
+            unsafe { CFRunLoopRun() };
+        } else {
+            state.finish_launching = Some(on_finish_launching);
+            drop(state);
+        }
 
         unsafe {
             let app: id = msg_send![APP_CLASS, sharedApplication];
@@ -1238,7 +1249,7 @@ mod tests {
     }
 
     fn build_platform() -> MacPlatform {
-        let platform = MacPlatform::new();
+        let platform = MacPlatform::new(false);
         platform.0.lock().pasteboard = unsafe { NSPasteboard::pasteboardWithUniqueName(nil) };
         platform
     }

crates/language/src/buffer.rs 🔗

@@ -372,7 +372,7 @@ pub trait File: Send + Sync {
     fn as_any(&self) -> &dyn Any;
 
     /// Converts this file into a protobuf message.
-    fn to_proto(&self) -> rpc::proto::File;
+    fn to_proto(&self, cx: &AppContext) -> rpc::proto::File;
 
     /// Return whether Zed considers this to be a private file.
     fn is_private(&self) -> bool;
@@ -612,10 +612,10 @@ impl Buffer {
     }
 
     /// Serialize the buffer's state to a protobuf message.
-    pub fn to_proto(&self) -> proto::BufferState {
+    pub fn to_proto(&self, cx: &AppContext) -> proto::BufferState {
         proto::BufferState {
             id: self.remote_id().into(),
-            file: self.file.as_ref().map(|f| f.to_proto()),
+            file: self.file.as_ref().map(|f| f.to_proto(cx)),
             base_text: self.base_text().to_string(),
             diff_base: self.diff_base.as_ref().map(|h| h.to_string()),
             line_ending: proto::serialize_line_ending(self.line_ending()) as i32,
@@ -3940,7 +3940,7 @@ impl File for TestFile {
         unimplemented!()
     }
 
-    fn to_proto(&self) -> rpc::proto::File {
+    fn to_proto(&self, _: &AppContext) -> rpc::proto::File {
         unimplemented!()
     }
 

crates/language/src/buffer_tests.rs 🔗

@@ -2038,7 +2038,7 @@ fn test_serialization(cx: &mut gpui::AppContext) {
     });
     assert_eq!(buffer1.read(cx).text(), "abcDF");
 
-    let state = buffer1.read(cx).to_proto();
+    let state = buffer1.read(cx).to_proto(cx);
     let ops = cx
         .background_executor()
         .block(buffer1.read(cx).serialize_ops(None, cx));
@@ -2165,7 +2165,7 @@ fn test_random_collaboration(cx: &mut AppContext, mut rng: StdRng) {
 
     for i in 0..rng.gen_range(min_peers..=max_peers) {
         let buffer = cx.new_model(|cx| {
-            let state = base_buffer.read(cx).to_proto();
+            let state = base_buffer.read(cx).to_proto(cx);
             let ops = cx
                 .background_executor()
                 .block(base_buffer.read(cx).serialize_ops(None, cx));
@@ -2272,7 +2272,7 @@ fn test_random_collaboration(cx: &mut AppContext, mut rng: StdRng) {
                 mutation_count -= 1;
             }
             50..=59 if replica_ids.len() < max_peers => {
-                let old_buffer_state = buffer.read(cx).to_proto();
+                let old_buffer_state = buffer.read(cx).to_proto(cx);
                 let old_buffer_ops = cx
                     .background_executor()
                     .block(buffer.read(cx).serialize_ops(None, cx));

crates/lsp/Cargo.toml 🔗

@@ -17,7 +17,7 @@ test-support = ["async-pipe"]
 
 [dependencies]
 anyhow.workspace = true
-async-pipe = { git = "https://github.com/zed-industries/async-pipe-rs", rev = "82d00a04211cf4e1236029aa03e6b6ce2a74c553", optional = true }
+async-pipe = { workspace = true, optional = true }
 collections.workspace = true
 futures.workspace = true
 gpui.workspace = true
@@ -35,7 +35,7 @@ release_channel.workspace = true
 windows.workspace = true
 
 [dev-dependencies]
-async-pipe = { git = "https://github.com/zed-industries/async-pipe-rs", rev = "82d00a04211cf4e1236029aa03e6b6ce2a74c553" }
+async-pipe.workspace = true
 ctor.workspace = true
 env_logger.workspace = true
 gpui = { workspace = true, features = ["test-support"] }

crates/multi_buffer/src/multi_buffer.rs 🔗

@@ -4807,7 +4807,7 @@ mod tests {
     fn test_remote(cx: &mut AppContext) {
         let host_buffer = cx.new_model(|cx| Buffer::local("a", cx));
         let guest_buffer = cx.new_model(|cx| {
-            let state = host_buffer.read(cx).to_proto();
+            let state = host_buffer.read(cx).to_proto(cx);
             let ops = cx
                 .background_executor()
                 .block(host_buffer.read(cx).serialize_ops(None, cx));

crates/paths/src/paths.rs 🔗

@@ -224,26 +224,28 @@ pub fn default_prettier_dir() -> &'static PathBuf {
     DEFAULT_PRETTIER_DIR.get_or_init(|| support_dir().join("prettier"))
 }
 
+/// Returns the path to the remote server binaries directory.
+pub fn remote_servers_dir() -> &'static PathBuf {
+    static REMOTE_SERVERS_DIR: OnceLock<PathBuf> = OnceLock::new();
+    REMOTE_SERVERS_DIR.get_or_init(|| support_dir().join("remote_servers"))
+}
+
 /// Returns the relative path to a `.zed` folder within a project.
 pub fn local_settings_folder_relative_path() -> &'static Path {
-    static LOCAL_SETTINGS_FOLDER_RELATIVE_PATH: OnceLock<&Path> = OnceLock::new();
-    LOCAL_SETTINGS_FOLDER_RELATIVE_PATH.get_or_init(|| Path::new(".zed"))
+    Path::new(".zed")
 }
 
 /// Returns the relative path to a `settings.json` file within a project.
 pub fn local_settings_file_relative_path() -> &'static Path {
-    static LOCAL_SETTINGS_FILE_RELATIVE_PATH: OnceLock<&Path> = OnceLock::new();
-    LOCAL_SETTINGS_FILE_RELATIVE_PATH.get_or_init(|| Path::new(".zed/settings.json"))
+    Path::new(".zed/settings.json")
 }
 
 /// Returns the relative path to a `tasks.json` file within a project.
 pub fn local_tasks_file_relative_path() -> &'static Path {
-    static LOCAL_TASKS_FILE_RELATIVE_PATH: OnceLock<&Path> = OnceLock::new();
-    LOCAL_TASKS_FILE_RELATIVE_PATH.get_or_init(|| Path::new(".zed/tasks.json"))
+    Path::new(".zed/tasks.json")
 }
 
 /// Returns the relative path to a `.vscode/tasks.json` file within a project.
 pub fn local_vscode_tasks_file_relative_path() -> &'static Path {
-    static LOCAL_VSCODE_TASKS_FILE_RELATIVE_PATH: OnceLock<&Path> = OnceLock::new();
-    LOCAL_VSCODE_TASKS_FILE_RELATIVE_PATH.get_or_init(|| Path::new(".vscode/tasks.json"))
+    Path::new(".vscode/tasks.json")
 }

crates/project/Cargo.toml 🔗

@@ -51,6 +51,7 @@ prettier.workspace = true
 worktree.workspace = true
 rand.workspace = true
 regex.workspace = true
+remote.workspace = true
 rpc.workspace = true
 schemars.workspace = true
 task.workspace = true

crates/project/src/buffer_store.rs 🔗

@@ -705,7 +705,7 @@ impl BufferStore {
 
         let operations = buffer.update(cx, |b, cx| b.serialize_ops(None, cx))?;
         let operations = operations.await;
-        let state = buffer.update(cx, |buffer, _| buffer.to_proto())?;
+        let state = buffer.update(cx, |buffer, cx| buffer.to_proto(cx))?;
 
         let initial_state = proto::CreateBufferForPeer {
             project_id,

crates/project/src/project.rs 🔗

@@ -74,15 +74,18 @@ use postage::watch;
 use prettier_support::{DefaultPrettier, PrettierInstance};
 use project_settings::{DirenvSettings, LspSettings, ProjectSettings};
 use rand::prelude::*;
-use rpc::ErrorCode;
+use remote::SshSession;
+use rpc::{proto::AddWorktree, ErrorCode};
 use search::SearchQuery;
 use search_history::SearchHistory;
 use serde::Serialize;
 use settings::{watch_config_file, Settings, SettingsLocation, SettingsStore};
 use sha2::{Digest, Sha256};
 use similar::{ChangeTag, TextDiff};
-use smol::channel::{Receiver, Sender};
-use smol::lock::Semaphore;
+use smol::{
+    channel::{Receiver, Sender},
+    lock::Semaphore,
+};
 use snippet::Snippet;
 use snippet_provider::SnippetProvider;
 use std::{
@@ -196,6 +199,7 @@ pub struct Project {
     >,
     user_store: Model<UserStore>,
     fs: Arc<dyn Fs>,
+    ssh_session: Option<Arc<SshSession>>,
     client_state: ProjectClientState,
     collaborators: HashMap<proto::PeerId, Collaborator>,
     client_subscriptions: Vec<client::Subscription>,
@@ -793,6 +797,7 @@ impl Project {
                 client,
                 user_store,
                 fs,
+                ssh_session: None,
                 next_entry_id: Default::default(),
                 next_diagnostic_group_id: Default::default(),
                 diagnostics: Default::default(),
@@ -825,6 +830,24 @@ impl Project {
         })
     }
 
+    pub fn ssh(
+        ssh_session: Arc<SshSession>,
+        client: Arc<Client>,
+        node: Arc<dyn NodeRuntime>,
+        user_store: Model<UserStore>,
+        languages: Arc<LanguageRegistry>,
+        fs: Arc<dyn Fs>,
+        cx: &mut AppContext,
+    ) -> Model<Self> {
+        let this = Self::local(client, node, user_store, languages, fs, cx);
+        this.update(cx, |this, cx| {
+            ssh_session.add_message_handler(cx.weak_model(), Self::handle_update_worktree);
+            ssh_session.add_message_handler(cx.weak_model(), Self::handle_create_buffer_for_peer);
+            this.ssh_session = Some(ssh_session);
+        });
+        this
+    }
+
     pub async fn remote(
         remote_id: u64,
         client: Arc<Client>,
@@ -924,6 +947,7 @@ impl Project {
                 snippets,
                 yarn,
                 fs,
+                ssh_session: None,
                 next_entry_id: Default::default(),
                 next_diagnostic_group_id: Default::default(),
                 diagnostic_summaries: Default::default(),
@@ -1628,7 +1652,7 @@ impl Project {
                                                     this.client.send(
                                                         proto::UpdateDiagnosticSummary {
                                                             project_id,
-                                                            worktree_id: cx.entity_id().as_u64(),
+                                                            worktree_id: worktree.id().to_proto(),
                                                             summary: Some(
                                                                 summary.to_proto(server_id, path),
                                                             ),
@@ -2442,7 +2466,7 @@ impl Project {
                             .send(proto::UpdateBufferFile {
                                 project_id,
                                 buffer_id: buffer_id.into(),
-                                file: Some(new_file.to_proto()),
+                                file: Some(new_file.to_proto(cx)),
                             })
                             .log_err();
                     }
@@ -2464,11 +2488,23 @@ impl Project {
             self.request_buffer_diff_recalculation(&buffer, cx);
         }
 
+        let buffer_id = buffer.read(cx).remote_id();
         match event {
             BufferEvent::Operation(operation) => {
+                let operation = language::proto::serialize_operation(operation);
+
+                if let Some(ssh) = &self.ssh_session {
+                    ssh.send(proto::UpdateBuffer {
+                        project_id: 0,
+                        buffer_id: buffer_id.to_proto(),
+                        operations: vec![operation.clone()],
+                    })
+                    .ok();
+                }
+
                 self.enqueue_buffer_ordered_message(BufferOrderedMessage::Operation {
-                    buffer_id: buffer.read(cx).remote_id(),
-                    operation: language::proto::serialize_operation(operation),
+                    buffer_id,
+                    operation,
                 })
                 .ok();
             }
@@ -2948,9 +2984,10 @@ impl Project {
         language: Arc<Language>,
         cx: &mut ModelContext<Self>,
     ) {
-        let root_file = worktree.update(cx, |tree, cx| tree.root_file(cx));
+        let (root_file, is_local) =
+            worktree.update(cx, |tree, cx| (tree.root_file(cx), tree.is_local()));
         let settings = language_settings(Some(&language), root_file.map(|f| f as _).as_ref(), cx);
-        if !settings.enable_language_server {
+        if !settings.enable_language_server || !is_local {
             return;
         }
 
@@ -7632,7 +7669,9 @@ impl Project {
     ) -> Task<Result<Model<Worktree>>> {
         let path: Arc<Path> = abs_path.as_ref().into();
         if !self.loading_worktrees.contains_key(&path) {
-            let task = if self.is_local() {
+            let task = if self.ssh_session.is_some() {
+                self.create_ssh_worktree(abs_path, visible, cx)
+            } else if self.is_local() {
                 self.create_local_worktree(abs_path, visible, cx)
             } else if self.dev_server_project_id.is_some() {
                 self.create_dev_server_worktree(abs_path, cx)
@@ -7651,6 +7690,39 @@ impl Project {
         })
     }
 
+    fn create_ssh_worktree(
+        &mut self,
+        abs_path: impl AsRef<Path>,
+        visible: bool,
+        cx: &mut ModelContext<Self>,
+    ) -> Task<Result<Model<Worktree>, Arc<anyhow::Error>>> {
+        let ssh = self.ssh_session.clone().unwrap();
+        let abs_path = abs_path.as_ref();
+        let root_name = abs_path.file_name().unwrap().to_string_lossy().to_string();
+        let path = abs_path.to_string_lossy().to_string();
+        cx.spawn(|this, mut cx| async move {
+            let response = ssh.request(AddWorktree { path: path.clone() }).await?;
+            let worktree = cx.update(|cx| {
+                Worktree::remote(
+                    0,
+                    0,
+                    proto::WorktreeMetadata {
+                        id: response.worktree_id,
+                        root_name,
+                        visible,
+                        abs_path: path,
+                    },
+                    ssh.clone().into(),
+                    cx,
+                )
+            })?;
+
+            this.update(&mut cx, |this, cx| this.add_worktree(&worktree, cx))?;
+
+            Ok(worktree)
+        })
+    }
+
     fn create_local_worktree(
         &mut self,
         abs_path: impl AsRef<Path>,
@@ -7922,7 +7994,7 @@ impl Project {
                             .send(proto::UpdateBufferFile {
                                 project_id,
                                 buffer_id: buffer.read(cx).remote_id().into(),
-                                file: Some(new_file.to_proto()),
+                                file: Some(new_file.to_proto(cx)),
                             })
                             .log_err();
                     }
@@ -9073,6 +9145,13 @@ impl Project {
         mut cx: AsyncAppContext,
     ) -> Result<proto::Ack> {
         this.update(&mut cx, |this, cx| {
+            if let Some(ssh) = &this.ssh_session {
+                let mut payload = envelope.payload.clone();
+                payload.project_id = 0;
+                cx.background_executor()
+                    .spawn(ssh.request(payload))
+                    .detach_and_log_err(cx);
+            }
             this.buffer_store.update(cx, |buffer_store, cx| {
                 buffer_store.handle_update_buffer(envelope, this.is_remote(), cx)
             })
@@ -9231,7 +9310,7 @@ impl Project {
                             .send(proto::UpdateBufferFile {
                                 project_id,
                                 buffer_id: buffer_id.into(),
-                                file: Some(file.to_proto()),
+                                file: Some(file.to_proto(cx)),
                             })
                             .log_err();
                     }

crates/proto/proto/zed.proto 🔗

@@ -269,6 +269,10 @@ message Envelope {
         ListRemoteDirectory list_remote_directory = 219;
         ListRemoteDirectoryResponse list_remote_directory_response = 220;
         UpdateDevServerProject update_dev_server_project = 221; // current max
+
+        // Remote
+        AddWorktree add_worktree = 500;
+        AddWorktreeResponse add_worktree_response = 501;
     }
 
     reserved 158 to 161;
@@ -2426,3 +2430,13 @@ message SynchronizeContexts {
 message SynchronizeContextsResponse {
     repeated ContextVersion contexts = 1;
 }
+
+// Remote FS
+
+message AddWorktree {
+    string path = 1;
+}
+
+message AddWorktreeResponse {
+    uint64 worktree_id = 1;
+}
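
On the wire, adding a worktree is an ordinary request/response pair. Illustratively, the client issues it through the SSH session like this (a hypothetical helper wrapping the call that `Project::create_ssh_worktree` makes):

```rust
use anyhow::Result;
use remote::SshSession;
use rpc::proto::AddWorktree;

// Illustrative helper: ask the remote server to add a directory as a
// worktree and return the id it assigns.
async fn add_remote_worktree(ssh: &SshSession, path: String) -> Result<u64> {
    let response = ssh.request(AddWorktree { path }).await?;
    Ok(response.worktree_id)
}
```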

crates/proto/src/macros.rs 🔗

@@ -1,7 +1,7 @@
 #[macro_export]
 macro_rules! messages {
     ($(($name:ident, $priority:ident)),* $(,)?) => {
-        pub fn build_typed_envelope(sender_id: PeerId, received_at: Instant, envelope: Envelope) -> Option<Box<dyn AnyTypedEnvelope>> {
+        pub fn build_typed_envelope(sender_id: PeerId, received_at: std::time::Instant, envelope: Envelope) -> Option<Box<dyn AnyTypedEnvelope>> {
             match envelope.payload {
                 $(Some(envelope::Payload::$name(payload)) => {
                     Some(Box::new(TypedEnvelope {

crates/proto/src/proto.rs 🔗

@@ -18,7 +18,7 @@ use std::{
     fmt::{self, Debug},
     iter, mem,
     sync::Arc,
-    time::{Duration, Instant, SystemTime, UNIX_EPOCH},
+    time::{Duration, SystemTime, UNIX_EPOCH},
 };
 
 include!(concat!(env!("OUT_DIR"), "/zed.messages.rs"));
@@ -395,6 +395,9 @@ messages!(
     (UpdateContext, Foreground),
     (SynchronizeContexts, Foreground),
     (SynchronizeContextsResponse, Foreground),
+    // Remote development
+    (AddWorktree, Foreground),
+    (AddWorktreeResponse, Foreground),
 );
 
 request_messages!(
@@ -512,6 +515,8 @@ request_messages!(
     (RestartLanguageServers, Ack),
     (OpenContext, OpenContextResponse),
     (SynchronizeContexts, SynchronizeContextsResponse),
+    // Remote development
+    (AddWorktree, AddWorktreeResponse),
 );
 
 entity_messages!(

crates/remote/Cargo.toml 🔗

@@ -0,0 +1,36 @@
+[package]
+name = "remote"
+description = "Client-side subsystem for remote editing"
+edition = "2021"
+version = "0.1.0"
+publish = false
+license = "GPL-3.0-or-later"
+
+[lints]
+workspace = true
+
+[lib]
+path = "src/remote.rs"
+doctest = false
+
+[features]
+default = []
+test-support = ["fs/test-support"]
+
+[dependencies]
+anyhow.workspace = true
+collections.workspace = true
+fs.workspace = true
+futures.workspace = true
+gpui.workspace = true
+log.workspace = true
+parking_lot.workspace = true
+prost.workspace = true
+rpc.workspace = true
+smol.workspace = true
+tempfile.workspace = true
+util.workspace = true
+
+[dev-dependencies]
+gpui = { workspace = true, features = ["test-support"] }
+fs = { workspace = true, features = ["test-support"] }

crates/remote/src/protocol.rs 🔗

@@ -0,0 +1,51 @@
+use anyhow::Result;
+use futures::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt};
+use prost::Message as _;
+use rpc::proto::Envelope;
+use std::mem::size_of;
+
+#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
+pub struct MessageId(pub u32);
+
+pub type MessageLen = u32;
+pub const MESSAGE_LEN_SIZE: usize = size_of::<MessageLen>();
+
+pub fn message_len_from_buffer(buffer: &[u8]) -> MessageLen {
+    MessageLen::from_le_bytes(buffer.try_into().unwrap())
+}
+
+pub async fn read_message_with_len<S: AsyncRead + Unpin>(
+    stream: &mut S,
+    buffer: &mut Vec<u8>,
+    message_len: MessageLen,
+) -> Result<Envelope> {
+    buffer.resize(message_len as usize, 0);
+    stream.read_exact(buffer).await?;
+    Ok(Envelope::decode(buffer.as_slice())?)
+}
+
+pub async fn read_message<S: AsyncRead + Unpin>(
+    stream: &mut S,
+    buffer: &mut Vec<u8>,
+) -> Result<Envelope> {
+    buffer.resize(MESSAGE_LEN_SIZE, 0);
+    stream.read_exact(buffer).await?;
+    let len = message_len_from_buffer(buffer);
+    read_message_with_len(stream, buffer, len).await
+}
+
+pub async fn write_message<S: AsyncWrite + Unpin>(
+    stream: &mut S,
+    buffer: &mut Vec<u8>,
+    message: Envelope,
+) -> Result<()> {
+    let message_len = message.encoded_len() as u32;
+    stream
+        .write_all(message_len.to_le_bytes().as_slice())
+        .await?;
+    buffer.clear();
+    buffer.reserve(message_len as usize);
+    message.encode(buffer)?;
+    stream.write_all(buffer).await?;
+    Ok(())
+}
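
The framing is a little-endian `u32` length prefix followed by a protobuf-encoded `rpc::proto::Envelope`. Below is a round-trip sketch over an in-memory pipe, assuming the `async-pipe` crate (promoted to a workspace dependency in this PR) is available; illustrative only:

```rust
use remote::protocol::{read_message, write_message};
use rpc::proto::Envelope;

// Illustrative round trip: write one length-prefixed envelope into an
// in-memory pipe and read it back from the other end.
async fn round_trip() -> anyhow::Result<()> {
    let (mut writer, mut reader) = async_pipe::pipe();
    let mut write_buffer = Vec::new();
    let mut read_buffer = Vec::new();

    let envelope = Envelope { id: 1, ..Default::default() };
    write_message(&mut writer, &mut write_buffer, envelope.clone()).await?;

    let received = read_message(&mut reader, &mut read_buffer).await?;
    assert_eq!(received.id, envelope.id);
    Ok(())
}
```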

crates/remote/src/remote.rs 🔗

@@ -0,0 +1,4 @@
+pub mod protocol;
+pub mod ssh_session;
+
+pub use ssh_session::{SshClientDelegate, SshPlatform, SshSession};

crates/remote/src/ssh_session.rs 🔗

@@ -0,0 +1,643 @@
+use crate::protocol::{
+    message_len_from_buffer, read_message_with_len, write_message, MessageId, MESSAGE_LEN_SIZE,
+};
+use anyhow::{anyhow, Context as _, Result};
+use collections::HashMap;
+use futures::{
+    channel::{mpsc, oneshot},
+    future::{BoxFuture, LocalBoxFuture},
+    select_biased, AsyncReadExt as _, AsyncWriteExt as _, Future, FutureExt as _, StreamExt as _,
+};
+use gpui::{AppContext, AsyncAppContext, Model, SemanticVersion, WeakModel};
+use parking_lot::Mutex;
+use rpc::{
+    proto::{
+        self, build_typed_envelope, AnyTypedEnvelope, Envelope, EnvelopedMessage, PeerId,
+        ProtoClient, RequestMessage,
+    },
+    TypedEnvelope,
+};
+use smol::{
+    fs,
+    process::{self, Stdio},
+};
+use std::{
+    any::TypeId,
+    ffi::OsStr,
+    path::{Path, PathBuf},
+    sync::{
+        atomic::{AtomicU32, Ordering::SeqCst},
+        Arc,
+    },
+    time::Instant,
+};
+use tempfile::TempDir;
+
+pub struct SshSession {
+    next_message_id: AtomicU32,
+    response_channels: ResponseChannels,
+    outgoing_tx: mpsc::UnboundedSender<Envelope>,
+    spawn_process_tx: mpsc::UnboundedSender<SpawnRequest>,
+    message_handlers: Mutex<
+        HashMap<
+            TypeId,
+            Arc<
+                dyn Send
+                    + Sync
+                    + Fn(
+                        Box<dyn AnyTypedEnvelope>,
+                        Arc<SshSession>,
+                        AsyncAppContext,
+                    ) -> Option<LocalBoxFuture<'static, Result<()>>>,
+            >,
+        >,
+    >,
+}
+
+struct SshClientState {
+    socket_path: PathBuf,
+    port: u16,
+    url: String,
+    _master_process: process::Child,
+    _temp_dir: TempDir,
+}
+
+struct SpawnRequest {
+    command: String,
+    process_tx: oneshot::Sender<process::Child>,
+}
+
+#[derive(Copy, Clone, Debug)]
+pub struct SshPlatform {
+    pub os: &'static str,
+    pub arch: &'static str,
+}
+
+pub trait SshClientDelegate {
+    fn ask_password(
+        &self,
+        prompt: String,
+        cx: &mut AsyncAppContext,
+    ) -> oneshot::Receiver<Result<String>>;
+    fn remote_server_binary_path(&self, cx: &mut AsyncAppContext) -> Result<PathBuf>;
+    fn get_server_binary(
+        &self,
+        platform: SshPlatform,
+        cx: &mut AsyncAppContext,
+    ) -> oneshot::Receiver<Result<(PathBuf, SemanticVersion)>>;
+}
+
+type ResponseChannels = Mutex<HashMap<MessageId, oneshot::Sender<(Envelope, oneshot::Sender<()>)>>>;
+
+impl SshSession {
+    pub async fn client(
+        user: String,
+        host: String,
+        port: u16,
+        delegate: Arc<dyn SshClientDelegate>,
+        cx: &mut AsyncAppContext,
+    ) -> Result<Arc<Self>> {
+        let client_state = SshClientState::new(user, host, port, delegate.clone(), cx).await?;
+
+        let platform = query_platform(&client_state).await?;
+        let (local_binary_path, version) = delegate.get_server_binary(platform, cx).await??;
+        let remote_binary_path = delegate.remote_server_binary_path(cx)?;
+        ensure_server_binary(
+            &client_state,
+            &local_binary_path,
+            &remote_binary_path,
+            version,
+        )
+        .await?;
+
+        let (spawn_process_tx, mut spawn_process_rx) = mpsc::unbounded::<SpawnRequest>();
+        let (outgoing_tx, mut outgoing_rx) = mpsc::unbounded::<Envelope>();
+        let (incoming_tx, incoming_rx) = mpsc::unbounded::<Envelope>();
+
+        let mut remote_server_child = client_state
+            .ssh_command(&remote_binary_path)
+            .arg("run")
+            .spawn()
+            .context("failed to spawn remote server")?;
+        let mut child_stderr = remote_server_child.stderr.take().unwrap();
+        let mut child_stdout = remote_server_child.stdout.take().unwrap();
+        let mut child_stdin = remote_server_child.stdin.take().unwrap();
+
+        let executor = cx.background_executor().clone();
+        executor.clone().spawn(async move {
+            let mut stdin_buffer = Vec::new();
+            let mut stdout_buffer = Vec::new();
+            let mut stderr_buffer = Vec::new();
+            let mut stderr_offset = 0;
+
+            loop {
+                stdout_buffer.resize(MESSAGE_LEN_SIZE, 0);
+                stderr_buffer.resize(stderr_offset + 1024, 0);
+
+                select_biased! {
+                    outgoing = outgoing_rx.next().fuse() => {
+                        let Some(outgoing) = outgoing else {
+                            return anyhow::Ok(());
+                        };
+
+                        write_message(&mut child_stdin, &mut stdin_buffer, outgoing).await?;
+                    }
+
+                    request = spawn_process_rx.next().fuse() => {
+                        let Some(request) = request else {
+                            return Ok(());
+                        };
+
+                        log::info!("spawn process: {:?}", request.command);
+                        let child = client_state
+                            .ssh_command(&request.command)
+                            .spawn()
+                            .context("failed to create channel")?;
+                        request.process_tx.send(child).ok();
+                    }
+
+                    result = child_stdout.read(&mut stdout_buffer).fuse() => {
+                        match result {
+                            Ok(len) => {
+                                if len == 0 {
+                                    child_stdin.close().await?;
+                                    let status = remote_server_child.status().await?;
+                                    if !status.success() {
+                                        log::info!("channel exited with status: {status:?}");
+                                    }
+                                    return Ok(());
+                                }
+
+                                if len < stdout_buffer.len() {
+                                    child_stdout.read_exact(&mut stdout_buffer[len..]).await?;
+                                }
+
+                                let message_len = message_len_from_buffer(&stdout_buffer);
+                                match read_message_with_len(&mut child_stdout, &mut stdout_buffer, message_len).await {
+                                    Ok(envelope) => {
+                                        incoming_tx.unbounded_send(envelope).ok();
+                                    }
+                                    Err(error) => {
+                                        log::error!("error decoding message {error:?}");
+                                    }
+                                }
+                            }
+                            Err(error) => {
+                                Err(anyhow!("error reading stdout: {error:?}"))?;
+                            }
+                        }
+                    }
+
+                    result = child_stderr.read(&mut stderr_buffer[stderr_offset..]).fuse() => {
+                        match result {
+                            Ok(len) => {
+                                stderr_offset += len;
+                                let mut start_ix = 0;
+                                while let Some(ix) = stderr_buffer[start_ix..stderr_offset].iter().position(|b| b == &b'\n') {
+                                    let line_ix = start_ix + ix;
+                                    let content = String::from_utf8_lossy(&stderr_buffer[start_ix..line_ix]);
+                                    start_ix = line_ix + 1;
+                                    eprintln!("(remote) {}", content);
+                                }
+                                stderr_buffer.drain(0..start_ix);
+                                stderr_offset -= start_ix;
+                            }
+                            Err(error) => {
+                                Err(anyhow!("error reading stderr: {error:?}"))?;
+                            }
+                        }
+                    }
+                }
+            }
+        }).detach();
+
+        cx.update(|cx| Self::new(incoming_rx, outgoing_tx, spawn_process_tx, cx))
+    }
+
+    pub fn server(
+        incoming_rx: mpsc::UnboundedReceiver<Envelope>,
+        outgoing_tx: mpsc::UnboundedSender<Envelope>,
+        cx: &AppContext,
+    ) -> Arc<SshSession> {
+        let (tx, _rx) = mpsc::unbounded();
+        Self::new(incoming_rx, outgoing_tx, tx, cx)
+    }
+
+    #[cfg(any(test, feature = "test-support"))]
+    pub fn fake(
+        client_cx: &mut gpui::TestAppContext,
+        server_cx: &mut gpui::TestAppContext,
+    ) -> (Arc<Self>, Arc<Self>) {
+        let (server_to_client_tx, server_to_client_rx) = mpsc::unbounded();
+        let (client_to_server_tx, client_to_server_rx) = mpsc::unbounded();
+        let (tx, _rx) = mpsc::unbounded();
+        (
+            client_cx
+                .update(|cx| Self::new(server_to_client_rx, client_to_server_tx, tx.clone(), cx)),
+            server_cx
+                .update(|cx| Self::new(client_to_server_rx, server_to_client_tx, tx.clone(), cx)),
+        )
+    }
+
+    fn new(
+        mut incoming_rx: mpsc::UnboundedReceiver<Envelope>,
+        outgoing_tx: mpsc::UnboundedSender<Envelope>,
+        spawn_process_tx: mpsc::UnboundedSender<SpawnRequest>,
+        cx: &AppContext,
+    ) -> Arc<SshSession> {
+        let this = Arc::new(Self {
+            next_message_id: AtomicU32::new(0),
+            response_channels: ResponseChannels::default(),
+            outgoing_tx,
+            spawn_process_tx,
+            message_handlers: Default::default(),
+        });
+
+        cx.spawn(|cx| {
+            let this = this.clone();
+            async move {
+                let peer_id = PeerId { owner_id: 0, id: 0 };
+                while let Some(incoming) = incoming_rx.next().await {
+                    if let Some(request_id) = incoming.responding_to {
+                        let request_id = MessageId(request_id);
+                        let sender = this.response_channels.lock().remove(&request_id);
+                        if let Some(sender) = sender {
+                            let (tx, rx) = oneshot::channel();
+                            if incoming.payload.is_some() {
+                                sender.send((incoming, tx)).ok();
+                            }
+                            rx.await.ok();
+                        }
+                    } else if let Some(envelope) =
+                        build_typed_envelope(peer_id, Instant::now(), incoming)
+                    {
+                        log::debug!(
+                            "ssh message received. name:{}",
+                            envelope.payload_type_name()
+                        );
+                        let type_id = envelope.payload_type_id();
+                        let handler = this.message_handlers.lock().get(&type_id).cloned();
+                        if let Some(handler) = handler {
+                            if let Some(future) = handler(envelope, this.clone(), cx.clone()) {
+                                future.await.ok();
+                            } else {
+                                this.message_handlers.lock().remove(&type_id);
+                            }
+                        }
+                    }
+                }
+                anyhow::Ok(())
+            }
+        })
+        .detach();
+
+        this
+    }
+
+    pub fn request<T: RequestMessage>(
+        &self,
+        payload: T,
+    ) -> impl 'static + Future<Output = Result<T::Response>> {
+        log::debug!("ssh request start. name:{}", T::NAME);
+        let response = self.request_dynamic(payload.into_envelope(0, None, None), "");
+        async move {
+            let response = response.await?;
+            log::debug!("ssh request finish. name:{}", T::NAME);
+            T::Response::from_envelope(response)
+                .ok_or_else(|| anyhow!("received a response of the wrong type"))
+        }
+    }
+
+    pub fn send<T: EnvelopedMessage>(&self, payload: T) -> Result<()> {
+        self.send_dynamic(payload.into_envelope(0, None, None))
+    }
+
+    pub fn request_dynamic(
+        &self,
+        mut envelope: proto::Envelope,
+        _request_type: &'static str,
+    ) -> impl 'static + Future<Output = Result<proto::Envelope>> {
+        envelope.id = self.next_message_id.fetch_add(1, SeqCst);
+        let (tx, rx) = oneshot::channel();
+        self.response_channels
+            .lock()
+            .insert(MessageId(envelope.id), tx);
+        self.outgoing_tx.unbounded_send(envelope).ok();
+        async move { Ok(rx.await.context("connection lost")?.0) }
+    }
+
+    pub fn send_dynamic(&self, mut envelope: proto::Envelope) -> Result<()> {
+        envelope.id = self.next_message_id.fetch_add(1, SeqCst);
+        self.outgoing_tx.unbounded_send(envelope)?;
+        Ok(())
+    }
+
+    pub async fn spawn_process(&self, command: String) -> process::Child {
+        let (process_tx, process_rx) = oneshot::channel();
+        self.spawn_process_tx
+            .unbounded_send(SpawnRequest {
+                command,
+                process_tx,
+            })
+            .ok();
+        process_rx.await.unwrap()
+    }
+
+    pub fn add_message_handler<M, E, H, F>(&self, entity: WeakModel<E>, handler: H)
+    where
+        M: EnvelopedMessage,
+        E: 'static,
+        H: 'static + Sync + Send + Fn(Model<E>, TypedEnvelope<M>, AsyncAppContext) -> F,
+        F: 'static + Future<Output = Result<()>>,
+    {
+        let message_type_id = TypeId::of::<M>();
+        self.message_handlers.lock().insert(
+            message_type_id,
+            Arc::new(move |envelope, _, cx| {
+                let entity = entity.upgrade()?;
+                let envelope = envelope.into_any().downcast::<TypedEnvelope<M>>().unwrap();
+                Some(handler(entity, *envelope, cx).boxed_local())
+            }),
+        );
+    }
+
+    pub fn add_request_handler<M, E, H, F>(&self, entity: WeakModel<E>, handler: H)
+    where
+        M: EnvelopedMessage + RequestMessage,
+        E: 'static,
+        H: 'static + Sync + Send + Fn(Model<E>, TypedEnvelope<M>, AsyncAppContext) -> F,
+        F: 'static + Future<Output = Result<M::Response>>,
+    {
+        let message_type_id = TypeId::of::<M>();
+        self.message_handlers.lock().insert(
+            message_type_id,
+            Arc::new(move |envelope, this, cx| {
+                let entity = entity.upgrade()?;
+                let envelope = envelope.into_any().downcast::<TypedEnvelope<M>>().unwrap();
+                let request_id = envelope.message_id();
+                Some(
+                    handler(entity, *envelope, cx)
+                        .then(move |result| async move {
+                            this.outgoing_tx.unbounded_send(result?.into_envelope(
+                                this.next_message_id.fetch_add(1, SeqCst),
+                                Some(request_id),
+                                None,
+                            ))?;
+                            Ok(())
+                        })
+                        .boxed_local(),
+                )
+            }),
+        );
+    }
+}
+
+impl ProtoClient for SshSession {
+    fn request(
+        &self,
+        envelope: proto::Envelope,
+        request_type: &'static str,
+    ) -> BoxFuture<'static, Result<proto::Envelope>> {
+        self.request_dynamic(envelope, request_type).boxed()
+    }
+
+    fn send(&self, envelope: proto::Envelope) -> Result<()> {
+        self.send_dynamic(envelope)
+    }
+}
+
+impl SshClientState {
+    #[cfg(not(unix))]
+    async fn new(
+        user: String,
+        host: String,
+        port: u16,
+        delegate: Arc<dyn SshClientDelegate>,
+        cx: &AsyncAppContext,
+    ) -> Result<Self> {
+        Err(anyhow!("ssh is not supported on this platform"))
+    }
+
+    #[cfg(unix)]
+    async fn new(
+        user: String,
+        host: String,
+        port: u16,
+        delegate: Arc<dyn SshClientDelegate>,
+        cx: &AsyncAppContext,
+    ) -> Result<Self> {
+        use smol::fs::unix::PermissionsExt as _;
+        use util::ResultExt as _;
+
+        let url = format!("{user}@{host}");
+        let temp_dir = tempfile::Builder::new()
+            .prefix("zed-ssh-session")
+            .tempdir()?;
+
+        // Create a TCP listener to handle requests from the askpass program.
+        let listener = smol::net::TcpListener::bind("127.0.0.1:0")
+            .await
+            .expect("failed to find open port");
+        let askpass_port = listener.local_addr().unwrap().port();
+        let askpass_task = cx.spawn(|mut cx| async move {
+            while let Ok((mut stream, _)) = listener.accept().await {
+                let mut buffer = Vec::new();
+                if stream.read_to_end(&mut buffer).await.is_err() {
+                    buffer.clear();
+                }
+                let password_prompt = String::from_utf8_lossy(&buffer);
+                if let Some(password) = delegate
+                    .ask_password(password_prompt.to_string(), &mut cx)
+                    .await
+                    .context("failed to get ssh password")
+                    .and_then(|p| p)
+                    .log_err()
+                {
+                    stream.write_all(password.as_bytes()).await.log_err();
+                }
+            }
+        });
+
+        // Create an askpass script that communicates back to this process using TCP.
+        let askpass_script = format!(
+            "{shebang}\n echo \"$@\" | nc 127.0.0.1 {askpass_port} 2> /dev/null",
+            shebang = "#!/bin/sh"
+        );
+        let askpass_script_path = temp_dir.path().join("askpass.sh");
+        fs::write(&askpass_script_path, askpass_script).await?;
+        fs::set_permissions(&askpass_script_path, std::fs::Permissions::from_mode(0o755)).await?;
+
+        // Start the master SSH process, which does nothing except establish the
+        // connection and keep it open, allowing other ssh commands to reuse it
+        // via a control socket.
+        let socket_path = temp_dir.path().join("ssh.sock");
+        let mut master_process = process::Command::new("ssh")
+            .stdin(Stdio::null())
+            .stdout(Stdio::piped())
+            .stderr(Stdio::piped())
+            .env("SSH_ASKPASS_REQUIRE", "force")
+            .env("SSH_ASKPASS", &askpass_script_path)
+            .args(["-N", "-o", "ControlMaster=yes", "-o"])
+            .arg(format!("ControlPath={}", socket_path.display()))
+            .args(["-p", &port.to_string()])
+            .arg(&url)
+            .spawn()?;
+
+        // Wait for this ssh process to close its stdout, indicating that authentication
+        // has completed.
+        let stdout = master_process.stdout.as_mut().unwrap();
+        let mut output = Vec::new();
+        stdout.read_to_end(&mut output).await?;
+        drop(askpass_task);
+
+        if master_process.try_status()?.is_some() {
+            output.clear();
+            let mut stderr = master_process.stderr.take().unwrap();
+            stderr.read_to_end(&mut output).await?;
+            Err(anyhow!(
+                "failed to connect: {}",
+                String::from_utf8_lossy(&output)
+            ))?;
+        }
+
+        Ok(Self {
+            _master_process: master_process,
+            port,
+            _temp_dir: temp_dir,
+            socket_path,
+            url,
+        })
+    }
+
+    async fn upload_file(&self, src_path: &Path, dest_path: &Path) -> Result<()> {
+        let mut command = process::Command::new("scp");
+        let output = self
+            .ssh_options(&mut command)
+            .arg("-P")
+            .arg(&self.port.to_string())
+            .arg(&src_path)
+            .arg(&format!("{}:{}", self.url, dest_path.display()))
+            .output()
+            .await?;
+
+        if output.status.success() {
+            Ok(())
+        } else {
+            Err(anyhow!(
+                "failed to upload file {} -> {}: {}",
+                src_path.display(),
+                dest_path.display(),
+                String::from_utf8_lossy(&output.stderr)
+            ))
+        }
+    }
+
+    fn ssh_command<S: AsRef<OsStr>>(&self, program: S) -> process::Command {
+        let mut command = process::Command::new("ssh");
+        self.ssh_options(&mut command)
+            .arg("-p")
+            .arg(&self.port.to_string())
+            .arg(&self.url)
+            .arg(program);
+        command
+    }
+
+    fn ssh_options<'a>(&self, command: &'a mut process::Command) -> &'a mut process::Command {
+        command
+            .stdin(Stdio::piped())
+            .stdout(Stdio::piped())
+            .stderr(Stdio::piped())
+            .args(["-o", "ControlMaster=no", "-o"])
+            .arg(format!("ControlPath={}", self.socket_path.display()))
+    }
+}
+
+async fn run_cmd(command: &mut process::Command) -> Result<String> {
+    let output = command.output().await?;
+    if output.status.success() {
+        Ok(String::from_utf8_lossy(&output.stdout).to_string())
+    } else {
+        Err(anyhow!(
+            "failed to run command: {}",
+            String::from_utf8_lossy(&output.stderr)
+        ))
+    }
+}
+
+async fn query_platform(session: &SshClientState) -> Result<SshPlatform> {
+    let os = run_cmd(session.ssh_command("uname").arg("-s")).await?;
+    let arch = run_cmd(session.ssh_command("uname").arg("-m")).await?;
+
+    let os = match os.trim() {
+        "Darwin" => "macos",
+        "Linux" => "linux",
+        _ => Err(anyhow!("unknown uname os {os:?}"))?,
+    };
+    let arch = if arch.starts_with("arm") || arch.starts_with("aarch64") {
+        "aarch64"
+    } else if arch.starts_with("x86") || arch.starts_with("i686") {
+        "x86_64"
+    } else {
+        Err(anyhow!("unknown uname architecture {arch:?}"))?
+    };
+
+    Ok(SshPlatform { os, arch })
+}
+
+async fn ensure_server_binary(
+    session: &SshClientState,
+    src_path: &Path,
+    dst_path: &Path,
+    version: SemanticVersion,
+) -> Result<()> {
+    let mut dst_path_gz = dst_path.to_path_buf();
+    dst_path_gz.set_extension("gz");
+
+    if let Some(parent) = dst_path.parent() {
+        run_cmd(session.ssh_command("mkdir").arg("-p").arg(parent)).await?;
+    }
+
+    let mut server_binary_exists = false;
+    if let Ok(installed_version) = run_cmd(session.ssh_command(&dst_path).arg("version")).await {
+        if installed_version.trim() == version.to_string() {
+            server_binary_exists = true;
+        }
+    }
+
+    if server_binary_exists {
+        log::info!("remote development server already present");
+        return Ok(());
+    }
+
+    let src_stat = fs::metadata(src_path).await?;
+    let size = src_stat.len();
+    let server_mode = 0o755;
+
+    let t0 = Instant::now();
+    log::info!("uploading remote development server ({}kb)", size / 1024);
+    session
+        .upload_file(src_path, &dst_path_gz)
+        .await
+        .context("failed to upload server binary")?;
+    log::info!("uploaded remote development server in {:?}", t0.elapsed());
+
+    log::info!("extracting remote development server");
+    run_cmd(
+        session
+            .ssh_command("gunzip")
+            .arg("--force")
+            .arg(&dst_path_gz),
+    )
+    .await?;
+
+    log::info!("marking remote development server executable");
+    run_cmd(
+        session
+            .ssh_command("chmod")
+            .arg(format!("{:o}", server_mode))
+            .arg(&dst_path),
+    )
+    .await?;
+
+    Ok(())
+}
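
Taken together, `SshSession` ends up looking like any other `ProtoClient` once the master connection is up and the server binary is in place: callers send protobuf envelopes and await typed responses. A minimal usage sketch, not part of the diff, with the path illustrative and the message fields mirroring what `HeadlessProject::handle_add_worktree` reads:

```rust
use anyhow::Result;
use project::WorktreeId;
use remote::SshSession;
use rpc::proto;

// Sketch only: issue a request over the SSH session and use the typed response.
// The real call sites live in Project::ssh and the headless server's handlers.
async fn add_remote_worktree(session: &SshSession, path: &str) -> Result<WorktreeId> {
    let response = session
        .request(proto::AddWorktree { path: path.to_string() })
        .await?;
    Ok(WorktreeId::from_proto(response.worktree_id))
}
```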

crates/remote_server/Cargo.toml

@@ -0,0 +1,51 @@
+[package]
+name = "remote_server"
+description = "Daemon used for remote editing"
+edition = "2021"
+version = "0.1.0"
+publish = false
+license = "GPL-3.0-or-later"
+
+[lints]
+workspace = true
+
+[lib]
+path = "src/remote_server.rs"
+doctest = false
+
+[[bin]]
+name = "remote_server"
+
+[features]
+default = []
+test-support = ["fs/test-support"]
+
+[dependencies]
+anyhow.workspace = true
+env_logger.workspace = true
+fs.workspace = true
+futures.workspace = true
+gpui.workspace = true
+log.workspace = true
+project.workspace = true
+remote.workspace = true
+rpc.workspace = true
+settings.workspace = true
+smol.workspace = true
+worktree.workspace = true
+
+[dev-dependencies]
+client = { workspace = true, features = ["test-support"] }
+clock = { workspace = true, features = ["test-support"] }
+fs = { workspace = true, features = ["test-support"] }
+gpui = { workspace = true, features = ["test-support"] }
+http = { workspace = true, features = ["test-support"] }
+language = { workspace = true, features = ["test-support"] }
+node_runtime = { workspace = true, features = ["test-support"] }
+remote = { workspace = true, features = ["test-support"] }
+
+serde_json.workspace = true
+
+[build-dependencies]
+cargo_toml.workspace = true
+toml.workspace = true

crates/remote_server/build.rs

@@ -0,0 +1,10 @@
+const ZED_MANIFEST: &str = include_str!("../zed/Cargo.toml");
+
+fn main() {
+    let zed_cargo_toml: cargo_toml::Manifest =
+        toml::from_str(ZED_MANIFEST).expect("failed to parse zed Cargo.toml");
+    println!(
+        "cargo:rustc-env=ZED_PKG_VERSION={}",
+        zed_cargo_toml.package.unwrap().version.unwrap()
+    );
+}

crates/remote_server/src/headless_project.rs

@@ -0,0 +1,166 @@
+use anyhow::{Context as _, Result};
+use fs::Fs;
+use gpui::{AppContext, AsyncAppContext, Context, Model, ModelContext};
+use project::{buffer_store::BufferStore, ProjectPath, WorktreeId, WorktreeSettings};
+use remote::SshSession;
+use rpc::{
+    proto::{self, AnyProtoClient, PeerId},
+    TypedEnvelope,
+};
+use settings::{Settings as _, SettingsStore};
+use std::{
+    path::{Path, PathBuf},
+    sync::{atomic::AtomicUsize, Arc},
+};
+use worktree::Worktree;
+
+const PEER_ID: PeerId = PeerId { owner_id: 0, id: 0 };
+const PROJECT_ID: u64 = 0;
+
+pub struct HeadlessProject {
+    pub fs: Arc<dyn Fs>,
+    pub session: AnyProtoClient,
+    pub worktrees: Vec<Model<Worktree>>,
+    pub buffer_store: Model<BufferStore>,
+    pub next_entry_id: Arc<AtomicUsize>,
+}
+
+impl HeadlessProject {
+    pub fn init(cx: &mut AppContext) {
+        cx.set_global(SettingsStore::default());
+        WorktreeSettings::register(cx);
+    }
+
+    pub fn new(session: Arc<SshSession>, fs: Arc<dyn Fs>, cx: &mut ModelContext<Self>) -> Self {
+        let this = cx.weak_model();
+
+        session.add_request_handler(this.clone(), Self::handle_add_worktree);
+        session.add_request_handler(this.clone(), Self::handle_open_buffer_by_path);
+        session.add_request_handler(this.clone(), Self::handle_update_buffer);
+        session.add_request_handler(this.clone(), Self::handle_save_buffer);
+
+        HeadlessProject {
+            session: session.into(),
+            fs,
+            worktrees: Vec::new(),
+            buffer_store: cx.new_model(|_| BufferStore::new(true)),
+            next_entry_id: Default::default(),
+        }
+    }
+
+    fn worktree_for_id(&self, id: WorktreeId, cx: &AppContext) -> Option<Model<Worktree>> {
+        self.worktrees
+            .iter()
+            .find(|worktree| worktree.read(cx).id() == id)
+            .cloned()
+    }
+
+    pub async fn handle_add_worktree(
+        this: Model<Self>,
+        message: TypedEnvelope<proto::AddWorktree>,
+        mut cx: AsyncAppContext,
+    ) -> Result<proto::AddWorktreeResponse> {
+        let worktree = this
+            .update(&mut cx.clone(), |this, _| {
+                Worktree::local(
+                    Path::new(&message.payload.path),
+                    true,
+                    this.fs.clone(),
+                    this.next_entry_id.clone(),
+                    &mut cx,
+                )
+            })?
+            .await?;
+
+        this.update(&mut cx, |this, cx| {
+            let session = this.session.clone();
+            this.worktrees.push(worktree.clone());
+            worktree.update(cx, |worktree, cx| {
+                worktree.observe_updates(0, cx, move |update| {
+                    session.send(update).ok();
+                    futures::future::ready(true)
+                });
+                proto::AddWorktreeResponse {
+                    worktree_id: worktree.id().to_proto(),
+                }
+            })
+        })
+    }
+
+    pub async fn handle_update_buffer(
+        this: Model<Self>,
+        envelope: TypedEnvelope<proto::UpdateBuffer>,
+        mut cx: AsyncAppContext,
+    ) -> Result<proto::Ack> {
+        this.update(&mut cx, |this, cx| {
+            this.buffer_store.update(cx, |buffer_store, cx| {
+                buffer_store.handle_update_buffer(envelope, false, cx)
+            })
+        })?
+    }
+
+    pub async fn handle_save_buffer(
+        this: Model<Self>,
+        envelope: TypedEnvelope<proto::SaveBuffer>,
+        mut cx: AsyncAppContext,
+    ) -> Result<proto::BufferSaved> {
+        let (buffer_store, worktree) = this.update(&mut cx, |this, cx| {
+            let buffer_store = this.buffer_store.clone();
+            let worktree = if let Some(path) = &envelope.payload.new_path {
+                Some(
+                    this.worktree_for_id(WorktreeId::from_proto(path.worktree_id), cx)
+                        .context("worktree does not exist")?,
+                )
+            } else {
+                None
+            };
+            anyhow::Ok((buffer_store, worktree))
+        })??;
+        BufferStore::handle_save_buffer(buffer_store, PROJECT_ID, worktree, envelope, cx).await
+    }
+
+    pub async fn handle_open_buffer_by_path(
+        this: Model<Self>,
+        message: TypedEnvelope<proto::OpenBufferByPath>,
+        mut cx: AsyncAppContext,
+    ) -> Result<proto::OpenBufferResponse> {
+        let worktree_id = WorktreeId::from_proto(message.payload.worktree_id);
+        let (buffer_store, buffer, session) = this.update(&mut cx, |this, cx| {
+            let worktree = this
+                .worktree_for_id(worktree_id, cx)
+                .context("no such worktree")?;
+            let buffer_store = this.buffer_store.clone();
+            let buffer = this.buffer_store.update(cx, |buffer_store, cx| {
+                buffer_store.open_buffer(
+                    ProjectPath {
+                        worktree_id,
+                        path: PathBuf::from(message.payload.path).into(),
+                    },
+                    worktree,
+                    cx,
+                )
+            });
+            anyhow::Ok((buffer_store, buffer, this.session.clone()))
+        })??;
+
+        let buffer = buffer.await?;
+        let buffer_id = buffer.read_with(&cx, |b, _| b.remote_id())?;
+
+        cx.spawn(|mut cx| async move {
+            BufferStore::create_buffer_for_peer(
+                buffer_store,
+                PEER_ID,
+                buffer_id,
+                PROJECT_ID,
+                session,
+                &mut cx,
+            )
+            .await
+        })
+        .detach();
+
+        Ok(proto::OpenBufferResponse {
+            buffer_id: buffer_id.to_proto(),
+        })
+    }
+}
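
All of the handlers above share the shape that `SshSession::add_request_handler` expects: an async function from the model, a typed envelope, and an `AsyncAppContext` to the request's response type. A sketch of what one more handler would look like; `proto::Ping` is used here purely to show the signature and is hypothetical in this context:

```rust
use anyhow::Result;
use gpui::{AsyncAppContext, Model};
use rpc::{proto, TypedEnvelope};

// Sketch only: the signature add_request_handler expects. The real handlers
// registered in HeadlessProject::new are AddWorktree, OpenBufferByPath,
// UpdateBuffer and SaveBuffer.
async fn handle_ping(
    _this: Model<HeadlessProject>,
    _envelope: TypedEnvelope<proto::Ping>,
    _cx: AsyncAppContext,
) -> Result<proto::Ack> {
    Ok(proto::Ack {})
}

// Registered the same way as the others:
//     session.add_request_handler(this.clone(), handle_ping);
```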

crates/remote_server/src/main.rs

@@ -0,0 +1,78 @@
+use fs::RealFs;
+use futures::channel::mpsc;
+use gpui::Context as _;
+use remote::{
+    protocol::{read_message, write_message},
+    SshSession,
+};
+use remote_server::HeadlessProject;
+use smol::{io::AsyncWriteExt, stream::StreamExt as _, Async};
+use std::{env, io, mem, process, sync::Arc};
+
+fn main() {
+    env::set_var("RUST_BACKTRACE", "1");
+    env::set_var("RUST_LOG", "remote=trace");
+
+    let subcommand = std::env::args().nth(1);
+    match subcommand.as_deref() {
+        Some("run") => {}
+        Some("version") => {
+            println!("{}", env!("ZED_PKG_VERSION"));
+            return;
+        }
+        _ => {
+            eprintln!("usage: remote_server <run|version>");
+            process::exit(1);
+        }
+    }
+
+    env_logger::init();
+
+    gpui::App::headless().run(move |cx| {
+        HeadlessProject::init(cx);
+
+        let (incoming_tx, incoming_rx) = mpsc::unbounded();
+        let (outgoing_tx, mut outgoing_rx) = mpsc::unbounded();
+
+        let mut stdin = Async::new(io::stdin()).unwrap();
+        let mut stdout = Async::new(io::stdout()).unwrap();
+
+        let session = SshSession::server(incoming_rx, outgoing_tx, cx);
+        let project = cx.new_model(|cx| {
+            HeadlessProject::new(
+                session.clone(),
+                Arc::new(RealFs::new(Default::default(), None)),
+                cx,
+            )
+        });
+
+        cx.background_executor()
+            .spawn(async move {
+                let mut output_buffer = Vec::new();
+                while let Some(message) = outgoing_rx.next().await {
+                    write_message(&mut stdout, &mut output_buffer, message).await?;
+                    stdout.flush().await?;
+                }
+                anyhow::Ok(())
+            })
+            .detach();
+
+        cx.background_executor()
+            .spawn(async move {
+                let mut input_buffer = Vec::new();
+                loop {
+                    let message = match read_message(&mut stdin, &mut input_buffer).await {
+                        Ok(message) => message,
+                        Err(error) => {
+                            log::warn!("error reading message: {:?}", error);
+                            process::exit(0);
+                        }
+                    };
+                    incoming_tx.unbounded_send(message).ok();
+                }
+            })
+            .detach();
+
+        mem::forget(project);
+    });
+}
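
The two background tasks above are the server half of the stream that the client's select loop in `ssh_session.rs` drives. Beyond that envelope stream, the client-side session also exposes `spawn_process` for running one-off commands on the host over the same control connection; a usage sketch, with the command purely illustrative:

```rust
use anyhow::Result;
use remote::SshSession;

// Sketch only: run a command on the remote host through the existing SSH
// control connection and collect its output.
async fn remote_uname(session: &SshSession) -> Result<String> {
    let child = session.spawn_process("uname -sm".to_string()).await;
    let output = child.output().await?;
    Ok(String::from_utf8_lossy(&output.stdout).into_owned())
}
```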

crates/remote_server/src/remote_editing_tests.rs

@@ -0,0 +1,134 @@
+use crate::headless_project::HeadlessProject;
+use client::{Client, UserStore};
+use clock::FakeSystemClock;
+use fs::{FakeFs, Fs as _};
+use gpui::{Context, Model, TestAppContext};
+use http::FakeHttpClient;
+use language::LanguageRegistry;
+use node_runtime::FakeNodeRuntime;
+use project::Project;
+use remote::SshSession;
+use serde_json::json;
+use settings::SettingsStore;
+use std::{path::Path, sync::Arc};
+
+#[gpui::test]
+async fn test_remote_editing(cx: &mut TestAppContext, server_cx: &mut TestAppContext) {
+    let (client_ssh, server_ssh) = SshSession::fake(cx, server_cx);
+
+    let fs = FakeFs::new(server_cx.executor());
+    fs.insert_tree(
+        "/code",
+        json!({
+            "project1": {
+                "README.md": "# project 1",
+                "src": {
+                    "lib.rs": "fn one() -> usize { 1 }"
+                }
+            },
+            "project2": {
+                "README.md": "# project 2",
+            },
+        }),
+    )
+    .await;
+
+    server_cx.update(HeadlessProject::init);
+    let _headless_project =
+        server_cx.new_model(|cx| HeadlessProject::new(server_ssh, fs.clone(), cx));
+
+    let project = build_project(client_ssh, cx);
+    let (worktree, _) = project
+        .update(cx, |project, cx| {
+            project.find_or_create_worktree("/code/project1", true, cx)
+        })
+        .await
+        .unwrap();
+
+    // The client sees the worktree's contents.
+    cx.executor().run_until_parked();
+    let worktree_id = worktree.read_with(cx, |worktree, _| worktree.id());
+    worktree.update(cx, |worktree, _cx| {
+        assert_eq!(
+            worktree.paths().map(Arc::as_ref).collect::<Vec<_>>(),
+            vec![
+                Path::new("README.md"),
+                Path::new("src"),
+                Path::new("src/lib.rs"),
+            ]
+        );
+    });
+
+    // The user opens a buffer in the remote worktree. The buffer's
+    // contents are loaded from the remote filesystem.
+    let buffer = project
+        .update(cx, |project, cx| {
+            project.open_buffer((worktree_id, Path::new("src/lib.rs")), cx)
+        })
+        .await
+        .unwrap();
+    buffer.update(cx, |buffer, cx| {
+        assert_eq!(buffer.text(), "fn one() -> usize { 1 }");
+        let ix = buffer.text().find('1').unwrap();
+        buffer.edit([(ix..ix + 1, "100")], None, cx);
+    });
+
+    // The user saves the buffer. The new contents are written to the
+    // remote filesystem.
+    project
+        .update(cx, |project, cx| project.save_buffer(buffer, cx))
+        .await
+        .unwrap();
+    assert_eq!(
+        fs.load("/code/project1/src/lib.rs".as_ref()).await.unwrap(),
+        "fn one() -> usize { 100 }"
+    );
+
+    // A new file is created in the remote filesystem. The user
+    // sees the new file.
+    fs.save(
+        "/code/project1/src/main.rs".as_ref(),
+        &"fn main() {}".into(),
+        Default::default(),
+    )
+    .await
+    .unwrap();
+    cx.executor().run_until_parked();
+    worktree.update(cx, |worktree, _cx| {
+        assert_eq!(
+            worktree.paths().map(Arc::as_ref).collect::<Vec<_>>(),
+            vec![
+                Path::new("README.md"),
+                Path::new("src"),
+                Path::new("src/lib.rs"),
+                Path::new("src/main.rs"),
+            ]
+        );
+    });
+}
+
+fn build_project(ssh: Arc<SshSession>, cx: &mut TestAppContext) -> Model<Project> {
+    cx.update(|cx| {
+        let settings_store = SettingsStore::test(cx);
+        cx.set_global(settings_store);
+    });
+
+    let client = cx.update(|cx| {
+        Client::new(
+            Arc::new(FakeSystemClock::default()),
+            FakeHttpClient::with_404_response(),
+            cx,
+        )
+    });
+
+    let node = FakeNodeRuntime::new();
+    let user_store = cx.new_model(|cx| UserStore::new(client.clone(), cx));
+    let languages = Arc::new(LanguageRegistry::test(cx.executor()));
+    let fs = FakeFs::new(cx.executor());
+    cx.update(|cx| {
+        Project::init(&client, cx);
+        language::init(cx);
+    });
+
+    cx.update(|cx| Project::ssh(ssh, client, node, user_store, languages, fs, cx))
+}

crates/worktree/src/worktree.rs

@@ -13,7 +13,6 @@ use futures::{
         mpsc::{self, UnboundedSender},
         oneshot,
     },
-    future::BoxFuture,
     select_biased,
     stream::select,
     task::Poll,
@@ -129,13 +128,11 @@ struct ScanRequest {
 
 pub struct RemoteWorktree {
     snapshot: Snapshot,
-    background_snapshot: Arc<Mutex<Snapshot>>,
+    background_snapshot: Arc<Mutex<(Snapshot, Vec<proto::UpdateWorktree>)>>,
     project_id: u64,
     client: AnyProtoClient,
     updates_tx: Option<UnboundedSender<proto::UpdateWorktree>>,
-    update_observer: Arc<
-        Mutex<Option<Box<dyn Send + FnMut(proto::UpdateWorktree) -> BoxFuture<'static, bool>>>>,
-    >,
+    update_observer: Option<mpsc::UnboundedSender<proto::UpdateWorktree>>,
     snapshot_subscriptions: VecDeque<(usize, oneshot::Sender<()>)>,
     replica_id: ReplicaId,
     visible: bool,
@@ -463,10 +460,9 @@ impl Worktree {
                 Arc::from(PathBuf::from(worktree.abs_path)),
             );
 
-            let (updates_tx, mut updates_rx) = mpsc::unbounded();
-            let background_snapshot = Arc::new(Mutex::new(snapshot.clone()));
+            let background_snapshot = Arc::new(Mutex::new((snapshot.clone(), Vec::new())));
+            let (background_updates_tx, mut background_updates_rx) = mpsc::unbounded();
             let (mut snapshot_updated_tx, mut snapshot_updated_rx) = watch::channel();
-            let update_observer = Arc::new(Mutex::new(None));
 
             let worktree = RemoteWorktree {
                 client,
@@ -474,36 +470,45 @@ impl Worktree {
                 replica_id,
                 snapshot,
                 background_snapshot: background_snapshot.clone(),
-                update_observer: update_observer.clone(),
-                updates_tx: Some(updates_tx),
+                updates_tx: Some(background_updates_tx),
+                update_observer: None,
                 snapshot_subscriptions: Default::default(),
                 visible: worktree.visible,
                 disconnected: false,
             };
 
+            // Apply updates to a separate snapshot in a background task, then
+            // send them to a foreground task which updates the model.
             cx.background_executor()
                 .spawn(async move {
-                    while let Some(update) = updates_rx.next().await {
-                        let call = update_observer
-                            .lock()
-                            .as_mut()
-                            .map(|observer| (observer)(update.clone()));
-                        if let Some(call) = call {
-                            call.await;
-                        }
-                        if let Err(error) = background_snapshot.lock().apply_remote_update(update) {
-                            log::error!("error applying worktree update: {}", error);
+                    while let Some(update) = background_updates_rx.next().await {
+                        {
+                            let mut lock = background_snapshot.lock();
+                            if let Err(error) = lock.0.apply_remote_update(update.clone()) {
+                                log::error!("error applying worktree update: {}", error);
+                            }
+                            lock.1.push(update);
                         }
                         snapshot_updated_tx.send(()).await.ok();
                     }
                 })
                 .detach();
 
+            // On the foreground task, update to the latest snapshot and notify
+            // any update observer of all updates that led to that snapshot.
             cx.spawn(|this, mut cx| async move {
                 while (snapshot_updated_rx.recv().await).is_some() {
                     this.update(&mut cx, |this, cx| {
                         let this = this.as_remote_mut().unwrap();
-                        this.snapshot = this.background_snapshot.lock().clone();
+                        {
+                            let mut lock = this.background_snapshot.lock();
+                            this.snapshot = lock.0.clone();
+                            if let Some(tx) = &this.update_observer {
+                                for update in lock.1.drain(..) {
+                                    tx.unbounded_send(update).ok();
+                                }
+                            }
+                        };
                         cx.emit(Event::UpdatedEntries(Arc::from([])));
                         cx.notify();
                         while let Some((scan_id, _)) = this.snapshot_subscriptions.front() {
@@ -631,11 +636,7 @@ impl Worktree {
     {
         match self {
             Worktree::Local(this) => this.observe_updates(project_id, cx, callback),
-            Worktree::Remote(this) => {
-                this.update_observer
-                    .lock()
-                    .replace(Box::new(move |update| callback(update).boxed()));
-            }
+            Worktree::Remote(this) => this.observe_updates(project_id, cx, callback),
         }
     }
 
@@ -645,7 +646,7 @@ impl Worktree {
                 this.update_observer.take();
             }
             Worktree::Remote(this) => {
-                this.update_observer.lock().take();
+                this.update_observer.take();
             }
         }
     }
@@ -654,7 +655,7 @@ impl Worktree {
     pub fn has_update_observer(&self) -> bool {
         match self {
             Worktree::Local(this) => this.update_observer.is_some(),
-            Worktree::Remote(this) => this.update_observer.lock().is_some(),
+            Worktree::Remote(this) => this.update_observer.is_some(),
         }
     }
 
@@ -739,24 +740,7 @@ impl Worktree {
     ) -> Option<Task<Result<()>>> {
         match self {
             Worktree::Local(this) => this.delete_entry(entry_id, trash, cx),
-            Worktree::Remote(this) => {
-                let response = this.client.request(proto::DeleteProjectEntry {
-                    project_id: this.project_id,
-                    entry_id: entry_id.to_proto(),
-                    use_trash: trash,
-                });
-                Some(cx.spawn(move |this, mut cx| async move {
-                    let response = response.await?;
-                    this.update(&mut cx, move |worktree, cx| {
-                        worktree.as_remote_mut().unwrap().delete_entry(
-                            entry_id,
-                            response.worktree_scan_id as usize,
-                            cx,
-                        )
-                    })?
-                    .await
-                }))
-            }
+            Worktree::Remote(this) => this.delete_entry(entry_id, trash, cx),
         }
     }
 
@@ -769,36 +753,7 @@ impl Worktree {
         let new_path = new_path.into();
         match self {
             Worktree::Local(this) => this.rename_entry(entry_id, new_path, cx),
-            Worktree::Remote(this) => {
-                let response = this.client.request(proto::RenameProjectEntry {
-                    project_id: this.project_id,
-                    entry_id: entry_id.to_proto(),
-                    new_path: new_path.to_string_lossy().into(),
-                });
-                cx.spawn(move |this, mut cx| async move {
-                    let response = response.await?;
-                    match response.entry {
-                        Some(entry) => this
-                            .update(&mut cx, |this, cx| {
-                                this.as_remote_mut().unwrap().insert_entry(
-                                    entry,
-                                    response.worktree_scan_id as usize,
-                                    cx,
-                                )
-                            })?
-                            .await
-                            .map(CreatedEntry::Included),
-                        None => {
-                            let abs_path = this.update(&mut cx, |worktree, _| {
-                                worktree
-                                    .absolutize(&new_path)
-                                    .with_context(|| format!("absolutizing {new_path:?}"))
-                            })??;
-                            Ok(CreatedEntry::Excluded { abs_path })
-                        }
-                    }
-                })
-            }
+            Worktree::Remote(this) => this.rename_entry(entry_id, new_path, cx),
         }
     }
 
@@ -1825,6 +1780,40 @@ impl RemoteWorktree {
         }
     }
 
+    fn observe_updates<F, Fut>(
+        &mut self,
+        project_id: u64,
+        cx: &mut ModelContext<Worktree>,
+        callback: F,
+    ) where
+        F: 'static + Send + Fn(proto::UpdateWorktree) -> Fut,
+        Fut: 'static + Send + Future<Output = bool>,
+    {
+        let (tx, mut rx) = mpsc::unbounded();
+        let initial_update = self
+            .snapshot
+            .build_initial_update(project_id, self.id().to_proto());
+        self.updates_tx = Some(tx);
+        cx.spawn(|this, mut cx| async move {
+            let mut update = initial_update;
+            loop {
+                if !callback(update).await {
+                    break;
+                }
+                if let Some(next_update) = rx.next().await {
+                    update = next_update;
+                } else {
+                    break;
+                }
+            }
+            this.update(&mut cx, |this, _| {
+                let this = this.as_remote_mut().unwrap();
+                this.updates_tx.take();
+            })
+        })
+        .detach();
+    }
+
     fn observed_snapshot(&self, scan_id: usize) -> bool {
         self.completed_scan_id >= scan_id
     }
@@ -1861,7 +1850,7 @@ impl RemoteWorktree {
             wait_for_snapshot.await?;
             this.update(&mut cx, |worktree, _| {
                 let worktree = worktree.as_remote_mut().unwrap();
-                let mut snapshot = worktree.background_snapshot.lock();
+                let snapshot = &mut worktree.background_snapshot.lock().0;
                 let entry = snapshot.insert_entry(entry);
                 worktree.snapshot = snapshot.clone();
                 entry
@@ -1871,20 +1860,67 @@ impl RemoteWorktree {
 
     fn delete_entry(
         &mut self,
-        id: ProjectEntryId,
-        scan_id: usize,
+        entry_id: ProjectEntryId,
+        trash: bool,
         cx: &mut ModelContext<Worktree>,
-    ) -> Task<Result<()>> {
-        let wait_for_snapshot = self.wait_for_snapshot(scan_id);
+    ) -> Option<Task<Result<()>>> {
+        let response = self.client.request(proto::DeleteProjectEntry {
+            project_id: self.project_id,
+            entry_id: entry_id.to_proto(),
+            use_trash: trash,
+        });
+        Some(cx.spawn(move |this, mut cx| async move {
+            let response = response.await?;
+            let scan_id = response.worktree_scan_id as usize;
+
+            this.update(&mut cx, move |this, _| {
+                this.as_remote_mut().unwrap().wait_for_snapshot(scan_id)
+            })?
+            .await?;
+
+            this.update(&mut cx, |this, _| {
+                let this = this.as_remote_mut().unwrap();
+                let snapshot = &mut this.background_snapshot.lock().0;
+                snapshot.delete_entry(entry_id);
+                this.snapshot = snapshot.clone();
+            })
+        }))
+    }
+
+    fn rename_entry(
+        &mut self,
+        entry_id: ProjectEntryId,
+        new_path: impl Into<Arc<Path>>,
+        cx: &mut ModelContext<Worktree>,
+    ) -> Task<Result<CreatedEntry>> {
+        let new_path = new_path.into();
+        let response = self.client.request(proto::RenameProjectEntry {
+            project_id: self.project_id,
+            entry_id: entry_id.to_proto(),
+            new_path: new_path.to_string_lossy().into(),
+        });
         cx.spawn(move |this, mut cx| async move {
-            wait_for_snapshot.await?;
-            this.update(&mut cx, |worktree, _| {
-                let worktree = worktree.as_remote_mut().unwrap();
-                let mut snapshot = worktree.background_snapshot.lock();
-                snapshot.delete_entry(id);
-                worktree.snapshot = snapshot.clone();
-            })?;
-            Ok(())
+            let response = response.await?;
+            match response.entry {
+                Some(entry) => this
+                    .update(&mut cx, |this, cx| {
+                        this.as_remote_mut().unwrap().insert_entry(
+                            entry,
+                            response.worktree_scan_id as usize,
+                            cx,
+                        )
+                    })?
+                    .await
+                    .map(CreatedEntry::Included),
+                None => {
+                    let abs_path = this.update(&mut cx, |worktree, _| {
+                        worktree
+                            .absolutize(&new_path)
+                            .with_context(|| format!("absolutizing {new_path:?}"))
+                    })??;
+                    Ok(CreatedEntry::Excluded { abs_path })
+                }
+            }
         })
     }
 }
@@ -1912,6 +1948,35 @@ impl Snapshot {
         &self.abs_path
     }
 
+    fn build_initial_update(&self, project_id: u64, worktree_id: u64) -> proto::UpdateWorktree {
+        let mut updated_entries = self
+            .entries_by_path
+            .iter()
+            .map(proto::Entry::from)
+            .collect::<Vec<_>>();
+        updated_entries.sort_unstable_by_key(|e| e.id);
+
+        let mut updated_repositories = self
+            .repository_entries
+            .values()
+            .map(proto::RepositoryEntry::from)
+            .collect::<Vec<_>>();
+        updated_repositories.sort_unstable_by_key(|e| e.work_directory_id);
+
+        proto::UpdateWorktree {
+            project_id,
+            worktree_id,
+            abs_path: self.abs_path().to_string_lossy().into(),
+            root_name: self.root_name().to_string(),
+            updated_entries,
+            removed_entries: Vec::new(),
+            scan_id: self.scan_id as u64,
+            is_last_update: self.completed_scan_id == self.scan_id,
+            updated_repositories,
+            removed_repositories: Vec::new(),
+        }
+    }
+
     pub fn absolutize(&self, path: &Path) -> Result<PathBuf> {
         if path
             .components()
@@ -1978,6 +2043,12 @@ impl Snapshot {
     }
 
     pub(crate) fn apply_remote_update(&mut self, mut update: proto::UpdateWorktree) -> Result<()> {
+        log::trace!(
+            "applying remote worktree update. {} entries updated, {} removed",
+            update.updated_entries.len(),
+            update.removed_entries.len()
+        );
+
         let mut entries_by_path_edits = Vec::new();
         let mut entries_by_id_edits = Vec::new();
 
@@ -2372,35 +2443,6 @@ impl LocalSnapshot {
         }
     }
 
-    fn build_initial_update(&self, project_id: u64, worktree_id: u64) -> proto::UpdateWorktree {
-        let mut updated_entries = self
-            .entries_by_path
-            .iter()
-            .map(proto::Entry::from)
-            .collect::<Vec<_>>();
-        updated_entries.sort_unstable_by_key(|e| e.id);
-
-        let mut updated_repositories = self
-            .repository_entries
-            .values()
-            .map(proto::RepositoryEntry::from)
-            .collect::<Vec<_>>();
-        updated_repositories.sort_unstable_by_key(|e| e.work_directory_id);
-
-        proto::UpdateWorktree {
-            project_id,
-            worktree_id,
-            abs_path: self.abs_path().to_string_lossy().into(),
-            root_name: self.root_name().to_string(),
-            updated_entries,
-            removed_entries: Vec::new(),
-            scan_id: self.scan_id as u64,
-            is_last_update: self.completed_scan_id == self.scan_id,
-            updated_repositories,
-            removed_repositories: Vec::new(),
-        }
-    }
-
     fn insert_entry(&mut self, mut entry: Entry, fs: &dyn Fs) -> Entry {
         if entry.is_file() && entry.path.file_name() == Some(&GITIGNORE) {
             let abs_path = self.abs_path.join(&entry.path);
@@ -2999,9 +3041,9 @@ impl language::File for File {
         self
     }
 
-    fn to_proto(&self) -> rpc::proto::File {
+    fn to_proto(&self, cx: &AppContext) -> rpc::proto::File {
         rpc::proto::File {
-            worktree_id: self.worktree.entity_id().as_u64(),
+            worktree_id: self.worktree.read(cx).id().to_proto(),
             entry_id: self.entry_id.map(|id| id.to_proto()),
             path: self.path.to_string_lossy().into(),
             mtime: self.mtime.map(|time| time.into()),
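
With the worktree changes above, `observe_updates` is implemented for both local and remote worktrees, so callers can stream `proto::UpdateWorktree` messages without caring which variant they hold; this is how the headless server forwards worktree state back over the SSH session. A sketch of the calling convention (project id 0 matches the headless server's convention):

```rust
use gpui::{AppContext, Model};
use rpc::proto::AnyProtoClient;
use worktree::Worktree;

// Sketch only: forward a worktree's updates over a proto client. Returning
// `true` from the callback keeps the subscription alive.
fn forward_updates(worktree: &Model<Worktree>, client: AnyProtoClient, cx: &mut AppContext) {
    worktree.update(cx, |worktree, cx| {
        worktree.observe_updates(0, cx, move |update| {
            client.send(update).ok();
            futures::future::ready(true)
        });
    });
}
```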

crates/zed/Cargo.toml

@@ -79,6 +79,7 @@ quick_action_bar.workspace = true
 recent_projects.workspace = true
 dev_server_projects.workspace = true
 release_channel.workspace = true
+remote.workspace = true
 repl.workspace = true
 rope.workspace = true
 search.workspace = true
@@ -96,6 +97,7 @@ telemetry_events.workspace = true
 terminal_view.workspace = true
 theme.workspace = true
 theme_selector.workspace = true
+url.workspace = true
 urlencoding = "2.1.2"
 ui.workspace = true
 util.workspace = true

crates/zed/src/main.rs

@@ -46,7 +46,7 @@ use welcome::{show_welcome_view, BaseKeymap, FIRST_OPEN};
 use workspace::{AppState, WorkspaceSettings, WorkspaceStore};
 use zed::{
     app_menus, build_window_options, handle_cli_connection, handle_keymap_file_changes,
-    initialize_workspace, open_paths_with_positions, OpenListener, OpenRequest,
+    initialize_workspace, open_paths_with_positions, open_ssh_paths, OpenListener, OpenRequest,
 };
 
 use crate::zed::inline_completion_registry;
@@ -520,6 +520,21 @@ fn handle_open_request(request: OpenRequest, app_state: Arc<AppState>, cx: &mut
         return;
     };
 
+    if let Some(connection_info) = request.ssh_connection {
+        cx.spawn(|mut cx| async move {
+            open_ssh_paths(
+                connection_info,
+                request.open_paths,
+                app_state,
+                workspace::OpenOptions::default(),
+                &mut cx,
+            )
+            .await
+        })
+        .detach_and_log_err(cx);
+        return;
+    }
+
     let mut task = None;
     if !request.open_paths.is_empty() {
         let app_state = app_state.clone();
@@ -890,7 +905,10 @@ fn parse_url_arg(arg: &str, cx: &AppContext) -> Result<String> {
     match std::fs::canonicalize(Path::new(&arg)) {
         Ok(path) => Ok(format!("file://{}", path.to_string_lossy())),
         Err(error) => {
-            if arg.starts_with("file://") || arg.starts_with("zed-cli://") {
+            if arg.starts_with("file://")
+                || arg.starts_with("zed-cli://")
+                || arg.starts_with("ssh://")
+            {
                 Ok(arg.into())
             } else if let Some(_) = parse_zed_link(&arg, cx) {
                 Ok(arg.into())

crates/zed/src/zed.rs

@@ -5,6 +5,7 @@ pub(crate) mod linux_prompts;
 #[cfg(not(target_os = "linux"))]
 pub(crate) mod only_instance;
 mod open_listener;
+mod password_prompt;
 
 pub use app_menus::*;
 use breadcrumbs::Breadcrumbs;

crates/zed/src/zed/open_listener.rs

@@ -1,4 +1,6 @@
+use crate::{handle_open_request, init_headless, init_ui, zed::password_prompt::PasswordPrompt};
 use anyhow::{anyhow, Context, Result};
+use auto_update::AutoUpdater;
 use cli::{ipc, IpcHandshake};
 use cli::{ipc::IpcSender, CliRequest, CliResponse};
 use client::parse_zed_link;
@@ -9,8 +11,12 @@ use editor::Editor;
 use futures::channel::mpsc::{UnboundedReceiver, UnboundedSender};
 use futures::channel::{mpsc, oneshot};
 use futures::{FutureExt, SinkExt, StreamExt};
-use gpui::{AppContext, AsyncAppContext, Global, WindowHandle};
+use gpui::{
+    AppContext, AsyncAppContext, Global, SemanticVersion, VisualContext as _, WindowHandle,
+};
 use language::{Bias, Point};
+use release_channel::{AppVersion, ReleaseChannel};
+use remote::SshPlatform;
 use std::path::Path;
 use std::path::PathBuf;
 use std::sync::Arc;
@@ -22,14 +28,21 @@ use welcome::{show_welcome_view, FIRST_OPEN};
 use workspace::item::ItemHandle;
 use workspace::{AppState, Workspace};
 
-use crate::{handle_open_request, init_headless, init_ui};
-
 #[derive(Default, Debug)]
 pub struct OpenRequest {
     pub cli_connection: Option<(mpsc::Receiver<CliRequest>, IpcSender<CliResponse>)>,
     pub open_paths: Vec<PathLikeWithPosition<PathBuf>>,
     pub open_channel_notes: Vec<(u64, Option<String>)>,
     pub join_channel: Option<u64>,
+    pub ssh_connection: Option<SshConnectionInfo>,
+}
+
+#[derive(Debug, PartialEq, Eq)]
+pub struct SshConnectionInfo {
+    pub username: String,
+    pub password: Option<String>,
+    pub host: String,
+    pub port: u16,
 }
 
 impl OpenRequest {
@@ -42,6 +55,8 @@ impl OpenRequest {
                 this.parse_file_path(file)
             } else if let Some(file) = url.strip_prefix("zed://file") {
                 this.parse_file_path(file)
+            } else if url.starts_with("ssh://") {
+                this.parse_ssh_file_path(&url)?
             } else if let Some(request_path) = parse_zed_link(&url, cx) {
                 this.parse_request_path(request_path).log_err();
             } else {
@@ -62,6 +77,37 @@ impl OpenRequest {
         }
     }
 
+    fn parse_ssh_file_path(&mut self, file: &str) -> Result<()> {
+        let url = url::Url::parse(file)?;
+        let host = url
+            .host()
+            .ok_or_else(|| anyhow!("missing host in ssh url: {}", file))?
+            .to_string();
+        let username = url.username().to_string();
+        if username.is_empty() {
+            return Err(anyhow!("missing username in ssh url: {}", file));
+        }
+        let password = url.password().map(|s| s.to_string());
+        let port = url.port().unwrap_or(22);
+        if !self.open_paths.is_empty() {
+            return Err(anyhow!("cannot open both local and ssh paths"));
+        }
+        let connection = SshConnectionInfo {
+            username,
+            password,
+            host,
+            port,
+        };
+        if let Some(ssh_connection) = &self.ssh_connection {
+            if *ssh_connection != connection {
+                return Err(anyhow!("cannot open multiple ssh connections"));
+            }
+        }
+        self.ssh_connection = Some(connection);
+        self.parse_file_path(url.path());
+        Ok(())
+    }
+
     fn parse_request_path(&mut self, request_path: &str) -> Result<()> {
         let mut parts = request_path.split('/');
         if parts.next() == Some("channel") {
@@ -109,6 +155,95 @@ impl OpenListener {
     }
 }
 
+struct SshClientDelegate {
+    window: WindowHandle<Workspace>,
+    known_password: Option<String>,
+}
+
+impl remote::SshClientDelegate for SshClientDelegate {
+    fn ask_password(
+        &self,
+        prompt: String,
+        cx: &mut AsyncAppContext,
+    ) -> oneshot::Receiver<Result<String>> {
+        let (tx, rx) = oneshot::channel();
+        let mut known_password = self.known_password.clone();
+        self.window
+            .update(cx, |workspace, cx| {
+                cx.activate_window();
+                if let Some(password) = known_password.take() {
+                    tx.send(Ok(password)).ok();
+                } else {
+                    workspace.toggle_modal(cx, |cx| PasswordPrompt::new(prompt, tx, cx));
+                }
+            })
+            .unwrap();
+        rx
+    }
+
+    fn get_server_binary(
+        &self,
+        platform: SshPlatform,
+        cx: &mut AsyncAppContext,
+    ) -> oneshot::Receiver<Result<(PathBuf, SemanticVersion)>> {
+        let (tx, rx) = oneshot::channel();
+        cx.spawn(|mut cx| async move {
+            tx.send(get_server_binary(platform, &mut cx).await).ok();
+        })
+        .detach();
+        rx
+    }
+
+    fn remote_server_binary_path(&self, cx: &mut AsyncAppContext) -> Result<PathBuf> {
+        let release_channel = cx.update(|cx| ReleaseChannel::global(cx))?;
+        Ok(format!(".local/zed-remote-server-{}", release_channel.dev_name()).into())
+    }
+}
+
+async fn get_server_binary(
+    platform: SshPlatform,
+    cx: &mut AsyncAppContext,
+) -> Result<(PathBuf, SemanticVersion)> {
+    let (version, release_channel) =
+        cx.update(|cx| (AppVersion::global(cx), ReleaseChannel::global(cx)))?;
+
+    // In dev mode, build the remote server binary from source
+    #[cfg(debug_assertions)]
+    if crate::stdout_is_a_pty()
+        && release_channel == ReleaseChannel::Dev
+        && platform.arch == std::env::consts::ARCH
+        && platform.os == std::env::consts::OS
+    {
+        use smol::process::{Command, Stdio};
+
+        log::info!("building remote server binary from source");
+        run_cmd(Command::new("cargo").args(["build", "--package", "remote_server"])).await?;
+        run_cmd(Command::new("strip").args(["target/debug/remote_server"])).await?;
+        run_cmd(Command::new("gzip").args(["-9", "-f", "target/debug/remote_server"])).await?;
+
+        let path = std::env::current_dir()?.join("target/debug/remote_server.gz");
+        return Ok((path, version));
+
+        async fn run_cmd(command: &mut Command) -> Result<()> {
+            let output = command.stderr(Stdio::inherit()).output().await?;
+            if !output.status.success() {
+                Err(anyhow!("failed to run command: {:?}", command))?;
+            }
+            Ok(())
+        }
+    }
+
+    let binary_path = AutoUpdater::get_latest_remote_server_release(
+        platform.os,
+        platform.arch,
+        release_channel,
+        cx,
+    )
+    .await?;
+
+    Ok((binary_path, version))
+}
+
 #[cfg(target_os = "linux")]
 pub fn listen_for_cli_connections(opener: OpenListener) -> Result<()> {
     use release_channel::RELEASE_CHANNEL_NAME;
@@ -160,6 +295,72 @@ fn connect_to_cli(
     Ok((async_request_rx, response_tx))
 }
 
+pub async fn open_ssh_paths(
+    connection_info: SshConnectionInfo,
+    paths: Vec<PathLikeWithPosition<PathBuf>>,
+    app_state: Arc<AppState>,
+    _open_options: workspace::OpenOptions,
+    cx: &mut AsyncAppContext,
+) -> Result<()> {
+    let options = cx.update(|cx| (app_state.build_window_options)(None, cx))?;
+    let window = cx.open_window(options, |cx| {
+        let project = project::Project::local(
+            app_state.client.clone(),
+            app_state.node_runtime.clone(),
+            app_state.user_store.clone(),
+            app_state.languages.clone(),
+            app_state.fs.clone(),
+            cx,
+        );
+        cx.new_view(|cx| Workspace::new(None, project, app_state.clone(), cx))
+    })?;
+
+    let session = remote::SshSession::client(
+        connection_info.username,
+        connection_info.host,
+        connection_info.port,
+        Arc::new(SshClientDelegate {
+            window,
+            known_password: connection_info.password,
+        }),
+        cx,
+    )
+    .await;
+
+    if session.is_err() {
+        window.update(cx, |_, cx| cx.remove_window()).ok();
+    }
+
+    let session = session?;
+
+    let project = cx.update(|cx| {
+        project::Project::ssh(
+            session,
+            app_state.client.clone(),
+            app_state.node_runtime.clone(),
+            app_state.user_store.clone(),
+            app_state.languages.clone(),
+            app_state.fs.clone(),
+            cx,
+        )
+    })?;
+
+    for path in paths {
+        project
+            .update(cx, |project, cx| {
+                project.find_or_create_worktree(&path.path_like, true, cx)
+            })?
+            .await?;
+    }
+
+    window.update(cx, |_, cx| {
+        cx.replace_root_view(|cx| Workspace::new(None, project, app_state, cx))
+    })?;
+    window.update(cx, |_, cx| cx.activate_window())?;
+
+    Ok(())
+}
+
 pub async fn open_paths_with_positions(
     path_likes: &Vec<PathLikeWithPosition<PathBuf>>,
     app_state: Arc<AppState>,

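The delegate's `remote_server_binary_path` above names the uploaded binary per release channel (`.local/zed-remote-server-<channel>`, presumably resolved relative to the remote user's home directory), so dev, nightly, preview, and stable servers don't overwrite one another on the host. A minimal sketch of that naming scheme; the `Channel` enum and its `dev_name` values are assumptions standing in for Zed's real `ReleaseChannel`:

```rust
use std::path::PathBuf;

// Hypothetical stand-in for Zed's ReleaseChannel; the channel names are assumptions.
#[derive(Clone, Copy)]
enum Channel {
    Dev,
    Nightly,
    Preview,
    Stable,
}

impl Channel {
    fn dev_name(self) -> &'static str {
        match self {
            Channel::Dev => "dev",
            Channel::Nightly => "nightly",
            Channel::Preview => "preview",
            Channel::Stable => "stable",
        }
    }
}

// Mirrors `remote_server_binary_path`: one upload location per release channel.
fn remote_server_binary_path(channel: Channel) -> PathBuf {
    format!(".local/zed-remote-server-{}", channel.dev_name()).into()
}

fn main() {
    for channel in [Channel::Dev, Channel::Nightly, Channel::Preview, Channel::Stable] {
        println!("{}", remote_server_binary_path(channel).display());
    }
}
```
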
crates/zed/src/zed/password_prompt.rs

@@ -0,0 +1,69 @@
+use anyhow::Result;
+use editor::Editor;
+use futures::channel::oneshot;
+use gpui::{
+    px, DismissEvent, EventEmitter, FocusableView, ParentElement as _, Render, SharedString, View,
+};
+use ui::{v_flex, InteractiveElement, Label, Styled, StyledExt as _, ViewContext, VisualContext};
+use workspace::ModalView;
+
+pub struct PasswordPrompt {
+    prompt: SharedString,
+    tx: Option<oneshot::Sender<Result<String>>>,
+    editor: View<Editor>,
+}
+
+impl PasswordPrompt {
+    pub fn new(
+        prompt: String,
+        tx: oneshot::Sender<Result<String>>,
+        cx: &mut ViewContext<Self>,
+    ) -> Self {
+        Self {
+            prompt: SharedString::from(prompt),
+            tx: Some(tx),
+            editor: cx.new_view(|cx| {
+                let mut editor = Editor::single_line(cx);
+                editor.set_redact_all(true, cx);
+                editor
+            }),
+        }
+    }
+
+    fn confirm(&mut self, _: &menu::Confirm, cx: &mut ViewContext<Self>) {
+        let text = self.editor.read(cx).text(cx);
+        if let Some(tx) = self.tx.take() {
+            tx.send(Ok(text)).ok();
+        };
+        cx.emit(DismissEvent)
+    }
+
+    fn dismiss(&mut self, _: &menu::Cancel, cx: &mut ViewContext<Self>) {
+        cx.emit(DismissEvent)
+    }
+}
+
+impl Render for PasswordPrompt {
+    fn render(&mut self, cx: &mut ui::ViewContext<Self>) -> impl ui::IntoElement {
+        v_flex()
+            .key_context("PasswordPrompt")
+            .elevation_3(cx)
+            .p_4()
+            .gap_2()
+            .on_action(cx.listener(Self::dismiss))
+            .on_action(cx.listener(Self::confirm))
+            .w(px(400.))
+            .child(Label::new(self.prompt.clone()))
+            .child(self.editor.clone())
+    }
+}
+
+impl FocusableView for PasswordPrompt {
+    fn focus_handle(&self, cx: &gpui::AppContext) -> gpui::FocusHandle {
+        self.editor.focus_handle(cx)
+    }
+}
+
+impl EventEmitter<DismissEvent> for PasswordPrompt {}
+
+impl ModalView for PasswordPrompt {}

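The prompt hands the entered password back through the `oneshot::Sender` it was constructed with; keeping the sender in an `Option` and `take()`-ing it in `confirm` guarantees the password is sent at most once, and dismissing the modal ends up dropping the sender (via the view being dropped), which the awaiting SSH delegate observes as a canceled channel. A self-contained sketch of that handshake using plain `futures` types; the `Prompt` struct and its error type are hypothetical stand-ins:

```rust
use futures::channel::oneshot;
use futures::executor::block_on;

// Hypothetical stand-in for the modal: the sender lives in an Option so the
// password can be delivered at most once.
struct Prompt {
    tx: Option<oneshot::Sender<Result<String, String>>>,
}

impl Prompt {
    fn confirm(&mut self, text: &str) {
        if let Some(tx) = self.tx.take() {
            // Ignore the error if the receiving side has already gone away.
            tx.send(Ok(text.to_string())).ok();
        }
    }

    fn dismiss(&mut self) {
        // Dropping the sender without sending resolves the receiver as canceled.
        self.tx.take();
    }
}

fn main() {
    // Confirm path: the awaiting side receives the entered password.
    let (tx, rx) = oneshot::channel();
    let mut prompt = Prompt { tx: Some(tx) };
    prompt.confirm("hunter2");
    assert_eq!(block_on(rx).unwrap(), Ok("hunter2".to_string()));

    // Cancel path: the awaiting side sees the channel was dropped.
    let (tx, rx) = oneshot::channel();
    let mut prompt = Prompt { tx: Some(tx) };
    prompt.dismiss();
    assert!(block_on(rx).is_err());
}
```
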
script/bundle-linux

@@ -43,12 +43,13 @@ script/generate-licenses
 
 # Build binary in release mode
 export RUSTFLAGS="-C link-args=-Wl,--disable-new-dtags,-rpath,\$ORIGIN/../lib"
-cargo build --release --target "${target_triple}" --package zed --package cli
+cargo build --release --target "${target_triple}" --package zed --package cli --package remote_server
 
 # Strip the binary of all debug symbols
 # Later, we probably want to do something like this: https://github.com/GabrielMajeri/separate-symbols
 strip --strip-debug "${target_dir}/${target_triple}/release/zed"
 strip --strip-debug "${target_dir}/${target_triple}/release/cli"
+strip --strip-debug "${target_dir}/${target_triple}/release/remote_server"
 
 suffix=""
 if [ "$channel" != "stable" ]; then
@@ -104,7 +105,8 @@ envsubst < "crates/zed/resources/zed.desktop.in" > "${zed_dir}/share/application
 cp "assets/licenses.md" "${zed_dir}/licenses.md"
 
 # Create archive out of everything that's in the temp directory
-target="linux-$(uname -m)"
+arch=$(uname -m)
+target="linux-${arch}"
 if  [[ "$channel" == "dev" ]]; then
   archive="zed-${commit}-${target}.tar.gz"
 else
@@ -115,3 +117,5 @@ rm -rf "${archive}"
 remove_match="zed(-[a-zA-Z0-9]+)?-linux-$(uname -m)\.tar\.gz"
 ls "${target_dir}/release" | grep -E ${remove_match} | xargs -d "\n" -I {} rm -f "${target_dir}/release/{}" || true
 tar -czvf "${target_dir}/release/$archive" -C ${temp_dir} "zed$suffix.app"
+
+gzip --stdout --best "${target_dir}/${target_triple}/release/remote_server" > "${target_dir}/zed-remote-server-linux-${arch}.gz"

script/bundle-mac

@@ -10,8 +10,10 @@ local_arch=false
 local_only=false
 local_install=false
 bundle_name=""
+can_code_sign=false
 
 # This must match the team in the provisioning profile.
 APPLE_NOTORIZATION_TEAM="MQ55VZLNZQ"
+IDENTITY="Zed Industries, Inc."
 
 # Function for displaying help info
@@ -78,10 +80,10 @@ local_target_triple=${host_line#*: }
 
 if [ "$local_arch" = true ]; then
     echo "Building for local target only."
-    cargo build ${build_flag} --package zed --package cli
+    cargo build ${build_flag} --package zed --package cli --package remote_server
 else
     echo "Compiling zed binaries"
-    cargo build ${build_flag} --package zed --package cli --target aarch64-apple-darwin --target x86_64-apple-darwin
+    cargo build ${build_flag} --package zed --package cli --package remote_server --target aarch64-apple-darwin --target x86_64-apple-darwin
 fi
 
 echo "Creating application bundle"
@@ -108,6 +110,27 @@ mv Cargo.toml.backup Cargo.toml
 popd
 echo "Bundled ${app_path}"
 
+if [[ -n "${MACOS_CERTIFICATE:-}" && -n "${MACOS_CERTIFICATE_PASSWORD:-}" && -n "${APPLE_NOTARIZATION_USERNAME:-}" && -n "${APPLE_NOTARIZATION_PASSWORD:-}" ]]; then
+    can_code_sign=true
+
+    echo "Setting up keychain for code signing..."
+    security create-keychain -p "$MACOS_CERTIFICATE_PASSWORD" zed.keychain || echo ""
+    security default-keychain -s zed.keychain
+    security unlock-keychain -p "$MACOS_CERTIFICATE_PASSWORD" zed.keychain
+    echo "$MACOS_CERTIFICATE" | base64 --decode > /tmp/zed-certificate.p12
+    security import /tmp/zed-certificate.p12 -k zed.keychain -P "$MACOS_CERTIFICATE_PASSWORD" -T /usr/bin/codesign
+    rm /tmp/zed-certificate.p12
+    security set-key-partition-list -S apple-tool:,apple:,codesign: -s -k "$MACOS_CERTIFICATE_PASSWORD" zed.keychain
+
+    function cleanup() {
+        echo "Cleaning up keychain"
+        security default-keychain -s login.keychain
+        security delete-keychain zed.keychain
+    }
+
+    trap cleanup EXIT
+fi
+
 GIT_VERSION="v2.43.3"
 GIT_VERSION_SHA="fa29823"
 
@@ -165,7 +188,7 @@ function prepare_binaries() {
     local architecture=$1
     local app_path=$2
 
-    echo "Unpacking dSYMs for $architecture"
+    echo "Unpacking dSYMs for $ architecture"
     dsymutil --flat target/${architecture}/${target_dir}/Zed
     version="$(cargo metadata --no-deps --manifest-path crates/zed/Cargo.toml --offline --format-version=1 | jq -r '.packages | map(select(.name == "zed"))[0].version')"
     if [ "$channel" == "nightly" ]; then
@@ -188,7 +211,7 @@ function prepare_binaries() {
     cp target/${architecture}/${target_dir}/cli "${app_path}/Contents/MacOS/cli"
 }
 
-function sign_binaries() {
+function sign_app_binaries() {
     local app_path=$1
     local architecture=$2
     local architecture_dir=$3
@@ -207,24 +230,14 @@ function sign_binaries() {
     # Note: The app identifier for our development builds is the same as the app identifier for nightly.
     cp crates/zed/contents/$channel/embedded.provisionprofile "${app_path}/Contents/"
 
-    if [[ -n "${MACOS_CERTIFICATE:-}" && -n "${MACOS_CERTIFICATE_PASSWORD:-}" && -n "${APPLE_NOTARIZATION_USERNAME:-}" && -n "${APPLE_NOTARIZATION_PASSWORD:-}" ]]; then
-        echo "Signing bundle with Apple-issued certificate"
-        security create-keychain -p "$MACOS_CERTIFICATE_PASSWORD" zed.keychain || echo ""
-        security default-keychain -s zed.keychain
-        security unlock-keychain -p "$MACOS_CERTIFICATE_PASSWORD" zed.keychain
-        echo "$MACOS_CERTIFICATE" | base64 --decode > /tmp/zed-certificate.p12
-        security import /tmp/zed-certificate.p12 -k zed.keychain -P "$MACOS_CERTIFICATE_PASSWORD" -T /usr/bin/codesign
-        rm /tmp/zed-certificate.p12
-        security set-key-partition-list -S apple-tool:,apple:,codesign: -s -k "$MACOS_CERTIFICATE_PASSWORD" zed.keychain
-
+    if [[ $can_code_sign = true ]]; then
+        echo "Code signing binaries"
         # sequence of codesign commands modeled after this example: https://developer.apple.com/forums/thread/701514
-        /usr/bin/codesign --deep --force --timestamp --sign "Zed Industries, Inc." "${app_path}/Contents/Frameworks/WebRTC.framework" -v
-        /usr/bin/codesign --deep --force --timestamp --options runtime --sign "Zed Industries, Inc." "${app_path}/Contents/MacOS/cli" -v
-        /usr/bin/codesign --deep --force --timestamp --options runtime --sign "Zed Industries, Inc." "${app_path}/Contents/MacOS/git" -v
-        /usr/bin/codesign --deep --force --timestamp --options runtime --entitlements crates/zed/resources/zed.entitlements --sign "Zed Industries, Inc." "${app_path}/Contents/MacOS/zed" -v
-        /usr/bin/codesign --force --timestamp --options runtime --entitlements crates/zed/resources/zed.entitlements --sign "Zed Industries, Inc." "${app_path}" -v
-
-        security default-keychain -s login.keychain
+        /usr/bin/codesign --deep --force --timestamp --sign "$IDENTITY" "${app_path}/Contents/Frameworks/WebRTC.framework" -v
+        /usr/bin/codesign --deep --force --timestamp --options runtime --sign "$IDENTITY" "${app_path}/Contents/MacOS/cli" -v
+        /usr/bin/codesign --deep --force --timestamp --options runtime --sign "$IDENTITY" "${app_path}/Contents/MacOS/git" -v
+        /usr/bin/codesign --deep --force --timestamp --options runtime --entitlements crates/zed/resources/zed.entitlements --sign "$IDENTITY" "${app_path}/Contents/MacOS/zed" -v
+        /usr/bin/codesign --force --timestamp --options runtime --entitlements crates/zed/resources/zed.entitlements --sign "$IDENTITY" "${app_path}" -v
     else
         echo "One or more of the following variables are missing: MACOS_CERTIFICATE, MACOS_CERTIFICATE_PASSWORD, APPLE_NOTARIZATION_USERNAME, APPLE_NOTARIZATION_PASSWORD"
         if [[ "$local_only" = false ]]; then
@@ -291,20 +304,13 @@ function sign_binaries() {
         mkdir -p ${dmg_source_directory}
         mv "${app_path}" "${dmg_source_directory}"
 
-        if [[ -n $MACOS_CERTIFICATE && -n $MACOS_CERTIFICATE_PASSWORD && -n $APPLE_NOTARIZATION_USERNAME && -n $APPLE_NOTARIZATION_PASSWORD ]]; then
+        if [[ $can_code_sign = true ]]; then
             echo "Creating temporary DMG at ${dmg_file_path} using ${dmg_source_directory} to notarize app bundle"
             hdiutil create -volname Zed -srcfolder "${dmg_source_directory}" -ov -format UDZO "${dmg_file_path}"
 
-            security create-keychain -p "$MACOS_CERTIFICATE_PASSWORD" zed.keychain || echo ""
-            security default-keychain -s zed.keychain
-            security unlock-keychain -p "$MACOS_CERTIFICATE_PASSWORD" zed.keychain
-            echo "$MACOS_CERTIFICATE" | base64 --decode > /tmp/zed-certificate.p12
-            security import /tmp/zed-certificate.p12 -k zed.keychain -P "$MACOS_CERTIFICATE_PASSWORD" -T /usr/bin/codesign
-            rm /tmp/zed-certificate.p12
-            security set-key-partition-list -S apple-tool:,apple:,codesign: -s -k "$MACOS_CERTIFICATE_PASSWORD" zed.keychain
+            echo "Code-signing DMG"
+            /usr/bin/codesign --deep --force --timestamp --options runtime --sign "$IDENTITY" "$(pwd)/${dmg_file_path}" -v
 
-            /usr/bin/codesign --deep --force --timestamp --options runtime --sign "Zed Industries, Inc." "$(pwd)/${dmg_file_path}" -v
-            security default-keychain -s login.keychain
             echo "Notarizing DMG with Apple"
             "${xcode_bin_dir_path}/notarytool" submit --wait --apple-id "$APPLE_NOTARIZATION_USERNAME" --password "$APPLE_NOTARIZATION_PASSWORD" --team-id "$APPLE_NOTORIZATION_TEAM" "${dmg_file_path}"
 
@@ -330,17 +336,9 @@ function sign_binaries() {
         npm install --global dmg-license minimist
         dmg-license script/eula/eula.json "${dmg_file_path}"
 
-        if [[ -n $MACOS_CERTIFICATE && -n $MACOS_CERTIFICATE_PASSWORD && -n $APPLE_NOTARIZATION_USERNAME && -n $APPLE_NOTARIZATION_PASSWORD ]]; then
+        if [[ $can_code_sign = true ]]; then
             echo "Notarizing DMG with Apple"
-            security create-keychain -p "$MACOS_CERTIFICATE_PASSWORD" zed.keychain || echo ""
-            security default-keychain -s zed.keychain
-            security unlock-keychain -p "$MACOS_CERTIFICATE_PASSWORD" zed.keychain
-            echo "$MACOS_CERTIFICATE" | base64 --decode > /tmp/zed-certificate.p12
-            security import /tmp/zed-certificate.p12 -k zed.keychain -P "$MACOS_CERTIFICATE_PASSWORD" -T /usr/bin/codesign
-            rm /tmp/zed-certificate.p12
-            security set-key-partition-list -S apple-tool:,apple:,codesign: -s -k "$MACOS_CERTIFICATE_PASSWORD" zed.keychain
-            /usr/bin/codesign --deep --force --timestamp --options runtime --sign "Zed Industries, Inc." "$(pwd)/${dmg_file_path}" -v
-            security default-keychain -s login.keychain
+            /usr/bin/codesign --deep --force --timestamp --options runtime --sign "$IDENTITY" "$(pwd)/${dmg_file_path}" -v
             "${xcode_bin_dir_path}/notarytool" submit --wait --apple-id "$APPLE_NOTARIZATION_USERNAME" --password "$APPLE_NOTARIZATION_PASSWORD" --team-id "$APPLE_NOTORIZATION_TEAM" "${dmg_file_path}"
             "${xcode_bin_dir_path}/stapler" staple "${dmg_file_path}"
         fi
@@ -351,8 +349,19 @@ function sign_binaries() {
     fi
 }
 
+function sign_binary() {
+    local binary_path=$1
+
+    if [[ $can_code_sign = true ]]; then
+        echo "Code signing executable $binary_path"
+        /usr/bin/codesign --deep --force --timestamp --options runtime --sign "$IDENTITY" "${binary_path}" -v
+    fi
+}
+
 if [ "$local_arch" = true ]; then
-    sign_binaries "$app_path" "$local_target_triple" "$local_target_triple"
+    sign_app_binaries "$app_path" "$local_target_triple" "$local_target_triple"
+
+    sign_binary "target/release/remote_server"
 else
     # Create universal binary
     prepare_binaries "aarch64-apple-darwin" "$app_path_aarch64"
@@ -370,8 +379,13 @@ else
         target/{x86_64-apple-darwin,aarch64-apple-darwin}/${target_dir}/cli \
         -output \
         "${app_path}/Contents/MacOS/cli"
-    sign_binaries "$app_path" "universal" "."
 
-    sign_binaries "$app_path_x64" "x86_64-apple-darwin" "x86_64-apple-darwin"
-    sign_binaries "$app_path_aarch64" "aarch64-apple-darwin" "aarch64-apple-darwin"
+    sign_app_binaries "$app_path" "universal" "."
+    sign_app_binaries "$app_path_x64" "x86_64-apple-darwin" "x86_64-apple-darwin"
+    sign_app_binaries "$app_path_aarch64" "aarch64-apple-darwin" "aarch64-apple-darwin"
+
+    sign_binary "target/x86_64-apple-darwin/release/remote_server"
+    sign_binary "target/aarch64-apple-darwin/release/remote_server"
+    gzip --stdout --best target/x86_64-apple-darwin/release/remote_server > target/zed-remote-server-mac-x86_64.gz
+    gzip --stdout --best target/aarch64-apple-darwin/release/remote_server > target/zed-remote-server-mac-aarch64.gz
 fi

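Both bundle scripts now ship the remote server as a gzipped standalone artifact named `zed-remote-server-<os>-<arch>.gz` (`mac-{aarch64,x86_64}` here, `linux-$(uname -m)` in `script/bundle-linux`), and the `script/upload-nightly` change below pushes anything matching that pattern under the `nightly/` prefix. A small sketch of that naming contract; the helper names are hypothetical:

```rust
// Hypothetical helpers mirroring the artifact names produced by the bundle scripts
// and the object keys used by script/upload-nightly.
fn remote_server_asset_name(os: &str, arch: &str) -> String {
    format!("zed-remote-server-{os}-{arch}.gz")
}

fn nightly_object_key(asset: &str) -> String {
    format!("nightly/{asset}")
}

fn main() {
    for (os, arch) in [("mac", "aarch64"), ("mac", "x86_64"), ("linux", "x86_64")] {
        let asset = remote_server_asset_name(os, arch);
        println!("{}", nightly_object_key(&asset));
    }
}
```
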
script/upload-nightly

@@ -33,6 +33,12 @@ bucket_name="zed-nightly-host"
 
 sha=$(git rev-parse HEAD)
 echo ${sha} > target/latest-sha
+
+find target -type f -name "zed-remote-server-*.gz" -print0 | while IFS= read -r -d '' file_to_upload; do
+    upload_to_blob_store $bucket_name "$file_to_upload" "nightly/$(basename "$file_to_upload")"
+    rm -f "$file_to_upload"
+done
+
 case "$target" in
     macos)
         upload_to_blob_store $bucket_name "target/aarch64-apple-darwin/release/Zed.dmg" "nightly/Zed-aarch64.dmg"
@@ -43,9 +49,9 @@ case "$target" in
         rm -f "target/latest-sha"
         ;;
     linux-targz)
-        find . -type f -name "zed-*.tar.gz" -print0 | while IFS= read -r -d '' bundle_file; do
-            upload_to_blob_store $bucket_name "$bundle_file" "nightly/$(basename "$bundle_file")"
-            rm -f "$bundle_file"
+        find . -type f -name "zed-*.tar.gz" -print0 | while IFS= read -r -d '' file_to_upload; do
+            upload_to_blob_store $bucket_name "$file_to_upload" "nightly/$(basename "$file_to_upload")"
+            rm -f "$file_to_upload"
         done
         upload_to_blob_store $bucket_name "target/latest-sha" "nightly/latest-sha-linux-targz"
         rm -f "target/latest-sha"