Detailed changes
@@ -0,0 +1,84 @@
+# Crash Fix
+
+You are fixing a crash that has been analyzed and has a reproduction test case. Your goal is to implement a minimal, correct fix that resolves the root cause and makes the reproduction test pass.
+
+## Inputs
+
+Before starting, you should have:
+
+1. **ANALYSIS.md** — the crash analysis from the investigation phase. Read it thoroughly.
+2. **A failing test** — a reproduction test that triggers the crash. Run it first to confirm it fails as expected.
+
+If either is missing, ask the user to provide them or run the investigation phase first (`/prompt crash/investigate`).
+
+## Workflow
+
+### Step 1: Confirm the Failing Test
+
+Run the reproduction test and verify it fails with the expected crash:
+
+```
+cargo test -p <crate> <test_name>
+```
+
+Read the failure output. Confirm the panic message and stack trace match what ANALYSIS.md describes. If the test doesn't fail, or fails differently than expected, stop and reassess before proceeding.
+
+### Step 2: Understand the Fix
+
+Read the "Suggested Fix" section of ANALYSIS.md and the relevant source code. Before writing any code, be clear on:
+
+1. **What invariant is being violated** — what property of the data does the crashing code assume?
+2. **Where the invariant breaks** — which function produces the bad state?
+
+### Step 3: Implement the Fix
+
+Apply the minimal change needed to resolve the root cause. Guidelines:
+
+- **Fix the root cause, not the symptom.** Don't just catch the panic with a bounds check if the real problem is an incorrect offset calculation. Fix the calculation.
+- **Preserve existing behavior** for all non-crashing cases. The fix should only change what happens in the scenario that was previously crashing.
+- **Don't add unnecessary changes.** No drive-by improvements, keep the diff focused.
+- **Add a comment only if the fix is non-obvious.** If a reader might wonder "why is this check here?", a brief comment explaining the crash scenario is appropriate.
+- **Consider long-term maintainability.** A targeted fix should still leave the codebase maintainable and reliable in the long run.
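The root-cause-versus-symptom distinction can be made concrete with a hypothetical example (the function names and the char-boundary scenario below are illustrative, not taken from any real crash):

```rust
// Hypothetical crash: slicing a string at a byte offset that is not a
// UTF-8 character boundary.
//
// Symptom patch (avoid): wrap the slice in a validity check at the crash
// site, leaving every other caller exposed to the same bad offset.
//
// Root-cause fix: make the offset computation itself return only valid
// character boundaries, so callers can't receive a bad index.
fn preview_end(text: &str, max_bytes: usize) -> usize {
    if max_bytes >= text.len() {
        return text.len();
    }
    // Walk back to the nearest character boundary at or before max_bytes.
    let mut end = max_bytes;
    while !text.is_char_boundary(end) {
        end -= 1;
    }
    end
}

fn main() {
    let text = "héllo"; // 'é' occupies bytes 1..3
    let end = preview_end(text, 2); // byte 2 falls inside 'é'
    assert_eq!(end, 1);
    assert!(text.is_char_boundary(end));
    println!("preview: {}", &text[..end]);
}
```

The symptom patch would stop this one panic but leave the invariant violation in place; the root-cause fix removes the invalid state at its source.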
+
+### Step 4: Verify the Fix
+
+Run the reproduction test and confirm it passes:
+
+```
+cargo test -p <crate> <test_name>
+```
+
+Then run the full test suite for the affected crate to check for regressions:
+
+```
+cargo test -p <crate>
+```
+
+If any tests fail, determine whether the fix introduced a regression. Fix regressions before proceeding.
+
+### Step 5: Run Clippy
+
+```
+./script/clippy
+```
+
+Address any new warnings introduced by your change.
+
+### Step 6: Summarize
+
+Write a brief summary of the fix for use in a PR description. Include:
+
+- **What was the bug** — one sentence on the root cause.
+- **What the fix does** — one sentence on the change.
+- **How it was verified** — note that the reproduction test now passes.
+- **Sentry issue link** — if available from ANALYSIS.md.
+
+We use the following template for pull request descriptions. Fill in the relevant sections, especially the release notes.
+
+```
+<Description of change, what the issue was and the fix.>
+
+Release Notes:
+
+- N/A *or* Added/Fixed/Improved ...
+```
@@ -0,0 +1,89 @@
+# Crash Investigation
+
+You are investigating a crash that was observed in the wild. Your goal is to understand the root cause and produce a minimal reproduction test case that triggers the same crash. This test will be used to verify a fix and prevent regressions.
+
+## Workflow
+
+### Step 1: Get the Crash Report
+
+If given a Sentry issue ID (like `ZED-4VS` or a numeric ID), there are several ways to fetch the crash data:
+
+**Option A: Sentry MCP server (preferred if available)**
+If the Sentry MCP server is configured as a context server, use its tools directly (e.g., `get_sentry_issue`) to fetch the issue details and stack trace. This is the simplest path — no tokens or scripts needed.
+
+**Option B: Fetch script**
+Run the fetch script from the terminal:
+
+```
+script/sentry-fetch <issue-id>
+```
+
+This reads authentication from `~/.sentryclirc` (set up via `sentry-cli login`) or the `SENTRY_AUTH_TOKEN` environment variable.
+
+**Option C: Crash report provided directly**
+If the crash report was provided inline or as a file, read it carefully before proceeding.
+
+### Step 2: Analyze the Stack Trace
+
+Read the stack trace bottom-to-top (from crash site upward) and identify:
+
+1. **The crash site** — the exact function and line where the panic/abort occurs.
+2. **The immediate cause** — what operation failed (e.g., slice indexing on a non-char-boundary, out-of-bounds access, unwrap on None).
+3. **The relevant application frames** — filter out crash handler, signal handler, parking_lot, and stdlib frames. Focus on frames marked "(In app)".
+4. **The data flow** — trace how the invalid data reached the crash site. What computed the bad index, the None value, etc.?
+
+Find the relevant source files in the repository and read them. Pay close attention to:
+- The crashing function and its callers
+- How inputs to the crashing operation are computed
+- Any assumptions the code makes about its inputs (string encoding, array lengths, option values)
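As a reference for classifying the immediate cause, each of the panic classes named above can be reproduced in isolation (a self-contained sketch, not code from any crashing codebase):

```rust
use std::panic;

fn main() {
    // Keep the demo quiet; each catch_unwind below would otherwise
    // print a panic message to stderr.
    panic::set_hook(Box::new(|_| {}));

    // Slice indexing on a non-char-boundary:
    let text = String::from("héllo"); // 'é' occupies bytes 1..3
    assert!(panic::catch_unwind(|| { let _ = &text[..2]; }).is_err());

    // Out-of-bounds access:
    let items = vec![1, 2, 3];
    let index = items.len() + 2;
    assert!(panic::catch_unwind(|| items[index]).is_err());

    // Unwrap on None:
    let missing: Option<i32> = None;
    assert!(panic::catch_unwind(|| missing.unwrap()).is_err());

    println!("reproduced all three panic classes");
}
```

Recognizing which class you're looking at narrows where to look next: boundary and bounds panics point at an index computation, while an unwrap on None points at a missing-state assumption.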
+
+### Step 3: Identify the Root Cause
+
+Work backwards from the crash site to determine **what sequence of events or data conditions** produces the invalid state.
+
+Ask yourself: *What user action or sequence of actions could lead to this state?* The crash came from a real user, so there is some natural usage pattern that triggers it.
+
+### Step 4: Write a Reproduction Test
+
+Write a minimal test case that:
+
+1. **Mimics user actions** rather than constructing corrupt state directly. Work from the top down: what does the user do (open a file, type text, trigger a completion, etc.) that eventually causes the internal state to become invalid?
+2. **Exercises the same code path** as the crash. The test should fail in the same function with the same kind of error (e.g., same panic message pattern).
+3. **Is minimal** — include only what's necessary to trigger the crash. Remove anything that isn't load-bearing.
+4. **Lives in the right place** — add the test to the existing test module of the crate where the bug lives. Follow the existing test patterns in that module.
+5. **Avoids overly verbose comments** — the test should be self-explanatory and concise. More detailed description belongs in ANALYSIS.md (see the next section).
+
+When the test fails, its stack trace should share the key application frames from the original crash report. The outermost frames (crash handler, signal handling) will differ since we're in a test environment — that's expected.
+
+If you can't reproduce the exact crash but can demonstrate the same class of bug (e.g., same function panicking with a similar invalid input), that is still valuable. Note the difference in your analysis.
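For shape only — a real reproduction test should use the crate's existing test utilities — a minimal test might look like this (hypothetical function and names; `catch_unwind` keeps the sketch runnable on its own, whereas a real reproduction test would call the buggy path directly and fail):

```rust
/// Hypothetical buggy helper: truncates a line to a byte column,
/// implicitly assuming one byte per character.
fn truncate_to_column(line: &str, column: usize) -> &str {
    &line[..column.min(line.len())]
}

#[test]
fn test_truncate_line_with_multibyte_char() {
    // Mimic the user-visible state: a line containing a multi-byte character.
    let line = "héllo"; // 'é' occupies bytes 1..3
    // A real reproduction test would call truncate_to_column(line, 2)
    // directly and fail with "byte index 2 is not a char boundary".
    let result = std::panic::catch_unwind(|| truncate_to_column(line, 2));
    assert!(result.is_err());
}
```

Note how the test sets up state the way a user would (a line with a multi-byte character) rather than handing the function a pre-corrupted index.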
+
+### Step 5: Write the Analysis
+
+Create an `ANALYSIS.md` file (in the working directory root, or wherever instructed) with these sections:
+
+```markdown
+# Crash Analysis: <short description>
+
+## Crash Summary
+- **Sentry Issue:** <ID and link if available>
+- **Error:** <the panic/error message>
+- **Crash Site:** <function name and file>
+
+## Root Cause
+<Explain what goes wrong and why. Be specific about the data flow.>
+
+## Reproduction
+<Describe what the test does and how it triggers the same crash.
+Include the exact command to run the test, e.g.:
+`cargo test -p <crate> <test_name>`>
+
+## Suggested Fix
+<Describe the fix approach. Be specific: which function, what check to add,
+what computation to change. If there are multiple options, list them with tradeoffs.>
+```
+
+## Guidelines
+
+- **Don't guess.** If you're unsure about a code path, read the source. Use `grep` to find relevant functions, types, and call sites.
+- **Check the git history.** If the crash appeared in a specific version, `git log` on the relevant files may reveal a recent change that introduced the bug.
+- **Look at existing tests.** The crate likely has tests that show how to set up the relevant subsystem. Follow those patterns rather than inventing new test infrastructure.
@@ -1,8 +1,9 @@
Closes #ISSUE
-- [ ] Tests or screenshots needed?
-- [ ] Code Reviewed
-- [ ] Manual QA
+Before you mark this PR as ready for review, make sure that you have:
+- [ ] Added solid test coverage and/or screenshots from manual testing
+- [ ] Done a self-review with security and performance in mind
+- [ ] Aligned any UI changes with the [UI checklist](https://github.com/zed-industries/zed/blob/main/CONTRIBUTING.md#uiux-checklist)
Release Notes:
@@ -22,15 +22,15 @@ jobs:
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: >
- Hi there!
Zed development moves fast and a significant number of bugs become outdated.
- If you can reproduce this bug on the latest stable Zed, please let us know by leaving a comment with the Zed version.
+ If you can reproduce this bug on the latest stable Zed, please let us know by leaving a comment with the Zed version;
+ it helps us focus on the right issues.
If the bug doesn't appear for you anymore, feel free to close the issue yourself; otherwise, the bot will close it in a couple of weeks.
-
- Thanks for your help!
+ But even after it's closed by the bot, you can leave a comment with the version where the bug is reproducible and we'll reopen the issue.
+ Thanks!
close-issue-message: "This issue was closed due to inactivity. If you're still experiencing this problem, please leave a comment with your Zed version so that we can reopen the issue."
- days-before-stale: 60
- days-before-close: 14
+ days-before-stale: 90
+ days-before-close: 21
only-issue-types: "Bug,Crash"
operations-per-run: ${{ inputs.operations-per-run || 2000 }}
ascending: true
@@ -26,13 +26,6 @@ jobs:
with:
cache: rust
path: ~/.rustup
- - name: steps::setup_sccache
- run: ./script/setup-sccache
- env:
- R2_ACCOUNT_ID: ${{ secrets.R2_ACCOUNT_ID }}
- R2_ACCESS_KEY_ID: ${{ secrets.R2_ACCESS_KEY_ID }}
- R2_SECRET_ACCESS_KEY: ${{ secrets.R2_SECRET_ACCESS_KEY }}
- SCCACHE_BUCKET: sccache-zed
- name: steps::setup_node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
@@ -41,6 +34,13 @@ jobs:
uses: taiki-e/install-action@nextest
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 300
+ - name: steps::setup_sccache
+ run: ./script/setup-sccache
+ env:
+ R2_ACCOUNT_ID: ${{ secrets.R2_ACCOUNT_ID }}
+ R2_ACCESS_KEY_ID: ${{ secrets.R2_ACCESS_KEY_ID }}
+ R2_SECRET_ACCESS_KEY: ${{ secrets.R2_SECRET_ACCESS_KEY }}
+ SCCACHE_BUCKET: sccache-zed
- name: steps::cargo_nextest
run: cargo nextest run --workspace --no-fail-fast
- name: steps::show_sccache_stats
@@ -73,13 +73,6 @@ jobs:
run: ./script/install-mold
- name: steps::download_wasi_sdk
run: ./script/download-wasi-sdk
- - name: steps::setup_sccache
- run: ./script/setup-sccache
- env:
- R2_ACCOUNT_ID: ${{ secrets.R2_ACCOUNT_ID }}
- R2_ACCESS_KEY_ID: ${{ secrets.R2_ACCESS_KEY_ID }}
- R2_SECRET_ACCESS_KEY: ${{ secrets.R2_SECRET_ACCESS_KEY }}
- SCCACHE_BUCKET: sccache-zed
- name: steps::setup_node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
@@ -88,6 +81,13 @@ jobs:
uses: taiki-e/install-action@nextest
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 250
+ - name: steps::setup_sccache
+ run: ./script/setup-sccache
+ env:
+ R2_ACCOUNT_ID: ${{ secrets.R2_ACCOUNT_ID }}
+ R2_ACCESS_KEY_ID: ${{ secrets.R2_ACCESS_KEY_ID }}
+ R2_SECRET_ACCESS_KEY: ${{ secrets.R2_SECRET_ACCESS_KEY }}
+ SCCACHE_BUCKET: sccache-zed
- name: steps::cargo_nextest
run: cargo nextest run --workspace --no-fail-fast
- name: steps::show_sccache_stats
@@ -118,6 +118,13 @@ jobs:
New-Item -ItemType Directory -Path "./../.cargo" -Force
Copy-Item -Path "./.cargo/ci-config.toml" -Destination "./../.cargo/config.toml"
shell: pwsh
+ - name: steps::setup_node
+ uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
+ with:
+ node-version: '20'
+ - name: steps::clear_target_dir_if_large
+ run: ./script/clear-target-dir-if-larger-than.ps1 250
+ shell: pwsh
- name: steps::setup_sccache
run: ./script/setup-sccache.ps1
shell: pwsh
@@ -126,13 +133,6 @@ jobs:
R2_ACCESS_KEY_ID: ${{ secrets.R2_ACCESS_KEY_ID }}
R2_SECRET_ACCESS_KEY: ${{ secrets.R2_SECRET_ACCESS_KEY }}
SCCACHE_BUCKET: sccache-zed
- - name: steps::setup_node
- uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
- with:
- node-version: '20'
- - name: steps::clear_target_dir_if_large
- run: ./script/clear-target-dir-if-larger-than.ps1 250
- shell: pwsh
- name: steps::cargo_nextest
run: cargo nextest run --workspace --no-fail-fast
shell: pwsh
@@ -38,6 +38,13 @@ jobs:
New-Item -ItemType Directory -Path "./../.cargo" -Force
Copy-Item -Path "./.cargo/ci-config.toml" -Destination "./../.cargo/config.toml"
shell: pwsh
+ - name: steps::setup_node
+ uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
+ with:
+ node-version: '20'
+ - name: steps::clear_target_dir_if_large
+ run: ./script/clear-target-dir-if-larger-than.ps1 250
+ shell: pwsh
- name: steps::setup_sccache
run: ./script/setup-sccache.ps1
shell: pwsh
@@ -46,13 +53,6 @@ jobs:
R2_ACCESS_KEY_ID: ${{ secrets.R2_ACCESS_KEY_ID }}
R2_SECRET_ACCESS_KEY: ${{ secrets.R2_SECRET_ACCESS_KEY }}
SCCACHE_BUCKET: sccache-zed
- - name: steps::setup_node
- uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
- with:
- node-version: '20'
- - name: steps::clear_target_dir_if_large
- run: ./script/clear-target-dir-if-larger-than.ps1 250
- shell: pwsh
- name: steps::cargo_nextest
run: cargo nextest run --workspace --no-fail-fast
shell: pwsh
@@ -39,6 +39,10 @@ jobs:
run: ./script/install-mold
- name: steps::download_wasi_sdk
run: ./script/download-wasi-sdk
+ - name: steps::cargo_install_nextest
+ uses: taiki-e/install-action@nextest
+ - name: steps::clear_target_dir_if_large
+ run: ./script/clear-target-dir-if-larger-than 250
- name: steps::setup_sccache
run: ./script/setup-sccache
env:
@@ -46,10 +50,6 @@ jobs:
R2_ACCESS_KEY_ID: ${{ secrets.R2_ACCESS_KEY_ID }}
R2_SECRET_ACCESS_KEY: ${{ secrets.R2_SECRET_ACCESS_KEY }}
SCCACHE_BUCKET: sccache-zed
- - name: steps::cargo_install_nextest
- uses: taiki-e/install-action@nextest
- - name: steps::clear_target_dir_if_large
- run: ./script/clear-target-dir-if-larger-than 250
- name: ./script/run-unit-evals
run: ./script/run-unit-evals
env:
@@ -252,6 +252,13 @@ jobs:
New-Item -ItemType Directory -Path "./../.cargo" -Force
Copy-Item -Path "./.cargo/ci-config.toml" -Destination "./../.cargo/config.toml"
shell: pwsh
+ - name: steps::setup_node
+ uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
+ with:
+ node-version: '20'
+ - name: steps::clear_target_dir_if_large
+ run: ./script/clear-target-dir-if-larger-than.ps1 250
+ shell: pwsh
- name: steps::setup_sccache
run: ./script/setup-sccache.ps1
shell: pwsh
@@ -260,13 +267,6 @@ jobs:
R2_ACCESS_KEY_ID: ${{ secrets.R2_ACCESS_KEY_ID }}
R2_SECRET_ACCESS_KEY: ${{ secrets.R2_SECRET_ACCESS_KEY }}
SCCACHE_BUCKET: sccache-zed
- - name: steps::setup_node
- uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
- with:
- node-version: '20'
- - name: steps::clear_target_dir_if_large
- run: ./script/clear-target-dir-if-larger-than.ps1 250
- shell: pwsh
- name: steps::cargo_nextest
run: cargo nextest run --workspace --no-fail-fast${{ needs.orchestrate.outputs.changed_packages && format(' -E "{0}"', needs.orchestrate.outputs.changed_packages) || '' }}
shell: pwsh
@@ -304,13 +304,6 @@ jobs:
run: ./script/install-mold
- name: steps::download_wasi_sdk
run: ./script/download-wasi-sdk
- - name: steps::setup_sccache
- run: ./script/setup-sccache
- env:
- R2_ACCOUNT_ID: ${{ secrets.R2_ACCOUNT_ID }}
- R2_ACCESS_KEY_ID: ${{ secrets.R2_ACCESS_KEY_ID }}
- R2_SECRET_ACCESS_KEY: ${{ secrets.R2_SECRET_ACCESS_KEY }}
- SCCACHE_BUCKET: sccache-zed
- name: steps::setup_node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
@@ -319,6 +312,13 @@ jobs:
uses: taiki-e/install-action@nextest
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 250
+ - name: steps::setup_sccache
+ run: ./script/setup-sccache
+ env:
+ R2_ACCOUNT_ID: ${{ secrets.R2_ACCOUNT_ID }}
+ R2_ACCESS_KEY_ID: ${{ secrets.R2_ACCESS_KEY_ID }}
+ R2_SECRET_ACCESS_KEY: ${{ secrets.R2_SECRET_ACCESS_KEY }}
+ SCCACHE_BUCKET: sccache-zed
- name: steps::cargo_nextest
run: cargo nextest run --workspace --no-fail-fast${{ needs.orchestrate.outputs.changed_packages && format(' -E "{0}"', needs.orchestrate.outputs.changed_packages) || '' }}
- name: steps::show_sccache_stats
@@ -355,13 +355,6 @@ jobs:
with:
cache: rust
path: ~/.rustup
- - name: steps::setup_sccache
- run: ./script/setup-sccache
- env:
- R2_ACCOUNT_ID: ${{ secrets.R2_ACCOUNT_ID }}
- R2_ACCESS_KEY_ID: ${{ secrets.R2_ACCESS_KEY_ID }}
- R2_SECRET_ACCESS_KEY: ${{ secrets.R2_SECRET_ACCESS_KEY }}
- SCCACHE_BUCKET: sccache-zed
- name: steps::setup_node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
@@ -370,6 +363,13 @@ jobs:
uses: taiki-e/install-action@nextest
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 300
+ - name: steps::setup_sccache
+ run: ./script/setup-sccache
+ env:
+ R2_ACCOUNT_ID: ${{ secrets.R2_ACCOUNT_ID }}
+ R2_ACCESS_KEY_ID: ${{ secrets.R2_ACCESS_KEY_ID }}
+ R2_SECRET_ACCESS_KEY: ${{ secrets.R2_SECRET_ACCESS_KEY }}
+ SCCACHE_BUCKET: sccache-zed
- name: steps::cargo_nextest
run: cargo nextest run --workspace --no-fail-fast${{ needs.orchestrate.outputs.changed_packages && format(' -E "{0}"', needs.orchestrate.outputs.changed_packages) || '' }}
- name: steps::show_sccache_stats
@@ -42,6 +42,10 @@ jobs:
run: ./script/install-mold
- name: steps::download_wasi_sdk
run: ./script/download-wasi-sdk
+ - name: steps::cargo_install_nextest
+ uses: taiki-e/install-action@nextest
+ - name: steps::clear_target_dir_if_large
+ run: ./script/clear-target-dir-if-larger-than 250
- name: steps::setup_sccache
run: ./script/setup-sccache
env:
@@ -49,10 +53,6 @@ jobs:
R2_ACCESS_KEY_ID: ${{ secrets.R2_ACCESS_KEY_ID }}
R2_SECRET_ACCESS_KEY: ${{ secrets.R2_SECRET_ACCESS_KEY }}
SCCACHE_BUCKET: sccache-zed
- - name: steps::cargo_install_nextest
- uses: taiki-e/install-action@nextest
- - name: steps::clear_target_dir_if_large
- run: ./script/clear-target-dir-if-larger-than 250
- name: ./script/run-unit-evals
run: ./script/run-unit-evals
env:
@@ -179,6 +179,7 @@ dependencies = [
"gpui",
"gpui_tokio",
"handlebars 4.5.0",
+ "heck 0.5.0",
"html_to_markdown",
"http_client",
"indoc",
@@ -432,32 +433,6 @@ dependencies = [
"zed_actions",
]
-[[package]]
-name = "agent_ui_v2"
-version = "0.1.0"
-dependencies = [
- "acp_thread",
- "agent",
- "agent-client-protocol",
- "agent_servers",
- "agent_settings",
- "agent_ui",
- "anyhow",
- "db",
- "feature_flags",
- "fs",
- "gpui",
- "log",
- "project",
- "prompt_store",
- "serde",
- "serde_json",
- "settings",
- "ui",
- "util",
- "workspace",
-]
-
[[package]]
name = "ahash"
version = "0.7.8"
@@ -779,17 +754,6 @@ dependencies = [
"libloading",
]
-[[package]]
-name = "ash-window"
-version = "0.13.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "52bca67b61cb81e5553babde81b8211f713cb6db79766f80168f3e5f40ea6c82"
-dependencies = [
- "ash",
- "raw-window-handle",
- "raw-window-metal",
-]
-
[[package]]
name = "ashpd"
version = "0.12.1"
@@ -2176,61 +2140,6 @@ dependencies = [
"wyz",
]
-[[package]]
-name = "blade-graphics"
-version = "0.7.0"
-source = "git+https://github.com/kvark/blade?rev=e3cf011ca18a6dfd907d1dedd93e85e21f005fe3#e3cf011ca18a6dfd907d1dedd93e85e21f005fe3"
-dependencies = [
- "ash",
- "ash-window",
- "bitflags 2.10.0",
- "bytemuck",
- "codespan-reporting 0.12.0",
- "glow",
- "gpu-alloc",
- "gpu-alloc-ash",
- "hidden-trait",
- "js-sys",
- "khronos-egl",
- "libloading",
- "log",
- "mint",
- "naga",
- "objc2",
- "objc2-app-kit",
- "objc2-core-foundation",
- "objc2-foundation",
- "objc2-metal",
- "objc2-quartz-core",
- "objc2-ui-kit",
- "once_cell",
- "raw-window-handle",
- "slab",
- "wasm-bindgen",
- "web-sys",
-]
-
-[[package]]
-name = "blade-macros"
-version = "0.3.0"
-source = "git+https://github.com/kvark/blade?rev=e3cf011ca18a6dfd907d1dedd93e85e21f005fe3#e3cf011ca18a6dfd907d1dedd93e85e21f005fe3"
-dependencies = [
- "proc-macro2",
- "quote",
- "syn 2.0.106",
-]
-
-[[package]]
-name = "blade-util"
-version = "0.3.0"
-source = "git+https://github.com/kvark/blade?rev=e3cf011ca18a6dfd907d1dedd93e85e21f005fe3#e3cf011ca18a6dfd907d1dedd93e85e21f005fe3"
-dependencies = [
- "blade-graphics",
- "bytemuck",
- "log",
- "profiling",
-]
-
[[package]]
name = "block"
version = "0.1.6"
@@ -3925,7 +3834,7 @@ dependencies = [
"core-graphics2",
"io-surface",
"libc",
- "metal",
+ "metal 0.29.0",
]
[[package]]
@@ -3978,7 +3887,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8c5c9868e64aa6c5410629a83450e142c80e721c727a5bc0fb18107af6c2d66b"
dependencies = [
"bitflags 2.10.0",
- "fontdb",
+ "fontdb 0.23.0",
"harfrust",
"linebender_resource_handle",
"log",
@@ -5142,6 +5051,15 @@ dependencies = [
"zlog",
]
+[[package]]
+name = "document-features"
+version = "0.2.12"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d4b8a88685455ed29a21542a33abd9cb6510b6b129abadabdcef0f4c55bc8f61"
+dependencies = [
+ "litrs",
+]
+
[[package]]
name = "documented"
version = "0.9.2"
@@ -6453,6 +6371,20 @@ dependencies = [
"roxmltree",
]
+[[package]]
+name = "fontdb"
+version = "0.16.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b0299020c3ef3f60f526a4f64ab4a3d4ce116b1acbf24cdd22da0068e5d81dc3"
+dependencies = [
+ "fontconfig-parser",
+ "log",
+ "memmap2",
+ "slotmap",
+ "tinyvec",
+ "ttf-parser 0.20.0",
+]
+
[[package]]
name = "fontdb"
version = "0.23.0"
@@ -6464,7 +6396,7 @@ dependencies = [
"memmap2",
"slotmap",
"tinyvec",
- "ttf-parser",
+ "ttf-parser 0.25.1",
]
[[package]]
@@ -7117,7 +7049,7 @@ dependencies = [
"serde",
"serde_json",
"serde_yaml",
- "strum_macros 0.27.2",
+ "strum_macros",
]
[[package]]
@@ -7220,6 +7152,7 @@ dependencies = [
"git",
"git_ui",
"gpui",
+ "menu",
"project",
"rand 0.9.2",
"recent_projects",
@@ -7318,6 +7251,17 @@ dependencies = [
"ztracing",
]
+[[package]]
+name = "gl_generator"
+version = "0.14.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "1a95dfc23a2b4a9a2f5ab41d194f8bfda3cabec42af4e39f08c339eb2a0c124d"
+dependencies = [
+ "khronos_api",
+ "log",
+ "xml-rs",
+]
+
[[package]]
name = "glob"
version = "0.3.3"
@@ -7361,6 +7305,15 @@ dependencies = [
"web-sys",
]
+[[package]]
+name = "glutin_wgl_sys"
+version = "0.6.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "2c4ee00b289aba7a9e5306d57c2d05499b2e5dc427f84ac708bd2c090212cf3e"
+dependencies = [
+ "gl_generator",
+]
+
[[package]]
name = "go_to_line"
version = "0.1.0"
@@ -7410,31 +7363,35 @@ dependencies = [
]
[[package]]
-name = "gpu-alloc"
-version = "0.6.0"
+name = "gpu-allocator"
+version = "0.28.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "fbcd2dba93594b227a1f57ee09b8b9da8892c34d55aa332e034a228d0fe6a171"
+checksum = "51255ea7cfaadb6c5f1528d43e92a82acb2b96c43365989a28b2d44ee38f8795"
dependencies = [
- "bitflags 2.10.0",
- "gpu-alloc-types",
+ "ash",
+ "hashbrown 0.16.1",
+ "log",
+ "presser",
+ "thiserror 2.0.17",
+ "windows 0.61.3",
]
[[package]]
-name = "gpu-alloc-ash"
-version = "0.7.0"
+name = "gpu-descriptor"
+version = "0.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "cbda7a18a29bc98c2e0de0435c347df935bf59489935d0cbd0b73f1679b6f79a"
+checksum = "b89c83349105e3732062a895becfc71a8f921bb71ecbbdd8ff99263e3b53a0ca"
dependencies = [
- "ash",
- "gpu-alloc-types",
- "tinyvec",
+ "bitflags 2.10.0",
+ "gpu-descriptor-types",
+ "hashbrown 0.15.5",
]
[[package]]
-name = "gpu-alloc-types"
-version = "0.3.0"
+name = "gpu-descriptor-types"
+version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "98ff03b468aa837d70984d55f5d3f846f6ec31fe34bbb97c4f85219caeee1ca4"
+checksum = "fdf242682df893b86f33a73828fb09ca4b2d3bb6cc95249707fc684d27484b91"
dependencies = [
"bitflags 2.10.0",
]
@@ -7450,9 +7407,6 @@ dependencies = [
"backtrace",
"bindgen 0.71.1",
"bitflags 2.10.0",
- "blade-graphics",
- "blade-macros",
- "blade-util",
"block",
"bytemuck",
"calloop",
@@ -7487,7 +7441,7 @@ dependencies = [
"lyon",
"mach2 0.5.0",
"media",
- "metal",
+ "metal 0.29.0",
"naga",
"num_cpus",
"objc",
@@ -7535,9 +7489,10 @@ dependencies = [
"wayland-protocols",
"wayland-protocols-plasma",
"wayland-protocols-wlr",
+ "wgpu",
"windows 0.61.3",
"windows-core 0.61.2",
- "windows-numerics",
+ "windows-numerics 0.2.0",
"windows-registry 0.5.3",
"x11-clipboard",
"x11rb",
@@ -7844,17 +7799,6 @@ version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dfa686283ad6dd069f105e5ab091b04c62850d3e4cf5d67debad1933f55023df"
-[[package]]
-name = "hidden-trait"
-version = "0.1.2"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "68ed9e850438ac849bec07e7d09fbe9309cbd396a5988c30b010580ce08860df"
-dependencies = [
- "proc-macro2",
- "quote",
- "syn 1.0.109",
-]
-
[[package]]
name = "hkdf"
version = "0.12.4"
@@ -8414,7 +8358,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4b0f83760fb341a774ed326568e19f5a863af4a952def8c39f9ab92fd95b88e5"
dependencies = [
"equivalent",
- "hashbrown 0.16.1",
+ "hashbrown 0.15.5",
"serde",
"serde_core",
]
@@ -8780,6 +8724,17 @@ dependencies = [
"wasm-bindgen",
]
+[[package]]
+name = "json5"
+version = "0.4.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "96b0db21af676c1ce64250b5f40f3ce2cf27e4e47cb91ed91eb6fe9350b430c1"
+dependencies = [
+ "pest",
+ "pest_derive",
+ "serde",
+]
+
[[package]]
name = "json_dotpath"
version = "1.1.0"
@@ -8862,9 +8817,9 @@ dependencies = [
[[package]]
name = "jupyter-protocol"
-version = "1.1.1"
+version = "1.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "073486929b8271fc18bd001fb8604f4b4d88c0fae134b88ed943c46c8826d9eb"
+checksum = "5fecdcf39420574a8df6fa5758cecafa99a4af93a80ca2a9a96596f9b301e3a5"
dependencies = [
"async-trait",
"bytes 1.11.1",
@@ -8939,8 +8894,15 @@ checksum = "6aae1df220ece3c0ada96b8153459b67eebe9ae9212258bb0134ae60416fdf76"
dependencies = [
"libc",
"libloading",
+ "pkg-config",
]
+[[package]]
+name = "khronos_api"
+version = "3.1.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "e2db585e1d738fc771bf08a151420d3ed193d9d895a36df7f6f8a9456b911ddc"
+
[[package]]
name = "kqueue"
version = "1.1.1"
@@ -9109,6 +9071,7 @@ dependencies = [
"aws-config",
"aws-credential-types",
"aws_http_client",
+ "base64 0.22.1",
"bedrock",
"chrono",
"client",
@@ -9516,6 +9479,12 @@ version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "241eaef5fd12c88705a01fc1066c48c4b36e0dd4377dcdc7ec3942cea7a69956"
+[[package]]
+name = "litrs"
+version = "1.0.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "11d3d7f243d5c5a8b9bb5d6dd2b1602c0cb0b9db1621bafc7ed66e35ff9fe092"
+
[[package]]
name = "livekit"
version = "0.7.8"
@@ -9899,6 +9868,7 @@ dependencies = [
"linkify",
"log",
"markup5ever_rcdom",
+ "mermaid-rs-renderer",
"pretty_assertions",
"pulldown-cmark 0.13.0",
"settings",
@@ -10045,7 +10015,7 @@ dependencies = [
"core-video",
"ctor",
"foreign-types 0.5.0",
- "metal",
+ "metal 0.29.0",
"objc",
]
@@ -10112,6 +10082,22 @@ dependencies = [
"syn 1.0.109",
]
+[[package]]
+name = "mermaid-rs-renderer"
+version = "0.2.0"
+source = "git+https://github.com/zed-industries/mermaid-rs-renderer?branch=fix-font-family-xml-escaping#d91961aa90bc7b0c09c87a13c91d48e2f05c468d"
+dependencies = [
+ "anyhow",
+ "fontdb 0.16.2",
+ "json5",
+ "once_cell",
+ "regex",
+ "serde",
+ "serde_json",
+ "thiserror 2.0.17",
+ "ttf-parser 0.20.0",
+]
+
[[package]]
name = "metal"
version = "0.29.0"
@@ -10127,6 +10113,21 @@ dependencies = [
"paste",
]
+[[package]]
+name = "metal"
+version = "0.33.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "c7047791b5bc903b8cd963014b355f71dc9864a9a0b727057676c1dcae5cbc15"
+dependencies = [
+ "bitflags 2.10.0",
+ "block",
+ "core-graphics-types 0.2.0",
+ "foreign-types 0.5.0",
+ "log",
+ "objc",
+ "paste",
+]
+
[[package]]
name = "migrator"
version = "0.1.0"
@@ -10256,12 +10257,6 @@ dependencies = [
"simd-adler32",
]
-[[package]]
-name = "mint"
-version = "0.5.9"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e53debba6bda7a793e5f99b8dacf19e626084f525f7829104ba9898f367d85ff"
-
[[package]]
name = "mio"
version = "0.8.11"
@@ -10392,25 +10387,26 @@ checksum = "1d87ecb2933e8aeadb3e3a02b828fed80a7528047e68b4f424523a0981a3a084"
[[package]]
name = "naga"
-version = "25.0.1"
+version = "28.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "2b977c445f26e49757f9aca3631c3b8b836942cb278d69a92e7b80d3b24da632"
+checksum = "618f667225063219ddfc61251087db8a9aec3c3f0950c916b614e403486f1135"
dependencies = [
"arrayvec",
"bit-set",
"bitflags 2.10.0",
+ "cfg-if",
"cfg_aliases 0.2.1",
"codespan-reporting 0.12.0",
"half",
- "hashbrown 0.15.5",
+ "hashbrown 0.16.1",
"hexf-parse",
"indexmap",
+ "libm",
"log",
"num-traits",
"once_cell",
"rustc-hash 1.1.0",
"spirv",
- "strum 0.26.3",
"thiserror 2.0.17",
"unicode-ident",
]
@@ -10915,19 +10911,6 @@ dependencies = [
"objc2-encode",
]
-[[package]]
-name = "objc2-app-kit"
-version = "0.3.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e6f29f568bec459b0ddff777cec4fe3fd8666d82d5a40ebd0ff7e66134f89bcc"
-dependencies = [
- "bitflags 2.10.0",
- "objc2",
- "objc2-core-foundation",
- "objc2-foundation",
- "objc2-quartz-core",
-]
-
[[package]]
name = "objc2-audio-toolbox"
version = "0.3.1"
@@ -11032,32 +11015,6 @@ dependencies = [
"objc2-foundation",
]
-[[package]]
-name = "objc2-quartz-core"
-version = "0.3.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "90ffb6a0cd5f182dc964334388560b12a57f7b74b3e2dec5e2722aa2dfb2ccd5"
-dependencies = [
- "bitflags 2.10.0",
- "objc2",
- "objc2-core-foundation",
- "objc2-foundation",
- "objc2-metal",
-]
-
-[[package]]
-name = "objc2-ui-kit"
-version = "0.3.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "25b1312ad7bc8a0e92adae17aa10f90aae1fb618832f9b993b022b591027daed"
-dependencies = [
- "bitflags 2.10.0",
- "objc2",
- "objc2-core-foundation",
- "objc2-foundation",
- "objc2-quartz-core",
-]
-
[[package]]
name = "objc_exception"
version = "0.1.2"
@@ -12593,6 +12550,12 @@ version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "925383efa346730478fb4838dbe9137d2a47675ad789c546d150a6e1dd4ab31c"
+[[package]]
+name = "presser"
+version = "0.3.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "e8cf8e6a8aa66ce33f63993ffc4ea4271eb5b0530a9002db8455ea6050c77bfa"
+
[[package]]
name = "prettier"
version = "0.1.0"
@@ -13381,6 +13344,12 @@ dependencies = [
"rand 0.9.2",
]
+[[package]]
+name = "range-alloc"
+version = "0.1.4"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "c3d6831663a5098ea164f89cff59c6284e95f4e3c76ce9848d4529f5ccca9bde"
+
[[package]]
name = "range-map"
version = "0.2.0"
@@ -13470,18 +13439,6 @@ version = "0.6.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "20675572f6f24e9e76ef639bc5552774ed45f1c30e2951e1e99c59888861c539"
-[[package]]
-name = "raw-window-metal"
-version = "0.4.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "76e8caa82e31bb98fee12fa8f051c94a6aa36b07cddb03f0d4fc558988360ff1"
-dependencies = [
- "cocoa 0.25.0",
- "core-graphics 0.23.2",
- "objc",
- "raw-window-handle",
-]
-
[[package]]
name = "rayon"
version = "1.11.0"
@@ -13860,6 +13817,12 @@ dependencies = [
"bytecheck",
]
+[[package]]
+name = "renderdoc-sys"
+version = "1.1.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "19b30a45b0cd0bcca8037f3d0dc3421eaf95327a17cad11964fb8179b4fc4832"
+
[[package]]
name = "repl"
version = "0.1.0"
@@ -14261,9 +14224,9 @@ dependencies = [
[[package]]
name = "runtimelib"
-version = "1.1.0"
+version = "1.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "25a8031614aa3913648d167bc69e2b9fda7731f2226ef588b50323c392bfeb58"
+checksum = "d80685459e1e5fa5603182058351ae91c98ca458dfef4e85f0a37be4f7cf1e6c"
dependencies = [
"async-dispatcher",
"async-std",
@@ -14580,7 +14543,7 @@ dependencies = [
"core_maths",
"log",
"smallvec",
- "ttf-parser",
+ "ttf-parser 0.25.1",
"unicode-bidi-mirroring",
"unicode-ccc",
"unicode-properties",
@@ -16143,9 +16106,6 @@ name = "strum"
version = "0.26.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8fec0f0aef304996cf250b31b5a10dee7980c85da9d759361292b8bca5a18f06"
-dependencies = [
- "strum_macros 0.26.4",
-]
[[package]]
name = "strum"
@@ -16153,20 +16113,7 @@ version = "0.27.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "af23d6f6c1a224baef9d3f61e287d2761385a5b88fdab4eb4c6f11aeb54c4bcf"
dependencies = [
- "strum_macros 0.27.2",
-]
-
-[[package]]
-name = "strum_macros"
-version = "0.26.4"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "4c6bee85a5a24955dc440386795aa378cd9cf82acd5f764469152d2270e581be"
-dependencies = [
- "heck 0.5.0",
- "proc-macro2",
- "quote",
- "rustversion",
- "syn 2.0.106",
+ "strum_macros",
]
[[package]]
@@ -18056,6 +18003,12 @@ version = "0.2.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e421abadd41a4225275504ea4d6566923418b7f05506fbc9c0fe86ba7396114b"
+[[package]]
+name = "ttf-parser"
+version = "0.20.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "17f77d76d837a7830fe1d4f12b7b4ba4192c1888001c7164257e4bc6d21d96b4"
+
[[package]]
name = "ttf-parser"
version = "0.25.1"
@@ -18406,7 +18359,7 @@ dependencies = [
"base64 0.22.1",
"data-url",
"flate2",
- "fontdb",
+ "fontdb 0.23.0",
"imagesize",
"kurbo",
"log",
@@ -19534,6 +19487,156 @@ version = "0.1.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a751b3277700db47d3e574514de2eced5e54dc8a5436a3bf7a0b248b2cee16f3"
+[[package]]
+name = "wgpu"
+version = "28.0.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f9cb534d5ffd109c7d1135f34cdae29e60eab94855a625dcfe1705f8bc7ad79f"
+dependencies = [
+ "arrayvec",
+ "bitflags 2.10.0",
+ "bytemuck",
+ "cfg-if",
+ "cfg_aliases 0.2.1",
+ "document-features",
+ "hashbrown 0.16.1",
+ "js-sys",
+ "log",
+ "naga",
+ "parking_lot",
+ "portable-atomic",
+ "profiling",
+ "raw-window-handle",
+ "smallvec",
+ "static_assertions",
+ "wasm-bindgen",
+ "wasm-bindgen-futures",
+ "web-sys",
+ "wgpu-core",
+ "wgpu-hal",
+ "wgpu-types",
+]
+
+[[package]]
+name = "wgpu-core"
+version = "28.0.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "8bb4c8b5db5f00e56f1f08869d870a0dff7c8bc7ebc01091fec140b0cf0211a9"
+dependencies = [
+ "arrayvec",
+ "bit-set",
+ "bit-vec",
+ "bitflags 2.10.0",
+ "bytemuck",
+ "cfg_aliases 0.2.1",
+ "document-features",
+ "hashbrown 0.16.1",
+ "indexmap",
+ "log",
+ "naga",
+ "once_cell",
+ "parking_lot",
+ "portable-atomic",
+ "profiling",
+ "raw-window-handle",
+ "rustc-hash 1.1.0",
+ "smallvec",
+ "thiserror 2.0.17",
+ "wgpu-core-deps-apple",
+ "wgpu-core-deps-emscripten",
+ "wgpu-core-deps-windows-linux-android",
+ "wgpu-hal",
+ "wgpu-types",
+]
+
+[[package]]
+name = "wgpu-core-deps-apple"
+version = "28.0.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "87b7b696b918f337c486bf93142454080a32a37832ba8a31e4f48221890047da"
+dependencies = [
+ "wgpu-hal",
+]
+
+[[package]]
+name = "wgpu-core-deps-emscripten"
+version = "28.0.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "34b251c331f84feac147de3c4aa3aa45112622a95dd7ee1b74384fa0458dbd79"
+dependencies = [
+ "wgpu-hal",
+]
+
+[[package]]
+name = "wgpu-core-deps-windows-linux-android"
+version = "28.0.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "68ca976e72b2c9964eb243e281f6ce7f14a514e409920920dcda12ae40febaae"
+dependencies = [
+ "wgpu-hal",
+]
+
+[[package]]
+name = "wgpu-hal"
+version = "28.0.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "293080d77fdd14d6b08a67c5487dfddbf874534bb7921526db56a7b75d7e3bef"
+dependencies = [
+ "android_system_properties",
+ "arrayvec",
+ "ash",
+ "bit-set",
+ "bitflags 2.10.0",
+ "block",
+ "bytemuck",
+ "cfg-if",
+ "cfg_aliases 0.2.1",
+ "core-graphics-types 0.2.0",
+ "glow",
+ "glutin_wgl_sys",
+ "gpu-allocator",
+ "gpu-descriptor",
+ "hashbrown 0.16.1",
+ "js-sys",
+ "khronos-egl",
+ "libc",
+ "libloading",
+ "log",
+ "metal 0.33.0",
+ "naga",
+ "ndk-sys",
+ "objc",
+ "once_cell",
+ "ordered-float 4.6.0",
+ "parking_lot",
+ "portable-atomic",
+ "portable-atomic-util",
+ "profiling",
+ "range-alloc",
+ "raw-window-handle",
+ "renderdoc-sys",
+ "smallvec",
+ "thiserror 2.0.17",
+ "wasm-bindgen",
+ "web-sys",
+ "wgpu-types",
+ "windows 0.62.2",
+ "windows-core 0.62.2",
+]
+
+[[package]]
+name = "wgpu-types"
+version = "28.0.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "e18308757e594ed2cd27dddbb16a139c42a683819d32a2e0b1b0167552f5840c"
+dependencies = [
+ "bitflags 2.10.0",
+ "bytemuck",
+ "js-sys",
+ "log",
+ "web-sys",
+]
+
[[package]]
name = "which"
version = "4.4.2"
@@ -19699,11 +19802,23 @@ version = "0.61.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9babd3a767a4c1aef6900409f85f5d53ce2544ccdfaa86dad48c91782c6d6893"
dependencies = [
- "windows-collections",
+ "windows-collections 0.2.0",
"windows-core 0.61.2",
- "windows-future",
+ "windows-future 0.2.1",
"windows-link 0.1.3",
- "windows-numerics",
+ "windows-numerics 0.2.0",
+]
+
+[[package]]
+name = "windows"
+version = "0.62.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "527fadee13e0c05939a6a05d5bd6eec6cd2e3dbd648b9f8e447c6518133d8580"
+dependencies = [
+ "windows-collections 0.3.2",
+ "windows-core 0.62.2",
+ "windows-future 0.3.2",
+ "windows-numerics 0.3.1",
]
[[package]]
@@ -19717,7 +19832,7 @@ dependencies = [
"rayon",
"thiserror 2.0.17",
"windows 0.61.3",
- "windows-future",
+ "windows-future 0.2.1",
]
[[package]]
@@ -19729,6 +19844,15 @@ dependencies = [
"windows-core 0.61.2",
]
+[[package]]
+name = "windows-collections"
+version = "0.3.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "23b2d95af1a8a14a3c7367e1ed4fc9c20e0a26e79551b1454d72583c97cc6610"
+dependencies = [
+ "windows-core 0.62.2",
+]
+
[[package]]
name = "windows-core"
version = "0.57.0"
@@ -19788,7 +19912,18 @@ checksum = "fc6a41e98427b19fe4b73c550f060b59fa592d7d686537eebf9385621bfbad8e"
dependencies = [
"windows-core 0.61.2",
"windows-link 0.1.3",
- "windows-threading",
+ "windows-threading 0.1.0",
+]
+
+[[package]]
+name = "windows-future"
+version = "0.3.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "e1d6f90251fe18a279739e78025bd6ddc52a7e22f921070ccdc67dde84c605cb"
+dependencies = [
+ "windows-core 0.62.2",
+ "windows-link 0.2.1",
+ "windows-threading 0.2.1",
]
[[package]]
@@ -19879,6 +20014,16 @@ dependencies = [
"windows-link 0.1.3",
]
+[[package]]
+name = "windows-numerics"
+version = "0.3.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "6e2e40844ac143cdb44aead537bbf727de9b044e107a0f1220392177d15b0f26"
+dependencies = [
+ "windows-core 0.62.2",
+ "windows-link 0.2.1",
+]
+
[[package]]
name = "windows-registry"
version = "0.4.0"
@@ -20111,6 +20256,15 @@ dependencies = [
"windows-link 0.1.3",
]
+[[package]]
+name = "windows-threading"
+version = "0.2.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "3949bd5b99cafdf1c7ca86b43ca564028dfe27d66958f2470940f73d86d75b37"
+dependencies = [
+ "windows-link 0.2.1",
+]
+
[[package]]
name = "windows_aarch64_gnullvm"
version = "0.42.2"
@@ -20819,6 +20973,12 @@ version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b9cc00251562a284751c9973bace760d86c0276c471b4be569fe6b068ee97a56"
+[[package]]
+name = "xml-rs"
+version = "0.8.28"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "3ae8337f8a065cfc972643663ea4279e04e7256de865aa66fe25cec5fb912d3f"
+
[[package]]
name = "xml5ever"
version = "0.18.1"
@@ -21045,7 +21205,6 @@ dependencies = [
"agent_servers",
"agent_settings",
"agent_ui",
- "agent_ui_v2",
"anyhow",
"ashpd",
"askpass",
@@ -9,7 +9,6 @@ members = [
"crates/agent_servers",
"crates/agent_settings",
"crates/agent_ui",
- "crates/agent_ui_v2",
"crates/ai_onboarding",
"crates/anthropic",
"crates/askpass",
@@ -255,7 +254,6 @@ action_log = { path = "crates/action_log" }
agent = { path = "crates/agent" }
activity_indicator = { path = "crates/activity_indicator" }
agent_ui = { path = "crates/agent_ui" }
-agent_ui_v2 = { path = "crates/agent_ui_v2" }
agent_settings = { path = "crates/agent_settings" }
agent_servers = { path = "crates/agent_servers" }
ai_onboarding = { path = "crates/ai_onboarding" }
@@ -286,7 +284,7 @@ collections = { path = "crates/collections", version = "0.1.0" }
command_palette = { path = "crates/command_palette" }
command_palette_hooks = { path = "crates/command_palette_hooks" }
component = { path = "crates/component" }
-component_preview = { path = "crates/component_preview" }
+component_preview = { path = "crates/component_preview" }
context_server = { path = "crates/context_server" }
copilot = { path = "crates/copilot" }
copilot_chat = { path = "crates/copilot_chat" }
@@ -356,6 +354,7 @@ markdown_preview = { path = "crates/markdown_preview" }
svg_preview = { path = "crates/svg_preview" }
media = { path = "crates/media" }
menu = { path = "crates/menu" }
+mermaid-rs-renderer = { git = "https://github.com/zed-industries/mermaid-rs-renderer", branch = "fix-font-family-xml-escaping", default-features = false }
migrator = { path = "crates/migrator" }
mistral = { path = "crates/mistral" }
multi_buffer = { path = "crates/multi_buffer" }
@@ -468,7 +467,9 @@ alacritty_terminal = { git = "https://github.com/zed-industries/alacritty", rev
any_vec = "0.14"
anyhow = "1.0.86"
arrayvec = { version = "0.7.4", features = ["serde"] }
-ashpd = { version = "0.12.1", default-features = false, features = ["async-std"] }
+ashpd = { version = "0.12.1", default-features = false, features = [
+ "async-std",
+] }
async-compat = "0.2.1"
async-compression = { version = "0.4", features = ["gzip", "futures-io"] }
async-dispatcher = "0.1"
@@ -494,9 +495,6 @@ backtrace = "0.3"
base64 = "0.22"
bincode = "1.2.1"
bitflags = "2.6.0"
-blade-graphics = { git = "https://github.com/kvark/blade", rev = "e3cf011ca18a6dfd907d1dedd93e85e21f005fe3" }
-blade-macros = { git = "https://github.com/kvark/blade", rev = "e3cf011ca18a6dfd907d1dedd93e85e21f005fe3" }
-blade-util = { git = "https://github.com/kvark/blade", rev = "e3cf011ca18a6dfd907d1dedd93e85e21f005fe3" }
brotli = "8.0.2"
bytes = "1.0"
cargo_metadata = "0.19"
@@ -555,7 +553,7 @@ itertools = "0.14.0"
json_dotpath = "1.1"
jsonschema = "0.37.0"
jsonwebtoken = "10.0"
-jupyter-protocol = "1.1.1"
+jupyter-protocol = "1.2.0"
jupyter-websocket-client = "1.0.0"
libc = "0.2"
libsqlite3-sys = { version = "0.30.1", features = ["bundled"] }
@@ -567,7 +565,7 @@ markup5ever_rcdom = "0.3.0"
metal = "0.29"
minidumper = "0.8"
moka = { version = "0.12.10", features = ["sync"] }
-naga = { version = "25.0", features = ["wgsl-in"] }
+naga = { version = "28.0", features = ["wgsl-in"] }
nanoid = "0.4"
nbformat = "1.0.0"
nix = "0.29"
@@ -596,7 +594,7 @@ objc2-foundation = { version = "=0.3.1", default-features = false, features = [
"NSUndoManager",
"NSValue",
"objc2-core-foundation",
- "std"
+ "std",
] }
open = "5.0.0"
ordered-float = "2.1.1"
@@ -638,7 +636,7 @@ reqwest = { git = "https://github.com/zed-industries/reqwest.git", rev = "c15662
"stream",
], package = "zed-reqwest", version = "0.12.15-zed" }
rsa = "0.9.6"
-runtimelib = { version = "1.1.0", default-features = false, features = [
+runtimelib = { version = "1.2.0", default-features = false, features = [
"async-dispatcher-runtime", "aws-lc-rs"
] }
rust-embed = { version = "8.4", features = ["include-exclude"] }
@@ -689,9 +687,16 @@ time = { version = "0.3", features = [
tiny_http = "0.8"
tokio = { version = "1" }
tokio-tungstenite = { version = "0.26", features = ["__rustls-tls"] }
-tokio-socks = { version = "0.5.2", default-features = false, features = ["futures-io", "tokio"] }
+tokio-socks = { version = "0.5.2", default-features = false, features = [
+ "futures-io",
+ "tokio",
+] }
toml = "0.8"
-toml_edit = { version = "0.22", default-features = false, features = ["display", "parse", "serde"] }
+toml_edit = { version = "0.22", default-features = false, features = [
+ "display",
+ "parse",
+ "serde",
+] }
tower-http = "0.4.4"
tree-sitter = { version = "0.26", features = ["wasm"] }
tree-sitter-bash = "0.25.1"
@@ -740,6 +745,7 @@ wasmtime = { version = "33", default-features = false, features = [
wasmtime-wasi = "33"
wax = "0.7"
which = "6.0.0"
+wgpu = "28.0"
windows-core = "0.61"
yawc = "0.2.5"
zeroize = "1.8"
@@ -0,0 +1,4 @@
+<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
+<path d="M10.0001 6.66669L13.3334 10L10.0001 13.3334" stroke="#C6CAD0" stroke-width="1.25" stroke-linecap="round" stroke-linejoin="round"/>
+<path d="M2.66675 5V7.33335C2.66675 8.0406 2.94765 8.71888 3.44775 9.21897C3.94785 9.71907 4.62615 10 5.33345 10H13.3334" stroke="#C6CAD0" stroke-width="1.25" stroke-linecap="round" stroke-linejoin="round"/>
+</svg>
@@ -931,8 +931,6 @@
"button": true,
// Where to dock the agent panel. Can be 'left', 'right' or 'bottom'.
"dock": "right",
- // Where to dock the agents panel. Can be 'left' or 'right'.
- "agents_panel_dock": "left",
// Default width when the agent panel is docked to the left or right.
"default_width": 640,
// Default height when the agent panel is docked to the bottom.
@@ -727,6 +727,14 @@ mod test_support {
}
}
+ fn set_title(
+ &self,
+ _session_id: &acp::SessionId,
+ _cx: &App,
+ ) -> Option<Rc<dyn AgentSessionSetTitle>> {
+ Some(Rc::new(StubAgentSessionSetTitle))
+ }
+
fn truncate(
&self,
_session_id: &agent_client_protocol::SessionId,
@@ -740,6 +748,14 @@ mod test_support {
}
}
+ struct StubAgentSessionSetTitle;
+
+ impl AgentSessionSetTitle for StubAgentSessionSetTitle {
+ fn run(&self, _title: SharedString, _cx: &mut App) -> Task<Result<()>> {
+ Task::ready(Ok(()))
+ }
+ }
+
struct StubAgentSessionEditor;
impl AgentSessionTruncate for StubAgentSessionEditor {
@@ -38,6 +38,7 @@ futures.workspace = true
git.workspace = true
gpui.workspace = true
handlebars = { workspace = true, features = ["rust-embed"] }
+heck.workspace = true
html_to_markdown.workspace = true
http_client.workspace = true
indoc.workspace = true
@@ -1395,12 +1395,19 @@ impl acp_thread::AgentConnection for NativeAgentConnection {
fn set_title(
&self,
session_id: &acp::SessionId,
- _cx: &App,
+ cx: &App,
) -> Option<Rc<dyn acp_thread::AgentSessionSetTitle>> {
- Some(Rc::new(NativeAgentSessionSetTitle {
- connection: self.clone(),
- session_id: session_id.clone(),
- }) as _)
+ self.0.read_with(cx, |agent, _cx| {
+ agent
+ .sessions
+ .get(session_id)
+ .filter(|s| !s.thread.read(cx).is_subagent())
+ .map(|session| {
+ Rc::new(NativeAgentSessionSetTitle {
+ thread: session.thread.clone(),
+ }) as _
+ })
+ })
}
fn session_list(&self, cx: &mut App) -> Option<Rc<dyn AgentSessionList>> {
@@ -1559,17 +1566,13 @@ impl acp_thread::AgentSessionRetry for NativeAgentSessionRetry {
}
struct NativeAgentSessionSetTitle {
- connection: NativeAgentConnection,
- session_id: acp::SessionId,
+ thread: Entity<Thread>,
}
impl acp_thread::AgentSessionSetTitle for NativeAgentSessionSetTitle {
fn run(&self, title: SharedString, cx: &mut App) -> Task<Result<()>> {
- let Some(session) = self.connection.0.read(cx).sessions.get(&self.session_id) else {
- return Task::ready(Err(anyhow!("session not found")));
- };
- let thread = session.thread.clone();
- thread.update(cx, |thread, cx| thread.set_title(title, cx));
+ self.thread
+ .update(cx, |thread, cx| thread.set_title(title, cx));
Task::ready(Ok(()))
}
}
@@ -88,7 +88,6 @@ fn eval_extract_handle_command_output() {
// claude-sonnet-4 | 0.97 (2025-06-14)
// gemini-2.5-pro-06-05 | 0.98 (2025-06-16)
// gemini-2.5-flash | 0.11 (2025-05-22)
- // gpt-4.1 | 1.00 (2025-05-22)
let input_file_path = "root/blame.rs";
let input_file_content = include_str!("evals/fixtures/extract_handle_command_output/before.rs");
@@ -164,7 +163,6 @@ fn eval_delete_run_git_blame() {
// claude-sonnet-4 | 0.96 (2025-06-14)
// gemini-2.5-pro-06-05 | 1.0 (2025-06-16)
// gemini-2.5-flash |
- // gpt-4.1 |
let input_file_path = "root/blame.rs";
let input_file_content = include_str!("evals/fixtures/delete_run_git_blame/before.rs");
@@ -230,7 +228,6 @@ fn eval_translate_doc_comments() {
// claude-sonnet-4 | 1.0 (2025-06-14)
// gemini-2.5-pro-preview-03-25 | 1.0 (2025-05-22)
// gemini-2.5-flash-preview-04-17 |
- // gpt-4.1 |
let input_file_path = "root/canvas.rs";
let input_file_content = include_str!("evals/fixtures/translate_doc_comments/before.rs");
@@ -295,7 +292,6 @@ fn eval_use_wasi_sdk_in_compile_parser_to_wasm() {
// claude-sonnet-4 | 0.11 (2025-06-14)
// gemini-2.5-pro-preview-latest | 0.99 (2025-06-16)
// gemini-2.5-flash-preview-04-17 |
- // gpt-4.1 |
let input_file_path = "root/lib.rs";
let input_file_content =
@@ -419,7 +415,6 @@ fn eval_disable_cursor_blinking() {
// claude-sonnet-4 | 0.81 (2025-07-14)
// gemini-2.5-pro | 0.95 (2025-07-14)
// gemini-2.5-flash-preview-04-17 | 0.78 (2025-07-14)
- // gpt-4.1 | 0.00 (2025-07-14) (follows edit_description too literally)
let input_file_path = "root/editor.rs";
let input_file_content = include_str!("evals/fixtures/disable_cursor_blinking/before.rs");
@@ -509,7 +504,6 @@ fn eval_from_pixels_constructor() {
// claude-4.0-sonnet | 2025-06-14 | 0.99
// claude-3.7-sonnet | 2025-06-14 | 0.88
// gemini-2.5-pro-preview-06-05 | 2025-06-16 | 0.98
- // gpt-4.1 |
let input_file_path = "root/canvas.rs";
let input_file_content = include_str!("evals/fixtures/from_pixels_constructor/before.rs");
@@ -718,7 +712,6 @@ fn eval_zode() {
// claude-sonnet-4 | 1.0 (2025-06-14)
// gemini-2.5-pro-preview-03-25 | 1.0 (2025-05-22)
// gemini-2.5-flash-preview-04-17 | 1.0 (2025-05-22)
- // gpt-4.1 | 1.0 (2025-05-22)
let input_file_path = "root/zode.py";
let input_content = None;
@@ -823,7 +816,6 @@ fn eval_add_overwrite_test() {
// claude-sonnet-4 | 0.07 (2025-06-14)
// gemini-2.5-pro-preview-03-25 | 0.35 (2025-05-22)
// gemini-2.5-flash-preview-04-17 |
- // gpt-4.1 |
let input_file_path = "root/action_log.rs";
let input_file_content = include_str!("evals/fixtures/add_overwrite_test/before.rs");
@@ -1057,11 +1049,6 @@ fn eval_create_empty_file() {
// claude-sonnet-4 | 1.00 (2025-06-14)
// gemini-2.5-pro-preview-03-25 | 1.00 (2025-05-21)
// gemini-2.5-flash-preview-04-17 | 1.00 (2025-05-21)
- // gpt-4.1 | 1.00 (2025-05-21)
- //
- //
- // TODO: gpt-4.1-mini errored 38 times:
- // "data did not match any variant of untagged enum ResponseStreamResult"
let input_file_content = None;
let expected_output_content = String::new();
@@ -1446,6 +1446,188 @@ async fn test_mcp_tools(cx: &mut TestAppContext) {
events.collect::<Vec<_>>().await;
}
+#[gpui::test]
+async fn test_mcp_tool_result_displayed_when_server_disconnected(cx: &mut TestAppContext) {
+ let ThreadTest {
+ model,
+ thread,
+ context_server_store,
+ fs,
+ ..
+ } = setup(cx, TestModel::Fake).await;
+ let fake_model = model.as_fake();
+
+ // Setup settings to allow MCP tools
+ fs.insert_file(
+ paths::settings_file(),
+ json!({
+ "agent": {
+ "always_allow_tool_actions": true,
+ "profiles": {
+ "test": {
+ "name": "Test Profile",
+ "enable_all_context_servers": true,
+ "tools": {}
+ },
+ }
+ }
+ })
+ .to_string()
+ .into_bytes(),
+ )
+ .await;
+ cx.run_until_parked();
+ thread.update(cx, |thread, cx| {
+ thread.set_profile(AgentProfileId("test".into()), cx)
+ });
+
+ // Setup a context server with a tool
+ let mut mcp_tool_calls = setup_context_server(
+ "github_server",
+ vec![context_server::types::Tool {
+ name: "issue_read".into(),
+ description: Some("Read a GitHub issue".into()),
+ input_schema: json!({
+ "type": "object",
+ "properties": {
+ "issue_url": { "type": "string" }
+ }
+ }),
+ output_schema: None,
+ annotations: None,
+ }],
+ &context_server_store,
+ cx,
+ );
+
+ // Send a message and have the model call the MCP tool
+ let events = thread.update(cx, |thread, cx| {
+ thread
+ .send(UserMessageId::new(), ["Read issue #47404"], cx)
+ .unwrap()
+ });
+ cx.run_until_parked();
+
+ // Verify the MCP tool is available to the model
+ let completion = fake_model.pending_completions().pop().unwrap();
+ assert_eq!(
+ tool_names_for_completion(&completion),
+ vec!["issue_read"],
+ "MCP tool should be available"
+ );
+
+ // Simulate the model calling the MCP tool
+ fake_model.send_last_completion_stream_event(LanguageModelCompletionEvent::ToolUse(
+ LanguageModelToolUse {
+ id: "tool_1".into(),
+ name: "issue_read".into(),
+ raw_input: json!({"issue_url": "https://github.com/zed-industries/zed/issues/47404"})
+ .to_string(),
+ input: json!({"issue_url": "https://github.com/zed-industries/zed/issues/47404"}),
+ is_input_complete: true,
+ thought_signature: None,
+ },
+ ));
+ fake_model.end_last_completion_stream();
+ cx.run_until_parked();
+
+ // The MCP server receives the tool call and responds with content
+ let expected_tool_output = "Issue #47404: Tool call results are cleared upon app close";
+ let (tool_call_params, tool_call_response) = mcp_tool_calls.next().await.unwrap();
+ assert_eq!(tool_call_params.name, "issue_read");
+ tool_call_response
+ .send(context_server::types::CallToolResponse {
+ content: vec![context_server::types::ToolResponseContent::Text {
+ text: expected_tool_output.into(),
+ }],
+ is_error: None,
+ meta: None,
+ structured_content: None,
+ })
+ .unwrap();
+ cx.run_until_parked();
+
+ // After tool completes, the model continues with a new completion request
+ // that includes the tool results. We need to respond to this.
+ let _completion = fake_model.pending_completions().pop().unwrap();
+ fake_model.send_last_completion_stream_text_chunk("I found the issue!");
+ fake_model
+ .send_last_completion_stream_event(LanguageModelCompletionEvent::Stop(StopReason::EndTurn));
+ fake_model.end_last_completion_stream();
+ events.collect::<Vec<_>>().await;
+
+ // Verify the tool result is stored in the thread by checking the markdown output.
+ // The tool result is in the first assistant message (not the last one, which is
+ // the model's response after the tool completed).
+ thread.update(cx, |thread, _cx| {
+ let markdown = thread.to_markdown();
+ assert!(
+ markdown.contains("**Tool Result**: issue_read"),
+ "Thread should contain tool result header"
+ );
+ assert!(
+ markdown.contains(expected_tool_output),
+ "Thread should contain tool output: {}",
+ expected_tool_output
+ );
+ });
+
+ // Simulate app restart: disconnect the MCP server.
+ // After restart, the MCP server won't be connected yet when the thread is replayed.
+ context_server_store.update(cx, |store, cx| {
+ let _ = store.stop_server(&ContextServerId("github_server".into()), cx);
+ });
+ cx.run_until_parked();
+
+ // Replay the thread (this is what happens when loading a saved thread)
+ let mut replay_events = thread.update(cx, |thread, cx| thread.replay(cx));
+
+ let mut found_tool_call = None;
+ let mut found_tool_call_update_with_output = None;
+
+ while let Some(event) = replay_events.next().await {
+ let event = event.unwrap();
+ match &event {
+ ThreadEvent::ToolCall(tc) if tc.tool_call_id.to_string() == "tool_1" => {
+ found_tool_call = Some(tc.clone());
+ }
+ ThreadEvent::ToolCallUpdate(acp_thread::ToolCallUpdate::UpdateFields(update))
+ if update.tool_call_id.to_string() == "tool_1" =>
+ {
+ if update.fields.raw_output.is_some() {
+ found_tool_call_update_with_output = Some(update.clone());
+ }
+ }
+ _ => {}
+ }
+ }
+
+ // The tool call should be found
+ assert!(
+ found_tool_call.is_some(),
+ "Tool call should be emitted during replay"
+ );
+
+ assert!(
+ found_tool_call_update_with_output.is_some(),
+ "ToolCallUpdate with raw_output should be emitted even when MCP server is disconnected."
+ );
+
+ let update = found_tool_call_update_with_output.unwrap();
+ assert_eq!(
+ update.fields.raw_output,
+ Some(expected_tool_output.into()),
+ "raw_output should contain the saved tool result"
+ );
+
+ // Also verify the status is correct (completed, not failed)
+ assert_eq!(
+ update.fields.status,
+ Some(acp::ToolCallStatus::Completed),
+ "Tool call status should reflect the original completion status"
+ );
+}
+
#[gpui::test]
async fn test_mcp_tool_truncation(cx: &mut TestAppContext) {
let ThreadTest {
@@ -1585,6 +1767,23 @@ async fn test_mcp_tool_truncation(cx: &mut TestAppContext) {
cx,
);
+ // Server with spaces in name - tests snake_case conversion for API compatibility
+ let _server4_calls = setup_context_server(
+ "Azure DevOps",
+ vec![context_server::types::Tool {
+ name: "echo".into(), // Also conflicts - will be disambiguated as azure_dev_ops_echo
+ description: None,
+ input_schema: serde_json::to_value(EchoTool::input_schema(
+ LanguageModelToolSchemaFormat::JsonSchema,
+ ))
+ .unwrap(),
+ output_schema: None,
+ annotations: None,
+ }],
+ &context_server_store,
+ cx,
+ );
+
thread
.update(cx, |thread, cx| {
thread.send(UserMessageId::new(), ["Go"], cx)
@@ -1595,6 +1794,7 @@ async fn test_mcp_tool_truncation(cx: &mut TestAppContext) {
assert_eq!(
tool_names_for_completion(&completion),
vec![
+ "azure_dev_ops_echo",
"bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
"cccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc",
"delay",
@@ -32,6 +32,7 @@ use futures::{
use gpui::{
App, AppContext, AsyncApp, Context, Entity, EventEmitter, SharedString, Task, WeakEntity,
};
+use heck::ToSnakeCase as _;
use language_model::{
LanguageModel, LanguageModelCompletionError, LanguageModelCompletionEvent, LanguageModelId,
LanguageModelImage, LanguageModelProviderId, LanguageModelRegistry, LanguageModelRequest,
@@ -982,6 +983,20 @@ impl Thread {
stream: &ThreadEventStream,
cx: &mut Context<Self>,
) {
+ // Extract saved output and status first, so they're available even if tool is not found
+ let output = tool_result
+ .as_ref()
+ .and_then(|result| result.output.clone());
+ let status = tool_result
+ .as_ref()
+ .map_or(acp::ToolCallStatus::Failed, |result| {
+ if result.is_error {
+ acp::ToolCallStatus::Failed
+ } else {
+ acp::ToolCallStatus::Completed
+ }
+ });
+
let tool = self.tools.get(tool_use.name.as_ref()).cloned().or_else(|| {
self.context_server_registry
.read(cx)
@@ -996,14 +1011,25 @@ impl Thread {
});
let Some(tool) = tool else {
+ // Tool not found (e.g., MCP server not connected after restart),
+ // but still display the saved result if available.
+ // We need to send both ToolCall and ToolCallUpdate events because the UI
+ // only converts raw_output to displayable content in update_fields, not from_acp.
stream
.0
.unbounded_send(Ok(ThreadEvent::ToolCall(
acp::ToolCall::new(tool_use.id.to_string(), tool_use.name.to_string())
- .status(acp::ToolCallStatus::Failed)
+ .status(status)
.raw_input(tool_use.input.clone()),
)))
.ok();
+ stream.update_tool_call_fields(
+ &tool_use.id,
+ acp::ToolCallUpdateFields::new()
+ .status(status)
+ .raw_output(output),
+ None,
+ );
return;
};
@@ -1017,9 +1043,6 @@ impl Thread {
tool_use.input.clone(),
);
- let output = tool_result
- .as_ref()
- .and_then(|result| result.output.clone());
if let Some(output) = output.clone() {
// For replay, we use a dummy cancellation receiver since the tool already completed
let (_cancellation_tx, cancellation_rx) = watch::channel(false);
@@ -1036,17 +1059,7 @@ impl Thread {
stream.update_tool_call_fields(
&tool_use.id,
acp::ToolCallUpdateFields::new()
- .status(
- tool_result
- .as_ref()
- .map_or(acp::ToolCallStatus::Failed, |result| {
- if result.is_error {
- acp::ToolCallStatus::Failed
- } else {
- acp::ToolCallStatus::Completed
- }
- }),
- )
+ .status(status)
.raw_output(output),
None,
);
@@ -2454,13 +2467,14 @@ impl Thread {
}
// When there are duplicate tool names, disambiguate by prefixing them
- // with the server ID. In the rare case there isn't enough space for the
- // disambiguated tool name, keep only the last tool with this name.
+ // with the server ID (converted to snake_case for API compatibility).
+ // In the rare case there isn't enough space for the disambiguated tool
+ // name, keep only the last tool with this name.
for (server_id, tool_name, tool) in context_server_tools {
if duplicate_tool_names.contains(&tool_name) {
let available = MAX_TOOL_NAME_LENGTH.saturating_sub(tool_name.len());
if available >= 2 {
- let mut disambiguated = server_id.0.to_string();
+ let mut disambiguated = server_id.0.to_snake_case();
disambiguated.truncate(available - 1);
disambiguated.push('_');
disambiguated.push_str(&tool_name);
@@ -528,7 +528,7 @@ mod tests {
use crate::tools::{DeletePathTool, EditFileTool, FetchTool, TerminalTool};
use agent_settings::{AgentProfileId, CompiledRegex, InvalidRegexPattern, ToolRules};
use gpui::px;
- use settings::{DefaultAgentView, DockPosition, DockSide, NotifyWhenAgentWaiting};
+ use settings::{DefaultAgentView, DockPosition, NotifyWhenAgentWaiting};
use std::sync::Arc;
fn test_agent_settings(tool_permissions: ToolPermissions) -> AgentSettings {
@@ -536,7 +536,6 @@ mod tests {
enabled: true,
button: true,
dock: DockPosition::Right,
- agents_panel_dock: DockSide::Left,
default_width: px(300.),
default_height: px(600.),
default_model: None,
@@ -380,6 +380,12 @@ impl AnyAgentTool for ContextServerTool {
}
};
+ if response.is_error == Some(true) {
+ let error_message: String =
+ response.content.iter().filter_map(|c| c.text()).collect();
+ anyhow::bail!(error_message);
+ }
+
let mut result = String::new();
for content in response.content {
match content {
@@ -11,7 +11,7 @@ use project::DisableAiSettings;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use settings::{
- DefaultAgentView, DockPosition, DockSide, LanguageModelParameters, LanguageModelSelection,
+ DefaultAgentView, DockPosition, LanguageModelParameters, LanguageModelSelection,
NotifyWhenAgentWaiting, RegisterSetting, Settings, ToolPermissionMode,
};
@@ -26,7 +26,6 @@ pub struct AgentSettings {
pub enabled: bool,
pub button: bool,
pub dock: DockPosition,
- pub agents_panel_dock: DockSide,
pub default_width: Pixels,
pub default_height: Pixels,
pub default_model: Option<LanguageModelSelection>,
@@ -407,7 +406,6 @@ impl Settings for AgentSettings {
enabled: agent.enabled.unwrap(),
button: agent.button.unwrap(),
dock: agent.dock.unwrap(),
- agents_panel_dock: agent.agents_panel_dock.unwrap(),
default_width: px(agent.default_width.unwrap()),
default_height: px(agent.default_height.unwrap()),
default_model: Some(agent.default_model.unwrap()),
@@ -632,36 +632,27 @@ mod tests {
vec![
"Claude 3.7 Sonnet",
"Claude 3.7 Sonnet Thinking",
- "gpt-4.1",
- "gpt-4.1-nano",
+ "gpt-5",
+ "gpt-5-mini",
],
),
- ("openai", vec!["gpt-3.5-turbo", "gpt-4.1", "gpt-4.1-nano"]),
+ ("openai", vec!["gpt-3.5-turbo", "gpt-5", "gpt-5-mini"]),
("ollama", vec!["mistral", "deepseek"]),
]);
// Results should preserve models order whenever possible.
- // In the case below, `zed/gpt-4.1` and `openai/gpt-4.1` have identical
- // similarity scores, but `zed/gpt-4.1` was higher in the models list,
+ // In the case below, `zed/gpt-5-mini` and `openai/gpt-5-mini` have identical
+ // similarity scores, but `zed/gpt-5-mini` was higher in the models list,
// so it should appear first in the results.
- let results = fuzzy_search(models.clone(), "41".into(), cx.executor()).await;
+ let results = fuzzy_search(models.clone(), "mini".into(), cx.executor()).await;
assert_models_eq(
results,
- vec![
- ("zed", vec!["gpt-4.1", "gpt-4.1-nano"]),
- ("openai", vec!["gpt-4.1", "gpt-4.1-nano"]),
- ],
+ vec![("zed", vec!["gpt-5-mini"]), ("openai", vec!["gpt-5-mini"])],
);
- // Fuzzy search
- let results = fuzzy_search(models.clone(), "4n".into(), cx.executor()).await;
- assert_models_eq(
- results,
- vec![
- ("zed", vec!["gpt-4.1-nano"]),
- ("openai", vec!["gpt-4.1-nano"]),
- ],
- );
+ // Fuzzy search - test with specific model name
+ let results = fuzzy_search(models.clone(), "mistral".into(), cx.executor()).await;
+ assert_models_eq(results, vec![("ollama", vec!["mistral"])]);
}
#[gpui::test]
@@ -723,7 +723,7 @@ impl AcpServerView {
});
}
- let mut subscriptions = vec![
+ let subscriptions = vec![
cx.subscribe_in(&thread, window, Self::handle_thread_event),
cx.observe(&action_log, |_, _, cx| cx.notify()),
];
@@ -755,18 +755,6 @@ impl AcpServerView {
.detach();
}
- let title_editor = if thread.update(cx, |thread, cx| thread.can_set_title(cx)) {
- let editor = cx.new(|cx| {
- let mut editor = Editor::single_line(window, cx);
- editor.set_text(thread.read(cx).title(), window, cx);
- editor
- });
- subscriptions.push(cx.subscribe_in(&editor, window, Self::handle_title_editor_event));
- Some(editor)
- } else {
- None
- };
-
let profile_selector: Option<Rc<agent::NativeAgentConnection>> =
connection.clone().downcast();
let profile_selector = profile_selector
@@ -802,7 +790,6 @@ impl AcpServerView {
agent_display_name,
self.workspace.clone(),
entry_view_state,
- title_editor,
config_options_view,
mode_selector,
model_selector,
@@ -984,20 +971,6 @@ impl AcpServerView {
}
}
- pub fn handle_title_editor_event(
- &mut self,
- title_editor: &Entity<Editor>,
- event: &EditorEvent,
- window: &mut Window,
- cx: &mut Context<Self>,
- ) {
- if let Some(active) = self.active_thread() {
- active.update(cx, |active, cx| {
- active.handle_title_editor_event(title_editor, event, window, cx);
- });
- }
- }
-
pub fn is_loading(&self) -> bool {
matches!(self.server_state, ServerState::Loading { .. })
}
@@ -1181,10 +1154,8 @@ impl AcpServerView {
}
AcpThreadEvent::TitleUpdated => {
let title = thread.read(cx).title();
- if let Some(title_editor) = self
- .thread_view(&thread_id)
- .and_then(|active| active.read(cx).title_editor.clone())
- {
+ if let Some(active_thread) = self.thread_view(&thread_id) {
+ let title_editor = active_thread.read(cx).title_editor.clone();
title_editor.update(cx, |editor, cx| {
if editor.text(cx) != title {
editor.set_text(title, window, cx);
@@ -2180,30 +2151,40 @@ impl AcpServerView {
self.show_notification(caption, icon, window, cx);
}
- fn agent_is_visible(&self, window: &Window, cx: &App) -> bool {
- if window.is_window_active() {
- let workspace_is_foreground = window
- .root::<MultiWorkspace>()
- .flatten()
- .and_then(|mw| {
- let mw = mw.read(cx);
- self.workspace.upgrade().map(|ws| mw.workspace() == &ws)
- })
- .unwrap_or(true);
+ fn agent_panel_visible(&self, multi_workspace: &Entity<MultiWorkspace>, cx: &App) -> bool {
+ let Some(workspace) = self.workspace.upgrade() else {
+ return false;
+ };
- if workspace_is_foreground {
- if let Some(workspace) = self.workspace.upgrade() {
- return AgentPanel::is_visible(&workspace, cx);
- }
- }
+ multi_workspace.read(cx).workspace() == &workspace && AgentPanel::is_visible(&workspace, cx)
+ }
+
+ fn agent_status_visible(&self, window: &Window, cx: &App) -> bool {
+ if !window.is_window_active() {
+ return false;
}
- false
+ if let Some(multi_workspace) = window.root::<MultiWorkspace>().flatten() {
+ multi_workspace.read(cx).is_sidebar_open()
+ || self.agent_panel_visible(&multi_workspace, cx)
+ } else {
+ self.workspace
+ .upgrade()
+ .is_some_and(|workspace| AgentPanel::is_visible(&workspace, cx))
+ }
}
fn play_notification_sound(&self, window: &Window, cx: &mut App) {
let settings = AgentSettings::get_global(cx);
- if settings.play_sound_when_agent_done && !self.agent_is_visible(window, cx) {
+ let visible = window.is_window_active()
+ && if let Some(mw) = window.root::<MultiWorkspace>().flatten() {
+ self.agent_panel_visible(&mw, cx)
+ } else {
+ self.workspace
+ .upgrade()
+ .is_some_and(|workspace| AgentPanel::is_visible(&workspace, cx))
+ };
+ if settings.play_sound_when_agent_done && !visible {
Audio::play_sound(Sound::AgentDone, cx);
}
}
@@ -2221,7 +2202,7 @@ impl AcpServerView {
let settings = AgentSettings::get_global(cx);
- let should_notify = !self.agent_is_visible(window, cx);
+ let should_notify = !self.agent_status_visible(window, cx);
if !should_notify {
return;
@@ -2325,7 +2306,7 @@ impl AcpServerView {
let pop_up_weak = pop_up.downgrade();
cx.observe_window_activation(window, move |this, window, cx| {
- if this.agent_is_visible(window, cx)
+ if this.agent_status_visible(window, cx)
&& let Some(pop_up) = pop_up_weak.upgrade()
{
pop_up.update(cx, |notification, cx| {
@@ -5799,4 +5780,49 @@ pub(crate) mod tests {
"Missing deny pattern option"
);
}
+
+ #[gpui::test]
+ async fn test_manually_editing_title_updates_acp_thread_title(cx: &mut TestAppContext) {
+ init_test(cx);
+
+ let (thread_view, cx) = setup_thread_view(StubAgentServer::default_response(), cx).await;
+
+ let active = active_thread(&thread_view, cx);
+ let title_editor = cx.read(|cx| active.read(cx).title_editor.clone());
+ let thread = cx.read(|cx| active.read(cx).thread.clone());
+
+ title_editor.read_with(cx, |editor, cx| {
+ assert!(!editor.read_only(cx));
+ });
+
+ title_editor.update_in(cx, |editor, window, cx| {
+ editor.set_text("My Custom Title", window, cx);
+ });
+ cx.run_until_parked();
+
+ title_editor.read_with(cx, |editor, cx| {
+ assert_eq!(editor.text(cx), "My Custom Title");
+ });
+ thread.read_with(cx, |thread, _cx| {
+ assert_eq!(thread.title().as_ref(), "My Custom Title");
+ });
+ }
+
+ #[gpui::test]
+ async fn test_title_editor_is_read_only_when_set_title_unsupported(cx: &mut TestAppContext) {
+ init_test(cx);
+
+ let (thread_view, cx) =
+ setup_thread_view(StubAgentServer::new(ResumeOnlyAgentConnection), cx).await;
+
+ let active = active_thread(&thread_view, cx);
+ let title_editor = cx.read(|cx| active.read(cx).title_editor.clone());
+
+ title_editor.read_with(cx, |editor, cx| {
+ assert!(
+ editor.read_only(cx),
+ "Title editor should be read-only when the connection does not support set_title"
+ );
+ });
+ }
}
@@ -176,7 +176,7 @@ pub struct AcpThreadView {
pub focus_handle: FocusHandle,
pub workspace: WeakEntity<Workspace>,
pub entry_view_state: Entity<EntryViewState>,
- pub title_editor: Option<Entity<Editor>>,
+ pub title_editor: Entity<Editor>,
pub config_options_view: Option<Entity<ConfigOptionsView>>,
pub mode_selector: Option<Entity<ModeSelector>>,
pub model_selector: Option<Entity<AcpModelSelectorPopover>>,
@@ -266,7 +266,6 @@ impl AcpThreadView {
agent_display_name: SharedString,
workspace: WeakEntity<Workspace>,
entry_view_state: Entity<EntryViewState>,
- title_editor: Option<Entity<Editor>>,
config_options_view: Option<Entity<ConfigOptionsView>>,
mode_selector: Option<Entity<ModeSelector>>,
model_selector: Option<Entity<AcpModelSelectorPopover>>,
@@ -332,6 +331,18 @@ impl AcpThreadView {
&& project.upgrade().is_some_and(|p| p.read(cx).is_local())
&& agent_name == "Codex";
+ let title_editor = {
+ let can_edit = thread.update(cx, |thread, cx| thread.can_set_title(cx));
+ let editor = cx.new(|cx| {
+ let mut editor = Editor::single_line(window, cx);
+ editor.set_text(thread.read(cx).title(), window, cx);
+ editor.set_read_only(!can_edit);
+ editor
+ });
+ subscriptions.push(cx.subscribe_in(&editor, window, Self::handle_title_editor_event));
+ editor
+ };
+
subscriptions.push(cx.subscribe_in(
&entry_view_state,
window,
@@ -2303,9 +2314,10 @@ impl AcpThreadView {
return None;
};
- let title = self.thread.read(cx).title();
let server_view = self.server_view.clone();
+ let is_done = self.thread.read(cx).status() == ThreadStatus::Idle;
+
Some(
h_flex()
.h(Tab::container_height(cx))
@@ -2313,10 +2325,24 @@ impl AcpThreadView {
.pr_1p5()
.w_full()
.justify_between()
+ .gap_1()
.border_b_1()
- .border_color(cx.theme().colors().border_variant)
+ .border_color(cx.theme().colors().border)
.bg(cx.theme().colors().editor_background.opacity(0.2))
- .child(Label::new(title).color(Color::Muted))
+ .child(
+ h_flex()
+ .flex_1()
+ .gap_2()
+ .child(
+ Icon::new(IconName::ForwardArrowUp)
+ .size(IconSize::Small)
+ .color(Color::Muted),
+ )
+ .child(self.title_editor.clone())
+ .when(is_done, |this| {
+ this.child(Icon::new(IconName::Check).color(Color::Success))
+ }),
+ )
.child(
IconButton::new("minimize_subagent", IconName::Minimize)
.icon_size(IconSize::Small)
@@ -3616,6 +3642,8 @@ impl AcpThreadView {
if let Some(editing_index) = self.editing_message
&& editing_index < entry_ix
{
+ let is_subagent = self.is_subagent();
+
let backdrop = div()
.id(("backdrop", entry_ix))
.size_full()
@@ -3629,7 +3657,7 @@ impl AcpThreadView {
div()
.relative()
.child(primary)
- .child(backdrop)
+ .when(!is_subagent, |this| this.child(backdrop))
.into_any_element()
} else {
primary
@@ -5876,7 +5904,7 @@ impl AcpThreadView {
if is_canceled_or_failed {
"Subagent Canceled"
} else {
- "Spawning Subagent…"
+ "Creating Subagent…"
}
.into()
});
@@ -117,7 +117,7 @@ impl ModelInput {
let model_name = single_line_input(
"Model Name",
- "e.g. gpt-4o, claude-opus-4, gemini-2.5-pro",
+ "e.g. gpt-5, claude-opus-4, gemini-2.5-pro",
None,
base_tab_index + 1,
window,
@@ -1954,7 +1954,7 @@ impl AgentPanel {
if let Some(title_editor) = thread_view
.read(cx)
.parent_thread(cx)
- .and_then(|r| r.read(cx).title_editor.clone())
+ .map(|r| r.read(cx).title_editor.clone())
{
let container = div()
.w_full()
@@ -418,9 +418,6 @@ fn update_command_palette_filter(cx: &mut App) {
filter.show_namespace("zed_predict_onboarding");
filter.show_action_types(&[TypeId::of::<zed_actions::OpenZedPredictOnboarding>()]);
- if !agent_v2_enabled {
- filter.hide_action_types(&[TypeId::of::<zed_actions::agent::ToggleAgentPane>()]);
- }
}
if agent_v2_enabled {
@@ -526,7 +523,7 @@ mod tests {
use gpui::{BorrowAppContext, TestAppContext, px};
use project::DisableAiSettings;
use settings::{
- DefaultAgentView, DockPosition, DockSide, NotifyWhenAgentWaiting, Settings, SettingsStore,
+ DefaultAgentView, DockPosition, NotifyWhenAgentWaiting, Settings, SettingsStore,
};
#[gpui::test]
@@ -545,7 +542,6 @@ mod tests {
enabled: true,
button: true,
dock: DockPosition::Right,
- agents_panel_dock: DockSide::Left,
default_width: px(300.),
default_height: px(600.),
default_model: None,
@@ -752,11 +752,11 @@ mod tests {
let models = create_models(vec![
("zed", "Claude 3.7 Sonnet"),
("zed", "Claude 3.7 Sonnet Thinking"),
- ("zed", "gpt-4.1"),
- ("zed", "gpt-4.1-nano"),
+ ("zed", "gpt-5"),
+ ("zed", "gpt-5-mini"),
("openai", "gpt-3.5-turbo"),
- ("openai", "gpt-4.1"),
- ("openai", "gpt-4.1-nano"),
+ ("openai", "gpt-5"),
+ ("openai", "gpt-5-mini"),
("ollama", "mistral"),
("ollama", "deepseek"),
]);
@@ -767,14 +767,14 @@ mod tests {
);
// The order of models should be maintained, case doesn't matter
- let results = matcher.exact_search("GPT-4.1");
+ let results = matcher.exact_search("GPT-5");
assert_models_eq(
results,
vec![
- "zed/gpt-4.1",
- "zed/gpt-4.1-nano",
- "openai/gpt-4.1",
- "openai/gpt-4.1-nano",
+ "zed/gpt-5",
+ "zed/gpt-5-mini",
+ "openai/gpt-5",
+ "openai/gpt-5-mini",
],
);
}
@@ -784,11 +784,11 @@ mod tests {
let models = create_models(vec![
("zed", "Claude 3.7 Sonnet"),
("zed", "Claude 3.7 Sonnet Thinking"),
- ("zed", "gpt-4.1"),
- ("zed", "gpt-4.1-nano"),
+ ("zed", "gpt-5"),
+ ("zed", "gpt-5-mini"),
("openai", "gpt-3.5-turbo"),
- ("openai", "gpt-4.1"),
- ("openai", "gpt-4.1-nano"),
+ ("openai", "gpt-5"),
+ ("openai", "gpt-5-mini"),
("ollama", "mistral"),
("ollama", "deepseek"),
]);
@@ -799,27 +799,19 @@ mod tests {
);
// Results should preserve models order whenever possible.
- // In the case below, `zed/gpt-4.1` and `openai/gpt-4.1` have identical
- // similarity scores, but `zed/gpt-4.1` was higher in the models list,
+ // In the case below, `zed/gpt-5-mini` and `openai/gpt-5-mini` have identical
+ // similarity scores, but `zed/gpt-5-mini` was higher in the models list,
// so it should appear first in the results.
- let results = matcher.fuzzy_search("41");
- assert_models_eq(
- results,
- vec![
- "zed/gpt-4.1",
- "openai/gpt-4.1",
- "zed/gpt-4.1-nano",
- "openai/gpt-4.1-nano",
- ],
- );
+ let results = matcher.fuzzy_search("mini");
+ assert_models_eq(results, vec!["zed/gpt-5-mini", "openai/gpt-5-mini"]);
// Model provider should be searchable as well
let results = matcher.fuzzy_search("ol"); // meaning "ollama"
assert_models_eq(results, vec!["ollama/mistral", "ollama/deepseek"]);
- // Fuzzy search
- let results = matcher.fuzzy_search("z4n");
- assert_models_eq(results, vec!["zed/gpt-4.1-nano"]);
+ // Fuzzy search - search for Claude to get the Thinking variant
+ let results = matcher.fuzzy_search("thinking");
+ assert_models_eq(results, vec!["zed/Claude 3.7 Sonnet Thinking"]);
}
#[gpui::test]
@@ -1,42 +0,0 @@
-[package]
-name = "agent_ui_v2"
-version = "0.1.0"
-edition.workspace = true
-publish.workspace = true
-license = "GPL-3.0-or-later"
-
-[lints]
-workspace = true
-
-[lib]
-path = "src/agent_ui_v2.rs"
-doctest = false
-
-[features]
-test-support = ["agent/test-support"]
-
-
-[dependencies]
-agent.workspace = true
-acp_thread.workspace = true
-agent-client-protocol.workspace = true
-agent_servers.workspace = true
-agent_settings.workspace = true
-agent_ui.workspace = true
-anyhow.workspace = true
-db.workspace = true
-feature_flags.workspace = true
-fs.workspace = true
-gpui.workspace = true
-log.workspace = true
-project.workspace = true
-prompt_store.workspace = true
-serde.workspace = true
-serde_json.workspace = true
-settings.workspace = true
-ui.workspace = true
-util.workspace = true
-workspace.workspace = true
-
-[dev-dependencies]
-agent = { workspace = true, features = ["test-support"] }
@@ -1 +0,0 @@
-../../LICENSE-GPL
@@ -1,284 +0,0 @@
-use acp_thread::AgentSessionInfo;
-use agent::{NativeAgentServer, ThreadStore};
-use agent_client_protocol as acp;
-use agent_servers::AgentServer;
-use agent_settings::AgentSettings;
-use agent_ui::acp::{AcpServerView, AcpThreadHistory};
-use fs::Fs;
-use gpui::{
- Entity, EventEmitter, Focusable, Pixels, SharedString, Subscription, WeakEntity, prelude::*,
-};
-use project::Project;
-use prompt_store::PromptStore;
-use serde::{Deserialize, Serialize};
-use settings::DockSide;
-use settings::Settings as _;
-use std::rc::Rc;
-use std::sync::Arc;
-use ui::{Tab, Tooltip, prelude::*};
-use workspace::{
- Workspace,
- dock::{ClosePane, MinimizePane, UtilityPane, UtilityPanePosition},
- utility_pane::UtilityPaneSlot,
-};
-
-pub const DEFAULT_UTILITY_PANE_WIDTH: Pixels = gpui::px(400.0);
-
-#[derive(Serialize, Deserialize, Debug, Clone)]
-pub enum SerializedHistoryEntryId {
- AcpThread(String),
-}
-
-impl From<acp::SessionId> for SerializedHistoryEntryId {
- fn from(id: acp::SessionId) -> Self {
- SerializedHistoryEntryId::AcpThread(id.0.to_string())
- }
-}
-
-#[derive(Serialize, Deserialize, Debug)]
-pub struct SerializedAgentThreadPane {
- pub expanded: bool,
- pub width: Option<Pixels>,
- pub thread_id: Option<SerializedHistoryEntryId>,
-}
-
-pub enum AgentsUtilityPaneEvent {
- StateChanged,
-}
-
-impl EventEmitter<AgentsUtilityPaneEvent> for AgentThreadPane {}
-impl EventEmitter<MinimizePane> for AgentThreadPane {}
-impl EventEmitter<ClosePane> for AgentThreadPane {}
-
-struct ActiveThreadView {
- view: Entity<AcpServerView>,
- thread_id: acp::SessionId,
- _notify: Subscription,
-}
-
-pub struct AgentThreadPane {
- focus_handle: gpui::FocusHandle,
- expanded: bool,
- width: Option<Pixels>,
- thread_view: Option<ActiveThreadView>,
- workspace: WeakEntity<Workspace>,
- history: Entity<AcpThreadHistory>,
-}
-
-impl AgentThreadPane {
- pub fn new(
- workspace: WeakEntity<Workspace>,
- history: Entity<AcpThreadHistory>,
- cx: &mut ui::Context<Self>,
- ) -> Self {
- let focus_handle = cx.focus_handle();
- Self {
- focus_handle,
- expanded: false,
- width: None,
- thread_view: None,
- workspace,
- history,
- }
- }
-
- pub fn thread_id(&self) -> Option<acp::SessionId> {
- self.thread_view.as_ref().map(|tv| tv.thread_id.clone())
- }
-
- pub fn serialize(&self) -> SerializedAgentThreadPane {
- SerializedAgentThreadPane {
- expanded: self.expanded,
- width: self.width,
- thread_id: self.thread_id().map(SerializedHistoryEntryId::from),
- }
- }
-
- pub fn open_thread(
- &mut self,
- entry: AgentSessionInfo,
- fs: Arc<dyn Fs>,
- workspace: WeakEntity<Workspace>,
- project: Entity<Project>,
- thread_store: Entity<ThreadStore>,
- prompt_store: Option<Entity<PromptStore>>,
- window: &mut Window,
- cx: &mut Context<Self>,
- ) {
- let thread_id = entry.session_id.clone();
- let resume_thread = Some(entry);
-
- let agent: Rc<dyn AgentServer> = Rc::new(NativeAgentServer::new(fs, thread_store.clone()));
-
- let history = self.history.clone();
- let thread_view = cx.new(|cx| {
- AcpServerView::new(
- agent,
- resume_thread,
- None,
- workspace,
- project,
- Some(thread_store),
- prompt_store,
- history,
- window,
- cx,
- )
- });
-
- let notify = cx.observe(&thread_view, |_, _, cx| {
- cx.notify();
- });
-
- self.thread_view = Some(ActiveThreadView {
- view: thread_view,
- thread_id,
- _notify: notify,
- });
-
- cx.notify();
- }
-
- fn title(&self, cx: &App) -> SharedString {
- if let Some(active_thread_view) = &self.thread_view {
- let thread_view = active_thread_view.view.read(cx);
- if let Some(ready) = thread_view.active_thread() {
- let title = ready.read(cx).thread.read(cx).title();
- if !title.is_empty() {
- return title;
- }
- }
- thread_view.title(cx)
- } else {
- "Thread".into()
- }
- }
-
- fn render_header(&self, window: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
- let position = self.position(window, cx);
- let slot = match position {
- UtilityPanePosition::Left => UtilityPaneSlot::Left,
- UtilityPanePosition::Right => UtilityPaneSlot::Right,
- };
-
- let workspace = self.workspace.clone();
- let toggle_icon = self.toggle_icon(cx);
- let title = self.title(cx);
-
- let pane_toggle_button = |workspace: WeakEntity<Workspace>| {
- IconButton::new("toggle_utility_pane", toggle_icon)
- .icon_size(IconSize::Small)
- .tooltip(Tooltip::text("Toggle Agent Pane"))
- .on_click(move |_, window, cx| {
- workspace
- .update(cx, |workspace, cx| {
- workspace.toggle_utility_pane(slot, window, cx)
- })
- .ok();
- })
- };
-
- h_flex()
- .id("utility-pane-header")
- .w_full()
- .h(Tab::container_height(cx))
- .px_1p5()
- .gap(DynamicSpacing::Base06.rems(cx))
- .when(slot == UtilityPaneSlot::Right, |this| {
- this.flex_row_reverse()
- })
- .flex_none()
- .border_b_1()
- .border_color(cx.theme().colors().border)
- .child(pane_toggle_button(workspace))
- .child(
- h_flex()
- .size_full()
- .min_w_0()
- .gap_1()
- .map(|this| {
- if slot == UtilityPaneSlot::Right {
- this.flex_row_reverse().justify_start()
- } else {
- this.justify_between()
- }
- })
- .child(Label::new(title).truncate())
- .child(
- IconButton::new("close_btn", IconName::Close)
- .icon_size(IconSize::Small)
- .tooltip(Tooltip::text("Close Agent Pane"))
- .on_click(cx.listener(|this, _: &gpui::ClickEvent, _window, cx| {
- cx.emit(ClosePane);
- this.thread_view = None;
- cx.notify()
- })),
- ),
- )
- }
-}
-
-impl Focusable for AgentThreadPane {
- fn focus_handle(&self, cx: &ui::App) -> gpui::FocusHandle {
- if let Some(thread_view) = &self.thread_view {
- thread_view.view.focus_handle(cx)
- } else {
- self.focus_handle.clone()
- }
- }
-}
-
-impl UtilityPane for AgentThreadPane {
- fn position(&self, _window: &Window, cx: &App) -> UtilityPanePosition {
- match AgentSettings::get_global(cx).agents_panel_dock {
- DockSide::Left => UtilityPanePosition::Left,
- DockSide::Right => UtilityPanePosition::Right,
- }
- }
-
- fn toggle_icon(&self, _cx: &App) -> IconName {
- IconName::Thread
- }
-
- fn expanded(&self, _cx: &App) -> bool {
- self.expanded
- }
-
- fn set_expanded(&mut self, expanded: bool, cx: &mut Context<Self>) {
- self.expanded = expanded;
- cx.emit(AgentsUtilityPaneEvent::StateChanged);
- cx.notify();
- }
-
- fn width(&self, _cx: &App) -> Pixels {
- self.width.unwrap_or(DEFAULT_UTILITY_PANE_WIDTH)
- }
-
- fn set_width(&mut self, width: Option<Pixels>, cx: &mut Context<Self>) {
- self.width = width;
- cx.emit(AgentsUtilityPaneEvent::StateChanged);
- cx.notify();
- }
-}
-
-impl Render for AgentThreadPane {
- fn render(&mut self, window: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
- let content = if let Some(thread_view) = &self.thread_view {
- div().size_full().child(thread_view.view.clone())
- } else {
- div()
- .size_full()
- .flex()
- .items_center()
- .justify_center()
- .child(Label::new("Select a thread to view details").size(LabelSize::Default))
- };
-
- div()
- .size_full()
- .flex()
- .flex_col()
- .child(self.render_header(window, cx))
- .child(content)
- }
-}
@@ -1,3 +0,0 @@
-mod agent_thread_pane;
-
-pub mod agents_panel;
@@ -1,481 +0,0 @@
-use acp_thread::AgentSessionInfo;
-use agent::{NativeAgentServer, ThreadStore};
-use agent_client_protocol as acp;
-use agent_servers::{AgentServer, AgentServerDelegate};
-use agent_settings::AgentSettings;
-use anyhow::Result;
-use db::kvp::KEY_VALUE_STORE;
-use feature_flags::{AgentV2FeatureFlag, FeatureFlagAppExt};
-use fs::Fs;
-use gpui::{
- Action, AsyncWindowContext, Entity, EventEmitter, Focusable, Pixels, Subscription, Task,
- WeakEntity, actions, prelude::*,
-};
-use project::Project;
-use prompt_store::PromptStore;
-use serde::{Deserialize, Serialize};
-use settings::{Settings as _, update_settings_file};
-use std::sync::Arc;
-use ui::{App, Context, IconName, IntoElement, ParentElement, Render, Styled, Window};
-use util::ResultExt;
-use workspace::{
- Panel, Workspace,
- dock::{ClosePane, DockPosition, PanelEvent, UtilityPane},
- utility_pane::{UtilityPaneSlot, utility_slot_for_dock_position},
-};
-
-use crate::agent_thread_pane::{
- AgentThreadPane, AgentsUtilityPaneEvent, SerializedAgentThreadPane, SerializedHistoryEntryId,
-};
-use agent_ui::acp::{AcpThreadHistory, ThreadHistoryEvent};
-
-const AGENTS_PANEL_KEY: &str = "agents_panel";
-
-#[derive(Serialize, Deserialize, Debug)]
-struct SerializedAgentsPanel {
- width: Option<Pixels>,
- pane: Option<SerializedAgentThreadPane>,
-}
-
-actions!(
- agents,
- [
- /// Toggle the visibility of the agents panel.
- ToggleAgentsPanel
- ]
-);
-
-pub fn init(cx: &mut App) {
- cx.observe_new(|workspace: &mut Workspace, _, _| {
- workspace.register_action(|workspace, _: &ToggleAgentsPanel, window, cx| {
- workspace.toggle_panel_focus::<AgentsPanel>(window, cx);
- });
- })
- .detach();
-}
-
-pub struct AgentsPanel {
- focus_handle: gpui::FocusHandle,
- workspace: WeakEntity<Workspace>,
- project: Entity<Project>,
- agent_thread_pane: Option<Entity<AgentThreadPane>>,
- history: Entity<AcpThreadHistory>,
- thread_store: Entity<ThreadStore>,
- prompt_store: Option<Entity<PromptStore>>,
- fs: Arc<dyn Fs>,
- width: Option<Pixels>,
- pending_restore: Option<SerializedAgentThreadPane>,
- pending_serialization: Task<Option<()>>,
- _subscriptions: Vec<Subscription>,
-}
-
-impl AgentsPanel {
- pub fn load(
- workspace: WeakEntity<Workspace>,
- cx: AsyncWindowContext,
- ) -> Task<Result<Entity<Self>, anyhow::Error>> {
- cx.spawn(async move |cx| {
- let serialized_panel = cx
- .background_spawn(async move {
- KEY_VALUE_STORE
- .read_kvp(AGENTS_PANEL_KEY)
- .ok()
- .flatten()
- .and_then(|panel| {
- serde_json::from_str::<SerializedAgentsPanel>(&panel).ok()
- })
- })
- .await;
-
- let (fs, project) = workspace.update(cx, |workspace, _| {
- let fs = workspace.app_state().fs.clone();
- let project = workspace.project().clone();
- (fs, project)
- })?;
-
- let prompt_store = workspace
- .update(cx, |_, cx| PromptStore::global(cx))?
- .await
- .log_err();
-
- workspace.update_in(cx, |_, window, cx| {
- cx.new(|cx| {
- let mut panel =
- Self::new(workspace.clone(), fs, project, prompt_store, window, cx);
- if let Some(serialized_panel) = serialized_panel {
- panel.width = serialized_panel.width;
- if let Some(serialized_pane) = serialized_panel.pane {
- panel.restore_utility_pane(serialized_pane, window, cx);
- }
- }
- panel
- })
- })
- })
- }
-
- fn new(
- workspace: WeakEntity<Workspace>,
- fs: Arc<dyn Fs>,
- project: Entity<Project>,
- prompt_store: Option<Entity<PromptStore>>,
- window: &mut Window,
- cx: &mut ui::Context<Self>,
- ) -> Self {
- let focus_handle = cx.focus_handle();
-
- let thread_store = ThreadStore::global(cx);
- let history = cx.new(|cx| AcpThreadHistory::new(None, window, cx));
-
- let history_handle = history.clone();
- let connect_project = project.clone();
- let connect_thread_store = thread_store.clone();
- let connect_fs = fs.clone();
- cx.spawn(async move |_, cx| {
- let connect_task = cx.update(|cx| {
- let delegate = AgentServerDelegate::new(
- connect_project.read(cx).agent_server_store().clone(),
- connect_project.clone(),
- None,
- None,
- );
- let server = NativeAgentServer::new(connect_fs, connect_thread_store);
- server.connect(None, delegate, cx)
- });
- let connection = match connect_task.await {
- Ok((connection, _)) => connection,
- Err(error) => {
- log::error!("Failed to connect native agent for history: {error:#}");
- return;
- }
- };
-
- cx.update(|cx| {
- if connection.supports_session_history(cx)
- && let Some(session_list) = connection.session_list(cx)
- {
- history_handle.update(cx, |history, cx| {
- history.set_session_list(Some(session_list), cx);
- });
- }
- });
- })
- .detach();
-
- let this = cx.weak_entity();
- let subscriptions = vec![
- cx.subscribe_in(&history, window, Self::handle_history_event),
- cx.observe_in(&history, window, Self::handle_history_updated),
- cx.on_flags_ready(move |_, cx| {
- this.update(cx, |_, cx| {
- cx.notify();
- })
- .ok();
- }),
- ];
-
- Self {
- focus_handle,
- workspace,
- project,
- agent_thread_pane: None,
- history,
- thread_store,
- prompt_store,
- fs,
- width: None,
- pending_restore: None,
- pending_serialization: Task::ready(None),
- _subscriptions: subscriptions,
- }
- }
-
- fn restore_utility_pane(
- &mut self,
- serialized_pane: SerializedAgentThreadPane,
- window: &mut Window,
- cx: &mut Context<Self>,
- ) {
- let Some(thread_id) = &serialized_pane.thread_id else {
- return;
- };
-
- let SerializedHistoryEntryId::AcpThread(id) = thread_id;
- let session_id = acp::SessionId::new(id.clone());
- if let Some(entry) = self.history.read(cx).session_for_id(&session_id) {
- self.open_thread(
- entry,
- serialized_pane.expanded,
- serialized_pane.width,
- window,
- cx,
- );
- } else {
- self.pending_restore = Some(serialized_pane);
- }
- }
-
- fn handle_utility_pane_event(
- &mut self,
- _utility_pane: Entity<AgentThreadPane>,
- event: &AgentsUtilityPaneEvent,
- cx: &mut Context<Self>,
- ) {
- match event {
- AgentsUtilityPaneEvent::StateChanged => {
- self.serialize(cx);
- cx.notify();
- }
- }
- }
-
- fn handle_close_pane_event(
- &mut self,
- _utility_pane: Entity<AgentThreadPane>,
- _event: &ClosePane,
- cx: &mut Context<Self>,
- ) {
- self.agent_thread_pane = None;
- self.serialize(cx);
- cx.notify();
- }
-
- fn handle_history_updated(
- &mut self,
- _history: Entity<AcpThreadHistory>,
- window: &mut Window,
- cx: &mut Context<Self>,
- ) {
- self.maybe_restore_pending(window, cx);
- }
-
- fn handle_history_event(
- &mut self,
- _history: &Entity<AcpThreadHistory>,
- event: &ThreadHistoryEvent,
- window: &mut Window,
- cx: &mut Context<Self>,
- ) {
- match event {
- ThreadHistoryEvent::Open(entry) => {
- self.open_thread(entry.clone(), true, None, window, cx);
- }
- }
- }
-
- fn maybe_restore_pending(&mut self, window: &mut Window, cx: &mut Context<Self>) {
- if self.agent_thread_pane.is_some() {
- self.pending_restore = None;
- return;
- }
-
- let Some(pending) = self.pending_restore.as_ref() else {
- return;
- };
- let Some(thread_id) = &pending.thread_id else {
- self.pending_restore = None;
- return;
- };
-
- let SerializedHistoryEntryId::AcpThread(id) = thread_id;
- let session_id = acp::SessionId::new(id.clone());
- let Some(entry) = self.history.read(cx).session_for_id(&session_id) else {
- return;
- };
-
- let pending = self.pending_restore.take().expect("pending restore");
- self.open_thread(entry, pending.expanded, pending.width, window, cx);
- }
-
- fn open_thread(
- &mut self,
- entry: AgentSessionInfo,
- expanded: bool,
- width: Option<Pixels>,
- window: &mut Window,
- cx: &mut Context<Self>,
- ) {
- let entry_id = entry.session_id.clone();
- self.pending_restore = None;
-
- if let Some(existing_pane) = &self.agent_thread_pane {
- if existing_pane.read(cx).thread_id() == Some(entry_id) {
- existing_pane.update(cx, |pane, cx| {
- pane.set_expanded(true, cx);
- });
- return;
- }
- }
-
- let fs = self.fs.clone();
- let workspace = self.workspace.clone();
- let project = self.project.clone();
- let thread_store = self.thread_store.clone();
- let prompt_store = self.prompt_store.clone();
- let history = self.history.clone();
-
- let agent_thread_pane = cx.new(|cx| {
- let mut pane = AgentThreadPane::new(workspace.clone(), history, cx);
- pane.open_thread(
- entry,
- fs,
- workspace.clone(),
- project,
- thread_store,
- prompt_store,
- window,
- cx,
- );
- if let Some(width) = width {
- pane.set_width(Some(width), cx);
- }
- pane.set_expanded(expanded, cx);
- pane
- });
-
- let state_subscription = cx.subscribe(&agent_thread_pane, Self::handle_utility_pane_event);
- let close_subscription = cx.subscribe(&agent_thread_pane, Self::handle_close_pane_event);
-
- self._subscriptions.push(state_subscription);
- self._subscriptions.push(close_subscription);
-
- let slot = self.utility_slot(window, cx);
- let panel_id = cx.entity_id();
-
- if let Some(workspace) = self.workspace.upgrade() {
- workspace.update(cx, |workspace, cx| {
- workspace.register_utility_pane(slot, panel_id, agent_thread_pane.clone(), cx);
- });
- }
-
- self.agent_thread_pane = Some(agent_thread_pane);
- self.serialize(cx);
- cx.notify();
- }
-
- fn utility_slot(&self, window: &Window, cx: &App) -> UtilityPaneSlot {
- let position = self.position(window, cx);
- utility_slot_for_dock_position(position)
- }
-
- fn re_register_utility_pane(&mut self, window: &mut Window, cx: &mut Context<Self>) {
- if let Some(pane) = &self.agent_thread_pane {
- let slot = self.utility_slot(window, cx);
- let panel_id = cx.entity_id();
- let pane = pane.clone();
-
- if let Some(workspace) = self.workspace.upgrade() {
- workspace.update(cx, |workspace, cx| {
- workspace.register_utility_pane(slot, panel_id, pane, cx);
- });
- }
- }
- }
-
- fn serialize(&mut self, cx: &mut Context<Self>) {
- let width = self.width;
- let pane = self
- .agent_thread_pane
- .as_ref()
- .map(|pane| pane.read(cx).serialize());
-
- self.pending_serialization = cx.background_spawn(async move {
- KEY_VALUE_STORE
- .write_kvp(
- AGENTS_PANEL_KEY.into(),
- serde_json::to_string(&SerializedAgentsPanel { width, pane }).unwrap(),
- )
- .await
- .log_err()
- });
- }
-}
-
-impl EventEmitter<PanelEvent> for AgentsPanel {}
-
-impl Focusable for AgentsPanel {
- fn focus_handle(&self, _cx: &ui::App) -> gpui::FocusHandle {
- self.focus_handle.clone()
- }
-}
-
-impl Panel for AgentsPanel {
- fn persistent_name() -> &'static str {
- "AgentsPanel"
- }
-
- fn panel_key() -> &'static str {
- AGENTS_PANEL_KEY
- }
-
- fn position(&self, _window: &Window, cx: &App) -> DockPosition {
- match AgentSettings::get_global(cx).agents_panel_dock {
- settings::DockSide::Left => DockPosition::Left,
- settings::DockSide::Right => DockPosition::Right,
- }
- }
-
- fn position_is_valid(&self, position: DockPosition) -> bool {
- position != DockPosition::Bottom
- }
-
- fn set_position(
- &mut self,
- position: DockPosition,
- window: &mut Window,
- cx: &mut Context<Self>,
- ) {
- update_settings_file(self.fs.clone(), cx, move |settings, _| {
- settings.agent.get_or_insert_default().agents_panel_dock = Some(match position {
- DockPosition::Left => settings::DockSide::Left,
- DockPosition::Right | DockPosition::Bottom => settings::DockSide::Right,
- });
- });
- self.re_register_utility_pane(window, cx);
- }
-
- fn size(&self, window: &Window, cx: &App) -> Pixels {
- let settings = AgentSettings::get_global(cx);
- match self.position(window, cx) {
- DockPosition::Left | DockPosition::Right => {
- self.width.unwrap_or(settings.default_width)
- }
- DockPosition::Bottom => self.width.unwrap_or(settings.default_height),
- }
- }
-
- fn set_size(&mut self, size: Option<Pixels>, window: &mut Window, cx: &mut Context<Self>) {
- match self.position(window, cx) {
- DockPosition::Left | DockPosition::Right => self.width = size,
- DockPosition::Bottom => {}
- }
- self.serialize(cx);
- cx.notify();
- }
-
- fn icon(&self, _window: &Window, cx: &App) -> Option<IconName> {
- (self.enabled(cx) && AgentSettings::get_global(cx).button).then_some(IconName::ZedAgentTwo)
- }
-
- fn icon_tooltip(&self, _window: &Window, _cx: &App) -> Option<&'static str> {
- Some("Agents Panel")
- }
-
- fn toggle_action(&self) -> Box<dyn Action> {
- Box::new(ToggleAgentsPanel)
- }
-
- fn activation_priority(&self) -> u32 {
- 4
- }
-
- fn enabled(&self, cx: &App) -> bool {
- AgentSettings::get_global(cx).enabled(cx) && cx.has_flag::<AgentV2FeatureFlag>()
- }
-}
-
-impl Render for AgentsPanel {
- fn render(&mut self, _window: &mut Window, _cx: &mut Context<Self>) -> impl IntoElement {
- gpui::div().size_full().child(self.history.clone())
- }
-}
@@ -16,7 +16,8 @@ pub use bedrock::operation::converse_stream::ConverseStreamInput as BedrockStrea
pub use bedrock::types::{
ContentBlock as BedrockRequestContent, ConversationRole as BedrockRole,
ConverseOutput as BedrockResponse, ConverseStreamOutput as BedrockStreamingResponse,
- ImageBlock as BedrockImageBlock, Message as BedrockMessage,
+ ImageBlock as BedrockImageBlock, ImageFormat as BedrockImageFormat,
+ ImageSource as BedrockImageSource, Message as BedrockMessage,
ReasoningContentBlock as BedrockThinkingBlock, ReasoningTextBlock as BedrockThinkingTextBlock,
ResponseStream as BedrockResponseStream, SystemContentBlock as BedrockSystemContentBlock,
ToolResultBlock as BedrockToolResultBlock,
@@ -31,6 +32,8 @@ use thiserror::Error;
pub use crate::models::*;
+pub const CONTEXT_1M_BETA_HEADER: &str = "context-1m-2025-08-07";
+
pub async fn stream_completion(
client: bedrock::Client,
request: Request,
@@ -39,6 +42,8 @@ pub async fn stream_completion(
.model_id(request.model.clone())
.set_messages(request.messages.into());
+ let mut additional_fields: HashMap<String, Document> = HashMap::new();
+
match request.thinking {
Some(Thinking::Enabled {
budget_tokens: Some(budget_tokens),
@@ -50,24 +55,27 @@ pub async fn stream_completion(
Document::Number(AwsNumber::PosInt(budget_tokens)),
),
]);
- response =
- response.additional_model_request_fields(Document::Object(HashMap::from([(
- "thinking".to_string(),
- Document::from(thinking_config),
- )])));
+ additional_fields.insert("thinking".to_string(), Document::from(thinking_config));
}
Some(Thinking::Adaptive { effort: _ }) => {
let thinking_config =
HashMap::from([("type".to_string(), Document::String("adaptive".to_string()))]);
- response =
- response.additional_model_request_fields(Document::Object(HashMap::from([(
- "thinking".to_string(),
- Document::from(thinking_config),
- )])));
+ additional_fields.insert("thinking".to_string(), Document::from(thinking_config));
}
_ => {}
}
+ if request.allow_extended_context {
+ additional_fields.insert(
+ "anthropic_beta".to_string(),
+ Document::Array(vec![Document::String(CONTEXT_1M_BETA_HEADER.to_string())]),
+ );
+ }
+
+ if !additional_fields.is_empty() {
+ response = response.additional_model_request_fields(Document::Object(additional_fields));
+ }
+
if request.tools.as_ref().is_some_and(|t| !t.tools.is_empty()) {
response = response.set_tool_config(request.tools);
}
@@ -178,6 +186,7 @@ pub struct Request {
pub temperature: Option<f32>,
pub top_k: Option<u32>,
pub top_p: Option<f32>,
+ pub allow_extended_context: bool,
}
#[derive(Debug, Serialize, Deserialize)]
@@ -551,6 +551,46 @@ impl Model {
}
}
+ pub fn supports_images(&self) -> bool {
+ match self {
+ // Anthropic Claude 3+ models (all support vision)
+ Self::Claude3Opus
+ | Self::Claude3Sonnet
+ | Self::Claude3_5Sonnet
+ | Self::Claude3_5SonnetV2
+ | Self::Claude3_7Sonnet
+ | Self::Claude3_7SonnetThinking
+ | Self::ClaudeOpus4
+ | Self::ClaudeOpus4Thinking
+ | Self::ClaudeOpus4_1
+ | Self::ClaudeOpus4_1Thinking
+ | Self::ClaudeOpus4_5
+ | Self::ClaudeOpus4_5Thinking
+ | Self::ClaudeSonnet4
+ | Self::ClaudeSonnet4Thinking
+ | Self::ClaudeSonnet4_5
+ | Self::ClaudeSonnet4_5Thinking
+ | Self::Claude3_5Haiku
+ | Self::ClaudeHaiku4_5
+ | Self::Claude3Haiku => true,
+
+ // Amazon Nova visual models
+ Self::AmazonNovaPro | Self::AmazonNovaLite => true,
+
+ // Meta Llama 3.2 Vision models
+ Self::MetaLlama3211BInstructV1 | Self::MetaLlama3290BInstructV1 => true,
+
+ // Mistral Pixtral (visual model)
+ Self::MistralPixtralLarge2502V1 => true,
+
+ // Custom models default to no image support
+ Self::Custom { .. } => false,
+
+ // All other models don't support images
+ _ => false,
+ }
+ }
+
pub fn supports_caching(&self) -> bool {
match self {
// Only Claude models on Bedrock support caching
@@ -638,6 +678,20 @@ impl Model {
}
}
+ pub fn supports_extended_context(&self) -> bool {
+ matches!(
+ self,
+ Model::ClaudeSonnet4
+ | Model::ClaudeSonnet4Thinking
+ | Model::ClaudeSonnet4_5
+ | Model::ClaudeSonnet4_5Thinking
+ | Model::ClaudeOpus4_5
+ | Model::ClaudeOpus4_5Thinking
+ | Model::ClaudeOpus4_6
+ | Model::ClaudeOpus4_6Thinking
+ )
+ }
+
pub fn cross_region_inference_id(
&self,
region: &str,
@@ -223,7 +223,7 @@ impl Model {
}
pub fn max_token_count(&self) -> u64 {
- self.capabilities.limits.max_prompt_tokens
+ self.capabilities.limits.max_context_window_tokens as u64
}
pub fn supports_tools(&self) -> bool {
@@ -1038,6 +1038,61 @@ mod tests {
assert_eq!(schema.data[0].vendor, ModelVendor::Unknown);
}
+ #[test]
+ fn test_max_token_count_returns_context_window_not_prompt_tokens() {
+ let json = r#"{
+ "data": [
+ {
+ "billing": { "is_premium": true, "multiplier": 1 },
+ "capabilities": {
+ "family": "claude-sonnet-4",
+ "limits": { "max_context_window_tokens": 200000, "max_output_tokens": 16384, "max_prompt_tokens": 90000 },
+ "object": "model_capabilities",
+ "supports": { "streaming": true, "tool_calls": true },
+ "type": "chat"
+ },
+ "id": "claude-sonnet-4",
+ "is_chat_default": false,
+ "is_chat_fallback": false,
+ "model_picker_enabled": true,
+ "name": "Claude Sonnet 4",
+ "object": "model",
+ "preview": false,
+ "vendor": "Anthropic",
+ "version": "claude-sonnet-4"
+ },
+ {
+ "billing": { "is_premium": false, "multiplier": 1 },
+ "capabilities": {
+ "family": "gpt-4o",
+ "limits": { "max_context_window_tokens": 128000, "max_output_tokens": 16384, "max_prompt_tokens": 110000 },
+ "object": "model_capabilities",
+ "supports": { "streaming": true, "tool_calls": true },
+ "type": "chat"
+ },
+ "id": "gpt-4o",
+ "is_chat_default": true,
+ "is_chat_fallback": false,
+ "model_picker_enabled": true,
+ "name": "GPT-4o",
+ "object": "model",
+ "preview": false,
+ "vendor": "Azure OpenAI",
+ "version": "gpt-4o"
+ }
+ ],
+ "object": "list"
+ }"#;
+
+ let schema: ModelSchema = serde_json::from_str(json).unwrap();
+
+ // max_token_count() should return context window (200000), not prompt tokens (90000)
+ assert_eq!(schema.data[0].max_token_count(), 200000);
+
+ // GPT-4o should return 128000 (context window), not 110000 (prompt tokens)
+ assert_eq!(schema.data[1].max_token_count(), 128000);
+ }
+
#[test]
fn test_models_with_pending_policy_deserialize() {
// This test verifies that models with policy states other than "enabled"
@@ -28,7 +28,7 @@ const CONNECTION_INITIALIZE_QUERY: &str = sql!(
const DB_INITIALIZE_QUERY: &str = sql!(
PRAGMA journal_mode=WAL;
- PRAGMA busy_timeout=1;
+ PRAGMA busy_timeout=500;
PRAGMA case_sensitive_like=TRUE;
PRAGMA synchronous=NORMAL;
);
@@ -15,8 +15,6 @@ use project::{Project, WorktreeId};
use std::{collections::hash_map, fmt::Write as _, ops::Range, path::Path, sync::Arc};
use text::{BufferSnapshot as TextBufferSnapshot, Point, ToOffset as _};
-pub(crate) const ZETA2_TESTING_RATE_PER_10K_PREDICTION: u16 = 500;
-
pub fn capture_example(
project: Entity<Project>,
buffer: Entity<Buffer>,
@@ -156,6 +154,7 @@ pub fn capture_example(
excerpt_start_row: Some(0),
events: captured_events,
related_files: captured_related_files,
+ in_open_source_repo: false,
}
});
@@ -304,10 +303,6 @@ fn generate_timestamp_name() -> String {
}
}
-pub(crate) fn should_send_testing_zeta2_request() -> bool {
- rand::random::<u16>() % 10_000 < ZETA2_TESTING_RATE_PER_10K_PREDICTION
-}
-
#[cfg(test)]
mod tests {
use super::*;
@@ -450,9 +445,7 @@ mod tests {
cx.run_until_parked();
// Verify the external edit was recorded in events
- let events = ep_store.update(cx, |store, cx| {
- store.edit_history_for_project_with_pause_split_last_event(&project, cx)
- });
+ let events = ep_store.update(cx, |store, cx| store.edit_history_for_project(&project, cx));
assert!(
matches!(
events
@@ -1,5 +1,81 @@
use language::{BufferSnapshot, Point};
use std::ops::Range;
+use zeta_prompt::ExcerptRanges;
+
+/// Pre-computed Point ranges for all editable/context budget combinations.
+pub struct ExcerptRangePoints {
+ pub editable_150: Range<Point>,
+ pub editable_180: Range<Point>,
+ pub editable_350: Range<Point>,
+ pub editable_150_context_350: Range<Point>,
+ pub editable_180_context_350: Range<Point>,
+ pub editable_350_context_150: Range<Point>,
+}
+
+/// Computes all range variants for a cursor position: editable ranges at 150, 180, and 350
+/// token budgets, plus their corresponding context expansions. Returns the full excerpt range
+/// (union of all context ranges) and the individual sub-ranges as Points.
+pub fn compute_excerpt_ranges(
+ position: Point,
+ snapshot: &BufferSnapshot,
+) -> (Range<Point>, ExcerptRangePoints) {
+ let editable_150 = compute_editable_range(snapshot, position, 150);
+ let editable_180 = compute_editable_range(snapshot, position, 180);
+ let editable_350 = compute_editable_range(snapshot, position, 350);
+
+ let editable_150_context_350 =
+ expand_context_syntactically_then_linewise(snapshot, editable_150.clone(), 350);
+ let editable_180_context_350 =
+ expand_context_syntactically_then_linewise(snapshot, editable_180.clone(), 350);
+ let editable_350_context_150 =
+ expand_context_syntactically_then_linewise(snapshot, editable_350.clone(), 150);
+
+ let full_start_row = editable_150_context_350
+ .start
+ .row
+ .min(editable_180_context_350.start.row)
+ .min(editable_350_context_150.start.row);
+ let full_end_row = editable_150_context_350
+ .end
+ .row
+ .max(editable_180_context_350.end.row)
+ .max(editable_350_context_150.end.row);
+
+ let full_context =
+ Point::new(full_start_row, 0)..Point::new(full_end_row, snapshot.line_len(full_end_row));
+
+ let ranges = ExcerptRangePoints {
+ editable_150,
+ editable_180,
+ editable_350,
+ editable_150_context_350,
+ editable_180_context_350,
+ editable_350_context_150,
+ };
+
+ (full_context, ranges)
+}
+
+/// Converts `ExcerptRangePoints` to byte-offset `ExcerptRanges` relative to `excerpt_start`.
+pub fn excerpt_ranges_to_byte_offsets(
+ ranges: &ExcerptRangePoints,
+ excerpt_start: usize,
+ snapshot: &BufferSnapshot,
+) -> ExcerptRanges {
+ let to_offset = |range: &Range<Point>| -> Range<usize> {
+ let start = range.start.to_offset(snapshot);
+ let end = range.end.to_offset(snapshot);
+ (start - excerpt_start)..(end - excerpt_start)
+ };
+ ExcerptRanges {
+ editable_150: to_offset(&ranges.editable_150),
+ editable_180: to_offset(&ranges.editable_180),
+ editable_350: to_offset(&ranges.editable_350),
+ editable_150_context_350: to_offset(&ranges.editable_150_context_350),
+ editable_180_context_350: to_offset(&ranges.editable_180_context_350),
+ editable_350_context_150: to_offset(&ranges.editable_350_context_150),
+ }
+}
pub fn editable_and_context_ranges_for_cursor_position(
position: Point,
@@ -312,6 +388,8 @@ fn expand_context_syntactically_then_linewise(
start..end
}
+use language::ToOffset as _;
+
#[cfg(test)]
mod tests {
use super::*;
@@ -72,7 +72,6 @@ pub mod zeta2;
#[cfg(test)]
mod edit_prediction_tests;
-use crate::capture_example::should_send_testing_zeta2_request;
use crate::license_detection::LicenseDetectionWatcher;
use crate::mercury::Mercury;
use crate::ollama::Ollama;
@@ -231,6 +230,71 @@ pub enum UserActionType {
pub struct StoredEvent {
pub event: Arc<zeta_prompt::Event>,
pub old_snapshot: TextBufferSnapshot,
+ pub edit_range: Range<Anchor>,
+}
+
+impl StoredEvent {
+ fn can_merge(
+ &self,
+ next_old_event: &&&StoredEvent,
+ new_snapshot: &TextBufferSnapshot,
+ last_edit_range: &Range<Anchor>,
+ ) -> bool {
+ // Events must be for the same buffer
+ if self.old_snapshot.remote_id() != next_old_event.old_snapshot.remote_id() {
+ return false;
+ }
+
+ let a_is_predicted = matches!(
+ self.event.as_ref(),
+ zeta_prompt::Event::BufferChange {
+ predicted: true,
+ ..
+ }
+ );
+ let b_is_predicted = matches!(
+ next_old_event.event.as_ref(),
+ zeta_prompt::Event::BufferChange {
+ predicted: true,
+ ..
+ }
+ );
+
+ // If events come from the same source (both predicted or both manual) then
+ // we would have coalesced them already.
+ if a_is_predicted == b_is_predicted {
+ return false;
+ }
+
+ let left_range = self.edit_range.to_point(new_snapshot);
+ let right_range = next_old_event.edit_range.to_point(new_snapshot);
+ let latest_range = last_edit_range.to_point(&new_snapshot);
+
+ // Events near the latest edit are not merged when their sources differ.
+ if lines_between_ranges(&left_range, &latest_range)
+ .min(lines_between_ranges(&right_range, &latest_range))
+ <= CHANGE_GROUPING_LINE_SPAN
+ {
+ return false;
+ }
+
+ // Events that are distant from each other are not merged.
+ if lines_between_ranges(&left_range, &right_range) > CHANGE_GROUPING_LINE_SPAN {
+ return false;
+ }
+
+ true
+ }
+}
+
+fn lines_between_ranges(left: &Range<Point>, right: &Range<Point>) -> u32 {
+ if left.start > right.end {
+ return left.start.row - right.end.row;
+ }
+ if right.start > left.end {
+ return right.start.row - left.end.row;
+ }
+ 0
}
struct ProjectState {
@@ -260,18 +324,6 @@ impl ProjectState {
}
pub fn events(&self, cx: &App) -> Vec<StoredEvent> {
- self.events
- .iter()
- .cloned()
- .chain(
- self.last_event
- .as_ref()
- .and_then(|event| event.finalize(&self.license_detection_watchers, cx)),
- )
- .collect()
- }
-
- pub fn events_split_by_pause(&self, cx: &App) -> Vec<StoredEvent> {
self.events
.iter()
.cloned()
@@ -430,6 +482,7 @@ struct LastEvent {
old_file: Option<Arc<dyn File>>,
new_file: Option<Arc<dyn File>>,
edit_range: Option<Range<Anchor>>,
+ predicted: bool,
snapshot_after_last_editing_pause: Option<TextBufferSnapshot>,
last_edit_time: Option<Instant>,
}
@@ -454,7 +507,8 @@ impl LastEvent {
})
});
- let diff = compute_diff_between_snapshots(&self.old_snapshot, &self.new_snapshot)?;
+ let (diff, edit_range) =
+ compute_diff_between_snapshots(&self.old_snapshot, &self.new_snapshot)?;
if path == old_path && diff.is_empty() {
None
@@ -465,9 +519,10 @@ impl LastEvent {
path,
diff,
in_open_source_repo,
- // TODO: Actually detect if this edit was predicted or not
- predicted: false,
+ predicted: self.predicted,
}),
+ edit_range: self.new_snapshot.anchor_before(edit_range.start)
+ ..self.new_snapshot.anchor_before(edit_range.end),
old_snapshot: self.old_snapshot.clone(),
})
}
@@ -484,6 +539,7 @@ impl LastEvent {
old_file: self.old_file.clone(),
new_file: self.new_file.clone(),
edit_range: None,
+ predicted: self.predicted,
snapshot_after_last_editing_pause: None,
last_edit_time: self.last_edit_time,
};
@@ -494,6 +550,7 @@ impl LastEvent {
old_file: self.old_file.clone(),
new_file: self.new_file.clone(),
edit_range: None,
+ predicted: self.predicted,
snapshot_after_last_editing_pause: None,
last_edit_time: self.last_edit_time,
};
@@ -505,7 +562,7 @@ impl LastEvent {
pub(crate) fn compute_diff_between_snapshots(
old_snapshot: &TextBufferSnapshot,
new_snapshot: &TextBufferSnapshot,
-) -> Option<String> {
+) -> Option<(String, Range<Point>)> {
let edits: Vec<Edit<usize>> = new_snapshot
.edits_since::<usize>(&old_snapshot.version)
.collect();
@@ -545,7 +602,7 @@ pub(crate) fn compute_diff_between_snapshots(
new_context_start_row,
);
- Some(diff)
+ Some((diff, new_start_point..new_end_point))
}
fn buffer_path_with_id_fallback(
@@ -716,17 +773,6 @@ impl EditPredictionStore {
.unwrap_or_default()
}
- pub fn edit_history_for_project_with_pause_split_last_event(
- &self,
- project: &Entity<Project>,
- cx: &App,
- ) -> Vec<StoredEvent> {
- self.projects
- .get(&project.entity_id())
- .map(|project_state| project_state.events_split_by_pause(cx))
- .unwrap_or_default()
- }
-
pub fn context_for_project<'a>(
&'a self,
project: &Entity<Project>,
@@ -734,10 +780,19 @@ impl EditPredictionStore {
) -> Vec<RelatedFile> {
self.projects
.get(&project.entity_id())
- .map(|project| {
- project
- .context
- .update(cx, |context, cx| context.related_files(cx))
+ .map(|project_state| {
+ project_state.context.update(cx, |context, cx| {
+ context
+ .related_files_with_buffers(cx)
+ .map(|(mut related_file, buffer)| {
+ related_file.in_open_source_repo = buffer
+ .read(cx)
+ .file()
+ .map_or(false, |file| self.is_file_open_source(&project, file, cx));
+ related_file
+ })
+ .collect()
+ })
})
.unwrap_or_default()
}
@@ -785,9 +840,9 @@ impl EditPredictionStore {
self.projects
.get(&project.entity_id())
.map(|project| {
- project
- .context
- .update(cx, |context, cx| context.related_files_with_buffers(cx))
+ project.context.update(cx, |context, cx| {
+ context.related_files_with_buffers(cx).collect()
+ })
})
.unwrap_or_default()
}
@@ -1011,7 +1066,7 @@ impl EditPredictionStore {
if let language::BufferEvent::Edited = event
&& let Some(project) = project.upgrade()
{
- this.report_changes_for_buffer(&buffer, &project, cx);
+ this.report_changes_for_buffer(&buffer, &project, false, cx);
}
}
}),
@@ -1032,6 +1087,7 @@ impl EditPredictionStore {
&mut self,
buffer: &Entity<Buffer>,
project: &Entity<Project>,
+ is_predicted: bool,
cx: &mut Context<Self>,
) {
let project_state = self.get_or_init_project(project, cx);
@@ -1065,30 +1121,32 @@ impl EditPredictionStore {
last_offset = Some(edit.new.end);
}
- if num_edits > 0 {
- let action_type = match (total_deleted, total_inserted, num_edits) {
- (0, ins, n) if ins == n => UserActionType::InsertChar,
- (0, _, _) => UserActionType::InsertSelection,
- (del, 0, n) if del == n => UserActionType::DeleteChar,
- (_, 0, _) => UserActionType::DeleteSelection,
- (_, ins, n) if ins == n => UserActionType::InsertChar,
- (_, _, _) => UserActionType::InsertSelection,
- };
+ let Some(edit_range) = edit_range else {
+ return;
+ };
- if let Some(offset) = last_offset {
- let point = new_snapshot.offset_to_point(offset);
- let timestamp_epoch_ms = SystemTime::now()
- .duration_since(UNIX_EPOCH)
- .map(|d| d.as_millis() as u64)
- .unwrap_or(0);
- project_state.record_user_action(UserActionRecord {
- action_type,
- buffer_id: buffer.entity_id(),
- line_number: point.row,
- offset,
- timestamp_epoch_ms,
- });
- }
+ let action_type = match (total_deleted, total_inserted, num_edits) {
+ (0, ins, n) if ins == n => UserActionType::InsertChar,
+ (0, _, _) => UserActionType::InsertSelection,
+ (del, 0, n) if del == n => UserActionType::DeleteChar,
+ (_, 0, _) => UserActionType::DeleteSelection,
+ (_, ins, n) if ins == n => UserActionType::InsertChar,
+ (_, _, _) => UserActionType::InsertSelection,
+ };
+
+ if let Some(offset) = last_offset {
+ let point = new_snapshot.offset_to_point(offset);
+ let timestamp_epoch_ms = SystemTime::now()
+ .duration_since(UNIX_EPOCH)
+ .map(|d| d.as_millis() as u64)
+ .unwrap_or(0);
+ project_state.record_user_action(UserActionRecord {
+ action_type,
+ buffer_id: buffer.entity_id(),
+ line_number: point.row,
+ offset,
+ timestamp_epoch_ms,
+ });
}
let events = &mut project_state.events;
@@ -1099,20 +1157,18 @@ impl EditPredictionStore {
== last_event.new_snapshot.remote_id()
&& old_snapshot.version == last_event.new_snapshot.version;
+ let prediction_source_changed = is_predicted != last_event.predicted;
+
let should_coalesce = is_next_snapshot_of_same_buffer
- && edit_range
+ && !prediction_source_changed
+ && last_event
+ .edit_range
.as_ref()
- .zip(last_event.edit_range.as_ref())
- .is_some_and(|(a, b)| {
- let a = a.to_point(&new_snapshot);
- let b = b.to_point(&new_snapshot);
- if a.start > b.end {
- a.start.row.abs_diff(b.end.row) <= CHANGE_GROUPING_LINE_SPAN
- } else if b.start > a.end {
- b.start.row.abs_diff(a.end.row) <= CHANGE_GROUPING_LINE_SPAN
- } else {
- true
- }
+ .is_some_and(|last_edit_range| {
+ lines_between_ranges(
+ &edit_range.to_point(&new_snapshot),
+ &last_edit_range.to_point(&new_snapshot),
+ ) <= CHANGE_GROUPING_LINE_SPAN
});
if should_coalesce {
@@ -1125,7 +1181,7 @@ impl EditPredictionStore {
Some(last_event.new_snapshot.clone());
}
- last_event.edit_range = edit_range;
+ last_event.edit_range = Some(edit_range);
last_event.new_snapshot = new_snapshot;
last_event.last_edit_time = Some(now);
return;
@@ -1141,12 +1197,15 @@ impl EditPredictionStore {
}
}
+ merge_trailing_events_if_needed(events, &old_snapshot, &new_snapshot, &edit_range);
+
project_state.last_event = Some(LastEvent {
old_file,
new_file,
old_snapshot,
new_snapshot,
- edit_range,
+ edit_range: Some(edit_range),
+ predicted: is_predicted,
snapshot_after_last_editing_pause: None,
last_edit_time: Some(now),
});
@@ -1193,11 +1252,18 @@ impl EditPredictionStore {
}
fn accept_current_prediction(&mut self, project: &Entity<Project>, cx: &mut Context<Self>) {
- let Some(project_state) = self.projects.get_mut(&project.entity_id()) else {
+ let Some(current_prediction) = self
+ .projects
+ .get_mut(&project.entity_id())
+ .and_then(|project_state| project_state.current_prediction.take())
+ else {
return;
};
- let Some(current_prediction) = project_state.current_prediction.take() else {
+ self.report_changes_for_buffer(&current_prediction.prediction.buffer, project, true, cx);
+
+ // can't hold a &mut project_state reference across the report_changes_for_buffer call
+ let Some(project_state) = self.projects.get_mut(&project.entity_id()) else {
return;
};
@@ -1719,7 +1785,7 @@ impl EditPredictionStore {
self.get_or_init_project(&project, cx);
let project_state = self.projects.get(&project.entity_id()).unwrap();
- let stored_events = project_state.events_split_by_pause(cx);
+ let stored_events = project_state.events(cx);
let has_events = !stored_events.is_empty();
let events: Vec<Arc<zeta_prompt::Event>> =
stored_events.into_iter().map(|e| e.event).collect();
@@ -1771,15 +1837,18 @@ impl EditPredictionStore {
};
let task = match &self.edit_prediction_model {
- EditPredictionModel::Zeta1 => {
- if should_send_testing_zeta2_request() {
- let mut zeta2_inputs = inputs.clone();
- zeta2_inputs.trigger = PredictEditsRequestTrigger::Testing;
- zeta2::request_prediction_with_zeta2(self, zeta2_inputs, cx).detach();
- }
- zeta1::request_prediction_with_zeta1(self, inputs, cx)
- }
- EditPredictionModel::Zeta2 => zeta2::request_prediction_with_zeta2(self, inputs, cx),
+ EditPredictionModel::Zeta1 => zeta2::request_prediction_with_zeta2(
+ self,
+ inputs,
+ Some(zeta_prompt::EditPredictionModelKind::Zeta1),
+ cx,
+ ),
+ EditPredictionModel::Zeta2 => zeta2::request_prediction_with_zeta2(
+ self,
+ inputs,
+ Some(zeta_prompt::EditPredictionModelKind::Zeta2),
+ cx,
+ ),
EditPredictionModel::Sweep => self.sweep_ai.request_prediction_with_sweep(inputs, cx),
EditPredictionModel::Mercury => self.mercury.request_prediction(inputs, cx),
EditPredictionModel::Ollama => self.ollama.request_prediction(inputs, cx),
@@ -2136,25 +2205,6 @@ impl EditPredictionStore {
.is_some_and(|watcher| watcher.is_project_open_source())
}
- fn can_collect_file(&self, project: &Entity<Project>, file: &Arc<dyn File>, cx: &App) -> bool {
- self.data_collection_choice.is_enabled(cx) && self.is_file_open_source(project, file, cx)
- }
-
- fn can_collect_events(&self, events: &[Arc<zeta_prompt::Event>], cx: &App) -> bool {
- if !self.data_collection_choice.is_enabled(cx) {
- return false;
- }
- events.iter().all(|event| {
- matches!(
- event.as_ref(),
- zeta_prompt::Event::BufferChange {
- in_open_source_repo: true,
- ..
- }
- )
- })
- }
-
fn load_data_collection_choice() -> DataCollectionChoice {
let choice = KEY_VALUE_STORE
.read_kvp(ZED_PREDICT_DATA_COLLECTION_CHOICE)
@@ -2219,6 +2269,67 @@ impl EditPredictionStore {
}
}
+fn merge_trailing_events_if_needed(
+ events: &mut VecDeque<StoredEvent>,
+ end_snapshot: &TextBufferSnapshot,
+ latest_snapshot: &TextBufferSnapshot,
+ latest_edit_range: &Range<Anchor>,
+) {
+ let mut next_old_event = None;
+ let mut mergeable_count = 0;
+ for old_event in events.iter().rev() {
+ if let Some(next_old_event) = &next_old_event
+ && !old_event.can_merge(&next_old_event, latest_snapshot, latest_edit_range)
+ {
+ break;
+ }
+ mergeable_count += 1;
+ next_old_event = Some(old_event);
+ }
+
+ if mergeable_count <= 1 {
+ return;
+ }
+
+ let mut events_to_merge = events.range(events.len() - mergeable_count..).peekable();
+ let oldest_event = events_to_merge.peek().unwrap();
+ let oldest_snapshot = oldest_event.old_snapshot.clone();
+
+ if let Some((diff, edited_range)) =
+ compute_diff_between_snapshots(&oldest_snapshot, end_snapshot)
+ {
+ let merged_event = match oldest_event.event.as_ref() {
+ zeta_prompt::Event::BufferChange {
+ old_path,
+ path,
+ in_open_source_repo,
+ ..
+ } => StoredEvent {
+ event: Arc::new(zeta_prompt::Event::BufferChange {
+ old_path: old_path.clone(),
+ path: path.clone(),
+ diff,
+ in_open_source_repo: *in_open_source_repo,
+ predicted: events_to_merge.all(|e| {
+ matches!(
+ e.event.as_ref(),
+ zeta_prompt::Event::BufferChange {
+ predicted: true,
+ ..
+ }
+ )
+ }),
+ }),
+ old_snapshot: oldest_snapshot.clone(),
+ edit_range: end_snapshot.anchor_before(edited_range.start)
+ ..end_snapshot.anchor_before(edited_range.end),
+ },
+ };
+ events.truncate(events.len() - mergeable_count);
+ events.push_back(merged_event);
+ }
+}
+
pub(crate) fn filter_redundant_excerpts(
mut related_files: Vec<RelatedFile>,
cursor_path: &Path,
@@ -1,11 +1,10 @@
use super::*;
-use crate::{compute_diff_between_snapshots, udiff::apply_diff_to_string, zeta1::MAX_EVENT_TOKENS};
+use crate::{compute_diff_between_snapshots, udiff::apply_diff_to_string};
use client::{UserStore, test::FakeServer};
-use clock::{FakeSystemClock, ReplicaId};
+use clock::FakeSystemClock;
use cloud_api_types::{CreateLlmTokenResponse, LlmToken};
use cloud_llm_client::{
- EditPredictionRejectReason, EditPredictionRejection, PredictEditsBody, PredictEditsResponse,
- RejectEditPredictionsBody,
+ EditPredictionRejectReason, EditPredictionRejection, RejectEditPredictionsBody,
predict_edits_v3::{PredictEditsV3Request, PredictEditsV3Response},
};
use futures::{
@@ -26,7 +25,7 @@ use project::{FakeFs, Project};
use serde_json::json;
use settings::SettingsStore;
use std::{path::Path, sync::Arc, time::Duration};
-use util::{path, rel_path::rel_path};
+use util::path;
use uuid::Uuid;
use zeta_prompt::ZetaPromptInput;
@@ -356,26 +355,9 @@ async fn test_edit_history_getter_pause_splits_last_event(cx: &mut TestAppContex
buffer.edit(vec![(19..19, "!")], None, cx);
});
- // Without time-based splitting, there is one event.
- let events = ep_store.update(cx, |ep_store, cx| {
- ep_store.edit_history_for_project(&project, cx)
- });
- assert_eq!(events.len(), 1);
- let zeta_prompt::Event::BufferChange { diff, .. } = events[0].event.as_ref();
- assert_eq!(
- diff.as_str(),
- indoc! {"
- @@ -1,3 +1,3 @@
- Hello!
- -
- +How are you?!
- Bye
- "}
- );
-
// With time-based splitting, there are two distinct events.
let events = ep_store.update(cx, |ep_store, cx| {
- ep_store.edit_history_for_project_with_pause_split_last_event(&project, cx)
+ ep_store.edit_history_for_project(&project, cx)
});
assert_eq!(events.len(), 2);
let zeta_prompt::Event::BufferChange { diff, .. } = events[0].event.as_ref();
@@ -404,7 +386,7 @@ async fn test_edit_history_getter_pause_splits_last_event(cx: &mut TestAppContex
}
#[gpui::test]
-async fn test_event_grouping_line_span_coalescing(cx: &mut TestAppContext) {
+async fn test_predicted_edits_are_separated_in_edit_history(cx: &mut TestAppContext) {
let (ep_store, _requests) = init_test_with_fake_client(cx);
let fs = FakeFs::new(cx.executor());
@@ -593,6 +575,278 @@ fn render_events(events: &[StoredEvent]) -> String {
.join("\n---\n")
}
+fn render_events_with_predicted(events: &[StoredEvent]) -> Vec<String> {
+ events
+ .iter()
+ .map(|e| {
+ let zeta_prompt::Event::BufferChange {
+ diff, predicted, ..
+ } = e.event.as_ref();
+ let prefix = if *predicted { "predicted" } else { "manual" };
+ format!("{}\n{}", prefix, diff)
+ })
+ .collect()
+}
+
+#[gpui::test]
+async fn test_predicted_flag_coalescing(cx: &mut TestAppContext) {
+ let (ep_store, _requests) = init_test_with_fake_client(cx);
+ let fs = FakeFs::new(cx.executor());
+ fs.insert_tree(
+ "/root",
+ json!({
+ "foo.rs": "line 0\nline 1\nline 2\nline 3\nline 4\nline 5\nline 6\nline 7\nline 8\nline 9\nline 10\nline 11\nline 12\nline 13\nline 14\n"
+ }),
+ )
+ .await;
+ let project = Project::test(fs, vec![path!("/root").as_ref()], cx).await;
+
+ let buffer = project
+ .update(cx, |project, cx| {
+ let path = project.find_project_path(path!("root/foo.rs"), cx).unwrap();
+ project.open_buffer(path, cx)
+ })
+ .await
+ .unwrap();
+
+ ep_store.update(cx, |ep_store, cx| {
+ ep_store.register_buffer(&buffer, &project, cx);
+ });
+
+ // Case 1: Manual edits have `predicted` set to false.
+ buffer.update(cx, |buffer, cx| {
+ buffer.edit(vec![(0..6, "LINE ZERO")], None, cx);
+ });
+
+ let events = ep_store.update(cx, |ep_store, cx| {
+ ep_store.edit_history_for_project(&project, cx)
+ });
+
+ assert_eq!(
+ render_events_with_predicted(&events),
+ vec![indoc! {"
+ manual
+ @@ -1,4 +1,4 @@
+ -line 0
+ +LINE ZERO
+ line 1
+ line 2
+ line 3
+ "}]
+ );
+
+ // Case 2: Multiple successive manual edits near each other are merged into one
+ // event with `predicted` set to false.
+ buffer.update(cx, |buffer, cx| {
+ let offset = Point::new(1, 0).to_offset(buffer);
+ let end = Point::new(1, 6).to_offset(buffer);
+ buffer.edit(vec![(offset..end, "LINE ONE")], None, cx);
+ });
+
+ let events = ep_store.update(cx, |ep_store, cx| {
+ ep_store.edit_history_for_project(&project, cx)
+ });
+ assert_eq!(
+ render_events_with_predicted(&events),
+ vec![indoc! {"
+ manual
+ @@ -1,5 +1,5 @@
+ -line 0
+ -line 1
+ +LINE ZERO
+ +LINE ONE
+ line 2
+ line 3
+ line 4
+ "}]
+ );
+
+ // Case 3: Accepted predictions have `predicted` set to true.
+ // Case 5: A manual edit that follows a predicted edit is not merged with the
+ // predicted edit, even if it is nearby.
+ ep_store.update(cx, |ep_store, cx| {
+ buffer.update(cx, |buffer, cx| {
+ let offset = Point::new(2, 0).to_offset(buffer);
+ let end = Point::new(2, 6).to_offset(buffer);
+ buffer.edit(vec![(offset..end, "LINE TWO")], None, cx);
+ });
+ ep_store.report_changes_for_buffer(&buffer, &project, true, cx);
+ });
+
+ let events = ep_store.update(cx, |ep_store, cx| {
+ ep_store.edit_history_for_project(&project, cx)
+ });
+ assert_eq!(
+ render_events_with_predicted(&events),
+ vec![
+ indoc! {"
+ manual
+ @@ -1,5 +1,5 @@
+ -line 0
+ -line 1
+ +LINE ZERO
+ +LINE ONE
+ line 2
+ line 3
+ line 4
+ "},
+ indoc! {"
+ predicted
+ @@ -1,6 +1,6 @@
+ LINE ZERO
+ LINE ONE
+ -line 2
+ +LINE TWO
+ line 3
+ line 4
+ line 5
+ "}
+ ]
+ );
+
+ // Case 4: Multiple successive accepted predictions near each other are merged
+ // into one event with `predicted` set to true.
+ ep_store.update(cx, |ep_store, cx| {
+ buffer.update(cx, |buffer, cx| {
+ let offset = Point::new(3, 0).to_offset(buffer);
+ let end = Point::new(3, 6).to_offset(buffer);
+ buffer.edit(vec![(offset..end, "LINE THREE")], None, cx);
+ });
+ ep_store.report_changes_for_buffer(&buffer, &project, true, cx);
+ });
+
+ let events = ep_store.update(cx, |ep_store, cx| {
+ ep_store.edit_history_for_project(&project, cx)
+ });
+ assert_eq!(
+ render_events_with_predicted(&events),
+ vec![
+ indoc! {"
+ manual
+ @@ -1,5 +1,5 @@
+ -line 0
+ -line 1
+ +LINE ZERO
+ +LINE ONE
+ line 2
+ line 3
+ line 4
+ "},
+ indoc! {"
+ predicted
+ @@ -1,7 +1,7 @@
+ LINE ZERO
+ LINE ONE
+ -line 2
+ -line 3
+ +LINE TWO
+ +LINE THREE
+ line 4
+ line 5
+ line 6
+ "}
+ ]
+ );
+
+ // Case 5 (continued): A manual edit that follows a predicted edit is not merged
+ // with the predicted edit, even if it is nearby.
+ buffer.update(cx, |buffer, cx| {
+ let offset = Point::new(4, 0).to_offset(buffer);
+ let end = Point::new(4, 6).to_offset(buffer);
+ buffer.edit(vec![(offset..end, "LINE FOUR")], None, cx);
+ });
+
+ let events = ep_store.update(cx, |ep_store, cx| {
+ ep_store.edit_history_for_project(&project, cx)
+ });
+ assert_eq!(
+ render_events_with_predicted(&events),
+ vec![
+ indoc! {"
+ manual
+ @@ -1,5 +1,5 @@
+ -line 0
+ -line 1
+ +LINE ZERO
+ +LINE ONE
+ line 2
+ line 3
+ line 4
+ "},
+ indoc! {"
+ predicted
+ @@ -1,7 +1,7 @@
+ LINE ZERO
+ LINE ONE
+ -line 2
+ -line 3
+ +LINE TWO
+ +LINE THREE
+ line 4
+ line 5
+ line 6
+ "},
+ indoc! {"
+ manual
+ @@ -2,7 +2,7 @@
+ LINE ONE
+ LINE TWO
+ LINE THREE
+ -line 4
+ +LINE FOUR
+ line 5
+ line 6
+ line 7
+ "}
+ ]
+ );
+
+ // Case 6: If we then perform a manual edit at a *different* location (more than
+ // 8 lines away), then the edits at the prior location can be merged with each
+ // other, even if some are predicted and some are not. `predicted` means all
+ // constituent edits were predicted.
+ buffer.update(cx, |buffer, cx| {
+ let offset = Point::new(14, 0).to_offset(buffer);
+ let end = Point::new(14, 7).to_offset(buffer);
+ buffer.edit(vec![(offset..end, "LINE FOURTEEN")], None, cx);
+ });
+
+ let events = ep_store.update(cx, |ep_store, cx| {
+ ep_store.edit_history_for_project(&project, cx)
+ });
+ assert_eq!(
+ render_events_with_predicted(&events),
+ vec![
+ indoc! {"
+ manual
+ @@ -1,8 +1,8 @@
+ -line 0
+ -line 1
+ -line 2
+ -line 3
+ -line 4
+ +LINE ZERO
+ +LINE ONE
+ +LINE TWO
+ +LINE THREE
+ +LINE FOUR
+ line 5
+ line 6
+ line 7
+ "},
+ indoc! {"
+ manual
+ @@ -12,4 +12,4 @@
+ line 11
+ line 12
+ line 13
+ -line 14
+ +LINE FOURTEEN
+ "}
+ ]
+ );
+}
+
#[gpui::test]
async fn test_empty_prediction(cx: &mut TestAppContext) {
let (ep_store, mut requests) = init_test_with_fake_client(cx);
@@ -1424,8 +1678,6 @@ fn init_test_with_fake_client(
})
}
-const BSD_0_TXT: &str = include_str!("../license_examples/0bsd.txt");
-
#[gpui::test]
async fn test_edit_prediction_basic_interpolation(cx: &mut TestAppContext) {
let buffer = cx.new(|cx| Buffer::local("Lorem ipsum dolor", cx));
@@ -1452,6 +1704,9 @@ async fn test_edit_prediction_basic_interpolation(cx: &mut TestAppContext) {
editable_range_in_excerpt: 0..0,
cursor_offset_in_excerpt: 0,
excerpt_start_row: None,
+ excerpt_ranges: None,
+ preferred_model: None,
+ in_open_source_repo: false,
},
buffer_snapshotted_at: Instant::now(),
response_received_at: Instant::now(),
@@ -1555,13 +1810,10 @@ async fn test_clean_up_diff(cx: &mut TestAppContext) {
}
"},
indoc! {"
- <|editable_region_start|>
fn main() {
let word_1 = \"lorem\";
let range = word_1.len()..word_1.len();
}
-
- <|editable_region_end|>
"},
cx,
)
@@ -1582,12 +1834,9 @@ async fn test_clean_up_diff(cx: &mut TestAppContext) {
}
"},
indoc! {"
- <|editable_region_start|>
fn main() {
let story = \"the quick brown fox jumps over the lazy dog\";
}
-
- <|editable_region_end|>
"},
cx,
)
@@ -1605,18 +1854,11 @@ async fn test_edit_prediction_end_of_buffer(cx: &mut TestAppContext) {
init_test(cx);
let buffer_content = "lorem\n";
- let completion_response = indoc! {"
- ```animals.js
- <|start_of_file|>
- <|editable_region_start|>
- lorem
- ipsum
- <|editable_region_end|>
- ```"};
+ let completion_response = "lorem\nipsum\n";
assert_eq!(
apply_edit_prediction(buffer_content, completion_response, cx).await,
- "lorem\nipsum"
+ "lorem\nipsum\n"
);
}
@@ -1685,298 +1927,6 @@ async fn test_edit_prediction_no_spurious_trailing_newline(cx: &mut TestAppConte
});
}
-#[gpui::test]
-async fn test_can_collect_data(cx: &mut TestAppContext) {
- init_test(cx);
-
- let fs = project::FakeFs::new(cx.executor());
- fs.insert_tree(path!("/project"), json!({ "LICENSE": BSD_0_TXT }))
- .await;
-
- let project = Project::test(fs.clone(), [path!("/project").as_ref()], cx).await;
- let buffer = project
- .update(cx, |project, cx| {
- project.open_local_buffer(path!("/project/src/main.rs"), cx)
- })
- .await
- .unwrap();
-
- let (ep_store, captured_request, _) = make_test_ep_store(&project, cx).await;
- ep_store.update(cx, |ep_store, _cx| {
- ep_store.data_collection_choice = DataCollectionChoice::Enabled
- });
-
- run_edit_prediction(&buffer, &project, &ep_store, cx).await;
- assert_eq!(
- captured_request.lock().clone().unwrap().can_collect_data,
- true
- );
-
- ep_store.update(cx, |ep_store, _cx| {
- ep_store.data_collection_choice = DataCollectionChoice::Disabled
- });
-
- run_edit_prediction(&buffer, &project, &ep_store, cx).await;
- assert_eq!(
- captured_request.lock().clone().unwrap().can_collect_data,
- false
- );
-}
-
-#[gpui::test]
-async fn test_no_data_collection_for_remote_file(cx: &mut TestAppContext) {
- init_test(cx);
-
- let fs = project::FakeFs::new(cx.executor());
- let project = Project::test(fs.clone(), [], cx).await;
-
- let buffer = cx.new(|_cx| {
- Buffer::remote(
- language::BufferId::new(1).unwrap(),
- ReplicaId::new(1),
- language::Capability::ReadWrite,
- "fn main() {\n println!(\"Hello\");\n}",
- )
- });
-
- let (ep_store, captured_request, _) = make_test_ep_store(&project, cx).await;
- ep_store.update(cx, |ep_store, _cx| {
- ep_store.data_collection_choice = DataCollectionChoice::Enabled
- });
-
- run_edit_prediction(&buffer, &project, &ep_store, cx).await;
- assert_eq!(
- captured_request.lock().clone().unwrap().can_collect_data,
- false
- );
-}
-
-#[gpui::test]
-async fn test_no_data_collection_for_private_file(cx: &mut TestAppContext) {
- init_test(cx);
-
- let fs = project::FakeFs::new(cx.executor());
- fs.insert_tree(
- path!("/project"),
- json!({
- "LICENSE": BSD_0_TXT,
- ".env": "SECRET_KEY=secret"
- }),
- )
- .await;
-
- let project = Project::test(fs.clone(), [path!("/project").as_ref()], cx).await;
- let buffer = project
- .update(cx, |project, cx| {
- project.open_local_buffer("/project/.env", cx)
- })
- .await
- .unwrap();
-
- let (ep_store, captured_request, _) = make_test_ep_store(&project, cx).await;
- ep_store.update(cx, |ep_store, _cx| {
- ep_store.data_collection_choice = DataCollectionChoice::Enabled
- });
-
- run_edit_prediction(&buffer, &project, &ep_store, cx).await;
- assert_eq!(
- captured_request.lock().clone().unwrap().can_collect_data,
- false
- );
-}
-
-#[gpui::test]
-async fn test_no_data_collection_for_untitled_buffer(cx: &mut TestAppContext) {
- init_test(cx);
-
- let fs = project::FakeFs::new(cx.executor());
- let project = Project::test(fs.clone(), [], cx).await;
- let buffer = cx.new(|cx| Buffer::local("", cx));
-
- let (ep_store, captured_request, _) = make_test_ep_store(&project, cx).await;
- ep_store.update(cx, |ep_store, _cx| {
- ep_store.data_collection_choice = DataCollectionChoice::Enabled
- });
-
- run_edit_prediction(&buffer, &project, &ep_store, cx).await;
- assert_eq!(
- captured_request.lock().clone().unwrap().can_collect_data,
- false
- );
-}
-
-#[gpui::test]
-async fn test_no_data_collection_when_closed_source(cx: &mut TestAppContext) {
- init_test(cx);
-
- let fs = project::FakeFs::new(cx.executor());
- fs.insert_tree(path!("/project"), json!({ "main.rs": "fn main() {}" }))
- .await;
-
- let project = Project::test(fs.clone(), [path!("/project").as_ref()], cx).await;
- let buffer = project
- .update(cx, |project, cx| {
- project.open_local_buffer("/project/main.rs", cx)
- })
- .await
- .unwrap();
-
- let (ep_store, captured_request, _) = make_test_ep_store(&project, cx).await;
- ep_store.update(cx, |ep_store, _cx| {
- ep_store.data_collection_choice = DataCollectionChoice::Enabled
- });
-
- run_edit_prediction(&buffer, &project, &ep_store, cx).await;
- assert_eq!(
- captured_request.lock().clone().unwrap().can_collect_data,
- false
- );
-}
-
-#[gpui::test]
-async fn test_data_collection_status_changes_on_move(cx: &mut TestAppContext) {
- init_test(cx);
-
- let fs = project::FakeFs::new(cx.executor());
- fs.insert_tree(
- path!("/open_source_worktree"),
- json!({ "LICENSE": BSD_0_TXT, "main.rs": "" }),
- )
- .await;
- fs.insert_tree(path!("/closed_source_worktree"), json!({ "main.rs": "" }))
- .await;
-
- let project = Project::test(
- fs.clone(),
- [
- path!("/open_source_worktree").as_ref(),
- path!("/closed_source_worktree").as_ref(),
- ],
- cx,
- )
- .await;
- let buffer = project
- .update(cx, |project, cx| {
- project.open_local_buffer(path!("/open_source_worktree/main.rs"), cx)
- })
- .await
- .unwrap();
-
- let (ep_store, captured_request, _) = make_test_ep_store(&project, cx).await;
- ep_store.update(cx, |ep_store, _cx| {
- ep_store.data_collection_choice = DataCollectionChoice::Enabled
- });
-
- run_edit_prediction(&buffer, &project, &ep_store, cx).await;
- assert_eq!(
- captured_request.lock().clone().unwrap().can_collect_data,
- true
- );
-
- let closed_source_file = project
- .update(cx, |project, cx| {
- let worktree2 = project
- .worktree_for_root_name("closed_source_worktree", cx)
- .unwrap();
- worktree2.update(cx, |worktree2, cx| {
- worktree2.load_file(rel_path("main.rs"), cx)
- })
- })
- .await
- .unwrap()
- .file;
-
- buffer.update(cx, |buffer, cx| {
- buffer.file_updated(closed_source_file, cx);
- });
-
- run_edit_prediction(&buffer, &project, &ep_store, cx).await;
- assert_eq!(
- captured_request.lock().clone().unwrap().can_collect_data,
- false
- );
-}
-
-#[gpui::test]
-async fn test_no_data_collection_for_events_in_uncollectable_buffers(cx: &mut TestAppContext) {
- init_test(cx);
-
- let fs = project::FakeFs::new(cx.executor());
- fs.insert_tree(
- path!("/worktree1"),
- json!({ "LICENSE": BSD_0_TXT, "main.rs": "", "other.rs": "" }),
- )
- .await;
- fs.insert_tree(path!("/worktree2"), json!({ "private.rs": "" }))
- .await;
-
- let project = Project::test(
- fs.clone(),
- [path!("/worktree1").as_ref(), path!("/worktree2").as_ref()],
- cx,
- )
- .await;
- let buffer = project
- .update(cx, |project, cx| {
- project.open_local_buffer(path!("/worktree1/main.rs"), cx)
- })
- .await
- .unwrap();
- let private_buffer = project
- .update(cx, |project, cx| {
- project.open_local_buffer(path!("/worktree2/file.rs"), cx)
- })
- .await
- .unwrap();
-
- let (ep_store, captured_request, _) = make_test_ep_store(&project, cx).await;
- ep_store.update(cx, |ep_store, _cx| {
- ep_store.data_collection_choice = DataCollectionChoice::Enabled
- });
-
- run_edit_prediction(&buffer, &project, &ep_store, cx).await;
- assert_eq!(
- captured_request.lock().clone().unwrap().can_collect_data,
- true
- );
-
- // this has a side effect of registering the buffer to watch for edits
- run_edit_prediction(&private_buffer, &project, &ep_store, cx).await;
- assert_eq!(
- captured_request.lock().clone().unwrap().can_collect_data,
- false
- );
-
- private_buffer.update(cx, |private_buffer, cx| {
- private_buffer.edit([(0..0, "An edit for the history!")], None, cx);
- });
-
- run_edit_prediction(&buffer, &project, &ep_store, cx).await;
- assert_eq!(
- captured_request.lock().clone().unwrap().can_collect_data,
- false
- );
-
- // make an edit that uses too many bytes, causing private_buffer edit to not be able to be
- // included
- buffer.update(cx, |buffer, cx| {
- buffer.edit(
- [(
- 0..0,
- " ".repeat(MAX_EVENT_TOKENS * cursor_excerpt::BYTES_PER_TOKEN_GUESS),
- )],
- None,
- cx,
- );
- });
-
- run_edit_prediction(&buffer, &project, &ep_store, cx).await;
- assert_eq!(
- captured_request.lock().clone().unwrap().can_collect_data,
- true
- );
-}
-
fn init_test(cx: &mut TestAppContext) {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
@@ -1992,7 +1942,7 @@ async fn apply_edit_prediction(
let fs = project::FakeFs::new(cx.executor());
let project = Project::test(fs.clone(), [path!("/project").as_ref()], cx).await;
let buffer = cx.new(|cx| Buffer::local(buffer_content, cx));
- let (ep_store, _, response) = make_test_ep_store(&project, cx).await;
+ let (ep_store, response) = make_test_ep_store(&project, cx).await;
*response.lock() = completion_response.to_string();
let edit_prediction = run_edit_prediction(&buffer, &project, &ep_store, cx).await;
buffer.update(cx, |buffer, cx| {
@@ -2021,28 +1971,13 @@ async fn run_edit_prediction(
async fn make_test_ep_store(
project: &Entity<Project>,
cx: &mut TestAppContext,
-) -> (
- Entity<EditPredictionStore>,
- Arc<Mutex<Option<PredictEditsBody>>>,
- Arc<Mutex<String>>,
-) {
- let default_response = indoc! {"
- ```main.rs
- <|start_of_file|>
- <|editable_region_start|>
- hello world
- <|editable_region_end|>
- ```"
- };
- let captured_request: Arc<Mutex<Option<PredictEditsBody>>> = Arc::new(Mutex::new(None));
- let completion_response: Arc<Mutex<String>> =
- Arc::new(Mutex::new(default_response.to_string()));
+) -> (Entity<EditPredictionStore>, Arc<Mutex<String>>) {
+ let default_response = "hello world\n".to_string();
+ let completion_response: Arc<Mutex<String>> = Arc::new(Mutex::new(default_response));
let http_client = FakeHttpClient::create({
- let captured_request = captured_request.clone();
let completion_response = completion_response.clone();
let mut next_request_id = 0;
move |req| {
- let captured_request = captured_request.clone();
let completion_response = completion_response.clone();
async move {
match (req.method(), req.uri().path()) {
@@ -2056,24 +1991,6 @@ async fn make_test_ep_store(
.into(),
)
.unwrap()),
- (&Method::POST, "/predict_edits/v2") => {
- let mut request_body = String::new();
- req.into_body().read_to_string(&mut request_body).await?;
- *captured_request.lock() =
- Some(serde_json::from_str(&request_body).unwrap());
- next_request_id += 1;
- Ok(http_client::Response::builder()
- .status(200)
- .body(
- serde_json::to_string(&PredictEditsResponse {
- request_id: format!("request-{next_request_id}"),
- output_excerpt: completion_response.lock().clone(),
- })
- .unwrap()
- .into(),
- )
- .unwrap())
- }
(&Method::POST, "/predict_edits/v3") => {
next_request_id += 1;
Ok(http_client::Response::builder()
@@ -2081,7 +1998,7 @@ async fn make_test_ep_store(
.body(
serde_json::to_string(&PredictEditsV3Response {
request_id: format!("request-{next_request_id}"),
- output: "hello world".to_string(),
+ output: completion_response.lock().clone(),
})
.unwrap()
.into(),
@@ -2120,7 +2037,7 @@ async fn make_test_ep_store(
ep_store
});
- (ep_store, captured_request, completion_response)
+ (ep_store, completion_response)
}
fn to_completion_edits(
@@ -2261,7 +2178,7 @@ fn test_compute_diff_between_snapshots(cx: &mut TestAppContext) {
let new_snapshot = buffer.read_with(cx, |buffer, _| buffer.text_snapshot());
- let diff = compute_diff_between_snapshots(&old_snapshot, &new_snapshot).unwrap();
+ let (diff, _) = compute_diff_between_snapshots(&old_snapshot, &new_snapshot).unwrap();
assert_eq!(
diff,
@@ -66,6 +66,7 @@ pub struct CapturedPromptInput {
pub excerpt_start_row: Option<u32>,
pub events: Vec<CapturedEvent>,
pub related_files: Vec<CapturedRelatedFile>,
+ pub in_open_source_repo: bool,
}
#[derive(Clone, Debug, PartialEq, Hash, Serialize, Deserialize)]
@@ -101,6 +102,7 @@ impl CapturedRelatedFile {
zeta_prompt::RelatedFile {
path: self.path.clone(),
max_row: self.max_row,
+ in_open_source_repo: false,
excerpts: self
.excerpts
.iter()
@@ -97,6 +97,9 @@ impl Mercury {
- context_offset_range.start)
..(editable_offset_range.end - context_offset_range.start),
excerpt_start_row: Some(context_start_row),
+ excerpt_ranges: None,
+ preferred_model: None,
+ in_open_source_repo: false,
};
let prompt = build_prompt(&inputs);
@@ -169,6 +169,9 @@ impl Ollama {
- context_offset_range.start)
..(editable_offset_range.end - context_offset_range.start),
excerpt_start_row: Some(input_excerpt.context_range.start.row),
+ excerpt_ranges: None,
+ preferred_model: None,
+ in_open_source_repo: false,
};
(prompt, stop_tokens, Some(editable_offset_range), inputs)
@@ -195,6 +198,9 @@ impl Ollama {
.text_for_range(excerpt_range)
.collect::<String>()
.into(),
+ excerpt_ranges: None,
+ preferred_model: None,
+ in_open_source_repo: false,
};
let prefix = inputs.cursor_excerpt[..inputs.cursor_offset_in_excerpt].to_string();
@@ -158,6 +158,9 @@ mod tests {
cursor_excerpt: "".into(),
editable_range_in_excerpt: 0..0,
excerpt_start_row: None,
+ excerpt_ranges: None,
+ preferred_model: None,
+ in_open_source_repo: false,
},
buffer_snapshotted_at: Instant::now(),
response_received_at: Instant::now(),
@@ -219,6 +219,9 @@ impl SweepAi {
editable_range_in_excerpt: 0..inputs.snapshot.len(),
cursor_offset_in_excerpt: request_body.cursor_position,
excerpt_start_row: Some(0),
+ excerpt_ranges: None,
+ preferred_model: None,
+ in_open_source_repo: false,
};
send_started_event(
@@ -1,26 +1,13 @@
-use std::{fmt::Write, ops::Range, path::Path, sync::Arc, time::Instant};
+use std::{fmt::Write, ops::Range, sync::Arc};
-use crate::{
- DebugEvent, EditPredictionFinishedDebugEvent, EditPredictionId, EditPredictionModelInput,
- EditPredictionStartedDebugEvent, EditPredictionStore, ZedUpdateRequiredError,
- cursor_excerpt::{editable_and_context_ranges_for_cursor_position, guess_token_count},
- prediction::EditPredictionResult,
-};
+use crate::cursor_excerpt::{editable_and_context_ranges_for_cursor_position, guess_token_count};
use anyhow::Result;
-use cloud_llm_client::{
- PredictEditsBody, PredictEditsGitInfo, PredictEditsRequestTrigger, PredictEditsResponse,
-};
+use cloud_llm_client::PredictEditsBody;
use edit_prediction_types::PredictedCursorPosition;
-use gpui::{App, AppContext as _, AsyncApp, Context, Entity, SharedString, Task};
-use language::{
- Anchor, Buffer, BufferSnapshot, OffsetRangeExt as _, Point, ToOffset, ToPoint as _, text_diff,
-};
-use project::{Project, ProjectPath};
-use release_channel::AppVersion;
+use language::{Anchor, BufferSnapshot, Point, text_diff};
use text::Bias;
-use workspace::notifications::{ErrorMessagePrompt, NotificationId, show_app_notification};
use zeta_prompt::{
- Event, ZetaPromptInput,
+ Event,
zeta1::{
CURSOR_MARKER, EDITABLE_REGION_END_MARKER, EDITABLE_REGION_START_MARKER,
START_OF_FILE_MARKER,
@@ -28,260 +15,8 @@ use zeta_prompt::{
};
pub(crate) const MAX_CONTEXT_TOKENS: usize = 150;
-pub(crate) const MAX_REWRITE_TOKENS: usize = 350;
pub(crate) const MAX_EVENT_TOKENS: usize = 500;
-pub(crate) fn request_prediction_with_zeta1(
- store: &mut EditPredictionStore,
- EditPredictionModelInput {
- project,
- buffer,
- snapshot,
- position,
- events,
- trigger,
- debug_tx,
- ..
- }: EditPredictionModelInput,
- cx: &mut Context<EditPredictionStore>,
-) -> Task<Result<Option<EditPredictionResult>>> {
- let buffer_snapshotted_at = Instant::now();
- let client = store.client.clone();
- let llm_token = store.llm_token.clone();
- let app_version = AppVersion::global(cx);
-
- let (git_info, can_collect_file) = if let Some(file) = snapshot.file() {
- let can_collect_file = store.can_collect_file(&project, file, cx);
- let git_info = if can_collect_file {
- git_info_for_file(&project, &ProjectPath::from_file(file.as_ref(), cx), cx)
- } else {
- None
- };
- (git_info, can_collect_file)
- } else {
- (None, false)
- };
-
- let full_path: Arc<Path> = snapshot
- .file()
- .map(|f| Arc::from(f.full_path(cx).as_path()))
- .unwrap_or_else(|| Arc::from(Path::new("untitled")));
- let full_path_str = full_path.to_string_lossy().into_owned();
- let cursor_point = position.to_point(&snapshot);
- let prompt_for_events = {
- let events = events.clone();
- move || prompt_for_events_impl(&events, MAX_EVENT_TOKENS)
- };
- let gather_task = gather_context(
- full_path_str,
- &snapshot,
- cursor_point,
- prompt_for_events,
- trigger,
- cx,
- );
-
- let uri = match client
- .http_client()
- .build_zed_llm_url("/predict_edits/v2", &[])
- {
- Ok(url) => Arc::from(url),
- Err(err) => return Task::ready(Err(err)),
- };
-
- cx.spawn(async move |this, cx| {
- let GatherContextOutput {
- mut body,
- context_range,
- editable_range,
- included_events_count,
- } = gather_task.await?;
- let done_gathering_context_at = Instant::now();
-
- let included_events = &events[events.len() - included_events_count..events.len()];
- body.can_collect_data = can_collect_file
- && this
- .read_with(cx, |this, cx| this.can_collect_events(included_events, cx))
- .unwrap_or(false);
- if body.can_collect_data {
- body.git_info = git_info;
- }
-
- log::debug!(
- "Events:\n{}\nExcerpt:\n{:?}",
- body.input_events,
- body.input_excerpt
- );
-
- let response = EditPredictionStore::send_api_request::<PredictEditsResponse>(
- |request| {
- Ok(request
- .uri(uri.as_str())
- .body(serde_json::to_string(&body)?.into())?)
- },
- client,
- llm_token,
- app_version,
- true,
- )
- .await;
-
- let context_start_offset = context_range.start.to_offset(&snapshot);
- let context_start_row = context_range.start.row;
- let editable_offset_range = editable_range.to_offset(&snapshot);
-
- let inputs = ZetaPromptInput {
- events: included_events.into(),
- related_files: vec![],
- cursor_path: full_path,
- cursor_excerpt: snapshot
- .text_for_range(context_range)
- .collect::<String>()
- .into(),
- editable_range_in_excerpt: (editable_range.start - context_start_offset)
- ..(editable_offset_range.end - context_start_offset),
- cursor_offset_in_excerpt: cursor_point.to_offset(&snapshot) - context_start_offset,
- excerpt_start_row: Some(context_start_row),
- };
-
- if let Some(debug_tx) = &debug_tx {
- debug_tx
- .unbounded_send(DebugEvent::EditPredictionStarted(
- EditPredictionStartedDebugEvent {
- buffer: buffer.downgrade(),
- prompt: Some(serde_json::to_string(&inputs).unwrap()),
- position,
- },
- ))
- .ok();
- }
-
- let (response, usage) = match response {
- Ok(response) => response,
- Err(err) => {
- if err.is::<ZedUpdateRequiredError>() {
- cx.update(|cx| {
- this.update(cx, |ep_store, _cx| {
- ep_store.update_required = true;
- })
- .ok();
-
- let error_message: SharedString = err.to_string().into();
- show_app_notification(
- NotificationId::unique::<ZedUpdateRequiredError>(),
- cx,
- move |cx| {
- cx.new(|cx| {
- ErrorMessagePrompt::new(error_message.clone(), cx)
- .with_link_button("Update Zed", "https://zed.dev/releases")
- })
- },
- );
- });
- }
-
- return Err(err);
- }
- };
-
- let received_response_at = Instant::now();
- log::debug!("completion response: {}", &response.output_excerpt);
-
- if let Some(usage) = usage {
- this.update(cx, |this, cx| {
- this.user_store.update(cx, |user_store, cx| {
- user_store.update_edit_prediction_usage(usage, cx);
- });
- })
- .ok();
- }
-
- if let Some(debug_tx) = &debug_tx {
- debug_tx
- .unbounded_send(DebugEvent::EditPredictionFinished(
- EditPredictionFinishedDebugEvent {
- buffer: buffer.downgrade(),
- model_output: Some(response.output_excerpt.clone()),
- position,
- },
- ))
- .ok();
- }
-
- let edit_prediction = process_completion_response(
- response,
- buffer,
- &snapshot,
- editable_range,
- inputs,
- buffer_snapshotted_at,
- received_response_at,
- cx,
- )
- .await;
-
- let finished_at = Instant::now();
-
- // record latency for ~1% of requests
- if rand::random::<u8>() <= 2 {
- telemetry::event!(
- "Edit Prediction Request",
- context_latency = done_gathering_context_at
- .duration_since(buffer_snapshotted_at)
- .as_millis(),
- request_latency = received_response_at
- .duration_since(done_gathering_context_at)
- .as_millis(),
- process_latency = finished_at.duration_since(received_response_at).as_millis()
- );
- }
-
- edit_prediction.map(Some)
- })
-}
-
-fn process_completion_response(
- prediction_response: PredictEditsResponse,
- buffer: Entity<Buffer>,
- snapshot: &BufferSnapshot,
- editable_range: Range<usize>,
- inputs: ZetaPromptInput,
- buffer_snapshotted_at: Instant,
- received_response_at: Instant,
- cx: &AsyncApp,
-) -> Task<Result<EditPredictionResult>> {
- let snapshot = snapshot.clone();
- let request_id = prediction_response.request_id;
- let output_excerpt = prediction_response.output_excerpt;
- cx.spawn(async move |cx| {
- let output_excerpt: Arc<str> = output_excerpt.into();
-
- let edits: Arc<[(Range<Anchor>, Arc<str>)]> = cx
- .background_spawn({
- let output_excerpt = output_excerpt.clone();
- let editable_range = editable_range.clone();
- let snapshot = snapshot.clone();
- async move { parse_edits(output_excerpt.as_ref(), editable_range, &snapshot) }
- })
- .await?
- .into();
-
- let id = EditPredictionId(request_id.into());
- Ok(EditPredictionResult::new(
- id,
- &buffer,
- &snapshot,
- edits,
- None,
- buffer_snapshotted_at,
- received_response_at,
- inputs,
- cx,
- )
- .await)
- })
-}
-
pub(crate) fn parse_edits(
output_excerpt: &str,
editable_range: Range<usize>,
@@ -318,14 +53,32 @@ pub(crate) fn parse_edits(
let content_start = start_markers
.first()
- .map(|e| e.0 + EDITABLE_REGION_START_MARKER.len() + 1) // +1 to skip \n after marker
+ .map(|e| e.0 + EDITABLE_REGION_START_MARKER.len())
+ .map(|start| {
+ if content.len() > start
+ && content.is_char_boundary(start)
+ && content[start..].starts_with('\n')
+ {
+ start + 1
+ } else {
+ start
+ }
+ })
.unwrap_or(0);
let content_end = end_markers
.first()
- .map(|e| e.0.saturating_sub(1)) // -1 to exclude \n before marker
+ .map(|e| {
+ if e.0 > 0 && content.is_char_boundary(e.0 - 1) && content[e.0 - 1..].starts_with('\n')
+ {
+ e.0 - 1
+ } else {
+ e.0
+ }
+ })
.unwrap_or(content.strip_suffix("\n").unwrap_or(&content).len());
- // if there is a single newline between markers, content_start will be 1 more than content_end. .min ensures empty slice in that case
+ // min guards against content_start exceeding content_end when both account
+ // for the same newline, as in: <|editable_region_start|>\n<|editable_region_end|>
let new_text = &content[content_start.min(content_end)..content_end];
let old_text = snapshot
@@ -365,6 +118,7 @@ pub fn compute_edits_and_cursor_position(
// new_offset = old_offset + delta, so old_offset = new_offset - delta
let mut delta: isize = 0;
let mut cursor_position: Option<PredictedCursorPosition> = None;
+ let buffer_len = snapshot.len();
let edits = diffs
.iter()
@@ -376,13 +130,15 @@ pub fn compute_edits_and_cursor_position(
if cursor_offset < edit_start_in_new {
let cursor_in_old = (cursor_offset as isize - delta) as usize;
+ let buffer_offset = (offset + cursor_in_old).min(buffer_len);
cursor_position = Some(PredictedCursorPosition::at_anchor(
- snapshot.anchor_after(offset + cursor_in_old),
+ snapshot.anchor_after(buffer_offset),
));
} else if cursor_offset < edit_end_in_new {
+ let buffer_offset = (offset + raw_old_range.start).min(buffer_len);
let offset_within_insertion = cursor_offset - edit_start_in_new;
cursor_position = Some(PredictedCursorPosition::new(
- snapshot.anchor_before(offset + raw_old_range.start),
+ snapshot.anchor_before(buffer_offset),
offset_within_insertion,
));
}
@@ -405,6 +161,9 @@ pub fn compute_edits_and_cursor_position(
old_range.start += prefix_len;
old_range.end -= suffix_len;
+ old_range.start = old_range.start.min(buffer_len);
+ old_range.end = old_range.end.min(buffer_len);
+
let new_text = new_text[prefix_len..new_text.len() - suffix_len].into();
let range = if old_range.is_empty() {
let anchor = snapshot.anchor_after(old_range.start);
@@ -434,35 +193,6 @@ fn common_prefix<T1: Iterator<Item = char>, T2: Iterator<Item = char>>(a: T1, b:
.sum()
}
-fn git_info_for_file(
- project: &Entity<Project>,
- project_path: &ProjectPath,
- cx: &App,
-) -> Option<PredictEditsGitInfo> {
- let git_store = project.read(cx).git_store().read(cx);
- if let Some((repository, _repo_path)) =
- git_store.repository_and_path_for_project_path(project_path, cx)
- {
- let repository = repository.read(cx);
- let head_sha = repository
- .head_commit
- .as_ref()
- .map(|head_commit| head_commit.sha.to_string());
- let remote_origin_url = repository.remote_origin_url.clone();
- let remote_upstream_url = repository.remote_upstream_url.clone();
- if head_sha.is_none() && remote_origin_url.is_none() && remote_upstream_url.is_none() {
- return None;
- }
- Some(PredictEditsGitInfo {
- head_sha,
- remote_origin_url,
- remote_upstream_url,
- })
- } else {
- None
- }
-}
-
pub struct GatherContextOutput {
pub body: PredictEditsBody,
pub context_range: Range<Point>,
@@ -470,48 +200,6 @@ pub struct GatherContextOutput {
pub included_events_count: usize,
}
-pub fn gather_context(
- full_path_str: String,
- snapshot: &BufferSnapshot,
- cursor_point: language::Point,
- prompt_for_events: impl FnOnce() -> (String, usize) + Send + 'static,
- trigger: PredictEditsRequestTrigger,
- cx: &App,
-) -> Task<Result<GatherContextOutput>> {
- cx.background_spawn({
- let snapshot = snapshot.clone();
- async move {
- let input_excerpt = excerpt_for_cursor_position(
- cursor_point,
- &full_path_str,
- &snapshot,
- MAX_REWRITE_TOKENS,
- MAX_CONTEXT_TOKENS,
- );
- let (input_events, included_events_count) = prompt_for_events();
- let editable_range = input_excerpt.editable_range.to_offset(&snapshot);
-
- let body = PredictEditsBody {
- input_events,
- input_excerpt: input_excerpt.prompt,
- can_collect_data: false,
- diagnostic_groups: None,
- git_info: None,
- outline: None,
- speculated_output: None,
- trigger,
- };
-
- Ok(GatherContextOutput {
- body,
- context_range: input_excerpt.context_range,
- editable_range,
- included_events_count,
- })
- }
- })
-}
-
pub(crate) fn prompt_for_events(events: &[Arc<Event>], max_tokens: usize) -> String {
prompt_for_events_impl(events, max_tokens).0
}
@@ -638,6 +326,7 @@ mod tests {
use gpui::{App, AppContext};
use indoc::indoc;
use language::Buffer;
+ use text::OffsetRangeExt as _;
#[gpui::test]
fn test_excerpt_for_cursor_position(cx: &mut App) {
@@ -733,4 +422,30 @@ mod tests {
assert_eq!(range.to_offset(&snapshot), 0..text.len(),);
assert_eq!(new_text.as_ref(), "");
}
+
+ #[gpui::test]
+ fn test_parse_edits_multibyte_char_before_end_marker(cx: &mut App) {
+ let text = "// café";
+ let buffer = cx.new(|cx| Buffer::local(text, cx));
+ let snapshot = buffer.read(cx).snapshot();
+
+ let output = "<|editable_region_start|>\n// café<|editable_region_end|>";
+ let editable_range = 0..text.len();
+
+ let edits = parse_edits(output, editable_range, &snapshot).unwrap();
+ assert!(edits.is_empty());
+ }
+
+ #[gpui::test]
+ fn test_parse_edits_multibyte_char_after_start_marker(cx: &mut App) {
+ let text = "é is great";
+ let buffer = cx.new(|cx| Buffer::local(text, cx));
+ let snapshot = buffer.read(cx).snapshot();
+
+ let output = "<|editable_region_start|>é is great\n<|editable_region_end|>";
+ let editable_range = 0..text.len();
+
+ let edits = parse_edits(output, editable_range, &snapshot).unwrap();
+ assert!(edits.is_empty());
+ }
}
@@ -1,10 +1,11 @@
+use crate::cursor_excerpt::{compute_excerpt_ranges, excerpt_ranges_to_byte_offsets};
use crate::prediction::EditPredictionResult;
use crate::zeta1::compute_edits_and_cursor_position;
use crate::{
CurrentEditPrediction, DebugEvent, EditPredictionFinishedDebugEvent, EditPredictionId,
EditPredictionModelInput, EditPredictionStartedDebugEvent, EditPredictionStore,
};
-use anyhow::{Result, anyhow};
+use anyhow::Result;
use cloud_llm_client::predict_edits_v3::RawCompletionRequest;
use cloud_llm_client::{AcceptEditPredictionBody, EditPredictionRejectReason};
use gpui::{App, Task, prelude::*};
@@ -13,8 +14,10 @@ use release_channel::AppVersion;
use std::env;
use std::{path::Path, sync::Arc, time::Instant};
-use zeta_prompt::{CURSOR_MARKER, ZetaFormat, clean_zeta2_model_output};
-use zeta_prompt::{format_zeta_prompt, get_prefill};
+use zeta_prompt::{
+ CURSOR_MARKER, EditPredictionModelKind, ZetaFormat, clean_zeta2_model_output,
+ format_zeta_prompt, get_prefill, prompt_input_contains_special_tokens,
+};
pub const MAX_CONTEXT_TOKENS: usize = 350;
@@ -39,24 +42,30 @@ pub fn request_prediction_with_zeta2(
events,
debug_tx,
trigger,
+ project,
..
}: EditPredictionModelInput,
+ preferred_model: Option<EditPredictionModelKind>,
cx: &mut Context<EditPredictionStore>,
) -> Task<Result<Option<EditPredictionResult>>> {
let buffer_snapshotted_at = Instant::now();
let raw_config = store.zeta2_raw_config().cloned();
- let Some(excerpt_path) = snapshot
+ let excerpt_path: Arc<Path> = snapshot
.file()
.map(|file| -> Arc<Path> { file.full_path(cx).into() })
- else {
- return Task::ready(Err(anyhow!("No file path for excerpt")));
- };
+ .unwrap_or_else(|| Arc::from(Path::new("untitled")));
let client = store.client.clone();
let llm_token = store.llm_token.clone();
let app_version = AppVersion::global(cx);
+ let is_open_source = snapshot
+ .file()
+ .map_or(false, |file| store.is_file_open_source(&project, file, cx))
+ && events.iter().all(|event| event.in_open_source_repo())
+ && related_files.iter().all(|file| file.in_open_source_repo);
+
let request_task = cx.background_spawn({
async move {
let zeta_version = raw_config
@@ -72,8 +81,14 @@ pub fn request_prediction_with_zeta2(
excerpt_path,
cursor_offset,
zeta_version,
+ preferred_model,
+ is_open_source,
);
+ if prompt_input_contains_special_tokens(&prompt_input, zeta_version) {
+ return Ok((None, None));
+ }
+
if let Some(debug_tx) = &debug_tx {
let prompt = format_zeta_prompt(&prompt_input, zeta_version);
debug_tx
@@ -248,41 +263,52 @@ pub fn zeta2_prompt_input(
excerpt_path: Arc<Path>,
cursor_offset: usize,
zeta_format: ZetaFormat,
+ preferred_model: Option<EditPredictionModelKind>,
+ is_open_source: bool,
) -> (std::ops::Range<usize>, zeta_prompt::ZetaPromptInput) {
let cursor_point = cursor_offset.to_point(snapshot);
- let (editable_range, context_range) =
- crate::cursor_excerpt::editable_and_context_ranges_for_cursor_position(
- cursor_point,
- snapshot,
- max_editable_tokens(zeta_format),
- MAX_CONTEXT_TOKENS,
- );
+ let (full_context, range_points) = compute_excerpt_ranges(cursor_point, snapshot);
let related_files = crate::filter_redundant_excerpts(
related_files,
excerpt_path.as_ref(),
- context_range.start.row..context_range.end.row,
+ full_context.start.row..full_context.end.row,
);
- let context_start_offset = context_range.start.to_offset(snapshot);
- let context_start_row = context_range.start.row;
+ let full_context_start_offset = full_context.start.to_offset(snapshot);
+ let full_context_start_row = full_context.start.row;
+
+ let excerpt_ranges =
+ excerpt_ranges_to_byte_offsets(&range_points, full_context_start_offset, snapshot);
+
+ let editable_range = match preferred_model {
+ Some(EditPredictionModelKind::Zeta1) => &range_points.editable_350,
+ _ => match zeta_format {
+ ZetaFormat::V0112MiddleAtEnd | ZetaFormat::V0113Ordered => &range_points.editable_150,
+ _ => &range_points.editable_180,
+ },
+ };
+
let editable_offset_range = editable_range.to_offset(snapshot);
- let cursor_offset_in_excerpt = cursor_offset - context_start_offset;
- let editable_range_in_excerpt = (editable_offset_range.start - context_start_offset)
- ..(editable_offset_range.end - context_start_offset);
+ let cursor_offset_in_excerpt = cursor_offset - full_context_start_offset;
+ let editable_range_in_excerpt = (editable_offset_range.start - full_context_start_offset)
+ ..(editable_offset_range.end - full_context_start_offset);
let prompt_input = zeta_prompt::ZetaPromptInput {
cursor_path: excerpt_path,
cursor_excerpt: snapshot
- .text_for_range(context_range)
+ .text_for_range(full_context)
.collect::<String>()
.into(),
editable_range_in_excerpt,
cursor_offset_in_excerpt,
- excerpt_start_row: Some(context_start_row),
+ excerpt_start_row: Some(full_context_start_row),
events,
related_files,
+ excerpt_ranges: Some(excerpt_ranges),
+ preferred_model,
+ in_open_source_repo: is_open_source,
};
(editable_offset_range, prompt_input)
}
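The re-based values in `zeta2_prompt_input` all subtract the excerpt's start offset so they are relative to the text actually embedded in the prompt. A minimal sketch of that arithmetic (hypothetical helper, not the real function):

```rust
use std::ops::Range;

// Shift absolute buffer offsets so they are relative to the start of
// the context excerpt sent to the model.
fn rebase_into_excerpt(
    editable: Range<usize>,
    cursor: usize,
    excerpt_start: usize,
) -> (Range<usize>, usize) {
    (
        editable.start - excerpt_start..editable.end - excerpt_start,
        cursor - excerpt_start,
    )
}
```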
@@ -93,6 +93,13 @@ pub async fn run_format_prompt(
excerpt_start_row: prompt_inputs.excerpt_start_row,
events: prompt_inputs.edit_history.clone(),
related_files: prompt_inputs.related_files.clone().unwrap_or_default(),
+ excerpt_ranges: None,
+ preferred_model: None,
+ in_open_source_repo: example
+ .spec
+ .captured_prompt_input
+ .as_ref()
+ .map_or(false, |input| input.in_open_source_repo),
};
let prompt = format_zeta_prompt(&input, version);
let prefill = zeta_prompt::get_prefill(&input, version);
@@ -1304,6 +1304,7 @@ fn build_example_from_snowflake(
excerpt_start_row: None,
events,
related_files,
+ in_open_source_repo: input.in_open_source_repo,
}),
telemetry: Some(TelemetrySource {
request_id,
@@ -136,11 +136,13 @@ impl RelatedExcerptStore {
.collect()
}
- pub fn related_files_with_buffers(&mut self, cx: &App) -> Vec<(RelatedFile, Entity<Buffer>)> {
+ pub fn related_files_with_buffers(
+ &mut self,
+ cx: &App,
+ ) -> impl Iterator<Item = (RelatedFile, Entity<Buffer>)> {
self.related_buffers
.iter_mut()
.map(|related| (related.related_file(cx), related.buffer.clone()))
- .collect::<Vec<_>>()
}
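Returning `impl Iterator` here drops an intermediate allocation and lets call sites chain adapters directly, which is why the test hunk below removes its `.into_iter()`. The idiom in isolation:

```rust
// A function returning `impl Iterator` instead of Vec: callers can
// `.find(...)`, `.map(...)`, etc. lazily, and collect only if needed.
fn evens(limit: u32) -> impl Iterator<Item = u32> {
    (0..limit).filter(|n| n % 2 == 0)
}
```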
pub fn set_related_files(&mut self, files: Vec<RelatedFile>, cx: &App) {
@@ -424,6 +426,7 @@ impl RelatedBuffer {
path,
excerpts: cached.excerpts.clone(),
max_row: buffer.max_point().row,
+ in_open_source_repo: false,
};
return related_file;
}
@@ -89,7 +89,6 @@ async fn test_edit_prediction_context(cx: &mut TestAppContext) {
let company_buffer = related_excerpt_store.update(cx, |store, cx| {
store
.related_files_with_buffers(cx)
- .into_iter()
.find(|(file, _)| file.path.to_str() == Some("root/src/company.rs"))
.map(|(_, buffer)| buffer)
.expect("company.rs buffer not found")
@@ -153,9 +153,7 @@ fn capture_example_as_markdown(
.read(cx)
.text_anchor_for_position(editor.selections.newest_anchor().head(), cx)?;
let ep_store = EditPredictionStore::try_global(cx)?;
- let events = ep_store.update(cx, |store, cx| {
- store.edit_history_for_project_with_pause_split_last_event(&project, cx)
- });
+ let events = ep_store.update(cx, |store, cx| store.edit_history_for_project(&project, cx));
let example = capture_example(
project.clone(),
buffer,
@@ -1,4 +1,4 @@
-use std::ops::Range;
+use std::{cmp, ops::Range};
use collections::HashMap;
use futures::FutureExt;
@@ -8,7 +8,7 @@ use itertools::Itertools as _;
use language::language_settings::language_settings;
use language::{Buffer, BufferSnapshot, OutlineItem};
use multi_buffer::{Anchor, MultiBufferSnapshot};
-use text::{BufferId, OffsetRangeExt as _, ToOffset as _};
+use text::{Bias, BufferId, OffsetRangeExt as _, ToOffset as _};
use theme::{ActiveTheme as _, SyntaxTheme};
use crate::display_map::DisplaySnapshot;
@@ -292,10 +292,16 @@ fn highlights_from_buffer(
let range_end_offset = symbol_range.end;
// Try to find the name verbatim in the buffer near the selection range.
- let search_start = selection_start_offset
- .saturating_sub(name.len())
- .max(range_start_offset);
- let search_end = (selection_start_offset + name.len() * 2).min(range_end_offset);
+ let search_start = buffer_snapshot.clip_offset(
+ selection_start_offset
+ .saturating_sub(name.len())
+ .max(range_start_offset),
+ Bias::Right,
+ );
+ let search_end = buffer_snapshot.clip_offset(
+ cmp::min(selection_start_offset + name.len() * 2, range_end_offset),
+ Bias::Left,
+ );
if search_start < search_end {
let buffer_text: String = buffer_snapshot
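The `clip_offset` calls with `Bias::Right`/`Bias::Left` snap a possibly mid-character offset to a valid boundary in a chosen direction. A standalone approximation of that behavior (hypothetical stand-in, not the real `BufferSnapshot` method):

```rust
enum Bias {
    Left,
    Right,
}

// Snap `offset` to the nearest UTF-8 char boundary, moving toward the
// given bias. This mirrors the clipping used above to keep
// search_start/search_end on valid boundaries.
fn clip_offset(s: &str, mut offset: usize, bias: Bias) -> usize {
    offset = offset.min(s.len());
    while !s.is_char_boundary(offset) {
        match bias {
            Bias::Left => offset -= 1,
            Bias::Right => offset += 1,
        }
    }
    offset
}
```

With `"/// αyzabc"`, byte 5 falls inside the two-byte `α` (bytes 4..6), so clipping left yields 4 and clipping right yields 6.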
@@ -319,6 +325,9 @@ fn highlights_from_buffer(
// Fallback: match word-by-word. Split the name on whitespace and find
// each word sequentially in the buffer's symbol range.
+ let range_start_offset = buffer_snapshot.clip_offset(range_start_offset, Bias::Right);
+ let range_end_offset = buffer_snapshot.clip_offset(range_end_offset, Bias::Left);
+
let mut highlights = Vec::new();
let mut got_any = false;
let buffer_text: String = buffer_snapshot
@@ -767,6 +776,86 @@ mod tests {
});
}
+ #[gpui::test]
+ async fn test_lsp_document_symbols_multibyte_highlights(cx: &mut TestAppContext) {
+ init_test(cx, |_| {});
+
+ update_test_language_settings(cx, |settings| {
+ settings.defaults.document_symbols = Some(DocumentSymbols::On);
+ });
+
+ let mut cx = EditorLspTestContext::new_rust(
+ lsp::ServerCapabilities {
+ document_symbol_provider: Some(lsp::OneOf::Left(true)),
+ ..lsp::ServerCapabilities::default()
+ },
+ cx,
+ )
+ .await;
+ let mut symbol_request = cx
+ .set_request_handler::<lsp::request::DocumentSymbolRequest, _, _>(
+ move |_, _, _| async move {
+ // Buffer: "/// αyzabc\nfn test() {}\n"
+ // Bytes 0-3: "/// ", bytes 4-5: α (2-byte UTF-8), bytes 6-11: "yzabc\n"
+ // Line 1 starts at byte 12: "fn test() {}"
+ //
+ // Symbol range includes doc comment (line 0-1).
+ // Selection points to "test" on line 1.
+ // enriched_symbol_text extracts "fn test" with source_range_for_text.start at byte 12.
+ // search_start = max(12 - 7, 0) = 5, which is INSIDE the 2-byte 'α' char.
+ Ok(Some(lsp::DocumentSymbolResponse::Nested(vec![
+ nested_symbol(
+ "test",
+ lsp::SymbolKind::FUNCTION,
+ lsp_range(0, 0, 1, 13), // includes doc comment
+ lsp_range(1, 3, 1, 7), // "test"
+ Vec::new(),
+ ),
+ ])))
+ },
+ );
+
+ // "/// αyzabc\n" = 12 bytes, then "fn test() {}\n"
+ // search_start = 12 - 7 = 5, which is byte 5 = second byte of 'α' (not a char boundary)
+ cx.set_state("/// αyzabc\nfn teˇst() {}\n");
+ assert!(symbol_request.next().await.is_some());
+ cx.run_until_parked();
+
+ cx.update_editor(|editor, _window, _cx| {
+ let (_, symbols) = editor
+ .outline_symbols_at_cursor
+ .as_ref()
+ .expect("Should have outline symbols");
+ assert_eq!(symbols.len(), 1);
+
+ let symbol = &symbols[0];
+ assert_eq!(symbol.text, "fn test");
+
+ // Verify all highlight ranges are valid byte boundaries in the text
+ for (range, _style) in &symbol.highlight_ranges {
+ assert!(
+ symbol.text.is_char_boundary(range.start),
+ "highlight range start {} is not a char boundary in {:?}",
+ range.start,
+ symbol.text
+ );
+ assert!(
+ symbol.text.is_char_boundary(range.end),
+ "highlight range end {} is not a char boundary in {:?}",
+ range.end,
+ symbol.text
+ );
+ assert!(
+ range.end <= symbol.text.len(),
+ "highlight range end {} exceeds text length {} for {:?}",
+ range.end,
+ symbol.text.len(),
+ symbol.text
+ );
+ }
+ });
+ }
+
#[gpui::test]
async fn test_lsp_document_symbols_empty_response(cx: &mut TestAppContext) {
init_test(cx, |_| {});
@@ -8023,10 +8023,6 @@ impl Editor {
match granularity {
EditPredictionGranularity::Full => {
- if let Some(provider) = self.edit_prediction_provider() {
- provider.accept(cx);
- }
-
let transaction_id_prev = self.buffer.read(cx).last_transaction_id(cx);
// Compute fallback cursor position BEFORE applying the edit,
@@ -8040,6 +8036,10 @@ impl Editor {
buffer.edit(edits.iter().cloned(), None, cx)
});
+ if let Some(provider) = self.edit_prediction_provider() {
+ provider.accept(cx);
+ }
+
// Resolve cursor position after the edit is applied
let cursor_target = if let Some((anchor, offset)) = cursor_position {
// The anchor tracks through the edit, then we add the offset
@@ -24254,10 +24254,10 @@ impl Editor {
.global_lsp_settings
.semantic_token_rules
.clone();
- if self
+ let semantic_token_rules_changed = self
.semantic_token_state
- .update_rules(new_semantic_token_rules)
- {
+ .update_rules(new_semantic_token_rules);
+ if language_settings_changed || semantic_token_rules_changed {
self.invalidate_semantic_tokens(None);
self.refresh_semantic_tokens(None, None, cx);
}
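The rewrite binds the `update_rules` result to a local before the `||` on purpose: written as `language_settings_changed || self.semantic_token_state.update_rules(...)`, short-circuit evaluation would skip the side-effecting rule update whenever the settings flag was already true. A reduced illustration (hypothetical types):

```rust
struct TokenState {
    rules_version: u32,
}

impl TokenState {
    // Side-effecting update that also reports whether anything changed.
    fn update_rules(&mut self, new_version: u32) -> bool {
        let changed = self.rules_version != new_version;
        self.rules_version = new_version;
        changed
    }
}

// Evaluate the side effect unconditionally, then combine the flags.
fn should_refresh(state: &mut TokenState, settings_changed: bool, new_version: u32) -> bool {
    let rules_changed = state.update_rules(new_version);
    settings_changed || rules_changed
}
```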
@@ -7521,7 +7521,7 @@ impl EditorElement {
let position_map: &PositionMap = &position_map;
let line_height = position_map.line_height;
- let max_glyph_advance = position_map.em_advance;
+ let glyph_width = position_map.em_layout_width;
let (delta, axis) = match delta {
gpui::ScrollDelta::Pixels(mut pixels) => {
//Trackpad
@@ -7531,17 +7531,15 @@ impl EditorElement {
gpui::ScrollDelta::Lines(lines) => {
//Not trackpad
- let pixels =
- point(lines.x * max_glyph_advance, lines.y * line_height);
+ let pixels = point(lines.x * glyph_width, lines.y * line_height);
(pixels, None)
}
};
let current_scroll_position = position_map.snapshot.scroll_position();
- let x = (current_scroll_position.x
- * ScrollPixelOffset::from(max_glyph_advance)
+ let x = (current_scroll_position.x * ScrollPixelOffset::from(glyph_width)
- ScrollPixelOffset::from(delta.x * scroll_sensitivity))
- / ScrollPixelOffset::from(max_glyph_advance);
+ / ScrollPixelOffset::from(glyph_width);
let y = (current_scroll_position.y * ScrollPixelOffset::from(line_height)
- ScrollPixelOffset::from(delta.y * scroll_sensitivity))
/ ScrollPixelOffset::from(line_height);
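The scroll state stores the horizontal position in column units and converts through a glyph width, so swapping `em_advance` for `em_layout_width` must touch every conversion consistently, as this hunk does. The conversion in isolation (plain floats, simplified from the `ScrollPixelOffset` arithmetic):

```rust
// Convert a column-based horizontal scroll position to pixels, apply a
// pixel delta, and convert back to columns.
fn scroll_x_after_delta(scroll_columns: f64, delta_px: f64, glyph_width_px: f64) -> f64 {
    (scroll_columns * glyph_width_px - delta_px) / glyph_width_px
}
```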
@@ -9539,6 +9537,7 @@ impl Element for EditorElement {
let line_height = style.text.line_height_in_pixels(rem_size);
let em_width = window.text_system().em_width(font_id, font_size).unwrap();
let em_advance = window.text_system().em_advance(font_id, font_size).unwrap();
+ let em_layout_width = window.text_system().em_layout_width(font_id, font_size);
let glyph_grid_cell = size(em_advance, line_height);
let gutter_dimensions =
@@ -9592,7 +9591,7 @@ impl Element for EditorElement {
let wrap_width = calculate_wrap_width(
editor.soft_wrap_mode(cx),
editor_width,
- em_advance,
+ em_layout_width,
);
if editor.set_wrap_width(wrap_width, cx) {
@@ -10248,7 +10247,7 @@ impl Element for EditorElement {
let scroll_max: gpui::Point<ScrollPixelOffset> = point(
ScrollPixelOffset::from(
- ((scroll_width - editor_width) / em_advance).max(0.0),
+ ((scroll_width - editor_width) / em_layout_width).max(0.0),
),
max_scroll_top,
);
@@ -10275,7 +10274,7 @@ impl Element for EditorElement {
});
let scroll_pixel_position = point(
- scroll_position.x * f64::from(em_advance),
+ scroll_position.x * f64::from(em_layout_width),
scroll_position.y * f64::from(line_height),
);
let sticky_headers = if !is_minimap
@@ -10779,6 +10778,7 @@ impl Element for EditorElement {
line_height,
em_width,
em_advance,
+ em_layout_width,
snapshot,
text_align: self.style.text.text_align,
content_width: text_hitbox.size.width,
@@ -11626,6 +11626,7 @@ pub(crate) struct PositionMap {
pub scroll_max: gpui::Point<ScrollOffset>,
pub em_width: Pixels,
pub em_advance: Pixels,
+ pub em_layout_width: Pixels,
pub visible_row_range: Range<DisplayRow>,
pub line_layouts: Vec<LineWithInvisibles>,
pub snapshot: EditorSnapshot,
@@ -11689,7 +11690,7 @@ impl PositionMap {
let scroll_position = self.snapshot.scroll_position();
let position = position - text_bounds.origin;
let y = position.y.max(px(0.)).min(self.size.height);
- let x = position.x + (scroll_position.x as f32 * self.em_advance);
+ let x = position.x + (scroll_position.x as f32 * self.em_layout_width);
let row = ((y / self.line_height) as f64 + scroll_position.y) as u32;
let (column, x_overshoot_after_line_end) = if let Some(line) = self
@@ -11711,7 +11712,8 @@ impl PositionMap {
let previous_valid = self.snapshot.clip_point(exact_unclipped, Bias::Left);
let next_valid = self.snapshot.clip_point(exact_unclipped, Bias::Right);
- let column_overshoot_after_line_end = (x_overshoot_after_line_end / self.em_advance) as u32;
+ let column_overshoot_after_line_end =
+ (x_overshoot_after_line_end / self.em_layout_width) as u32;
*exact_unclipped.column_mut() += column_overshoot_after_line_end;
PointForPosition {
previous_valid,
@@ -5,7 +5,6 @@ use futures::future::join_all;
use gpui::{
App, Context, FontStyle, FontWeight, HighlightStyle, StrikethroughStyle, Task, UnderlineStyle,
};
-use itertools::Itertools as _;
use language::language_settings::language_settings;
use project::{
lsp_store::{
@@ -165,8 +164,35 @@ impl Editor {
None
}
})
- .unique_by(|(buffer_id, _)| *buffer_id)
- .collect::<Vec<_>>();
+ .collect::<HashMap<_, _>>();
+
+ for buffer_with_disabled_tokens in self
+ .display_map
+ .read(cx)
+ .semantic_token_highlights
+ .iter()
+ .map(|(buffer_id, _)| *buffer_id)
+ .filter(|buffer_id| !buffers_to_query.contains_key(buffer_id))
+ .filter(|buffer_id| {
+ !self
+ .buffer
+ .read(cx)
+ .buffer(*buffer_id)
+ .is_some_and(|buffer| {
+ let buffer = buffer.read(cx);
+ language_settings(buffer.language().map(|l| l.name()), buffer.file(), cx)
+ .semantic_tokens
+ .enabled()
+ })
+ })
+ .collect::<Vec<_>>()
+ {
+ self.semantic_token_state
+ .invalidate_buffer(&buffer_with_disabled_tokens);
+ self.display_map.update(cx, |display_map, _| {
+ display_map.invalidate_semantic_highlights(buffer_with_disabled_tokens);
+ });
+ }
self.semantic_token_state.update_task = cx.spawn(async move |editor, cx| {
cx.background_executor()
@@ -393,7 +419,7 @@ fn convert_token(
SemanticTokenColorOverride::InheritForeground(false) => None,
SemanticTokenColorOverride::Replace(c) => Some(c.into()),
},
- ..Default::default()
+ ..UnderlineStyle::default()
}
});
@@ -451,7 +477,7 @@ mod tests {
"Rust".into(),
LanguageSettingsContent {
semantic_tokens: Some(SemanticTokens::Full),
- ..Default::default()
+ ..LanguageSettingsContent::default()
},
);
});
@@ -529,7 +555,7 @@ mod tests {
"Rust".into(),
LanguageSettingsContent {
semantic_tokens: Some(SemanticTokens::Full),
- ..Default::default()
+ ..LanguageSettingsContent::default()
},
);
});
@@ -605,7 +631,7 @@ mod tests {
"Rust".into(),
LanguageSettingsContent {
semantic_tokens: Some(SemanticTokens::Full),
- ..Default::default()
+ ..LanguageSettingsContent::default()
},
);
});
@@ -701,7 +727,7 @@ mod tests {
"TOML".into(),
LanguageSettingsContent {
semantic_tokens: Some(SemanticTokens::Full),
- ..Default::default()
+ ..LanguageSettingsContent::default()
},
);
});
@@ -711,9 +737,9 @@ mod tests {
name: "TOML".into(),
matcher: LanguageMatcher {
path_suffixes: vec!["toml".into()],
- ..Default::default()
+ ..LanguageMatcher::default()
},
- ..Default::default()
+ ..LanguageConfig::default()
},
None,
));
@@ -921,14 +947,14 @@ mod tests {
"TOML".into(),
LanguageSettingsContent {
semantic_tokens: Some(SemanticTokens::Full),
- ..Default::default()
+ ..LanguageSettingsContent::default()
},
);
language_settings.languages.0.insert(
"Rust".into(),
LanguageSettingsContent {
semantic_tokens: Some(SemanticTokens::Full),
- ..Default::default()
+ ..LanguageSettingsContent::default()
},
);
});
@@ -938,9 +964,9 @@ mod tests {
name: "TOML".into(),
matcher: LanguageMatcher {
path_suffixes: vec!["toml".into()],
- ..Default::default()
+ ..LanguageMatcher::default()
},
- ..Default::default()
+ ..LanguageConfig::default()
},
None,
));
@@ -949,9 +975,9 @@ mod tests {
name: "Rust".into(),
matcher: LanguageMatcher {
path_suffixes: vec!["rs".into()],
- ..Default::default()
+ ..LanguageMatcher::default()
},
- ..Default::default()
+ ..LanguageConfig::default()
},
None,
));
@@ -1205,7 +1231,7 @@ mod tests {
"TOML".into(),
LanguageSettingsContent {
semantic_tokens: Some(SemanticTokens::Full),
- ..Default::default()
+ ..LanguageSettingsContent::default()
},
);
});
@@ -1215,9 +1241,9 @@ mod tests {
name: "TOML".into(),
matcher: LanguageMatcher {
path_suffixes: vec!["toml".into()],
- ..Default::default()
+ ..LanguageMatcher::default()
},
- ..Default::default()
+ ..LanguageConfig::default()
},
None,
));
@@ -1886,6 +1912,87 @@ mod tests {
);
}
+ #[gpui::test]
+ async fn test_disabling_semantic_tokens_setting_clears_highlights(cx: &mut TestAppContext) {
+ init_test(cx, |_| {});
+
+ update_test_language_settings(cx, |language_settings| {
+ language_settings.languages.0.insert(
+ "Rust".into(),
+ LanguageSettingsContent {
+ semantic_tokens: Some(SemanticTokens::Full),
+ ..LanguageSettingsContent::default()
+ },
+ );
+ });
+
+ let mut cx = EditorLspTestContext::new_rust(
+ lsp::ServerCapabilities {
+ semantic_tokens_provider: Some(
+ lsp::SemanticTokensServerCapabilities::SemanticTokensOptions(
+ lsp::SemanticTokensOptions {
+ legend: lsp::SemanticTokensLegend {
+ token_types: vec!["function".into()],
+ token_modifiers: Vec::new(),
+ },
+ full: Some(lsp::SemanticTokensFullOptions::Delta { delta: None }),
+ ..lsp::SemanticTokensOptions::default()
+ },
+ ),
+ ),
+ ..lsp::ServerCapabilities::default()
+ },
+ cx,
+ )
+ .await;
+
+ let mut full_request = cx
+ .set_request_handler::<lsp::request::SemanticTokensFullRequest, _, _>(
+ move |_, _, _| async move {
+ Ok(Some(lsp::SemanticTokensResult::Tokens(
+ lsp::SemanticTokens {
+ data: vec![
+ 0, // delta_line
+ 3, // delta_start
+ 4, // length
+ 0, // token_type
+ 0, // token_modifiers_bitset
+ ],
+ result_id: None,
+ },
+ )))
+ },
+ );
+
+ cx.set_state("ˇfn main() {}");
+ assert!(full_request.next().await.is_some());
+ cx.run_until_parked();
+
+ assert_eq!(
+ extract_semantic_highlights(&cx.editor, &cx),
+ vec![MultiBufferOffset(3)..MultiBufferOffset(7)],
+ "Semantic tokens should be present before disabling the setting"
+ );
+
+ update_test_language_settings(&mut cx, |language_settings| {
+ language_settings.languages.0.insert(
+ "Rust".into(),
+ LanguageSettingsContent {
+ semantic_tokens: Some(SemanticTokens::Off),
+ ..LanguageSettingsContent::default()
+ },
+ );
+ });
+ cx.executor().advance_clock(Duration::from_millis(200));
+ cx.run_until_parked();
+
+ assert_eq!(
+ extract_semantic_highlights(&cx.editor, &cx),
+ Vec::new(),
+ "Semantic tokens should be cleared after disabling the setting"
+ );
+ }
+
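The raw `data` vec in the handler above is the LSP wire format: tokens are flat groups of five `u32`s with line/column deltas relative to the previous token. A standalone decoder sketch showing why `[0, 3, 4, 0, 0]` means "line 0, columns 3..7":

```rust
// Decode LSP semantic tokens: each group of five u32s is
// (delta_line, delta_start, length, token_type, modifiers).
// delta_start is relative to the previous token on the same line, and
// absolute when delta_line moves to a new line.
fn decode_tokens(data: &[u32]) -> Vec<(u32, u32, u32, u32)> {
    let mut tokens = Vec::new();
    let (mut line, mut start) = (0u32, 0u32);
    for chunk in data.chunks_exact(5) {
        line += chunk[0];
        start = if chunk[0] == 0 { start + chunk[1] } else { chunk[1] };
        tokens.push((line, start, chunk[2], chunk[3]));
    }
    tokens
}
```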
fn extract_semantic_highlight_styles(
editor: &Entity<Editor>,
cx: &TestAppContext,
@@ -15,7 +15,7 @@ This eval checks that the model doesn't use HTML escapement for characters like
original +system_prompt change +tool description
claude-opus-4 89% 92% 97%+
claude-sonnet-4 100%
- gpt-4.1-mini 100%
+ gpt-5-mini 100%
gemini-2.5-pro 98%
*/
@@ -26,6 +26,7 @@ feature_flags.workspace = true
git.workspace = true
git_ui.workspace = true
gpui.workspace = true
+menu.workspace = true
project.workspace = true
settings.workspace = true
smallvec.workspace = true
@@ -5,13 +5,14 @@ use git::{
parse_git_remote_url,
repository::{CommitDiff, InitialGraphCommitData, LogOrder, LogSource},
};
-use git_ui::commit_tooltip::CommitAvatar;
+use git_ui::{commit_tooltip::CommitAvatar, commit_view::CommitView};
use gpui::{
AnyElement, App, Bounds, ClipboardItem, Context, Corner, DefiniteLength, ElementId, Entity,
EventEmitter, FocusHandle, Focusable, FontWeight, Hsla, InteractiveElement, ParentElement,
- PathBuilder, Pixels, Point, Render, ScrollWheelEvent, SharedString, Styled, Subscription, Task,
- WeakEntity, Window, actions, anchored, deferred, point, px,
+ PathBuilder, Pixels, Point, Render, ScrollStrategy, ScrollWheelEvent, SharedString, Styled,
+ Subscription, Task, WeakEntity, Window, actions, anchored, deferred, point, px,
};
+use menu::{SelectNext, SelectPrevious};
use project::{
Project,
git_store::{CommitDataState, GitStoreEvent, Repository, RepositoryEvent},
@@ -30,12 +31,6 @@ use workspace::{
item::{Item, ItemEvent, SerializableItem},
};
-pub struct GitGraphFeatureFlag;
-
-impl FeatureFlag for GitGraphFeatureFlag {
- const NAME: &'static str = "git-graph";
-}
-
const COMMIT_CIRCLE_RADIUS: Pixels = px(4.5);
const COMMIT_CIRCLE_STROKE_WIDTH: Pixels = px(1.5);
const LANE_WIDTH: Pixels = px(16.0);
@@ -45,13 +40,17 @@ const LINE_WIDTH: Pixels = px(1.5);
actions!(
git_graph,
[
- /// Opens the Git Graph panel.
- Open,
/// Opens the commit view for the selected commit.
OpenCommitView,
]
);
+pub struct GitGraphFeatureFlag;
+
+impl FeatureFlag for GitGraphFeatureFlag {
+ const NAME: &'static str = "git-graph";
+}
+
fn timestamp_format() -> &'static [BorrowedFormatItem<'static>] {
static FORMAT: OnceLock<Vec<BorrowedFormatItem<'static>>> = OnceLock::new();
FORMAT.get_or_init(|| {
@@ -508,11 +507,19 @@ pub fn init(cx: &mut App) {
|div| {
let workspace = workspace.weak_handle();
- div.on_action(move |_: &Open, window, cx| {
+ div.on_action(move |_: &git_ui::git_panel::Open, window, cx| {
workspace
.update(cx, |workspace, cx| {
+ let existing = workspace.items_of_type::<GitGraph>(cx).next();
+ if let Some(existing) = existing {
+ workspace.activate_item(&existing, true, true, window, cx);
+ return;
+ }
+
let project = workspace.project().clone();
- let git_graph = cx.new(|cx| GitGraph::new(project, window, cx));
+ let workspace_handle = workspace.weak_handle();
+ let git_graph = cx
+ .new(|cx| GitGraph::new(project, workspace_handle, window, cx));
workspace.add_item_to_active_pane(
Box::new(git_graph),
None,
@@ -578,6 +585,7 @@ pub struct GitGraph {
focus_handle: FocusHandle,
graph_data: GraphData,
project: Entity<Project>,
+ workspace: WeakEntity<Workspace>,
context_menu: Option<(Entity<ContextMenu>, Point<Pixels>, Subscription)>,
row_height: Pixels,
table_interaction_state: Entity<TableInteractionState>,
@@ -603,7 +611,12 @@ impl GitGraph {
(LANE_WIDTH * self.graph_data.max_lanes.min(8) as f32) + LEFT_PADDING * 2.0
}
- pub fn new(project: Entity<Project>, window: &mut Window, cx: &mut Context<Self>) -> Self {
+ pub fn new(
+ project: Entity<Project>,
+ workspace: WeakEntity<Workspace>,
+ window: &mut Window,
+ cx: &mut Context<Self>,
+ ) -> Self {
let focus_handle = cx.focus_handle();
cx.on_focus(&focus_handle, window, |_, _, cx| cx.notify())
.detach();
@@ -661,6 +674,7 @@ impl GitGraph {
GitGraph {
focus_handle,
project,
+ workspace,
graph_data: graph,
_load_task: None,
_commit_diff_task: None,
@@ -844,6 +858,22 @@ impl GitGraph {
.collect()
}
+ fn select_prev(&mut self, _: &SelectPrevious, _window: &mut Window, cx: &mut Context<Self>) {
+ if let Some(selected_entry_idx) = &self.selected_entry_idx {
+ self.select_entry(selected_entry_idx.saturating_sub(1), cx);
+ } else {
+ self.select_entry(0, cx);
+ }
+ }
+
+ fn select_next(&mut self, _: &SelectNext, window: &mut Window, cx: &mut Context<Self>) {
+ if let Some(selected_entry_idx) = &self.selected_entry_idx {
+ self.select_entry(selected_entry_idx.saturating_add(1), cx);
+ } else {
+ self.select_prev(&SelectPrevious, window, cx);
+ }
+ }
+
fn select_entry(&mut self, idx: usize, cx: &mut Context<Self>) {
if self.selected_entry_idx == Some(idx) {
return;
@@ -851,6 +881,12 @@ impl GitGraph {
self.selected_entry_idx = Some(idx);
self.selected_commit_diff = None;
+ self.table_interaction_state.update(cx, |state, cx| {
+ state
+ .scroll_handle
+ .scroll_to_item(idx, ScrollStrategy::Nearest);
+ cx.notify();
+ });
let Some(commit) = self.graph_data.commits.get(idx) else {
return;
@@ -880,6 +916,43 @@ impl GitGraph {
cx.notify();
}
+ fn open_selected_commit_view(&mut self, window: &mut Window, cx: &mut Context<Self>) {
+ let Some(selected_entry_index) = self.selected_entry_idx else {
+ return;
+ };
+
+ self.open_commit_view(selected_entry_index, window, cx);
+ }
+
+ fn open_commit_view(
+ &mut self,
+ entry_index: usize,
+ window: &mut Window,
+ cx: &mut Context<Self>,
+ ) {
+ let Some(commit_entry) = self.graph_data.commits.get(entry_index) else {
+ return;
+ };
+
+ let repository = self
+ .project
+ .read_with(cx, |project, cx| project.active_repository(cx));
+
+ let Some(repository) = repository else {
+ return;
+ };
+
+ CommitView::open(
+ commit_entry.data.sha.to_string(),
+ repository.downgrade(),
+ self.workspace.clone(),
+ None,
+ None,
+ window,
+ cx,
+ );
+ }
+
fn get_remote(
&self,
repository: &Repository,
@@ -1579,9 +1652,13 @@ impl Render for GitGraph {
.when(is_selected, |row| {
row.bg(cx.theme().colors().element_selected)
})
- .on_click(move |_, _, cx| {
+ .on_click(move |event, window, cx| {
+ let click_count = event.click_count();
weak.update(cx, |this, cx| {
this.select_entry(index, cx);
+ if click_count >= 2 {
+ this.open_commit_view(index, window, cx);
+ }
})
.ok();
})
@@ -1604,6 +1681,11 @@ impl Render for GitGraph {
.bg(cx.theme().colors().editor_background)
.key_context("GitGraph")
.track_focus(&self.focus_handle)
+ .on_action(cx.listener(|this, _: &OpenCommitView, window, cx| {
+ this.open_selected_commit_view(window, cx);
+ }))
+ .on_action(cx.listener(Self::select_prev))
+ .on_action(cx.listener(Self::select_next))
.child(content)
.children(self.context_menu.as_ref().map(|(menu, position, _)| {
deferred(
@@ -1663,7 +1745,7 @@ impl SerializableItem for GitGraph {
fn deserialize(
project: Entity<Project>,
- _: WeakEntity<Workspace>,
+ workspace: WeakEntity<Workspace>,
workspace_id: workspace::WorkspaceId,
item_id: workspace::ItemId,
window: &mut Window,
@@ -1674,7 +1756,7 @@ impl SerializableItem for GitGraph {
.ok()
.is_some_and(|is_open| is_open)
{
- let git_graph = cx.new(|cx| GitGraph::new(project, window, cx));
+ let git_graph = cx.new(|cx| GitGraph::new(project, workspace, window, cx));
Task::ready(Ok(git_graph))
} else {
Task::ready(Err(anyhow::anyhow!("No git graph to deserialize")))
@@ -78,6 +78,7 @@ use workspace::{
dock::{DockPosition, Panel, PanelEvent},
notifications::{DetachAndPromptErr, ErrorMessagePrompt, NotificationId, NotifyResultExt},
};
+
actions!(
git_panel,
[
@@ -112,6 +113,14 @@ actions!(
]
);
+actions!(
+ git_graph,
+ [
+ /// Opens the Git Graph Tab.
+ Open,
+ ]
+);
+
fn prompt<T>(
msg: &str,
detail: Option<&str>,
@@ -4448,7 +4457,11 @@ impl GitPanel {
)
}
- fn render_previous_commit(&self, cx: &mut Context<Self>) -> Option<impl IntoElement> {
+ fn render_previous_commit(
+ &self,
+ window: &mut Window,
+ cx: &mut Context<Self>,
+ ) -> Option<impl IntoElement> {
let active_repository = self.active_repository.as_ref()?;
let branch = active_repository.read(cx).branch.as_ref()?;
let commit = branch.most_recent_commit.as_ref()?.clone();
@@ -4507,22 +4520,37 @@ impl GitPanel {
.when(commit.has_parent, |this| {
let has_unstaged = self.has_unstaged_changes();
this.pr_2().child(
- panel_icon_button("undo", IconName::Undo)
+ h_flex().gap_1().child(
+ panel_icon_button("undo", IconName::Undo)
+ .icon_size(IconSize::XSmall)
+ .icon_color(Color::Muted)
+ .tooltip(move |_window, cx| {
+ Tooltip::with_meta(
+ "Uncommit",
+ Some(&git::Uncommit),
+ if has_unstaged {
+ "git reset HEAD^ --soft"
+ } else {
+ "git reset HEAD^"
+ },
+ cx,
+ )
+ })
+ .on_click(
+ cx.listener(|this, _, window, cx| this.uncommit(window, cx)),
+ ),
+ ),
+ )
+ })
+ .when(window.is_action_available(&Open, cx), |this| {
+ this.child(
+ panel_icon_button("git-graph-button", IconName::ListTree)
.icon_size(IconSize::XSmall)
.icon_color(Color::Muted)
- .tooltip(move |_window, cx| {
- Tooltip::with_meta(
- "Uncommit",
- Some(&git::Uncommit),
- if has_unstaged {
- "git reset HEAD^ --soft"
- } else {
- "git reset HEAD^"
- },
- cx,
- )
- })
- .on_click(cx.listener(|this, _, window, cx| this.uncommit(window, cx))),
+ .tooltip(|_window, cx| Tooltip::for_action("Open Git Graph", &Open, cx))
+ .on_click(|_, window, cx| {
+ window.dispatch_action(Open.boxed_clone(), cx)
+ }),
)
}),
)
@@ -5513,7 +5541,7 @@ impl Render for GitPanel {
this.child(self.render_pending_amend(cx))
})
.when(!self.amend_pending, |this| {
- this.children(self.render_previous_commit(cx))
+ this.children(self.render_previous_commit(window, cx))
})
.into_any_element(),
)
@@ -29,19 +29,9 @@ test-support = [
inspector = ["gpui_macros/inspector"]
leak-detection = ["backtrace"]
runtime_shaders = []
-macos-blade = [
- "blade-graphics",
- "blade-macros",
- "blade-util",
- "bytemuck",
- "objc2",
- "objc2-metal",
-]
wayland = [
"bitflags",
- "blade-graphics",
- "blade-macros",
- "blade-util",
+ "wgpu",
"bytemuck",
"ashpd/wayland",
"cosmic-text",
@@ -58,9 +48,7 @@ wayland = [
"open",
]
x11 = [
- "blade-graphics",
- "blade-macros",
- "blade-util",
+ "wgpu",
"bytemuck",
"ashpd",
"cosmic-text",
@@ -88,9 +76,6 @@ anyhow.workspace = true
async-task = "4.7"
backtrace = { workspace = true, optional = true }
bitflags = { workspace = true, optional = true }
-blade-graphics = { workspace = true, optional = true }
-blade-macros = { workspace = true, optional = true }
-blade-util = { workspace = true, optional = true }
bytemuck = { version = "1", optional = true }
collections.workspace = true
ctor.workspace = true
@@ -178,20 +163,17 @@ oo7 = { version = "0.5.0", default-features = false, features = [
# Used in both windowing options
ashpd = { workspace = true, optional = true }
-blade-graphics = { workspace = true, optional = true }
-blade-macros = { workspace = true, optional = true }
-blade-util = { workspace = true, optional = true }
-bytemuck = { version = "1", optional = true }
+wgpu = { workspace = true, optional = true }
cosmic-text = { version = "0.17.0", optional = true }
swash = { version = "0.2.6" }
# WARNING: If you change this, you must also publish a new version of zed-font-kit to crates.io
font-kit = { git = "https://github.com/zed-industries/font-kit", rev = "110523127440aefb11ce0cf280ae7c5071337ec5", package = "zed-font-kit", version = "0.14.1-zed", features = [
"source-fontconfig-dlopen",
], optional = true }
-
-calloop = { version = "0.14.3" }
+calloop = "0.14.3"
filedescriptor = { version = "0.8.2", optional = true }
open = { version = "5.2.0", optional = true }
+xkbcommon = { version = "0.8.0", features = ["wayland", "x11"], optional = true }
# Wayland
calloop-wayland-source = { version = "0.4.1", optional = true }
@@ -224,10 +206,6 @@ x11rb = { version = "0.13.1", features = [
"resource_manager",
"sync",
], optional = true }
-xkbcommon = { version = "0.8.0", features = [
- "wayland",
- "x11",
-], optional = true }
# WARNING: If you change this, you must also publish a new version of zed-xim to crates.io
xim = { git = "https://github.com/zed-industries/xim-rs.git", rev = "16f35a2c881b815a2b6cdfd6687988e84f8447d8" , features = [
"x11rb-xcb",
@@ -1,8 +1,5 @@
#![allow(clippy::disallowed_methods, reason = "build scripts are exempt")]
-#![cfg_attr(any(not(target_os = "macos"), feature = "macos-blade"), allow(unused))]
-
-//TODO: consider generating shader code for WGSL
-//TODO: deprecate "runtime-shaders" and "macos-blade"
+#![cfg_attr(not(target_os = "macos"), allow(unused))]
use std::env;
@@ -10,12 +7,6 @@ fn main() {
let target = env::var("CARGO_CFG_TARGET_OS");
println!("cargo::rustc-check-cfg=cfg(gles)");
- #[cfg(any(
- not(any(target_os = "macos", target_os = "windows")),
- all(target_os = "macos", feature = "macos-blade")
- ))]
- check_wgsl_shaders();
-
match target.as_deref() {
Ok("macos") => {
#[cfg(target_os = "macos")]
@@ -28,32 +19,6 @@ fn main() {
_ => (),
};
}
-
-#[cfg(any(
- not(any(target_os = "macos", target_os = "windows")),
- all(target_os = "macos", feature = "macos-blade")
-))]
-fn check_wgsl_shaders() {
- use std::path::PathBuf;
- use std::process;
- use std::str::FromStr;
-
- let shader_source_path = "./src/platform/blade/shaders.wgsl";
- let shader_path = PathBuf::from_str(shader_source_path).unwrap();
- println!("cargo:rerun-if-changed={}", &shader_path.display());
-
- let shader_source = std::fs::read_to_string(&shader_path).unwrap();
-
- match naga::front::wgsl::parse_str(&shader_source) {
- Ok(_) => {
- // All clear
- }
- Err(e) => {
- println!("cargo::error=WGSL shader compilation failed:\n{}", e);
- process::exit(1);
- }
- }
-}
#[cfg(target_os = "macos")]
mod macos {
use std::{
@@ -65,15 +30,13 @@ mod macos {
pub(super) fn build() {
generate_dispatch_bindings();
- #[cfg(not(feature = "macos-blade"))]
- {
- let header_path = generate_shader_bindings();
- #[cfg(feature = "runtime_shaders")]
- emit_stitched_shaders(&header_path);
- #[cfg(not(feature = "runtime_shaders"))]
- compile_metal_shaders(&header_path);
- }
+ let header_path = generate_shader_bindings();
+
+ #[cfg(feature = "runtime_shaders")]
+ emit_stitched_shaders(&header_path);
+ #[cfg(not(feature = "runtime_shaders"))]
+ compile_metal_shaders(&header_path);
}
fn generate_dispatch_bindings() {
@@ -697,11 +697,19 @@ impl<'a, T: 'static> Context<'a, T> {
let (subscription, activate) = self.global_observers.insert(
TypeId::of::<G>(),
Box::new(move |cx| {
- window_handle
- .update(cx, |_, window, cx| {
- view.update(cx, |view, cx| f(view, window, cx)).is_ok()
- })
- .unwrap_or(false)
+ // If the entity has been dropped, remove this observer.
+ if view.upgrade().is_none() {
+ return false;
+ }
+ // If the window is unavailable (e.g. temporarily taken during a
+ // nested update, or already closed), skip this notification but
+ // keep the observer alive so it can fire on future changes.
+ let Ok(entity_alive) = window_handle.update(cx, |_, window, cx| {
+ view.update(cx, |view, cx| f(view, window, cx)).is_ok()
+ }) else {
+ return true;
+ };
+ entity_alive
}),
);
self.defer(move |_| activate());
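The hunk above changes the global-observer callback's retention contract: returning `false` unsubscribes the observer (entity dropped), while returning `true` keeps it alive even when the window is temporarily unavailable. A minimal sketch of that retain-by-return-value pattern, using a simplified hypothetical registry (the `Registry` type and `notify` method are illustrative, not GPUI's actual API):

```rust
// Sketch of the retention contract used by global observers: each
// callback returns `true` to stay subscribed and `false` to be removed.
struct Registry {
    observers: Vec<Box<dyn FnMut() -> bool>>,
}

impl Registry {
    fn notify(&mut self) {
        // Keep only the observers whose callback reports it is still alive.
        self.observers.retain_mut(|cb| cb());
    }
}

fn main() {
    let mut reg = Registry { observers: Vec::new() };
    let mut fires_left = 2; // stand-in for a weak entity's remaining lifetime
    reg.observers.push(Box::new(move || {
        fires_left -= 1;
        fires_left > 0 // keep the observer until the "entity" is dropped
    }));

    reg.notify();
    assert_eq!(reg.observers.len(), 1); // first notification: still subscribed
    reg.notify();
    assert_eq!(reg.observers.len(), 0); // entity gone: observer removed
    println!("ok");
}
```

The key point the patch encodes with this contract: a failed `window_handle.update` no longer maps to `false` (permanent removal) but to `true`, so a transiently borrowed window doesn't silently kill the subscription.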
@@ -8,14 +8,11 @@ mod linux;
#[cfg(target_os = "macos")]
mod mac;
-#[cfg(any(
- all(
- any(target_os = "linux", target_os = "freebsd"),
- any(feature = "x11", feature = "wayland")
- ),
- all(target_os = "macos", feature = "macos-blade")
+#[cfg(all(
+ any(target_os = "linux", target_os = "freebsd"),
+ any(feature = "wayland", feature = "x11")
))]
-mod blade;
+mod wgpu;
#[cfg(any(test, feature = "test-support"))]
mod test;
@@ -28,13 +25,7 @@ mod windows;
#[cfg(all(
feature = "screen-capture",
- any(
- target_os = "windows",
- all(
- any(target_os = "linux", target_os = "freebsd"),
- any(feature = "wayland", feature = "x11"),
- )
- )
+ any(target_os = "windows", target_os = "linux", target_os = "freebsd")
))]
pub(crate) mod scap_screen_capture;
@@ -1,11 +0,0 @@
-#[cfg(target_os = "macos")]
-mod apple_compat;
-mod blade_atlas;
-mod blade_context;
-mod blade_renderer;
-
-#[cfg(target_os = "macos")]
-pub(crate) use apple_compat::*;
-pub(crate) use blade_atlas::*;
-pub(crate) use blade_context::*;
-pub(crate) use blade_renderer::*;
@@ -1,60 +0,0 @@
-use super::{BladeContext, BladeRenderer, BladeSurfaceConfig};
-use blade_graphics as gpu;
-use std::{ffi::c_void, ptr::NonNull};
-
-#[derive(Clone)]
-pub struct Context {
- inner: BladeContext,
-}
-impl Default for Context {
- fn default() -> Self {
- Self {
- inner: BladeContext::new().unwrap(),
- }
- }
-}
-
-pub type Renderer = BladeRenderer;
-
-pub unsafe fn new_renderer(
- context: Context,
- _native_window: *mut c_void,
- native_view: *mut c_void,
- bounds: crate::Size<f32>,
- transparent: bool,
-) -> Renderer {
- use raw_window_handle as rwh;
- struct RawWindow {
- view: *mut c_void,
- }
-
- impl rwh::HasWindowHandle for RawWindow {
- fn window_handle(&self) -> Result<rwh::WindowHandle<'_>, rwh::HandleError> {
- let view = NonNull::new(self.view).unwrap();
- let handle = rwh::AppKitWindowHandle::new(view);
- Ok(unsafe { rwh::WindowHandle::borrow_raw(handle.into()) })
- }
- }
- impl rwh::HasDisplayHandle for RawWindow {
- fn display_handle(&self) -> Result<rwh::DisplayHandle<'_>, rwh::HandleError> {
- let handle = rwh::AppKitDisplayHandle::new();
- Ok(unsafe { rwh::DisplayHandle::borrow_raw(handle.into()) })
- }
- }
-
- BladeRenderer::new(
- &context.inner,
- &RawWindow {
- view: native_view as *mut _,
- },
- BladeSurfaceConfig {
- size: gpu::Extent {
- width: bounds.width as u32,
- height: bounds.height as u32,
- depth: 1,
- },
- transparent,
- },
- )
- .unwrap()
-}
@@ -1,395 +0,0 @@
-use crate::{
- AtlasKey, AtlasTextureId, AtlasTextureKind, AtlasTile, Bounds, DevicePixels, PlatformAtlas,
- Point, Size, platform::AtlasTextureList,
-};
-use anyhow::Result;
-use blade_graphics as gpu;
-use blade_util::{BufferBelt, BufferBeltDescriptor};
-use collections::FxHashMap;
-use etagere::BucketedAtlasAllocator;
-use parking_lot::Mutex;
-use std::{borrow::Cow, ops, sync::Arc};
-
-pub(crate) struct BladeAtlas(Mutex<BladeAtlasState>);
-
-struct PendingUpload {
- id: AtlasTextureId,
- bounds: Bounds<DevicePixels>,
- data: gpu::BufferPiece,
-}
-
-struct BladeAtlasState {
- gpu: Arc<gpu::Context>,
- upload_belt: BufferBelt,
- storage: BladeAtlasStorage,
- tiles_by_key: FxHashMap<AtlasKey, AtlasTile>,
- initializations: Vec<AtlasTextureId>,
- uploads: Vec<PendingUpload>,
-}
-
-#[cfg(gles)]
-unsafe impl Send for BladeAtlasState {}
-
-impl BladeAtlasState {
- fn destroy(&mut self) {
- self.storage.destroy(&self.gpu);
- self.upload_belt.destroy(&self.gpu);
- }
-}
-
-pub struct BladeTextureInfo {
- pub raw_view: gpu::TextureView,
-}
-
-impl BladeAtlas {
- pub(crate) fn new(gpu: &Arc<gpu::Context>) -> Self {
- BladeAtlas(Mutex::new(BladeAtlasState {
- gpu: Arc::clone(gpu),
- upload_belt: BufferBelt::new(BufferBeltDescriptor {
- memory: gpu::Memory::Upload,
- min_chunk_size: 0x10000,
- alignment: 64, // Vulkan `optimalBufferCopyOffsetAlignment` on Intel XE
- }),
- storage: BladeAtlasStorage::default(),
- tiles_by_key: Default::default(),
- initializations: Vec::new(),
- uploads: Vec::new(),
- }))
- }
-
- pub(crate) fn destroy(&self) {
- self.0.lock().destroy();
- }
-
- pub fn before_frame(&self, gpu_encoder: &mut gpu::CommandEncoder) {
- let mut lock = self.0.lock();
- lock.flush(gpu_encoder);
- }
-
- pub fn after_frame(&self, sync_point: &gpu::SyncPoint) {
- let mut lock = self.0.lock();
- lock.upload_belt.flush(sync_point);
- }
-
- pub fn get_texture_info(&self, id: AtlasTextureId) -> BladeTextureInfo {
- let lock = self.0.lock();
- let texture = &lock.storage[id];
- BladeTextureInfo {
- raw_view: texture.raw_view,
- }
- }
-}
-
-impl PlatformAtlas for BladeAtlas {
- fn get_or_insert_with<'a>(
- &self,
- key: &AtlasKey,
- build: &mut dyn FnMut() -> Result<Option<(Size<DevicePixels>, Cow<'a, [u8]>)>>,
- ) -> Result<Option<AtlasTile>> {
- let mut lock = self.0.lock();
- if let Some(tile) = lock.tiles_by_key.get(key) {
- Ok(Some(tile.clone()))
- } else {
- profiling::scope!("new tile");
- let Some((size, bytes)) = build()? else {
- return Ok(None);
- };
- let tile = lock.allocate(size, key.texture_kind());
- lock.upload_texture(tile.texture_id, tile.bounds, &bytes);
- lock.tiles_by_key.insert(key.clone(), tile.clone());
- Ok(Some(tile))
- }
- }
-
- fn remove(&self, key: &AtlasKey) {
- let mut lock = self.0.lock();
-
- let Some(id) = lock.tiles_by_key.remove(key).map(|tile| tile.texture_id) else {
- return;
- };
-
- let Some(texture_slot) = lock.storage[id.kind].textures.get_mut(id.index as usize) else {
- return;
- };
-
- if let Some(mut texture) = texture_slot.take() {
- texture.decrement_ref_count();
- if texture.is_unreferenced() {
- lock.storage[id.kind]
- .free_list
- .push(texture.id.index as usize);
- texture.destroy(&lock.gpu);
- } else {
- *texture_slot = Some(texture);
- }
- }
- }
-}
-
-impl BladeAtlasState {
- fn allocate(&mut self, size: Size<DevicePixels>, texture_kind: AtlasTextureKind) -> AtlasTile {
- {
- let textures = &mut self.storage[texture_kind];
-
- if let Some(tile) = textures
- .iter_mut()
- .rev()
- .find_map(|texture| texture.allocate(size))
- {
- return tile;
- }
- }
-
- let texture = self.push_texture(size, texture_kind);
- texture.allocate(size).unwrap()
- }
-
- fn push_texture(
- &mut self,
- min_size: Size<DevicePixels>,
- kind: AtlasTextureKind,
- ) -> &mut BladeAtlasTexture {
- const DEFAULT_ATLAS_SIZE: Size<DevicePixels> = Size {
- width: DevicePixels(1024),
- height: DevicePixels(1024),
- };
-
- let size = min_size.max(&DEFAULT_ATLAS_SIZE);
- let format;
- let usage;
- match kind {
- AtlasTextureKind::Monochrome => {
- format = gpu::TextureFormat::R8Unorm;
- usage = gpu::TextureUsage::COPY | gpu::TextureUsage::RESOURCE;
- }
- AtlasTextureKind::Subpixel => {
- format = gpu::TextureFormat::Bgra8Unorm;
- usage = gpu::TextureUsage::COPY | gpu::TextureUsage::RESOURCE;
- }
- AtlasTextureKind::Polychrome => {
- format = gpu::TextureFormat::Bgra8Unorm;
- usage = gpu::TextureUsage::COPY | gpu::TextureUsage::RESOURCE;
- }
- }
-
- let raw = self.gpu.create_texture(gpu::TextureDesc {
- name: "atlas",
- format,
- size: gpu::Extent {
- width: size.width.into(),
- height: size.height.into(),
- depth: 1,
- },
- array_layer_count: 1,
- mip_level_count: 1,
- sample_count: 1,
- dimension: gpu::TextureDimension::D2,
- usage,
- external: None,
- });
- let raw_view = self.gpu.create_texture_view(
- raw,
- gpu::TextureViewDesc {
- name: "",
- format,
- dimension: gpu::ViewDimension::D2,
- subresources: &Default::default(),
- },
- );
-
- let texture_list = &mut self.storage[kind];
- let index = texture_list.free_list.pop();
-
- let atlas_texture = BladeAtlasTexture {
- id: AtlasTextureId {
- index: index.unwrap_or(texture_list.textures.len()) as u32,
- kind,
- },
- allocator: etagere::BucketedAtlasAllocator::new(size.into()),
- format,
- raw,
- raw_view,
- live_atlas_keys: 0,
- };
-
- self.initializations.push(atlas_texture.id);
-
- if let Some(ix) = index {
- texture_list.textures[ix] = Some(atlas_texture);
- texture_list.textures.get_mut(ix).unwrap().as_mut().unwrap()
- } else {
- texture_list.textures.push(Some(atlas_texture));
- texture_list.textures.last_mut().unwrap().as_mut().unwrap()
- }
- }
-
- fn upload_texture(&mut self, id: AtlasTextureId, bounds: Bounds<DevicePixels>, bytes: &[u8]) {
- let data = self.upload_belt.alloc_bytes(bytes, &self.gpu);
- self.uploads.push(PendingUpload { id, bounds, data });
- }
-
- fn flush_initializations(&mut self, encoder: &mut gpu::CommandEncoder) {
- for id in self.initializations.drain(..) {
- let texture = &self.storage[id];
- encoder.init_texture(texture.raw);
- }
- }
-
- fn flush(&mut self, encoder: &mut gpu::CommandEncoder) {
- self.flush_initializations(encoder);
-
- let mut transfers = encoder.transfer("atlas");
- for upload in self.uploads.drain(..) {
- let texture = &self.storage[upload.id];
- transfers.copy_buffer_to_texture(
- upload.data,
- upload.bounds.size.width.to_bytes(texture.bytes_per_pixel()),
- gpu::TexturePiece {
- texture: texture.raw,
- mip_level: 0,
- array_layer: 0,
- origin: [
- upload.bounds.origin.x.into(),
- upload.bounds.origin.y.into(),
- 0,
- ],
- },
- gpu::Extent {
- width: upload.bounds.size.width.into(),
- height: upload.bounds.size.height.into(),
- depth: 1,
- },
- );
- }
- }
-}
-
-#[derive(Default)]
-struct BladeAtlasStorage {
- monochrome_textures: AtlasTextureList<BladeAtlasTexture>,
- subpixel_textures: AtlasTextureList<BladeAtlasTexture>,
- polychrome_textures: AtlasTextureList<BladeAtlasTexture>,
-}
-
-impl ops::Index<AtlasTextureKind> for BladeAtlasStorage {
- type Output = AtlasTextureList<BladeAtlasTexture>;
- fn index(&self, kind: AtlasTextureKind) -> &Self::Output {
- match kind {
- crate::AtlasTextureKind::Monochrome => &self.monochrome_textures,
- crate::AtlasTextureKind::Subpixel => &self.subpixel_textures,
- crate::AtlasTextureKind::Polychrome => &self.polychrome_textures,
- }
- }
-}
-
-impl ops::IndexMut<AtlasTextureKind> for BladeAtlasStorage {
- fn index_mut(&mut self, kind: AtlasTextureKind) -> &mut Self::Output {
- match kind {
- crate::AtlasTextureKind::Monochrome => &mut self.monochrome_textures,
- crate::AtlasTextureKind::Subpixel => &mut self.subpixel_textures,
- crate::AtlasTextureKind::Polychrome => &mut self.polychrome_textures,
- }
- }
-}
-
-impl ops::Index<AtlasTextureId> for BladeAtlasStorage {
- type Output = BladeAtlasTexture;
- fn index(&self, id: AtlasTextureId) -> &Self::Output {
- let textures = match id.kind {
- crate::AtlasTextureKind::Monochrome => &self.monochrome_textures,
- crate::AtlasTextureKind::Subpixel => &self.subpixel_textures,
- crate::AtlasTextureKind::Polychrome => &self.polychrome_textures,
- };
- textures[id.index as usize].as_ref().unwrap()
- }
-}
-
-impl BladeAtlasStorage {
- fn destroy(&mut self, gpu: &gpu::Context) {
- for mut texture in self.monochrome_textures.drain().flatten() {
- texture.destroy(gpu);
- }
- for mut texture in self.subpixel_textures.drain().flatten() {
- texture.destroy(gpu);
- }
- for mut texture in self.polychrome_textures.drain().flatten() {
- texture.destroy(gpu);
- }
- }
-}
-
-struct BladeAtlasTexture {
- id: AtlasTextureId,
- allocator: BucketedAtlasAllocator,
- raw: gpu::Texture,
- raw_view: gpu::TextureView,
- format: gpu::TextureFormat,
- live_atlas_keys: u32,
-}
-
-impl BladeAtlasTexture {
- fn allocate(&mut self, size: Size<DevicePixels>) -> Option<AtlasTile> {
- let allocation = self.allocator.allocate(size.into())?;
- let tile = AtlasTile {
- texture_id: self.id,
- tile_id: allocation.id.into(),
- padding: 0,
- bounds: Bounds {
- origin: allocation.rectangle.min.into(),
- size,
- },
- };
- self.live_atlas_keys += 1;
- Some(tile)
- }
-
- fn destroy(&mut self, gpu: &gpu::Context) {
- gpu.destroy_texture(self.raw);
- gpu.destroy_texture_view(self.raw_view);
- }
-
- fn bytes_per_pixel(&self) -> u8 {
- self.format.block_info().size
- }
-
- fn decrement_ref_count(&mut self) {
- self.live_atlas_keys -= 1;
- }
-
- fn is_unreferenced(&mut self) -> bool {
- self.live_atlas_keys == 0
- }
-}
-
-impl From<Size<DevicePixels>> for etagere::Size {
- fn from(size: Size<DevicePixels>) -> Self {
- etagere::Size::new(size.width.into(), size.height.into())
- }
-}
-
-impl From<etagere::Point> for Point<DevicePixels> {
- fn from(value: etagere::Point) -> Self {
- Point {
- x: DevicePixels::from(value.x),
- y: DevicePixels::from(value.y),
- }
- }
-}
-
-impl From<etagere::Size> for Size<DevicePixels> {
- fn from(size: etagere::Size) -> Self {
- Size {
- width: DevicePixels::from(size.width),
- height: DevicePixels::from(size.height),
- }
- }
-}
-
-impl From<etagere::Rectangle> for Bounds<DevicePixels> {
- fn from(rectangle: etagere::Rectangle) -> Self {
- Bounds {
- origin: rectangle.min.into(),
- size: rectangle.size().into(),
- }
- }
-}
@@ -1,85 +0,0 @@
-use anyhow::Context as _;
-use blade_graphics as gpu;
-use std::sync::Arc;
-use util::ResultExt;
-
-#[cfg_attr(target_os = "macos", derive(Clone))]
-pub struct BladeContext {
- pub(super) gpu: Arc<gpu::Context>,
-}
-
-impl BladeContext {
- pub fn new() -> anyhow::Result<Self> {
- let device_id_forced = match std::env::var("ZED_DEVICE_ID") {
- Ok(val) => parse_pci_id(&val)
- .context("Failed to parse device ID from `ZED_DEVICE_ID` environment variable")
- .log_err(),
- Err(std::env::VarError::NotPresent) => None,
- err => {
- err.context("Failed to read value of `ZED_DEVICE_ID` environment variable")
- .log_err();
- None
- }
- };
- let gpu = Arc::new(
- unsafe {
- gpu::Context::init(gpu::ContextDesc {
- presentation: true,
- validation: false,
- device_id: device_id_forced.unwrap_or(0),
- ..Default::default()
- })
- }
- .map_err(|e| anyhow::anyhow!("{e:?}"))?,
- );
- Ok(Self { gpu })
- }
-
- #[allow(dead_code)]
- pub fn supports_dual_source_blending(&self) -> bool {
- self.gpu.capabilities().dual_source_blending
- }
-}
-
-fn parse_pci_id(id: &str) -> anyhow::Result<u32> {
- let mut id = id.trim();
-
- if id.starts_with("0x") || id.starts_with("0X") {
- id = &id[2..];
- }
- let is_hex_string = id.chars().all(|c| c.is_ascii_hexdigit());
- let is_4_chars = id.len() == 4;
- anyhow::ensure!(
- is_4_chars && is_hex_string,
- "Expected a 4 digit PCI ID in hexadecimal format"
- );
-
- u32::from_str_radix(id, 16).context("parsing PCI ID as hex")
-}
-
-#[cfg(test)]
-mod tests {
- use super::parse_pci_id;
-
- #[test]
- fn test_parse_device_id() {
- assert!(parse_pci_id("0xABCD").is_ok());
- assert!(parse_pci_id("ABCD").is_ok());
- assert!(parse_pci_id("abcd").is_ok());
- assert!(parse_pci_id("1234").is_ok());
- assert!(parse_pci_id("123").is_err());
- assert_eq!(
- parse_pci_id(&format!("{:x}", 0x1234)).unwrap(),
- parse_pci_id(&format!("{:X}", 0x1234)).unwrap(),
- );
-
- assert_eq!(
- parse_pci_id(&format!("{:#x}", 0x1234)).unwrap(),
- parse_pci_id(&format!("{:#X}", 0x1234)).unwrap(),
- );
- assert_eq!(
- parse_pci_id(&format!("{:#x}", 0x1234)).unwrap(),
- parse_pci_id(&format!("{:#X}", 0x1234)).unwrap(),
- );
- }
-}
@@ -1,1121 +0,0 @@
-// Doing `if let` gives you nice scoping with passes/encoders
-#![allow(irrefutable_let_patterns)]
-
-use super::{BladeAtlas, BladeContext};
-use crate::{
- Background, Bounds, DevicePixels, GpuSpecs, MonochromeSprite, Path, Point, PolychromeSprite,
- PrimitiveBatch, Quad, ScaledPixels, Scene, Shadow, Size, Underline,
- get_gamma_correction_ratios,
-};
-#[cfg(any(test, feature = "test-support"))]
-use anyhow::Result;
-use blade_graphics as gpu;
-use blade_util::{BufferBelt, BufferBeltDescriptor};
-use bytemuck::{Pod, Zeroable};
-#[cfg(any(test, feature = "test-support"))]
-use image::RgbaImage;
-#[cfg(target_os = "macos")]
-use media::core_video::CVMetalTextureCache;
-use std::sync::Arc;
-
-const MAX_FRAME_TIME_MS: u32 = 10000;
-
-#[repr(C)]
-#[derive(Clone, Copy, Pod, Zeroable)]
-struct GlobalParams {
- viewport_size: [f32; 2],
- premultiplied_alpha: u32,
- pad: u32,
-}
-
-//Note: we can't use `Bounds` directly here because
-// it doesn't implement Pod + Zeroable
-#[repr(C)]
-#[derive(Clone, Copy, Pod, Zeroable)]
-struct PodBounds {
- origin: [f32; 2],
- size: [f32; 2],
-}
-
-impl From<Bounds<ScaledPixels>> for PodBounds {
- fn from(bounds: Bounds<ScaledPixels>) -> Self {
- Self {
- origin: [bounds.origin.x.0, bounds.origin.y.0],
- size: [bounds.size.width.0, bounds.size.height.0],
- }
- }
-}
-
-#[repr(C)]
-#[derive(Clone, Copy, Pod, Zeroable)]
-struct SurfaceParams {
- bounds: PodBounds,
- content_mask: PodBounds,
-}
-
-#[derive(blade_macros::ShaderData)]
-struct ShaderQuadsData {
- globals: GlobalParams,
- b_quads: gpu::BufferPiece,
-}
-
-#[derive(blade_macros::ShaderData)]
-struct ShaderShadowsData {
- globals: GlobalParams,
- b_shadows: gpu::BufferPiece,
-}
-
-#[derive(blade_macros::ShaderData)]
-struct ShaderPathRasterizationData {
- globals: GlobalParams,
- b_path_vertices: gpu::BufferPiece,
-}
-
-#[derive(blade_macros::ShaderData)]
-struct ShaderPathsData {
- globals: GlobalParams,
- t_sprite: gpu::TextureView,
- s_sprite: gpu::Sampler,
- b_path_sprites: gpu::BufferPiece,
-}
-
-#[derive(blade_macros::ShaderData)]
-struct ShaderUnderlinesData {
- globals: GlobalParams,
- b_underlines: gpu::BufferPiece,
-}
-
-#[derive(blade_macros::ShaderData)]
-struct ShaderMonoSpritesData {
- globals: GlobalParams,
- gamma_ratios: [f32; 4],
- grayscale_enhanced_contrast: f32,
- t_sprite: gpu::TextureView,
- s_sprite: gpu::Sampler,
- b_mono_sprites: gpu::BufferPiece,
-}
-
-#[derive(blade_macros::ShaderData)]
-struct ShaderSubpixelSpritesData {
- globals: GlobalParams,
- gamma_ratios: [f32; 4],
- subpixel_enhanced_contrast: f32,
- t_sprite: gpu::TextureView,
- s_sprite: gpu::Sampler,
- b_subpixel_sprites: gpu::BufferPiece,
-}
-
-#[derive(blade_macros::ShaderData)]
-struct ShaderPolySpritesData {
- globals: GlobalParams,
- t_sprite: gpu::TextureView,
- s_sprite: gpu::Sampler,
- b_poly_sprites: gpu::BufferPiece,
-}
-
-#[derive(blade_macros::ShaderData)]
-struct ShaderSurfacesData {
- globals: GlobalParams,
- surface_locals: SurfaceParams,
- t_y: gpu::TextureView,
- t_cb_cr: gpu::TextureView,
- s_surface: gpu::Sampler,
-}
-
-#[derive(Clone, Debug, Eq, PartialEq)]
-#[repr(C)]
-struct PathSprite {
- bounds: Bounds<ScaledPixels>,
-}
-
-#[derive(Clone, Debug)]
-#[repr(C)]
-struct PathRasterizationVertex {
- xy_position: Point<ScaledPixels>,
- st_position: Point<f32>,
- color: Background,
- bounds: Bounds<ScaledPixels>,
-}
-
-struct BladePipelines {
- quads: gpu::RenderPipeline,
- shadows: gpu::RenderPipeline,
- path_rasterization: gpu::RenderPipeline,
- paths: gpu::RenderPipeline,
- underlines: gpu::RenderPipeline,
- mono_sprites: gpu::RenderPipeline,
- subpixel_sprites: gpu::RenderPipeline,
- poly_sprites: gpu::RenderPipeline,
- surfaces: gpu::RenderPipeline,
-}
-
-impl BladePipelines {
- fn new(gpu: &gpu::Context, surface_info: gpu::SurfaceInfo, path_sample_count: u32) -> Self {
- use gpu::ShaderData as _;
-
- log::info!(
- "Initializing Blade pipelines for surface {:?}",
- surface_info
- );
- let shader = gpu.create_shader(gpu::ShaderDesc {
- source: include_str!("shaders.wgsl"),
- });
- shader.check_struct_size::<GlobalParams>();
- shader.check_struct_size::<SurfaceParams>();
- shader.check_struct_size::<Quad>();
- shader.check_struct_size::<Shadow>();
- shader.check_struct_size::<PathRasterizationVertex>();
- shader.check_struct_size::<PathSprite>();
- shader.check_struct_size::<Underline>();
- shader.check_struct_size::<MonochromeSprite>();
- shader.check_struct_size::<PolychromeSprite>();
-
- // See https://apoorvaj.io/alpha-compositing-opengl-blending-and-premultiplied-alpha/
- let blend_mode = match surface_info.alpha {
- gpu::AlphaMode::Ignored => gpu::BlendState::ALPHA_BLENDING,
- gpu::AlphaMode::PreMultiplied => gpu::BlendState::PREMULTIPLIED_ALPHA_BLENDING,
- gpu::AlphaMode::PostMultiplied => gpu::BlendState::ALPHA_BLENDING,
- };
- let color_targets = &[gpu::ColorTargetState {
- format: surface_info.format,
- blend: Some(blend_mode),
- write_mask: gpu::ColorWrites::default(),
- }];
-
- Self {
- quads: gpu.create_render_pipeline(gpu::RenderPipelineDesc {
- name: "quads",
- data_layouts: &[&ShaderQuadsData::layout()],
- vertex: shader.at("vs_quad"),
- vertex_fetches: &[],
- primitive: gpu::PrimitiveState {
- topology: gpu::PrimitiveTopology::TriangleStrip,
- ..Default::default()
- },
- depth_stencil: None,
- fragment: Some(shader.at("fs_quad")),
- color_targets,
- multisample_state: gpu::MultisampleState::default(),
- }),
- shadows: gpu.create_render_pipeline(gpu::RenderPipelineDesc {
- name: "shadows",
- data_layouts: &[&ShaderShadowsData::layout()],
- vertex: shader.at("vs_shadow"),
- vertex_fetches: &[],
- primitive: gpu::PrimitiveState {
- topology: gpu::PrimitiveTopology::TriangleStrip,
- ..Default::default()
- },
- depth_stencil: None,
- fragment: Some(shader.at("fs_shadow")),
- color_targets,
- multisample_state: gpu::MultisampleState::default(),
- }),
- path_rasterization: gpu.create_render_pipeline(gpu::RenderPipelineDesc {
- name: "path_rasterization",
- data_layouts: &[&ShaderPathRasterizationData::layout()],
- vertex: shader.at("vs_path_rasterization"),
- vertex_fetches: &[],
- primitive: gpu::PrimitiveState {
- topology: gpu::PrimitiveTopology::TriangleList,
- ..Default::default()
- },
- depth_stencil: None,
- fragment: Some(shader.at("fs_path_rasterization")),
- // The original implementation was using ADDITIVE blende mode,
- // I don't know why
- // color_targets: &[gpu::ColorTargetState {
- // format: PATH_TEXTURE_FORMAT,
- // blend: Some(gpu::BlendState::ADDITIVE),
- // write_mask: gpu::ColorWrites::default(),
- // }],
- color_targets: &[gpu::ColorTargetState {
- format: surface_info.format,
- blend: Some(gpu::BlendState::PREMULTIPLIED_ALPHA_BLENDING),
- write_mask: gpu::ColorWrites::default(),
- }],
- multisample_state: gpu::MultisampleState {
- sample_count: path_sample_count,
- ..Default::default()
- },
- }),
- paths: gpu.create_render_pipeline(gpu::RenderPipelineDesc {
- name: "paths",
- data_layouts: &[&ShaderPathsData::layout()],
- vertex: shader.at("vs_path"),
- vertex_fetches: &[],
- primitive: gpu::PrimitiveState {
- topology: gpu::PrimitiveTopology::TriangleStrip,
- ..Default::default()
- },
- depth_stencil: None,
- fragment: Some(shader.at("fs_path")),
- color_targets: &[gpu::ColorTargetState {
- format: surface_info.format,
- blend: Some(gpu::BlendState {
- color: gpu::BlendComponent::OVER,
- alpha: gpu::BlendComponent::ADDITIVE,
- }),
- write_mask: gpu::ColorWrites::default(),
- }],
- multisample_state: gpu::MultisampleState::default(),
- }),
- underlines: gpu.create_render_pipeline(gpu::RenderPipelineDesc {
- name: "underlines",
- data_layouts: &[&ShaderUnderlinesData::layout()],
- vertex: shader.at("vs_underline"),
- vertex_fetches: &[],
- primitive: gpu::PrimitiveState {
- topology: gpu::PrimitiveTopology::TriangleStrip,
- ..Default::default()
- },
- depth_stencil: None,
- fragment: Some(shader.at("fs_underline")),
- color_targets,
- multisample_state: gpu::MultisampleState::default(),
- }),
- mono_sprites: gpu.create_render_pipeline(gpu::RenderPipelineDesc {
- name: "mono-sprites",
- data_layouts: &[&ShaderMonoSpritesData::layout()],
- vertex: shader.at("vs_mono_sprite"),
- vertex_fetches: &[],
- primitive: gpu::PrimitiveState {
- topology: gpu::PrimitiveTopology::TriangleStrip,
- ..Default::default()
- },
- depth_stencil: None,
- fragment: Some(shader.at("fs_mono_sprite")),
- color_targets,
- multisample_state: gpu::MultisampleState::default(),
- }),
- subpixel_sprites: gpu.create_render_pipeline(gpu::RenderPipelineDesc {
- name: "subpixel-sprites",
- data_layouts: &[&ShaderSubpixelSpritesData::layout()],
- vertex: shader.at("vs_subpixel_sprite"),
- vertex_fetches: &[],
- primitive: gpu::PrimitiveState {
- topology: gpu::PrimitiveTopology::TriangleStrip,
- ..Default::default()
- },
- depth_stencil: None,
- fragment: Some(shader.at("fs_subpixel_sprite")),
- color_targets: &[gpu::ColorTargetState {
- format: surface_info.format,
- blend: Some(gpu::BlendState {
- color: gpu::BlendComponent {
- src_factor: gpu::BlendFactor::Src1,
- dst_factor: gpu::BlendFactor::OneMinusSrc1,
- operation: gpu::BlendOperation::Add,
- },
- alpha: gpu::BlendComponent::OVER,
- }),
- write_mask: gpu::ColorWrites::COLOR,
- }],
- multisample_state: gpu::MultisampleState::default(),
- }),
- poly_sprites: gpu.create_render_pipeline(gpu::RenderPipelineDesc {
- name: "poly-sprites",
- data_layouts: &[&ShaderPolySpritesData::layout()],
- vertex: shader.at("vs_poly_sprite"),
- vertex_fetches: &[],
- primitive: gpu::PrimitiveState {
- topology: gpu::PrimitiveTopology::TriangleStrip,
- ..Default::default()
- },
- depth_stencil: None,
- fragment: Some(shader.at("fs_poly_sprite")),
- color_targets,
- multisample_state: gpu::MultisampleState::default(),
- }),
- surfaces: gpu.create_render_pipeline(gpu::RenderPipelineDesc {
- name: "surfaces",
- data_layouts: &[&ShaderSurfacesData::layout()],
- vertex: shader.at("vs_surface"),
- vertex_fetches: &[],
- primitive: gpu::PrimitiveState {
- topology: gpu::PrimitiveTopology::TriangleStrip,
- ..Default::default()
- },
- depth_stencil: None,
- fragment: Some(shader.at("fs_surface")),
- color_targets,
- multisample_state: gpu::MultisampleState::default(),
- }),
- }
- }
-
- fn destroy(&mut self, gpu: &gpu::Context) {
- gpu.destroy_render_pipeline(&mut self.quads);
- gpu.destroy_render_pipeline(&mut self.shadows);
- gpu.destroy_render_pipeline(&mut self.path_rasterization);
- gpu.destroy_render_pipeline(&mut self.paths);
- gpu.destroy_render_pipeline(&mut self.underlines);
- gpu.destroy_render_pipeline(&mut self.mono_sprites);
- gpu.destroy_render_pipeline(&mut self.subpixel_sprites);
- gpu.destroy_render_pipeline(&mut self.poly_sprites);
- gpu.destroy_render_pipeline(&mut self.surfaces);
- }
-}
-
-pub struct BladeSurfaceConfig {
- pub size: gpu::Extent,
- pub transparent: bool,
-}
-
-//Note: we could see some of these fields moved into `BladeContext`
-// so that they are shared between windows. E.g. `pipelines`.
-// But that is complicated by the fact that pipelines depend on
-// the format and alpha mode.
-pub struct BladeRenderer {
- gpu: Arc<gpu::Context>,
- surface: gpu::Surface,
- surface_config: gpu::SurfaceConfig,
- command_encoder: gpu::CommandEncoder,
- last_sync_point: Option<gpu::SyncPoint>,
- pipelines: BladePipelines,
- instance_belt: BufferBelt,
- atlas: Arc<BladeAtlas>,
- atlas_sampler: gpu::Sampler,
- #[cfg(target_os = "macos")]
- core_video_texture_cache: CVMetalTextureCache,
- path_intermediate_texture: gpu::Texture,
- path_intermediate_texture_view: gpu::TextureView,
- path_intermediate_msaa_texture: Option<gpu::Texture>,
- path_intermediate_msaa_texture_view: Option<gpu::TextureView>,
- rendering_parameters: RenderingParameters,
-}
-
-impl BladeRenderer {
- pub fn new<I: raw_window_handle::HasWindowHandle + raw_window_handle::HasDisplayHandle>(
- context: &BladeContext,
- window: &I,
- config: BladeSurfaceConfig,
- ) -> anyhow::Result<Self> {
- let surface_config = gpu::SurfaceConfig {
- size: config.size,
- usage: gpu::TextureUsage::TARGET,
- display_sync: gpu::DisplaySync::Recent,
- color_space: gpu::ColorSpace::Srgb,
- allow_exclusive_full_screen: false,
- transparent: config.transparent,
- };
- let surface = context
- .gpu
- .create_surface_configured(window, surface_config)
- .map_err(|err| anyhow::anyhow!("Failed to create surface: {err:?}"))?;
-
- let command_encoder = context.gpu.create_command_encoder(gpu::CommandEncoderDesc {
- name: "main",
- buffer_count: 2,
- });
- let rendering_parameters = RenderingParameters::from_env(context);
- let pipelines = BladePipelines::new(
- &context.gpu,
- surface.info(),
- rendering_parameters.path_sample_count,
- );
- let instance_belt = BufferBelt::new(BufferBeltDescriptor {
- memory: gpu::Memory::Shared,
- min_chunk_size: 0x1000,
- alignment: 0x40, // Vulkan `minStorageBufferOffsetAlignment` on Intel Xe
- });
- let atlas = Arc::new(BladeAtlas::new(&context.gpu));
- let atlas_sampler = context.gpu.create_sampler(gpu::SamplerDesc {
- name: "path rasterization sampler",
- mag_filter: gpu::FilterMode::Linear,
- min_filter: gpu::FilterMode::Linear,
- ..Default::default()
- });
-
- let (path_intermediate_texture, path_intermediate_texture_view) =
- create_path_intermediate_texture(
- &context.gpu,
- surface.info().format,
- config.size.width,
- config.size.height,
- );
- let (path_intermediate_msaa_texture, path_intermediate_msaa_texture_view) =
- create_msaa_texture_if_needed(
- &context.gpu,
- surface.info().format,
- config.size.width,
- config.size.height,
- rendering_parameters.path_sample_count,
- )
- .unzip();
-
- #[cfg(target_os = "macos")]
- let core_video_texture_cache = unsafe {
- CVMetalTextureCache::new(
- objc2::rc::Retained::as_ptr(&context.gpu.metal_device()) as *mut _
- )
- .unwrap()
- };
-
- Ok(Self {
- gpu: Arc::clone(&context.gpu),
- surface,
- surface_config,
- command_encoder,
- last_sync_point: None,
- pipelines,
- instance_belt,
- atlas,
- atlas_sampler,
- #[cfg(target_os = "macos")]
- core_video_texture_cache,
- path_intermediate_texture,
- path_intermediate_texture_view,
- path_intermediate_msaa_texture,
- path_intermediate_msaa_texture_view,
- rendering_parameters,
- })
- }
-
- fn wait_for_gpu(&mut self) {
- if let Some(last_sp) = self.last_sync_point.take()
- && !self.gpu.wait_for(&last_sp, MAX_FRAME_TIME_MS)
- {
- log::error!("GPU hung");
- #[cfg(target_os = "linux")]
- if self.gpu.device_information().driver_name == "radv" {
- log::error!(
- "there's a known bug with amdgpu/radv, try setting ZED_PATH_SAMPLE_COUNT=0 as a workaround"
- );
- log::error!(
- "if that helps you're running into https://github.com/zed-industries/zed/issues/26143"
- );
- }
- log::error!(
- "your device information is: {:?}",
- self.gpu.device_information()
- );
- while !self.gpu.wait_for(&last_sp, MAX_FRAME_TIME_MS) {}
- }
- }
-
- pub fn update_drawable_size(&mut self, size: Size<DevicePixels>) {
- self.update_drawable_size_impl(size, false);
- }
-
- /// Like `update_drawable_size` but skips the check that the size has changed. This is useful in
- /// cases like restoring a window from minimization where the size is the same but the
- /// renderer's swap chain needs to be recreated.
- #[cfg_attr(
- any(target_os = "macos", target_os = "linux", target_os = "freebsd"),
- allow(dead_code)
- )]
- pub fn update_drawable_size_even_if_unchanged(&mut self, size: Size<DevicePixels>) {
- self.update_drawable_size_impl(size, true);
- }
-
- fn update_drawable_size_impl(&mut self, size: Size<DevicePixels>, always_resize: bool) {
- let gpu_size = gpu::Extent {
- width: size.width.0 as u32,
- height: size.height.0 as u32,
- depth: 1,
- };
-
- if always_resize || gpu_size != self.surface_config.size {
- self.wait_for_gpu();
- self.surface_config.size = gpu_size;
- self.gpu
- .reconfigure_surface(&mut self.surface, self.surface_config);
- self.gpu.destroy_texture(self.path_intermediate_texture);
- self.gpu
- .destroy_texture_view(self.path_intermediate_texture_view);
- if let Some(msaa_texture) = self.path_intermediate_msaa_texture {
- self.gpu.destroy_texture(msaa_texture);
- }
- if let Some(msaa_view) = self.path_intermediate_msaa_texture_view {
- self.gpu.destroy_texture_view(msaa_view);
- }
- let (path_intermediate_texture, path_intermediate_texture_view) =
- create_path_intermediate_texture(
- &self.gpu,
- self.surface.info().format,
- gpu_size.width,
- gpu_size.height,
- );
- self.path_intermediate_texture = path_intermediate_texture;
- self.path_intermediate_texture_view = path_intermediate_texture_view;
- let (path_intermediate_msaa_texture, path_intermediate_msaa_texture_view) =
- create_msaa_texture_if_needed(
- &self.gpu,
- self.surface.info().format,
- gpu_size.width,
- gpu_size.height,
- self.rendering_parameters.path_sample_count,
- )
- .unzip();
- self.path_intermediate_msaa_texture = path_intermediate_msaa_texture;
- self.path_intermediate_msaa_texture_view = path_intermediate_msaa_texture_view;
- }
- }
-
- pub fn update_transparency(&mut self, transparent: bool) {
- if transparent != self.surface_config.transparent {
- self.wait_for_gpu();
- self.surface_config.transparent = transparent;
- self.gpu
- .reconfigure_surface(&mut self.surface, self.surface_config);
- self.pipelines.destroy(&self.gpu);
- self.pipelines = BladePipelines::new(
- &self.gpu,
- self.surface.info(),
- self.rendering_parameters.path_sample_count,
- );
- }
- }
-
- #[cfg_attr(
- any(target_os = "macos", feature = "wayland", target_os = "windows"),
- allow(dead_code)
- )]
- pub fn viewport_size(&self) -> gpu::Extent {
- self.surface_config.size
- }
-
- pub fn sprite_atlas(&self) -> &Arc<BladeAtlas> {
- &self.atlas
- }
-
- #[cfg_attr(target_os = "macos", allow(dead_code))]
- pub fn gpu_specs(&self) -> GpuSpecs {
- let info = self.gpu.device_information();
-
- GpuSpecs {
- is_software_emulated: info.is_software_emulated,
- device_name: info.device_name.clone(),
- driver_name: info.driver_name.clone(),
- driver_info: info.driver_info.clone(),
- }
- }
-
- #[cfg(target_os = "macos")]
- pub fn layer(&self) -> metal::MetalLayer {
- unsafe { foreign_types::ForeignType::from_ptr(self.layer_ptr()) }
- }
-
- #[cfg(target_os = "macos")]
- pub fn layer_ptr(&self) -> *mut metal::CAMetalLayer {
- objc2::rc::Retained::as_ptr(&self.surface.metal_layer()) as *mut _
- }
-
- #[profiling::function]
- fn draw_paths_to_intermediate(
- &mut self,
- paths: &[Path<ScaledPixels>],
- width: f32,
- height: f32,
- ) {
- self.command_encoder
- .init_texture(self.path_intermediate_texture);
- if let Some(msaa_texture) = self.path_intermediate_msaa_texture {
- self.command_encoder.init_texture(msaa_texture);
- }
-
- let target = if let Some(msaa_view) = self.path_intermediate_msaa_texture_view {
- gpu::RenderTarget {
- view: msaa_view,
- init_op: gpu::InitOp::Clear(gpu::TextureColor::TransparentBlack),
- finish_op: gpu::FinishOp::ResolveTo(self.path_intermediate_texture_view),
- }
- } else {
- gpu::RenderTarget {
- view: self.path_intermediate_texture_view,
- init_op: gpu::InitOp::Clear(gpu::TextureColor::TransparentBlack),
- finish_op: gpu::FinishOp::Store,
- }
- };
- if let mut pass = self.command_encoder.render(
- "rasterize paths",
- gpu::RenderTargetSet {
- colors: &[target],
- depth_stencil: None,
- },
- ) {
- let globals = GlobalParams {
- viewport_size: [width, height],
- premultiplied_alpha: 0,
- pad: 0,
- };
- let mut encoder = pass.with(&self.pipelines.path_rasterization);
-
- let mut vertices = Vec::new();
- for path in paths {
- vertices.extend(path.vertices.iter().map(|v| PathRasterizationVertex {
- xy_position: v.xy_position,
- st_position: v.st_position,
- color: path.color,
- bounds: path.clipped_bounds(),
- }));
- }
- let vertex_buf = unsafe { self.instance_belt.alloc_typed(&vertices, &self.gpu) };
- encoder.bind(
- 0,
- &ShaderPathRasterizationData {
- globals,
- b_path_vertices: vertex_buf,
- },
- );
- encoder.draw(0, vertices.len() as u32, 0, 1);
- }
- }
-
- pub fn destroy(&mut self) {
- self.wait_for_gpu();
- self.atlas.destroy();
- self.gpu.destroy_sampler(self.atlas_sampler);
- self.instance_belt.destroy(&self.gpu);
- self.gpu.destroy_command_encoder(&mut self.command_encoder);
- self.pipelines.destroy(&self.gpu);
- self.gpu.destroy_surface(&mut self.surface);
- self.gpu.destroy_texture(self.path_intermediate_texture);
- self.gpu
- .destroy_texture_view(self.path_intermediate_texture_view);
- if let Some(msaa_texture) = self.path_intermediate_msaa_texture {
- self.gpu.destroy_texture(msaa_texture);
- }
- if let Some(msaa_view) = self.path_intermediate_msaa_texture_view {
- self.gpu.destroy_texture_view(msaa_view);
- }
- }
-
- pub fn draw(&mut self, scene: &Scene) {
- self.command_encoder.start();
- self.atlas.before_frame(&mut self.command_encoder);
-
- let frame = {
- profiling::scope!("acquire frame");
- self.surface.acquire_frame()
- };
- self.command_encoder.init_texture(frame.texture());
-
- let globals = GlobalParams {
- viewport_size: [
- self.surface_config.size.width as f32,
- self.surface_config.size.height as f32,
- ],
- premultiplied_alpha: match self.surface.info().alpha {
- gpu::AlphaMode::Ignored | gpu::AlphaMode::PostMultiplied => 0,
- gpu::AlphaMode::PreMultiplied => 1,
- },
- pad: 0,
- };
-
- let mut pass = self.command_encoder.render(
- "main",
- gpu::RenderTargetSet {
- colors: &[gpu::RenderTarget {
- view: frame.texture_view(),
- init_op: gpu::InitOp::Clear(gpu::TextureColor::TransparentBlack),
- finish_op: gpu::FinishOp::Store,
- }],
- depth_stencil: None,
- },
- );
-
- profiling::scope!("render pass");
- for batch in scene.batches() {
- match batch {
- PrimitiveBatch::Quads(range) => {
- let quads = &scene.quads[range];
- let instance_buf = unsafe { self.instance_belt.alloc_typed(quads, &self.gpu) };
- let mut encoder = pass.with(&self.pipelines.quads);
- encoder.bind(
- 0,
- &ShaderQuadsData {
- globals,
- b_quads: instance_buf,
- },
- );
- encoder.draw(0, 4, 0, quads.len() as u32);
- }
- PrimitiveBatch::Shadows(range) => {
- let shadows = &scene.shadows[range];
- let instance_buf =
- unsafe { self.instance_belt.alloc_typed(shadows, &self.gpu) };
- let mut encoder = pass.with(&self.pipelines.shadows);
- encoder.bind(
- 0,
- &ShaderShadowsData {
- globals,
- b_shadows: instance_buf,
- },
- );
- encoder.draw(0, 4, 0, shadows.len() as u32);
- }
- PrimitiveBatch::Paths(range) => {
- let paths = &scene.paths[range];
- let Some(first_path) = paths.first() else {
- continue;
- };
- drop(pass);
- self.draw_paths_to_intermediate(
- paths,
- self.surface_config.size.width as f32,
- self.surface_config.size.height as f32,
- );
- pass = self.command_encoder.render(
- "main",
- gpu::RenderTargetSet {
- colors: &[gpu::RenderTarget {
- view: frame.texture_view(),
- init_op: gpu::InitOp::Load,
- finish_op: gpu::FinishOp::Store,
- }],
- depth_stencil: None,
- },
- );
- let mut encoder = pass.with(&self.pipelines.paths);
- // When copying paths from the intermediate texture to the drawable,
- // each pixel must only be copied once, in case of transparent paths.
- //
- // If all paths have the same draw order, then their bounds are all
- // disjoint, so we can copy each path's bounds individually. If this
- // batch combines different draw orders, we perform a single copy
- // for a minimal spanning rect.
- let sprites = if paths.last().unwrap().order == first_path.order {
- paths
- .iter()
- .map(|path| PathSprite {
- bounds: path.clipped_bounds(),
- })
- .collect()
- } else {
- let mut bounds = first_path.clipped_bounds();
- for path in paths.iter().skip(1) {
- bounds = bounds.union(&path.clipped_bounds());
- }
- vec![PathSprite { bounds }]
- };
- let instance_buf =
- unsafe { self.instance_belt.alloc_typed(&sprites, &self.gpu) };
- encoder.bind(
- 0,
- &ShaderPathsData {
- globals,
- t_sprite: self.path_intermediate_texture_view,
- s_sprite: self.atlas_sampler,
- b_path_sprites: instance_buf,
- },
- );
- encoder.draw(0, 4, 0, sprites.len() as u32);
- }
- PrimitiveBatch::Underlines(range) => {
- let underlines = &scene.underlines[range];
- let instance_buf =
- unsafe { self.instance_belt.alloc_typed(underlines, &self.gpu) };
- let mut encoder = pass.with(&self.pipelines.underlines);
- encoder.bind(
- 0,
- &ShaderUnderlinesData {
- globals,
- b_underlines: instance_buf,
- },
- );
- encoder.draw(0, 4, 0, underlines.len() as u32);
- }
- PrimitiveBatch::MonochromeSprites { texture_id, range } => {
- let sprites = &scene.monochrome_sprites[range];
- let tex_info = self.atlas.get_texture_info(texture_id);
- let instance_buf =
- unsafe { self.instance_belt.alloc_typed(sprites, &self.gpu) };
- let mut encoder = pass.with(&self.pipelines.mono_sprites);
- encoder.bind(
- 0,
- &ShaderMonoSpritesData {
- globals,
- gamma_ratios: self.rendering_parameters.gamma_ratios,
- grayscale_enhanced_contrast: self
- .rendering_parameters
- .grayscale_enhanced_contrast,
- t_sprite: tex_info.raw_view,
- s_sprite: self.atlas_sampler,
- b_mono_sprites: instance_buf,
- },
- );
- encoder.draw(0, 4, 0, sprites.len() as u32);
- }
- PrimitiveBatch::PolychromeSprites { texture_id, range } => {
- let sprites = &scene.polychrome_sprites[range];
- let tex_info = self.atlas.get_texture_info(texture_id);
- let instance_buf =
- unsafe { self.instance_belt.alloc_typed(sprites, &self.gpu) };
- let mut encoder = pass.with(&self.pipelines.poly_sprites);
- encoder.bind(
- 0,
- &ShaderPolySpritesData {
- globals,
- t_sprite: tex_info.raw_view,
- s_sprite: self.atlas_sampler,
- b_poly_sprites: instance_buf,
- },
- );
- encoder.draw(0, 4, 0, sprites.len() as u32);
- }
- PrimitiveBatch::SubpixelSprites { texture_id, range } => {
- let sprites = &scene.subpixel_sprites[range];
- let tex_info = self.atlas.get_texture_info(texture_id);
- let instance_buf =
- unsafe { self.instance_belt.alloc_typed(sprites, &self.gpu) };
- let mut encoder = pass.with(&self.pipelines.subpixel_sprites);
- encoder.bind(
- 0,
- &ShaderSubpixelSpritesData {
- globals,
- gamma_ratios: self.rendering_parameters.gamma_ratios,
- subpixel_enhanced_contrast: self
- .rendering_parameters
- .subpixel_enhanced_contrast,
- t_sprite: tex_info.raw_view,
- s_sprite: self.atlas_sampler,
- b_subpixel_sprites: instance_buf,
- },
- );
- encoder.draw(0, 4, 0, sprites.len() as u32);
- }
- PrimitiveBatch::Surfaces(range) => {
- let surfaces = &scene.surfaces[range];
- let mut _encoder = pass.with(&self.pipelines.surfaces);
-
- for surface in surfaces {
- #[cfg(not(target_os = "macos"))]
- {
- let _ = surface;
- continue;
- };
-
- #[cfg(target_os = "macos")]
- {
- let (t_y, t_cb_cr) = unsafe {
- use core_foundation::base::TCFType as _;
- use std::ptr;
-
- assert_eq!(
- surface.image_buffer.get_pixel_format(),
- core_video::pixel_buffer::kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
- );
-
- let y_texture = self
- .core_video_texture_cache
- .create_texture_from_image(
- surface.image_buffer.as_concrete_TypeRef(),
- ptr::null(),
- metal::MTLPixelFormat::R8Unorm,
- surface.image_buffer.get_width_of_plane(0),
- surface.image_buffer.get_height_of_plane(0),
- 0,
- )
- .unwrap();
- let cb_cr_texture = self
- .core_video_texture_cache
- .create_texture_from_image(
- surface.image_buffer.as_concrete_TypeRef(),
- ptr::null(),
- metal::MTLPixelFormat::RG8Unorm,
- surface.image_buffer.get_width_of_plane(1),
- surface.image_buffer.get_height_of_plane(1),
- 1,
- )
- .unwrap();
- (
- gpu::TextureView::from_metal_texture(
- &objc2::rc::Retained::retain(
- foreign_types::ForeignTypeRef::as_ptr(
- y_texture.as_texture_ref(),
- )
- as *mut objc2::runtime::ProtocolObject<
- dyn objc2_metal::MTLTexture,
- >,
- )
- .unwrap(),
- gpu::TexelAspects::COLOR,
- ),
- gpu::TextureView::from_metal_texture(
- &objc2::rc::Retained::retain(
- foreign_types::ForeignTypeRef::as_ptr(
- cb_cr_texture.as_texture_ref(),
- )
- as *mut objc2::runtime::ProtocolObject<
- dyn objc2_metal::MTLTexture,
- >,
- )
- .unwrap(),
- gpu::TexelAspects::COLOR,
- ),
- )
- };
-
- _encoder.bind(
- 0,
- &ShaderSurfacesData {
- globals,
- surface_locals: SurfaceParams {
- bounds: surface.bounds.into(),
- content_mask: surface.content_mask.bounds.into(),
- },
- t_y,
- t_cb_cr,
- s_surface: self.atlas_sampler,
- },
- );
-
- _encoder.draw(0, 4, 0, 1);
- }
- }
- }
- }
- }
- drop(pass);
-
- self.command_encoder.present(frame);
- let sync_point = self.gpu.submit(&mut self.command_encoder);
-
- profiling::scope!("finish");
- self.instance_belt.flush(&sync_point);
- self.atlas.after_frame(&sync_point);
-
- self.wait_for_gpu();
- self.last_sync_point = Some(sync_point);
- }
-
- /// Renders the scene to a texture and returns the pixel data as an RGBA image.
- /// This is not yet implemented for BladeRenderer.
- #[cfg(any(test, feature = "test-support"))]
- #[allow(dead_code)]
- pub fn render_to_image(&mut self, _scene: &Scene) -> Result<RgbaImage> {
- anyhow::bail!("render_to_image is not yet implemented for BladeRenderer")
- }
-}
-
-fn create_path_intermediate_texture(
- gpu: &gpu::Context,
- format: gpu::TextureFormat,
- width: u32,
- height: u32,
-) -> (gpu::Texture, gpu::TextureView) {
- let texture = gpu.create_texture(gpu::TextureDesc {
- name: "path intermediate",
- format,
- size: gpu::Extent {
- width,
- height,
- depth: 1,
- },
- array_layer_count: 1,
- mip_level_count: 1,
- sample_count: 1,
- dimension: gpu::TextureDimension::D2,
- usage: gpu::TextureUsage::COPY | gpu::TextureUsage::RESOURCE | gpu::TextureUsage::TARGET,
- external: None,
- });
- let texture_view = gpu.create_texture_view(
- texture,
- gpu::TextureViewDesc {
- name: "path intermediate view",
- format,
- dimension: gpu::ViewDimension::D2,
- subresources: &Default::default(),
- },
- );
- (texture, texture_view)
-}
-
-fn create_msaa_texture_if_needed(
- gpu: &gpu::Context,
- format: gpu::TextureFormat,
- width: u32,
- height: u32,
- sample_count: u32,
-) -> Option<(gpu::Texture, gpu::TextureView)> {
- if sample_count <= 1 {
- return None;
- }
- let texture_msaa = gpu.create_texture(gpu::TextureDesc {
- name: "path intermediate msaa",
- format,
- size: gpu::Extent {
- width,
- height,
- depth: 1,
- },
- array_layer_count: 1,
- mip_level_count: 1,
- sample_count,
- dimension: gpu::TextureDimension::D2,
- usage: gpu::TextureUsage::TARGET,
- external: None,
- });
- let texture_view_msaa = gpu.create_texture_view(
- texture_msaa,
- gpu::TextureViewDesc {
- name: "path intermediate msaa view",
- format,
- dimension: gpu::ViewDimension::D2,
- subresources: &Default::default(),
- },
- );
-
- Some((texture_msaa, texture_view_msaa))
-}
-
-/// A set of parameters that can be set using a corresponding environment variable.
-struct RenderingParameters {
- // Env var: ZED_PATH_SAMPLE_COUNT
- // workaround for https://github.com/zed-industries/zed/issues/26143
- path_sample_count: u32,
-
- // Env var: ZED_FONTS_GAMMA
- // Allowed range [1.0, 2.2], other values are clipped
- // Default: 1.8
- gamma_ratios: [f32; 4],
- // Env var: ZED_FONTS_GRAYSCALE_ENHANCED_CONTRAST
- // Allowed range: [0.0, ..), other values are clipped
- // Default: 1.0
- grayscale_enhanced_contrast: f32,
- // Env var: ZED_FONTS_SUBPIXEL_ENHANCED_CONTRAST
- // Allowed range: [0.0, ..), other values are clipped
- // Default: 0.5
- subpixel_enhanced_contrast: f32,
-}
-
-impl RenderingParameters {
- fn from_env(context: &BladeContext) -> Self {
- use std::env;
-
- let path_sample_count = env::var("ZED_PATH_SAMPLE_COUNT")
- .ok()
- .and_then(|v| v.parse().ok())
- .or_else(|| {
- [4, 2, 1]
- .into_iter()
- .find(|&n| (context.gpu.capabilities().sample_count_mask & n) != 0)
- })
- .unwrap_or(1);
- let gamma = env::var("ZED_FONTS_GAMMA")
- .ok()
- .and_then(|v| v.parse().ok())
- .unwrap_or(1.8_f32)
- .clamp(1.0, 2.2);
- let gamma_ratios = get_gamma_correction_ratios(gamma);
- let grayscale_enhanced_contrast = env::var("ZED_FONTS_GRAYSCALE_ENHANCED_CONTRAST")
- .ok()
- .and_then(|v| v.parse().ok())
- .unwrap_or(1.0_f32)
- .max(0.0);
- let subpixel_enhanced_contrast = env::var("ZED_FONTS_SUBPIXEL_ENHANCED_CONTRAST")
- .ok()
- .and_then(|v| v.parse().ok())
- .unwrap_or(0.5_f32)
- .max(0.0);
-
- Self {
- path_sample_count,
- gamma_ratios,
- grayscale_enhanced_contrast,
- subpixel_enhanced_contrast,
- }
- }
-}
@@ -24,10 +24,12 @@ use xkbcommon::xkb::{self, Keycode, Keysym, State};
use crate::{
Action, AnyWindowHandle, BackgroundExecutor, ClipboardItem, CursorStyle, DisplayId,
ForegroundExecutor, Keymap, LinuxDispatcher, Menu, MenuItem, OwnedMenu, PathPromptOptions,
- Pixels, Platform, PlatformDisplay, PlatformKeyboardLayout, PlatformKeyboardMapper,
- PlatformTextSystem, PlatformWindow, Point, PriorityQueueCalloopReceiver, Result,
- RunnableVariant, Task, ThermalState, WindowAppearance, WindowParams, px,
+ Platform, PlatformDisplay, PlatformKeyboardLayout, PlatformKeyboardMapper, PlatformTextSystem,
+ PlatformWindow, PriorityQueueCalloopReceiver, Result, RunnableVariant, Task, ThermalState,
+ WindowAppearance, WindowParams,
};
+#[cfg(any(feature = "wayland", feature = "x11"))]
+use crate::{Pixels, Point, px};
#[cfg(any(feature = "wayland", feature = "x11"))]
pub(crate) const SCROLL_LINES: f32 = 3.0;
@@ -36,6 +38,7 @@ pub(crate) const SCROLL_LINES: f32 = 3.0;
// Taken from https://github.com/GNOME/gtk/blob/main/gtk/gtksettings.c#L320
#[cfg(any(feature = "wayland", feature = "x11"))]
pub(crate) const DOUBLE_CLICK_INTERVAL: Duration = Duration::from_millis(400);
+#[cfg(any(feature = "wayland", feature = "x11"))]
pub(crate) const DOUBLE_CLICK_DISTANCE: Pixels = px(5.0);
pub(crate) const KEYRING_LABEL: &str = "zed-github-account";
@@ -708,7 +711,7 @@ pub(super) fn reveal_path_internal(
.detach();
}
-#[allow(unused)]
+#[cfg(any(feature = "wayland", feature = "x11"))]
pub(super) fn is_within_click_distance(a: Point<Pixels>, b: Point<Pixels>) -> bool {
let diff = a - b;
diff.x.abs() <= DOUBLE_CLICK_DISTANCE && diff.y.abs() <= DOUBLE_CLICK_DISTANCE
@@ -97,7 +97,7 @@ use crate::{
};
use crate::{
TaskTiming,
- platform::{PlatformWindow, blade::BladeContext},
+ platform::{PlatformWindow, wgpu::WgpuContext},
};
/// Used to convert evdev scancode to xkb scancode
@@ -204,7 +204,7 @@ pub struct Output {
pub(crate) struct WaylandClientState {
serial_tracker: SerialTracker,
globals: Globals,
- pub gpu_context: BladeContext,
+ pub gpu_context: WgpuContext,
wl_seat: wl_seat::WlSeat, // TODO: Multi seat support
wl_pointer: Option<wl_pointer::WlPointer>,
wl_keyboard: Option<wl_keyboard::WlKeyboard>,
@@ -520,7 +520,7 @@ impl WaylandClient {
.unwrap();
// This could be unified with the notification handling in zed/main:fail_to_open_window.
- let gpu_context = BladeContext::new().notify_err("Unable to init GPU context");
+ let gpu_context = WgpuContext::new().notify_err("Unable to init GPU context");
let seat = seat.unwrap();
let globals = Globals::new(
@@ -6,7 +6,6 @@ use std::{
sync::Arc,
};
-use blade_graphics as gpu;
use collections::{FxHashSet, HashMap};
use futures::channel::oneshot::Receiver;
@@ -26,8 +25,8 @@ use wayland_protocols_plasma::blur::client::org_kde_kwin_blur;
use wayland_protocols_wlr::layer_shell::v1::client::zwlr_layer_surface_v1;
use crate::{
- AnyWindowHandle, Bounds, Decorations, Globals, GpuSpecs, Modifiers, Output, Pixels,
- PlatformDisplay, PlatformInput, Point, PromptButton, PromptLevel, RequestFrameOptions,
+ AnyWindowHandle, Bounds, Decorations, DevicePixels, Globals, GpuSpecs, Modifiers, Output,
+ Pixels, PlatformDisplay, PlatformInput, Point, PromptButton, PromptLevel, RequestFrameOptions,
ResizeEdge, Size, Tiling, WaylandClientStatePtr, WindowAppearance, WindowBackgroundAppearance,
WindowBounds, WindowControlArea, WindowControls, WindowDecorations, WindowParams, get_window,
layer_shell::LayerShellNotSupportedError, px, size,
@@ -36,8 +35,8 @@ use crate::{
Capslock,
platform::{
PlatformAtlas, PlatformInputHandler, PlatformWindow,
- blade::{BladeContext, BladeRenderer, BladeSurfaceConfig},
linux::wayland::{display::WaylandDisplay, serial::SerialKind},
+ wgpu::{WgpuContext, WgpuRenderer, WgpuSurfaceConfig},
},
};
use crate::{WindowKind, scene::Scene};
@@ -60,6 +59,12 @@ struct RawWindow {
display: *mut c_void,
}
+// Safety: The raw pointers in RawWindow point to the Wayland surface and
+// display, which remain valid for the window's lifetime. They are only
+// passed to wgpu, which requires Send + Sync for surface creation.
+unsafe impl Send for RawWindow {}
+unsafe impl Sync for RawWindow {}
+
impl rwh::HasWindowHandle for RawWindow {
fn window_handle(&self) -> Result<rwh::WindowHandle<'_>, rwh::HandleError> {
let window = NonNull::new(self.window).unwrap();
@@ -97,7 +102,7 @@ pub struct WaylandWindowState {
outputs: HashMap<ObjectId, Output>,
display: Option<(ObjectId, Output)>,
globals: Globals,
- renderer: BladeRenderer,
+ renderer: WgpuRenderer,
bounds: Bounds<Pixels>,
scale: f32,
input_handler: Option<PlatformInputHandler>,
@@ -314,7 +319,7 @@ impl WaylandWindowState {
viewport: Option<wp_viewport::WpViewport>,
client: WaylandClientStatePtr,
globals: Globals,
- gpu_context: &BladeContext,
+ gpu_context: &WgpuContext,
options: WindowParams,
parent: Option<WaylandWindowStatePtr>,
) -> anyhow::Result<Self> {
@@ -328,15 +333,14 @@ impl WaylandWindowState {
.display_ptr()
.cast::<c_void>(),
};
- let config = BladeSurfaceConfig {
- size: gpu::Extent {
- width: options.bounds.size.width.0 as u32,
- height: options.bounds.size.height.0 as u32,
- depth: 1,
+ let config = WgpuSurfaceConfig {
+ size: Size {
+ width: DevicePixels(options.bounds.size.width.0 as i32),
+ height: DevicePixels(options.bounds.size.height.0 as i32),
},
transparent: true,
};
- BladeRenderer::new(gpu_context, &raw_window, config)?
+ WgpuRenderer::new(gpu_context, &raw_window, config)?
};
if let WaylandSurfaceState::Xdg(ref xdg_state) = surface_state {
@@ -479,7 +483,7 @@ impl WaylandWindow {
pub fn new(
handle: AnyWindowHandle,
globals: Globals,
- gpu_context: &BladeContext,
+ gpu_context: &WgpuContext,
client: WaylandClientStatePtr,
params: WindowParams,
appearance: WindowAppearance,
@@ -50,7 +50,6 @@ use super::{
use crate::platform::{
LinuxCommon, PlatformWindow,
- blade::BladeContext,
linux::{
DEFAULT_CURSOR_ICON_NAME, LinuxClient, get_xkb_compose_state, is_within_click_distance,
log_cursor_icon_warning, open_uri_internal,
@@ -58,6 +57,7 @@ use crate::platform::{
reveal_path_internal,
xdg_desktop_portal::{Event as XDPEvent, XDPEventSource},
},
+ wgpu::WgpuContext,
};
use crate::{
AnyWindowHandle, Bounds, ClipboardItem, CursorStyle, DisplayId, FileDropEvent, Keystroke,
@@ -177,7 +177,7 @@ pub struct X11ClientState {
pub(crate) last_location: Point<Pixels>,
pub(crate) current_count: usize,
- pub(crate) gpu_context: BladeContext,
+ pub(crate) gpu_context: WgpuContext,
pub(crate) scale_factor: f32,
@@ -420,7 +420,7 @@ impl X11Client {
.to_string();
let keyboard_layout = LinuxKeyboardLayout::new(layout_name.into());
- let gpu_context = BladeContext::new().notify_err("Unable to init GPU context");
+ let gpu_context = WgpuContext::new().notify_err("Unable to init GPU context");
let resource_database = x11rb::resource_manager::new_from_default(&xcb_connection)
.context("Failed to create resource database")?;
@@ -1,16 +1,15 @@
use anyhow::{Context as _, anyhow};
use x11rb::connection::RequestConnection;
-use crate::platform::blade::{BladeContext, BladeRenderer, BladeSurfaceConfig};
+use crate::platform::wgpu::{WgpuContext, WgpuRenderer, WgpuSurfaceConfig};
use crate::{
AnyWindowHandle, Bounds, Decorations, DevicePixels, ForegroundExecutor, GpuSpecs, Modifiers,
Pixels, PlatformAtlas, PlatformDisplay, PlatformInput, PlatformInputHandler, PlatformWindow,
Point, PromptButton, PromptLevel, RequestFrameOptions, ResizeEdge, ScaledPixels, Scene, Size,
Tiling, WindowAppearance, WindowBackgroundAppearance, WindowBounds, WindowControlArea,
- WindowDecorations, WindowKind, WindowParams, X11ClientStatePtr, px, size,
+ WindowDecorations, WindowKind, WindowParams, X11ClientStatePtr, px,
};
-use blade_graphics as gpu;
use collections::FxHashSet;
use raw_window_handle as rwh;
use util::{ResultExt, maybe};
@@ -89,12 +88,11 @@ x11rb::atom_manager! {
fn query_render_extent(
xcb: &Rc<XCBConnection>,
x_window: xproto::Window,
-) -> anyhow::Result<gpu::Extent> {
+) -> anyhow::Result<Size<DevicePixels>> {
let reply = get_reply(|| "X11 GetGeometry failed.", xcb.get_geometry(x_window))?;
- Ok(gpu::Extent {
- width: reply.width as u32,
- height: reply.height as u32,
- depth: 1,
+ Ok(Size {
+ width: DevicePixels(reply.width as i32),
+ height: DevicePixels(reply.height as i32),
})
}
@@ -236,6 +234,12 @@ struct RawWindow {
visual_id: u32,
}
+// Safety: The raw pointer in RawWindow points to the X11 connection,
+// which remains valid for the window's lifetime. It is only passed to
+// wgpu, which requires Send + Sync for surface creation.
+unsafe impl Send for RawWindow {}
+unsafe impl Sync for RawWindow {}
+
#[derive(Default)]
pub struct Callbacks {
request_frame: Option<Box<dyn FnMut(RequestFrameOptions)>>,
@@ -261,7 +265,7 @@ pub struct X11WindowState {
pub(crate) last_sync_counter: Option<sync::Int64>,
bounds: Bounds<Pixels>,
scale_factor: f32,
- renderer: BladeRenderer,
+ renderer: WgpuRenderer,
display: Rc<dyn PlatformDisplay>,
input_handler: Option<PlatformInputHandler>,
appearance: WindowAppearance,
@@ -389,7 +393,7 @@ impl X11WindowState {
handle: AnyWindowHandle,
client: X11ClientStatePtr,
executor: ForegroundExecutor,
- gpu_context: &BladeContext,
+ gpu_context: &WgpuContext,
params: WindowParams,
xcb: &Rc<XCBConnection>,
client_side_decorations_supported: bool,
@@ -682,7 +686,7 @@ impl X11WindowState {
window_id: x_window,
visual_id: visual.id,
};
- let config = BladeSurfaceConfig {
+ let config = WgpuSurfaceConfig {
// Note: this has to be done after the GPU init, or otherwise
// the sizes are immediately invalidated.
size: query_render_extent(xcb, x_window)?,
@@ -692,7 +696,7 @@ impl X11WindowState {
// too
transparent: false,
};
- BladeRenderer::new(gpu_context, &raw_window, config)?
+ WgpuRenderer::new(gpu_context, &raw_window, config)?
};
let display = Rc::new(X11Display::new(xcb, scale_factor, x_screen_index)?);
@@ -740,11 +744,7 @@ impl X11WindowState {
}
fn content_size(&self) -> Size<Pixels> {
- let size = self.renderer.viewport_size();
- Size {
- width: size.width.into(),
- height: size.height.into(),
- }
+ self.bounds.size
}
}
@@ -800,7 +800,7 @@ impl X11Window {
handle: AnyWindowHandle,
client: X11ClientStatePtr,
executor: ForegroundExecutor,
- gpu_context: &BladeContext,
+ gpu_context: &WgpuContext,
params: WindowParams,
xcb: &Rc<XCBConnection>,
client_side_decorations_supported: bool,
@@ -1167,10 +1167,7 @@ impl X11WindowStatePtr {
let gpu_size = query_render_extent(&self.xcb, self.x_window)?;
if true {
- state.renderer.update_drawable_size(size(
- DevicePixels(gpu_size.width as i32),
- DevicePixels(gpu_size.height as i32),
- ));
+ state.renderer.update_drawable_size(gpu_size);
resize_args = Some((state.content_size(), state.scale_factor));
}
if let Some(value) = state.last_sync_counter.take() {
@@ -10,18 +10,12 @@ mod pasteboard;
#[cfg(feature = "screen-capture")]
mod screen_capture;
-#[cfg(not(feature = "macos-blade"))]
mod metal_atlas;
-#[cfg(not(feature = "macos-blade"))]
pub mod metal_renderer;
use core_video::image_buffer::CVImageBuffer;
-#[cfg(not(feature = "macos-blade"))]
use metal_renderer as renderer;
-#[cfg(feature = "macos-blade")]
-use crate::platform::blade as renderer;
-
#[cfg(feature = "font-kit")]
mod open_type;
@@ -2116,7 +2116,6 @@ extern "C" fn window_did_change_key_status(this: &Object, selector: Sel, _: id)
if lock.activated_least_once {
if let Some(mut callback) = lock.request_frame_callback.take() {
- #[cfg(not(feature = "macos-blade"))]
lock.renderer.set_presents_with_transaction(true);
lock.stop_display_link();
drop(lock);
@@ -2124,7 +2123,6 @@ extern "C" fn window_did_change_key_status(this: &Object, selector: Sel, _: id)
let mut lock = window_state.lock();
lock.request_frame_callback = Some(callback);
- #[cfg(not(feature = "macos-blade"))]
lock.renderer.set_presents_with_transaction(false);
lock.start_display_link();
}
@@ -2224,7 +2222,6 @@ extern "C" fn display_layer(this: &Object, _: Sel, _: id) {
let window_state = unsafe { get_window_state(this) };
let mut lock = window_state.lock();
if let Some(mut callback) = lock.request_frame_callback.take() {
- #[cfg(not(feature = "macos-blade"))]
lock.renderer.set_presents_with_transaction(true);
lock.stop_display_link();
drop(lock);
@@ -2232,7 +2229,6 @@ extern "C" fn display_layer(this: &Object, _: Sel, _: id) {
let mut lock = window_state.lock();
lock.request_frame_callback = Some(callback);
- #[cfg(not(feature = "macos-blade"))]
lock.renderer.set_presents_with_transaction(false);
lock.start_display_link();
}
@@ -0,0 +1,7 @@
+mod wgpu_atlas;
+mod wgpu_context;
+mod wgpu_renderer;
+
+pub(crate) use wgpu_atlas::*;
+pub(crate) use wgpu_context::*;
+pub(crate) use wgpu_renderer::*;
@@ -84,12 +84,17 @@ struct GlobalParams {
pad: u32,
}
-var<uniform> globals: GlobalParams;
-var<uniform> gamma_ratios: vec4<f32>;
-var<uniform> grayscale_enhanced_contrast: f32;
-var<uniform> subpixel_enhanced_contrast: f32;
-var t_sprite: texture_2d<f32>;
-var s_sprite: sampler;
+struct GammaParams {
+ gamma_ratios: vec4<f32>,
+ grayscale_enhanced_contrast: f32,
+ subpixel_enhanced_contrast: f32,
+ pad: vec2<f32>,
+}
+
+@group(0) @binding(0) var<uniform> globals: GlobalParams;
+@group(0) @binding(1) var<uniform> gamma_params: GammaParams;
+@group(1) @binding(1) var t_sprite: texture_2d<f32>;
+@group(1) @binding(2) var s_sprite: sampler;
const M_PI_F: f32 = 3.1415926;
const GRAYSCALE_FACTORS: vec3<f32> = vec3<f32>(0.2126, 0.7152, 0.0722);
@@ -521,7 +526,7 @@ struct Quad {
corner_radii: Corners,
border_widths: Edges,
}
-var<storage, read> b_quads: array<Quad>;
+@group(1) @binding(0) var<storage, read> b_quads: array<Quad>;
struct QuadVarying {
@builtin(position) position: vec4<f32>,
@@ -951,7 +956,7 @@ struct Shadow {
content_mask: Bounds,
color: Hsla,
}
-var<storage, read> b_shadows: array<Shadow>;
+@group(1) @binding(0) var<storage, read> b_shadows: array<Shadow>;
struct ShadowVarying {
@builtin(position) position: vec4<f32>,
@@ -1023,7 +1028,7 @@ struct PathRasterizationVertex {
bounds: Bounds,
}
-var<storage, read> b_path_vertices: array<PathRasterizationVertex>;
+@group(1) @binding(0) var<storage, read> b_path_vertices: array<PathRasterizationVertex>;
struct PathRasterizationVarying {
@builtin(position) position: vec4<f32>,
@@ -1083,7 +1088,7 @@ fn fs_path_rasterization(input: PathRasterizationVarying) -> @location(0) vec4<f
struct PathSprite {
bounds: Bounds,
}
-var<storage, read> b_path_sprites: array<PathSprite>;
+@group(1) @binding(0) var<storage, read> b_path_sprites: array<PathSprite>;
struct PathVarying {
@builtin(position) position: vec4<f32>,
@@ -1124,7 +1129,7 @@ struct Underline {
thickness: f32,
wavy: u32,
}
-var<storage, read> b_underlines: array<Underline>;
+@group(1) @binding(0) var<storage, read> b_underlines: array<Underline>;
struct UnderlineVarying {
@builtin(position) position: vec4<f32>,
@@ -1190,7 +1195,7 @@ struct MonochromeSprite {
tile: AtlasTile,
transformation: TransformationMatrix,
}
-var<storage, read> b_mono_sprites: array<MonochromeSprite>;
+@group(1) @binding(0) var<storage, read> b_mono_sprites: array<MonochromeSprite>;
struct MonoSpriteVarying {
@builtin(position) position: vec4<f32>,
@@ -1216,7 +1221,7 @@ fn vs_mono_sprite(@builtin(vertex_index) vertex_id: u32, @builtin(instance_index
@fragment
fn fs_mono_sprite(input: MonoSpriteVarying) -> @location(0) vec4<f32> {
let sample = textureSample(t_sprite, s_sprite, input.tile_position).r;
- let alpha_corrected = apply_contrast_and_gamma_correction(sample, input.color.rgb, grayscale_enhanced_contrast, gamma_ratios);
+ let alpha_corrected = apply_contrast_and_gamma_correction(sample, input.color.rgb, gamma_params.grayscale_enhanced_contrast, gamma_params.gamma_ratios);
// Alpha clip after using the derivatives.
if (any(input.clip_distances < vec4<f32>(0.0))) {
@@ -1238,7 +1243,7 @@ struct PolychromeSprite {
corner_radii: Corners,
tile: AtlasTile,
}
-var<storage, read> b_poly_sprites: array<PolychromeSprite>;
+@group(1) @binding(0) var<storage, read> b_poly_sprites: array<PolychromeSprite>;
struct PolySpriteVarying {
@builtin(position) position: vec4<f32>,
@@ -1286,10 +1291,10 @@ struct SurfaceParams {
content_mask: Bounds,
}
-var<uniform> surface_locals: SurfaceParams;
-var t_y: texture_2d<f32>;
-var t_cb_cr: texture_2d<f32>;
-var s_surface: sampler;
+@group(1) @binding(0) var<uniform> surface_locals: SurfaceParams;
+@group(1) @binding(1) var t_y: texture_2d<f32>;
+@group(1) @binding(2) var t_cb_cr: texture_2d<f32>;
+@group(1) @binding(3) var s_surface: sampler;
const ycbcr_to_RGB = mat4x4<f32>(
vec4<f32>( 1.0000f, 1.0000f, 1.0000f, 0.0),
@@ -1341,7 +1346,7 @@ struct SubpixelSprite {
tile: AtlasTile,
transformation: TransformationMatrix,
}
-var<storage, read> b_subpixel_sprites: array<SubpixelSprite>;
+@group(1) @binding(0) var<storage, read> b_subpixel_sprites: array<SubpixelSprite>;
struct SubpixelSpriteOutput {
@builtin(position) position: vec4<f32>,
@@ -1371,7 +1376,7 @@ fn vs_subpixel_sprite(@builtin(vertex_index) vertex_id: u32, @builtin(instance_i
@fragment
fn fs_subpixel_sprite(input: SubpixelSpriteOutput) -> SubpixelSpriteFragmentOutput {
let sample = textureSample(t_sprite, s_sprite, input.tile_position).rgb;
- let alpha_corrected = apply_contrast_and_gamma_correction3(sample, input.color.rgb, subpixel_enhanced_contrast, gamma_ratios);
+ let alpha_corrected = apply_contrast_and_gamma_correction3(sample, input.color.rgb, gamma_params.subpixel_enhanced_contrast, gamma_params.gamma_ratios);
// Alpha clip after using the derivatives.
if (any(input.clip_distances < vec4<f32>(0.0))) {
@@ -0,0 +1,320 @@
+use crate::{
+ AtlasKey, AtlasTextureId, AtlasTextureKind, AtlasTile, Bounds, DevicePixels, PlatformAtlas,
+ Point, Size, platform::AtlasTextureList,
+};
+use anyhow::Result;
+use collections::FxHashMap;
+use etagere::{BucketedAtlasAllocator, size2};
+use parking_lot::Mutex;
+use std::{borrow::Cow, ops, sync::Arc};
+
+fn device_size_to_etagere(size: Size<DevicePixels>) -> etagere::Size {
+ size2(size.width.0, size.height.0)
+}
+
+fn etagere_point_to_device(point: etagere::Point) -> Point<DevicePixels> {
+ Point {
+ x: DevicePixels(point.x),
+ y: DevicePixels(point.y),
+ }
+}
+
+pub(crate) struct WgpuAtlas(Mutex<WgpuAtlasState>);
+
+struct PendingUpload {
+ id: AtlasTextureId,
+ bounds: Bounds<DevicePixels>,
+ data: Vec<u8>,
+}
+
+struct WgpuAtlasState {
+ device: Arc<wgpu::Device>,
+ queue: Arc<wgpu::Queue>,
+ storage: WgpuAtlasStorage,
+ tiles_by_key: FxHashMap<AtlasKey, AtlasTile>,
+ pending_uploads: Vec<PendingUpload>,
+}
+
+pub struct WgpuTextureInfo {
+ pub view: wgpu::TextureView,
+}
+
+impl WgpuAtlas {
+ pub(crate) fn new(device: Arc<wgpu::Device>, queue: Arc<wgpu::Queue>) -> Self {
+ WgpuAtlas(Mutex::new(WgpuAtlasState {
+ device,
+ queue,
+ storage: WgpuAtlasStorage::default(),
+ tiles_by_key: Default::default(),
+ pending_uploads: Vec::new(),
+ }))
+ }
+
+ pub fn before_frame(&self) {
+ let mut lock = self.0.lock();
+ lock.flush_uploads();
+ }
+
+ pub fn get_texture_info(&self, id: AtlasTextureId) -> WgpuTextureInfo {
+ let lock = self.0.lock();
+ let texture = &lock.storage[id];
+ WgpuTextureInfo {
+ view: texture.view.clone(),
+ }
+ }
+}
+
+impl PlatformAtlas for WgpuAtlas {
+ fn get_or_insert_with<'a>(
+ &self,
+ key: &AtlasKey,
+ build: &mut dyn FnMut() -> Result<Option<(Size<DevicePixels>, Cow<'a, [u8]>)>>,
+ ) -> Result<Option<AtlasTile>> {
+ let mut lock = self.0.lock();
+ if let Some(tile) = lock.tiles_by_key.get(key) {
+ Ok(Some(tile.clone()))
+ } else {
+ profiling::scope!("new tile");
+ let Some((size, bytes)) = build()? else {
+ return Ok(None);
+ };
+ let tile = lock.allocate(size, key.texture_kind());
+ lock.upload_texture(tile.texture_id, tile.bounds, &bytes);
+ lock.tiles_by_key.insert(key.clone(), tile.clone());
+ Ok(Some(tile))
+ }
+ }
+
+ fn remove(&self, key: &AtlasKey) {
+ let mut lock = self.0.lock();
+
+ let Some(id) = lock.tiles_by_key.remove(key).map(|tile| tile.texture_id) else {
+ return;
+ };
+
+ let Some(texture_slot) = lock.storage[id.kind].textures.get_mut(id.index as usize) else {
+ return;
+ };
+
+ if let Some(mut texture) = texture_slot.take() {
+ texture.decrement_ref_count();
+ if texture.is_unreferenced() {
+ lock.storage[id.kind]
+ .free_list
+ .push(texture.id.index as usize);
+ } else {
+ *texture_slot = Some(texture);
+ }
+ }
+ }
+}
+
+impl WgpuAtlasState {
+ fn allocate(&mut self, size: Size<DevicePixels>, texture_kind: AtlasTextureKind) -> AtlasTile {
+ {
+ let textures = &mut self.storage[texture_kind];
+
+ if let Some(tile) = textures
+ .iter_mut()
+ .rev()
+ .find_map(|texture| texture.allocate(size))
+ {
+ return tile;
+ }
+ }
+
+ let texture = self.push_texture(size, texture_kind);
+ texture
+ .allocate(size)
+ .expect("Failed to allocate from newly created texture")
+ }
+
+ fn push_texture(
+ &mut self,
+ min_size: Size<DevicePixels>,
+ kind: AtlasTextureKind,
+ ) -> &mut WgpuAtlasTexture {
+ const DEFAULT_ATLAS_SIZE: Size<DevicePixels> = Size {
+ width: DevicePixels(1024),
+ height: DevicePixels(1024),
+ };
+
+ let size = min_size.max(&DEFAULT_ATLAS_SIZE);
+ let format = match kind {
+ AtlasTextureKind::Monochrome => wgpu::TextureFormat::R8Unorm,
+ AtlasTextureKind::Subpixel => wgpu::TextureFormat::Bgra8Unorm,
+ AtlasTextureKind::Polychrome => wgpu::TextureFormat::Bgra8Unorm,
+ };
+
+ let texture = self.device.create_texture(&wgpu::TextureDescriptor {
+ label: Some("atlas"),
+ size: wgpu::Extent3d {
+ width: size.width.0 as u32,
+ height: size.height.0 as u32,
+ depth_or_array_layers: 1,
+ },
+ mip_level_count: 1,
+ sample_count: 1,
+ dimension: wgpu::TextureDimension::D2,
+ format,
+ usage: wgpu::TextureUsages::TEXTURE_BINDING | wgpu::TextureUsages::COPY_DST,
+ view_formats: &[],
+ });
+
+ let view = texture.create_view(&wgpu::TextureViewDescriptor::default());
+
+ let texture_list = &mut self.storage[kind];
+ let index = texture_list.free_list.pop();
+
+ let atlas_texture = WgpuAtlasTexture {
+ id: AtlasTextureId {
+ index: index.unwrap_or(texture_list.textures.len()) as u32,
+ kind,
+ },
+ allocator: BucketedAtlasAllocator::new(device_size_to_etagere(size)),
+ format,
+ texture,
+ view,
+ live_atlas_keys: 0,
+ };
+
+        let ix = match index {
+            Some(ix) => {
+                texture_list.textures[ix] = Some(atlas_texture);
+                ix
+            }
+            None => {
+                texture_list.textures.push(Some(atlas_texture));
+                texture_list.textures.len() - 1
+            }
+        };
+        texture_list.textures[ix]
+            .as_mut()
+            .expect("texture must exist")
+ }
+
+ fn upload_texture(&mut self, id: AtlasTextureId, bounds: Bounds<DevicePixels>, bytes: &[u8]) {
+ self.pending_uploads.push(PendingUpload {
+ id,
+ bounds,
+ data: bytes.to_vec(),
+ });
+ }
+
+ fn flush_uploads(&mut self) {
+ for upload in self.pending_uploads.drain(..) {
+ let texture = &self.storage[upload.id];
+ let bytes_per_pixel = texture.bytes_per_pixel();
+
+ self.queue.write_texture(
+ wgpu::TexelCopyTextureInfo {
+ texture: &texture.texture,
+ mip_level: 0,
+ origin: wgpu::Origin3d {
+ x: upload.bounds.origin.x.0 as u32,
+ y: upload.bounds.origin.y.0 as u32,
+ z: 0,
+ },
+ aspect: wgpu::TextureAspect::All,
+ },
+ &upload.data,
+ wgpu::TexelCopyBufferLayout {
+ offset: 0,
+ bytes_per_row: Some(upload.bounds.size.width.0 as u32 * bytes_per_pixel as u32),
+ rows_per_image: None,
+ },
+ wgpu::Extent3d {
+ width: upload.bounds.size.width.0 as u32,
+ height: upload.bounds.size.height.0 as u32,
+ depth_or_array_layers: 1,
+ },
+ );
+ }
+ }
+}
+
+#[derive(Default)]
+struct WgpuAtlasStorage {
+ monochrome_textures: AtlasTextureList<WgpuAtlasTexture>,
+ subpixel_textures: AtlasTextureList<WgpuAtlasTexture>,
+ polychrome_textures: AtlasTextureList<WgpuAtlasTexture>,
+}
+
+impl ops::Index<AtlasTextureKind> for WgpuAtlasStorage {
+ type Output = AtlasTextureList<WgpuAtlasTexture>;
+ fn index(&self, kind: AtlasTextureKind) -> &Self::Output {
+ match kind {
+ AtlasTextureKind::Monochrome => &self.monochrome_textures,
+ AtlasTextureKind::Subpixel => &self.subpixel_textures,
+ AtlasTextureKind::Polychrome => &self.polychrome_textures,
+ }
+ }
+}
+
+impl ops::IndexMut<AtlasTextureKind> for WgpuAtlasStorage {
+ fn index_mut(&mut self, kind: AtlasTextureKind) -> &mut Self::Output {
+ match kind {
+ AtlasTextureKind::Monochrome => &mut self.monochrome_textures,
+ AtlasTextureKind::Subpixel => &mut self.subpixel_textures,
+ AtlasTextureKind::Polychrome => &mut self.polychrome_textures,
+ }
+ }
+}
+
+impl ops::Index<AtlasTextureId> for WgpuAtlasStorage {
+ type Output = WgpuAtlasTexture;
+ fn index(&self, id: AtlasTextureId) -> &Self::Output {
+ let textures = match id.kind {
+ AtlasTextureKind::Monochrome => &self.monochrome_textures,
+ AtlasTextureKind::Subpixel => &self.subpixel_textures,
+ AtlasTextureKind::Polychrome => &self.polychrome_textures,
+ };
+ textures[id.index as usize]
+ .as_ref()
+ .expect("texture must exist")
+ }
+}
+
+struct WgpuAtlasTexture {
+ id: AtlasTextureId,
+ allocator: BucketedAtlasAllocator,
+ texture: wgpu::Texture,
+ view: wgpu::TextureView,
+ format: wgpu::TextureFormat,
+ live_atlas_keys: u32,
+}
+
+impl WgpuAtlasTexture {
+ fn allocate(&mut self, size: Size<DevicePixels>) -> Option<AtlasTile> {
+ let allocation = self.allocator.allocate(device_size_to_etagere(size))?;
+ let tile = AtlasTile {
+ texture_id: self.id,
+ tile_id: allocation.id.into(),
+ padding: 0,
+ bounds: Bounds {
+ origin: etagere_point_to_device(allocation.rectangle.min),
+ size,
+ },
+ };
+ self.live_atlas_keys += 1;
+ Some(tile)
+ }
+
+    fn bytes_per_pixel(&self) -> u8 {
+        match self.format {
+            wgpu::TextureFormat::R8Unorm => 1,
+            wgpu::TextureFormat::Bgra8Unorm => 4,
+            // `push_texture` only creates the two formats above; default to
+            // 4 bytes per pixel if another format is ever added.
+            _ => 4,
+        }
+    }
+
+ fn decrement_ref_count(&mut self) {
+ self.live_atlas_keys -= 1;
+ }
+
+ fn is_unreferenced(&self) -> bool {
+ self.live_atlas_keys == 0
+ }
+}
@@ -0,0 +1,169 @@
+use anyhow::Context as _;
+use std::sync::Arc;
+use util::ResultExt;
+
+pub struct WgpuContext {
+ pub instance: wgpu::Instance,
+ pub adapter: wgpu::Adapter,
+ pub device: Arc<wgpu::Device>,
+ pub queue: Arc<wgpu::Queue>,
+ dual_source_blending: bool,
+}
+
+impl WgpuContext {
+ pub fn new() -> anyhow::Result<Self> {
+ let device_id_filter = match std::env::var("ZED_DEVICE_ID") {
+ Ok(val) => parse_pci_id(&val)
+ .context("Failed to parse device ID from `ZED_DEVICE_ID` environment variable")
+ .log_err(),
+ Err(std::env::VarError::NotPresent) => None,
+ err => {
+ err.context("Failed to read value of `ZED_DEVICE_ID` environment variable")
+ .log_err();
+ None
+ }
+ };
+
+ let instance = wgpu::Instance::new(&wgpu::InstanceDescriptor {
+ backends: wgpu::Backends::VULKAN | wgpu::Backends::GL,
+ flags: wgpu::InstanceFlags::default(),
+ backend_options: wgpu::BackendOptions::default(),
+ memory_budget_thresholds: wgpu::MemoryBudgetThresholds::default(),
+ });
+
+ let adapter = smol::block_on(Self::select_adapter(&instance, device_id_filter))?;
+
+        log::info!(
+            "Selected GPU adapter: {} ({:?})",
+            adapter.get_info().name,
+            adapter.get_info().backend
+        );
+
+ let dual_source_blending_available = adapter
+ .features()
+ .contains(wgpu::Features::DUAL_SOURCE_BLENDING);
+
+ let mut required_features = wgpu::Features::empty();
+ if dual_source_blending_available {
+ required_features |= wgpu::Features::DUAL_SOURCE_BLENDING;
+ } else {
+ log::warn!(
+ "Dual-source blending not available on this GPU. \
+ Subpixel text antialiasing will be disabled."
+ );
+ }
+
+ let (device, queue) = smol::block_on(adapter.request_device(&wgpu::DeviceDescriptor {
+ label: Some("gpui_device"),
+ required_features,
+ required_limits: wgpu::Limits::default(),
+ memory_hints: wgpu::MemoryHints::MemoryUsage,
+ trace: wgpu::Trace::Off,
+ experimental_features: wgpu::ExperimentalFeatures::disabled(),
+ }))
+ .map_err(|e| anyhow::anyhow!("Failed to create wgpu device: {e}"))?;
+
+ Ok(Self {
+ instance,
+ adapter,
+ device: Arc::new(device),
+ queue: Arc::new(queue),
+ dual_source_blending: dual_source_blending_available,
+ })
+ }
+
+ async fn select_adapter(
+ instance: &wgpu::Instance,
+ device_id_filter: Option<u32>,
+ ) -> anyhow::Result<wgpu::Adapter> {
+ if let Some(device_id) = device_id_filter {
+ let adapters: Vec<_> = instance.enumerate_adapters(wgpu::Backends::all()).await;
+
+ if adapters.is_empty() {
+ anyhow::bail!("No GPU adapters found");
+ }
+
+ let mut non_matching_adapter_infos: Vec<wgpu::AdapterInfo> = Vec::new();
+
+ for adapter in adapters.into_iter() {
+ let info = adapter.get_info();
+ if info.device == device_id {
+ log::info!(
+ "Found GPU matching ZED_DEVICE_ID={:#06x}: {}",
+ device_id,
+ info.name
+ );
+ return Ok(adapter);
+ } else {
+ non_matching_adapter_infos.push(info);
+ }
+ }
+
+ log::warn!(
+ "No GPU found matching ZED_DEVICE_ID={:#06x}. Available devices:",
+ device_id
+ );
+
+ for info in &non_matching_adapter_infos {
+ log::warn!(
+ " - {} (device_id={:#06x}, backend={})",
+ info.name,
+ info.device,
+ info.backend
+ );
+ }
+ }
+
+ instance
+ .request_adapter(&wgpu::RequestAdapterOptions {
+ power_preference: wgpu::PowerPreference::None,
+ compatible_surface: None,
+ force_fallback_adapter: false,
+ })
+ .await
+ .map_err(|e| anyhow::anyhow!("Failed to request GPU adapter: {e}"))
+ }
+
+ pub fn supports_dual_source_blending(&self) -> bool {
+ self.dual_source_blending
+ }
+}
+
+fn parse_pci_id(id: &str) -> anyhow::Result<u32> {
+    let id = id.trim();
+    let id = id
+        .strip_prefix("0x")
+        .or_else(|| id.strip_prefix("0X"))
+        .unwrap_or(id);
+    let is_hex_string = id.chars().all(|c| c.is_ascii_hexdigit());
+    let is_4_chars = id.len() == 4;
+    anyhow::ensure!(
+        is_4_chars && is_hex_string,
+        "Expected a 4-digit PCI ID in hexadecimal format"
+    );
+
+ u32::from_str_radix(id, 16).context("parsing PCI ID as hex")
+}
+
+#[cfg(test)]
+mod tests {
+ use super::parse_pci_id;
+
+ #[test]
+ fn test_parse_device_id() {
+ assert!(parse_pci_id("0xABCD").is_ok());
+ assert!(parse_pci_id("ABCD").is_ok());
+ assert!(parse_pci_id("abcd").is_ok());
+ assert!(parse_pci_id("1234").is_ok());
+ assert!(parse_pci_id("123").is_err());
+ assert_eq!(
+ parse_pci_id(&format!("{:x}", 0x1234)).unwrap(),
+ parse_pci_id(&format!("{:X}", 0x1234)).unwrap(),
+ );
+
+ assert_eq!(
+ parse_pci_id(&format!("{:#x}", 0x1234)).unwrap(),
+ parse_pci_id(&format!("{:#X}", 0x1234)).unwrap(),
+ );
+ }
+}
@@ -0,0 +1,1390 @@
+use super::{WgpuAtlas, WgpuContext};
+use crate::{
+ AtlasTextureId, Background, Bounds, DevicePixels, GpuSpecs, MonochromeSprite, Path, Point,
+ PolychromeSprite, PrimitiveBatch, Quad, ScaledPixels, Scene, Shadow, Size, SubpixelSprite,
+ Underline, get_gamma_correction_ratios,
+};
+use bytemuck::{Pod, Zeroable};
+use raw_window_handle::{HasDisplayHandle, HasWindowHandle};
+use std::num::NonZeroU64;
+use std::sync::Arc;
+
+#[repr(C)]
+#[derive(Clone, Copy, Pod, Zeroable)]
+struct GlobalParams {
+ viewport_size: [f32; 2],
+ premultiplied_alpha: u32,
+ pad: u32,
+}
+
+#[repr(C)]
+#[derive(Clone, Copy, Pod, Zeroable)]
+struct PodBounds {
+ origin: [f32; 2],
+ size: [f32; 2],
+}
+
+impl From<Bounds<ScaledPixels>> for PodBounds {
+ fn from(bounds: Bounds<ScaledPixels>) -> Self {
+ Self {
+ origin: [bounds.origin.x.0, bounds.origin.y.0],
+ size: [bounds.size.width.0, bounds.size.height.0],
+ }
+ }
+}
+
+#[repr(C)]
+#[derive(Clone, Copy, Pod, Zeroable)]
+struct SurfaceParams {
+ bounds: PodBounds,
+ content_mask: PodBounds,
+}
+
+#[repr(C)]
+#[derive(Clone, Copy, Pod, Zeroable)]
+struct GammaParams {
+ gamma_ratios: [f32; 4],
+ grayscale_enhanced_contrast: f32,
+ subpixel_enhanced_contrast: f32,
+ _pad: [f32; 2],
+}
+
+#[derive(Clone, Debug)]
+#[repr(C)]
+struct PathSprite {
+ bounds: Bounds<ScaledPixels>,
+}
+
+#[derive(Clone, Debug)]
+#[repr(C)]
+struct PathRasterizationVertex {
+ xy_position: Point<ScaledPixels>,
+ st_position: Point<f32>,
+ color: Background,
+ bounds: Bounds<ScaledPixels>,
+}
+
+pub struct WgpuSurfaceConfig {
+ pub size: Size<DevicePixels>,
+ pub transparent: bool,
+}
+
+struct WgpuPipelines {
+ quads: wgpu::RenderPipeline,
+ shadows: wgpu::RenderPipeline,
+ path_rasterization: wgpu::RenderPipeline,
+ paths: wgpu::RenderPipeline,
+ underlines: wgpu::RenderPipeline,
+ mono_sprites: wgpu::RenderPipeline,
+ subpixel_sprites: Option<wgpu::RenderPipeline>,
+ poly_sprites: wgpu::RenderPipeline,
+ #[allow(dead_code)]
+ surfaces: wgpu::RenderPipeline,
+}
+
+struct WgpuBindGroupLayouts {
+ globals: wgpu::BindGroupLayout,
+ instances: wgpu::BindGroupLayout,
+ instances_with_texture: wgpu::BindGroupLayout,
+ surfaces: wgpu::BindGroupLayout,
+}
+
+pub struct WgpuRenderer {
+ device: Arc<wgpu::Device>,
+ queue: Arc<wgpu::Queue>,
+ surface: wgpu::Surface<'static>,
+ surface_config: wgpu::SurfaceConfiguration,
+ pipelines: WgpuPipelines,
+ bind_group_layouts: WgpuBindGroupLayouts,
+ atlas: Arc<WgpuAtlas>,
+ atlas_sampler: wgpu::Sampler,
+ globals_buffer: wgpu::Buffer,
+ path_globals_offset: u64,
+ gamma_offset: u64,
+ globals_bind_group: wgpu::BindGroup,
+ path_globals_bind_group: wgpu::BindGroup,
+ instance_buffer: wgpu::Buffer,
+ instance_buffer_capacity: u64,
+ storage_buffer_alignment: u64,
+ path_intermediate_texture: wgpu::Texture,
+ path_intermediate_view: wgpu::TextureView,
+ path_msaa_texture: Option<wgpu::Texture>,
+ path_msaa_view: Option<wgpu::TextureView>,
+ rendering_params: RenderingParameters,
+ dual_source_blending: bool,
+ adapter_info: wgpu::AdapterInfo,
+ transparent_alpha_mode: wgpu::CompositeAlphaMode,
+ opaque_alpha_mode: wgpu::CompositeAlphaMode,
+}
+
+impl WgpuRenderer {
+ /// Creates a new WgpuRenderer from raw window handles.
+ ///
+ /// # Safety
+ /// The caller must ensure that the window handle remains valid for the lifetime
+ /// of the returned renderer.
+ pub fn new<W: HasWindowHandle + HasDisplayHandle>(
+ context: &WgpuContext,
+ window: &W,
+ config: WgpuSurfaceConfig,
+ ) -> anyhow::Result<Self> {
+ let window_handle = window
+ .window_handle()
+ .map_err(|e| anyhow::anyhow!("Failed to get window handle: {e}"))?;
+ let display_handle = window
+ .display_handle()
+ .map_err(|e| anyhow::anyhow!("Failed to get display handle: {e}"))?;
+
+ let target = wgpu::SurfaceTargetUnsafe::RawHandle {
+ raw_display_handle: display_handle.as_raw(),
+ raw_window_handle: window_handle.as_raw(),
+ };
+
+ // Safety: The caller guarantees that the window handle is valid for the
+ // lifetime of this renderer. In practice, the RawWindow struct is created
+ // from the native window handles and the surface is dropped before the window.
+ let surface = unsafe {
+ context
+ .instance
+ .create_surface_unsafe(target)
+ .map_err(|e| anyhow::anyhow!("Failed to create surface: {e}"))?
+ };
+
+ let surface_caps = surface.get_capabilities(&context.adapter);
+ // Prefer standard 8-bit non-sRGB formats that don't require special features.
+ // Other formats like Rgba16Unorm require TEXTURE_FORMAT_16BIT_NORM which may
+ // not be available on all devices.
+ let preferred_formats = [
+ wgpu::TextureFormat::Bgra8Unorm,
+ wgpu::TextureFormat::Rgba8Unorm,
+ ];
+ let surface_format = preferred_formats
+ .iter()
+ .find(|f| surface_caps.formats.contains(f))
+ .copied()
+ .or_else(|| surface_caps.formats.iter().find(|f| !f.is_srgb()).copied())
+ .unwrap_or(surface_caps.formats[0]);
+
+ let pick_alpha_mode =
+ |preferences: &[wgpu::CompositeAlphaMode]| -> wgpu::CompositeAlphaMode {
+ preferences
+ .iter()
+ .find(|p| surface_caps.alpha_modes.contains(p))
+ .copied()
+ .unwrap_or(surface_caps.alpha_modes[0])
+ };
+
+ let transparent_alpha_mode = pick_alpha_mode(&[
+ wgpu::CompositeAlphaMode::PreMultiplied,
+ wgpu::CompositeAlphaMode::Inherit,
+ ]);
+
+ let opaque_alpha_mode = pick_alpha_mode(&[
+ wgpu::CompositeAlphaMode::Opaque,
+ wgpu::CompositeAlphaMode::Inherit,
+ ]);
+
+ let alpha_mode = if config.transparent {
+ transparent_alpha_mode
+ } else {
+ opaque_alpha_mode
+ };
+
+ let surface_config = wgpu::SurfaceConfiguration {
+ usage: wgpu::TextureUsages::RENDER_ATTACHMENT,
+ format: surface_format,
+ width: config.size.width.0 as u32,
+ height: config.size.height.0 as u32,
+ present_mode: wgpu::PresentMode::Fifo,
+ desired_maximum_frame_latency: 2,
+ alpha_mode,
+ view_formats: vec![],
+ };
+ surface.configure(&context.device, &surface_config);
+
+ let device = Arc::clone(&context.device);
+ let queue = Arc::clone(&context.queue);
+ let dual_source_blending = context.supports_dual_source_blending();
+
+ let rendering_params = RenderingParameters::new(&context.adapter, surface_format);
+ let bind_group_layouts = Self::create_bind_group_layouts(&device);
+ let pipelines = Self::create_pipelines(
+ &device,
+ &bind_group_layouts,
+ surface_format,
+ alpha_mode,
+ rendering_params.path_sample_count,
+ dual_source_blending,
+ );
+
+ let atlas = Arc::new(WgpuAtlas::new(Arc::clone(&device), Arc::clone(&queue)));
+ let atlas_sampler = device.create_sampler(&wgpu::SamplerDescriptor {
+ label: Some("atlas_sampler"),
+ mag_filter: wgpu::FilterMode::Linear,
+ min_filter: wgpu::FilterMode::Linear,
+ ..Default::default()
+ });
+
+ let uniform_alignment = device.limits().min_uniform_buffer_offset_alignment as u64;
+ let globals_size = std::mem::size_of::<GlobalParams>() as u64;
+ let gamma_size = std::mem::size_of::<GammaParams>() as u64;
+ let path_globals_offset = globals_size.next_multiple_of(uniform_alignment);
+ let gamma_offset = (path_globals_offset + globals_size).next_multiple_of(uniform_alignment);
+
+ let globals_buffer = device.create_buffer(&wgpu::BufferDescriptor {
+ label: Some("globals_buffer"),
+ size: gamma_offset + gamma_size,
+ usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
+ mapped_at_creation: false,
+ });
+
+ let storage_buffer_alignment = device.limits().min_storage_buffer_offset_alignment as u64;
+ let initial_instance_buffer_capacity = 2 * 1024 * 1024;
+ let instance_buffer = device.create_buffer(&wgpu::BufferDescriptor {
+ label: Some("instance_buffer"),
+ size: initial_instance_buffer_capacity,
+ usage: wgpu::BufferUsages::STORAGE | wgpu::BufferUsages::COPY_DST,
+ mapped_at_creation: false,
+ });
+
+ let (path_intermediate_texture, path_intermediate_view) = Self::create_path_intermediate(
+ &device,
+ surface_format,
+ config.size.width.0 as u32,
+ config.size.height.0 as u32,
+ );
+
+ let (path_msaa_texture, path_msaa_view) = Self::create_msaa_if_needed(
+ &device,
+ surface_format,
+ config.size.width.0 as u32,
+ config.size.height.0 as u32,
+ rendering_params.path_sample_count,
+ )
+ .map(|(t, v)| (Some(t), Some(v)))
+ .unwrap_or((None, None));
+
+ let globals_bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
+ label: Some("globals_bind_group"),
+ layout: &bind_group_layouts.globals,
+ entries: &[
+ wgpu::BindGroupEntry {
+ binding: 0,
+ resource: wgpu::BindingResource::Buffer(wgpu::BufferBinding {
+ buffer: &globals_buffer,
+ offset: 0,
+ size: Some(NonZeroU64::new(globals_size).unwrap()),
+ }),
+ },
+ wgpu::BindGroupEntry {
+ binding: 1,
+ resource: wgpu::BindingResource::Buffer(wgpu::BufferBinding {
+ buffer: &globals_buffer,
+ offset: gamma_offset,
+ size: Some(NonZeroU64::new(gamma_size).unwrap()),
+ }),
+ },
+ ],
+ });
+
+ let path_globals_bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
+ label: Some("path_globals_bind_group"),
+ layout: &bind_group_layouts.globals,
+ entries: &[
+ wgpu::BindGroupEntry {
+ binding: 0,
+ resource: wgpu::BindingResource::Buffer(wgpu::BufferBinding {
+ buffer: &globals_buffer,
+ offset: path_globals_offset,
+ size: Some(NonZeroU64::new(globals_size).unwrap()),
+ }),
+ },
+ wgpu::BindGroupEntry {
+ binding: 1,
+ resource: wgpu::BindingResource::Buffer(wgpu::BufferBinding {
+ buffer: &globals_buffer,
+ offset: gamma_offset,
+ size: Some(NonZeroU64::new(gamma_size).unwrap()),
+ }),
+ },
+ ],
+ });
+
+ let adapter_info = context.adapter.get_info();
+
+ Ok(Self {
+ device,
+ queue,
+ surface,
+ surface_config,
+ pipelines,
+ bind_group_layouts,
+ atlas,
+ atlas_sampler,
+ globals_buffer,
+ path_globals_offset,
+ gamma_offset,
+ globals_bind_group,
+ path_globals_bind_group,
+ instance_buffer,
+ instance_buffer_capacity: initial_instance_buffer_capacity,
+ storage_buffer_alignment,
+ path_intermediate_texture,
+ path_intermediate_view,
+ path_msaa_texture,
+ path_msaa_view,
+ rendering_params,
+ dual_source_blending,
+ adapter_info,
+ transparent_alpha_mode,
+ opaque_alpha_mode,
+ })
+ }
+
+ fn create_bind_group_layouts(device: &wgpu::Device) -> WgpuBindGroupLayouts {
+ let globals =
+ device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
+ label: Some("globals_layout"),
+ entries: &[
+ wgpu::BindGroupLayoutEntry {
+ binding: 0,
+ visibility: wgpu::ShaderStages::VERTEX_FRAGMENT,
+ ty: wgpu::BindingType::Buffer {
+ ty: wgpu::BufferBindingType::Uniform,
+ has_dynamic_offset: false,
+ min_binding_size: NonZeroU64::new(
+ std::mem::size_of::<GlobalParams>() as u64
+ ),
+ },
+ count: None,
+ },
+ wgpu::BindGroupLayoutEntry {
+ binding: 1,
+ visibility: wgpu::ShaderStages::FRAGMENT,
+ ty: wgpu::BindingType::Buffer {
+ ty: wgpu::BufferBindingType::Uniform,
+ has_dynamic_offset: false,
+ min_binding_size: NonZeroU64::new(
+ std::mem::size_of::<GammaParams>() as u64
+ ),
+ },
+ count: None,
+ },
+ ],
+ });
+
+ let storage_buffer_entry = |binding: u32| wgpu::BindGroupLayoutEntry {
+ binding,
+ visibility: wgpu::ShaderStages::VERTEX_FRAGMENT,
+ ty: wgpu::BindingType::Buffer {
+ ty: wgpu::BufferBindingType::Storage { read_only: true },
+ has_dynamic_offset: false,
+ min_binding_size: None,
+ },
+ count: None,
+ };
+
+ let instances = device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
+ label: Some("instances_layout"),
+ entries: &[storage_buffer_entry(0)],
+ });
+
+ let instances_with_texture =
+ device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
+ label: Some("instances_with_texture_layout"),
+ entries: &[
+ storage_buffer_entry(0),
+ wgpu::BindGroupLayoutEntry {
+ binding: 1,
+ visibility: wgpu::ShaderStages::VERTEX_FRAGMENT,
+ ty: wgpu::BindingType::Texture {
+ sample_type: wgpu::TextureSampleType::Float { filterable: true },
+ view_dimension: wgpu::TextureViewDimension::D2,
+ multisampled: false,
+ },
+ count: None,
+ },
+ wgpu::BindGroupLayoutEntry {
+ binding: 2,
+ visibility: wgpu::ShaderStages::VERTEX_FRAGMENT,
+ ty: wgpu::BindingType::Sampler(wgpu::SamplerBindingType::Filtering),
+ count: None,
+ },
+ ],
+ });
+
+ let surfaces = device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
+ label: Some("surfaces_layout"),
+ entries: &[
+ wgpu::BindGroupLayoutEntry {
+ binding: 0,
+ visibility: wgpu::ShaderStages::VERTEX_FRAGMENT,
+ ty: wgpu::BindingType::Buffer {
+ ty: wgpu::BufferBindingType::Uniform,
+ has_dynamic_offset: false,
+ min_binding_size: NonZeroU64::new(
+ std::mem::size_of::<SurfaceParams>() as u64
+ ),
+ },
+ count: None,
+ },
+ wgpu::BindGroupLayoutEntry {
+ binding: 1,
+ visibility: wgpu::ShaderStages::FRAGMENT,
+ ty: wgpu::BindingType::Texture {
+ sample_type: wgpu::TextureSampleType::Float { filterable: true },
+ view_dimension: wgpu::TextureViewDimension::D2,
+ multisampled: false,
+ },
+ count: None,
+ },
+ wgpu::BindGroupLayoutEntry {
+ binding: 2,
+ visibility: wgpu::ShaderStages::FRAGMENT,
+ ty: wgpu::BindingType::Texture {
+ sample_type: wgpu::TextureSampleType::Float { filterable: true },
+ view_dimension: wgpu::TextureViewDimension::D2,
+ multisampled: false,
+ },
+ count: None,
+ },
+ wgpu::BindGroupLayoutEntry {
+ binding: 3,
+ visibility: wgpu::ShaderStages::FRAGMENT,
+ ty: wgpu::BindingType::Sampler(wgpu::SamplerBindingType::Filtering),
+ count: None,
+ },
+ ],
+ });
+
+ WgpuBindGroupLayouts {
+ globals,
+ instances,
+ instances_with_texture,
+ surfaces,
+ }
+ }
+
+ fn create_pipelines(
+ device: &wgpu::Device,
+ layouts: &WgpuBindGroupLayouts,
+ surface_format: wgpu::TextureFormat,
+ alpha_mode: wgpu::CompositeAlphaMode,
+ path_sample_count: u32,
+ dual_source_blending: bool,
+ ) -> WgpuPipelines {
+ let shader_source = include_str!("shaders.wgsl");
+ let shader_module = device.create_shader_module(wgpu::ShaderModuleDescriptor {
+ label: Some("gpui_shaders"),
+ source: wgpu::ShaderSource::Wgsl(shader_source.into()),
+ });
+
+ let blend_mode = match alpha_mode {
+ wgpu::CompositeAlphaMode::PreMultiplied => {
+ wgpu::BlendState::PREMULTIPLIED_ALPHA_BLENDING
+ }
+ _ => wgpu::BlendState::ALPHA_BLENDING,
+ };
+
+ let color_target = wgpu::ColorTargetState {
+ format: surface_format,
+ blend: Some(blend_mode),
+ write_mask: wgpu::ColorWrites::ALL,
+ };
+
+ let create_pipeline = |name: &str,
+ vs_entry: &str,
+ fs_entry: &str,
+ globals_layout: &wgpu::BindGroupLayout,
+ data_layout: &wgpu::BindGroupLayout,
+ topology: wgpu::PrimitiveTopology,
+ color_targets: &[Option<wgpu::ColorTargetState>],
+ sample_count: u32| {
+ let pipeline_layout = device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
+ label: Some(&format!("{name}_layout")),
+ bind_group_layouts: &[globals_layout, data_layout],
+ immediate_size: 0,
+ });
+
+ device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
+ label: Some(name),
+ layout: Some(&pipeline_layout),
+ vertex: wgpu::VertexState {
+ module: &shader_module,
+ entry_point: Some(vs_entry),
+ buffers: &[],
+ compilation_options: wgpu::PipelineCompilationOptions::default(),
+ },
+ fragment: Some(wgpu::FragmentState {
+ module: &shader_module,
+ entry_point: Some(fs_entry),
+ targets: color_targets,
+ compilation_options: wgpu::PipelineCompilationOptions::default(),
+ }),
+ primitive: wgpu::PrimitiveState {
+ topology,
+ strip_index_format: None,
+ front_face: wgpu::FrontFace::Ccw,
+ cull_mode: None,
+ polygon_mode: wgpu::PolygonMode::Fill,
+ unclipped_depth: false,
+ conservative: false,
+ },
+ depth_stencil: None,
+ multisample: wgpu::MultisampleState {
+ count: sample_count,
+ mask: !0,
+ alpha_to_coverage_enabled: false,
+ },
+ multiview_mask: None,
+ cache: None,
+ })
+ };
+
+ let quads = create_pipeline(
+ "quads",
+ "vs_quad",
+ "fs_quad",
+ &layouts.globals,
+ &layouts.instances,
+ wgpu::PrimitiveTopology::TriangleStrip,
+ &[Some(color_target.clone())],
+ 1,
+ );
+
+ let shadows = create_pipeline(
+ "shadows",
+ "vs_shadow",
+ "fs_shadow",
+ &layouts.globals,
+ &layouts.instances,
+ wgpu::PrimitiveTopology::TriangleStrip,
+ &[Some(color_target.clone())],
+ 1,
+ );
+
+ let path_rasterization = create_pipeline(
+ "path_rasterization",
+ "vs_path_rasterization",
+ "fs_path_rasterization",
+ &layouts.globals,
+ &layouts.instances,
+ wgpu::PrimitiveTopology::TriangleList,
+ &[Some(wgpu::ColorTargetState {
+ format: surface_format,
+ blend: Some(wgpu::BlendState::PREMULTIPLIED_ALPHA_BLENDING),
+ write_mask: wgpu::ColorWrites::ALL,
+ })],
+ path_sample_count,
+ );
+
+ let paths_blend = wgpu::BlendState {
+ color: wgpu::BlendComponent {
+ src_factor: wgpu::BlendFactor::One,
+ dst_factor: wgpu::BlendFactor::OneMinusSrcAlpha,
+ operation: wgpu::BlendOperation::Add,
+ },
+ alpha: wgpu::BlendComponent {
+ src_factor: wgpu::BlendFactor::One,
+ dst_factor: wgpu::BlendFactor::One,
+ operation: wgpu::BlendOperation::Add,
+ },
+ };
+
+ let paths = create_pipeline(
+ "paths",
+ "vs_path",
+ "fs_path",
+ &layouts.globals,
+ &layouts.instances_with_texture,
+ wgpu::PrimitiveTopology::TriangleStrip,
+ &[Some(wgpu::ColorTargetState {
+ format: surface_format,
+ blend: Some(paths_blend),
+ write_mask: wgpu::ColorWrites::ALL,
+ })],
+ 1,
+ );
+
+ let underlines = create_pipeline(
+ "underlines",
+ "vs_underline",
+ "fs_underline",
+ &layouts.globals,
+ &layouts.instances,
+ wgpu::PrimitiveTopology::TriangleStrip,
+ &[Some(color_target.clone())],
+ 1,
+ );
+
+ let mono_sprites = create_pipeline(
+ "mono_sprites",
+ "vs_mono_sprite",
+ "fs_mono_sprite",
+ &layouts.globals,
+ &layouts.instances_with_texture,
+ wgpu::PrimitiveTopology::TriangleStrip,
+ &[Some(color_target.clone())],
+ 1,
+ );
+
+ let subpixel_sprites = if dual_source_blending {
+ let subpixel_blend = wgpu::BlendState {
+ color: wgpu::BlendComponent {
+ src_factor: wgpu::BlendFactor::Src1,
+ dst_factor: wgpu::BlendFactor::OneMinusSrc1,
+ operation: wgpu::BlendOperation::Add,
+ },
+ alpha: wgpu::BlendComponent {
+ src_factor: wgpu::BlendFactor::One,
+ dst_factor: wgpu::BlendFactor::OneMinusSrcAlpha,
+ operation: wgpu::BlendOperation::Add,
+ },
+ };
+
+ Some(create_pipeline(
+ "subpixel_sprites",
+ "vs_subpixel_sprite",
+ "fs_subpixel_sprite",
+ &layouts.globals,
+ &layouts.instances_with_texture,
+ wgpu::PrimitiveTopology::TriangleStrip,
+ &[Some(wgpu::ColorTargetState {
+ format: surface_format,
+ blend: Some(subpixel_blend),
+ write_mask: wgpu::ColorWrites::COLOR,
+ })],
+ 1,
+ ))
+ } else {
+ None
+ };
+
+ let poly_sprites = create_pipeline(
+ "poly_sprites",
+ "vs_poly_sprite",
+ "fs_poly_sprite",
+ &layouts.globals,
+ &layouts.instances_with_texture,
+ wgpu::PrimitiveTopology::TriangleStrip,
+ &[Some(color_target.clone())],
+ 1,
+ );
+
+ let surfaces = create_pipeline(
+ "surfaces",
+ "vs_surface",
+ "fs_surface",
+ &layouts.globals,
+ &layouts.surfaces,
+ wgpu::PrimitiveTopology::TriangleStrip,
+ &[Some(color_target)],
+ 1,
+ );
+
+ WgpuPipelines {
+ quads,
+ shadows,
+ path_rasterization,
+ paths,
+ underlines,
+ mono_sprites,
+ subpixel_sprites,
+ poly_sprites,
+ surfaces,
+ }
+ }
+
+ fn create_path_intermediate(
+ device: &wgpu::Device,
+ format: wgpu::TextureFormat,
+ width: u32,
+ height: u32,
+ ) -> (wgpu::Texture, wgpu::TextureView) {
+ let texture = device.create_texture(&wgpu::TextureDescriptor {
+ label: Some("path_intermediate"),
+ size: wgpu::Extent3d {
+ width: width.max(1),
+ height: height.max(1),
+ depth_or_array_layers: 1,
+ },
+ mip_level_count: 1,
+ sample_count: 1,
+ dimension: wgpu::TextureDimension::D2,
+ format,
+ usage: wgpu::TextureUsages::RENDER_ATTACHMENT | wgpu::TextureUsages::TEXTURE_BINDING,
+ view_formats: &[],
+ });
+ let view = texture.create_view(&wgpu::TextureViewDescriptor::default());
+ (texture, view)
+ }
+
+ fn create_msaa_if_needed(
+ device: &wgpu::Device,
+ format: wgpu::TextureFormat,
+ width: u32,
+ height: u32,
+ sample_count: u32,
+ ) -> Option<(wgpu::Texture, wgpu::TextureView)> {
+ if sample_count <= 1 {
+ return None;
+ }
+ let texture = device.create_texture(&wgpu::TextureDescriptor {
+ label: Some("path_msaa"),
+ size: wgpu::Extent3d {
+ width: width.max(1),
+ height: height.max(1),
+ depth_or_array_layers: 1,
+ },
+ mip_level_count: 1,
+ sample_count,
+ dimension: wgpu::TextureDimension::D2,
+ format,
+ usage: wgpu::TextureUsages::RENDER_ATTACHMENT,
+ view_formats: &[],
+ });
+ let view = texture.create_view(&wgpu::TextureViewDescriptor::default());
+ Some((texture, view))
+ }
+
+ pub fn update_drawable_size(&mut self, size: Size<DevicePixels>) {
+ // Clamp before comparing so a zero-sized (e.g. minimized) window does
+ // not reconfigure the surface on every call.
+ let width = (size.width.0 as u32).max(1);
+ let height = (size.height.0 as u32).max(1);
+
+ if width != self.surface_config.width || height != self.surface_config.height {
+ self.surface_config.width = width;
+ self.surface_config.height = height;
+ self.surface.configure(&self.device, &self.surface_config);
+
+ let (path_intermediate_texture, path_intermediate_view) =
+ Self::create_path_intermediate(
+ &self.device,
+ self.surface_config.format,
+ self.surface_config.width,
+ self.surface_config.height,
+ );
+ self.path_intermediate_texture = path_intermediate_texture;
+ self.path_intermediate_view = path_intermediate_view;
+
+ let (path_msaa_texture, path_msaa_view) = Self::create_msaa_if_needed(
+ &self.device,
+ self.surface_config.format,
+ self.surface_config.width,
+ self.surface_config.height,
+ self.rendering_params.path_sample_count,
+ )
+ .unzip();
+ self.path_msaa_texture = path_msaa_texture;
+ self.path_msaa_view = path_msaa_view;
+ }
+ }
+
+ pub fn update_transparency(&mut self, transparent: bool) {
+ let new_alpha_mode = if transparent {
+ self.transparent_alpha_mode
+ } else {
+ self.opaque_alpha_mode
+ };
+
+ if new_alpha_mode != self.surface_config.alpha_mode {
+ self.surface_config.alpha_mode = new_alpha_mode;
+ self.surface.configure(&self.device, &self.surface_config);
+ self.pipelines = Self::create_pipelines(
+ &self.device,
+ &self.bind_group_layouts,
+ self.surface_config.format,
+ self.surface_config.alpha_mode,
+ self.rendering_params.path_sample_count,
+ self.dual_source_blending,
+ );
+ }
+ }
+
+ #[allow(dead_code)]
+ pub fn viewport_size(&self) -> Size<DevicePixels> {
+ Size {
+ width: DevicePixels(self.surface_config.width as i32),
+ height: DevicePixels(self.surface_config.height as i32),
+ }
+ }
+
+ pub fn sprite_atlas(&self) -> &Arc<WgpuAtlas> {
+ &self.atlas
+ }
+
+ pub fn gpu_specs(&self) -> GpuSpecs {
+ GpuSpecs {
+ is_software_emulated: self.adapter_info.device_type == wgpu::DeviceType::Cpu,
+ device_name: self.adapter_info.name.clone(),
+ driver_name: self.adapter_info.driver.clone(),
+ driver_info: self.adapter_info.driver_info.clone(),
+ }
+ }
+
+ pub fn draw(&mut self, scene: &Scene) {
+ self.atlas.before_frame();
+
+ let frame = match self.surface.get_current_texture() {
+ Ok(frame) => frame,
+ Err(wgpu::SurfaceError::Lost | wgpu::SurfaceError::Outdated) => {
+ self.surface.configure(&self.device, &self.surface_config);
+ return;
+ }
+ Err(e) => {
+ log::error!("Failed to acquire surface texture: {e}");
+ return;
+ }
+ };
+ let frame_view = frame
+ .texture
+ .create_view(&wgpu::TextureViewDescriptor::default());
+
+ let gamma_params = GammaParams {
+ gamma_ratios: self.rendering_params.gamma_ratios,
+ grayscale_enhanced_contrast: self.rendering_params.grayscale_enhanced_contrast,
+ subpixel_enhanced_contrast: self.rendering_params.subpixel_enhanced_contrast,
+ _pad: [0.0; 2],
+ };
+
+ let globals = GlobalParams {
+ viewport_size: [
+ self.surface_config.width as f32,
+ self.surface_config.height as f32,
+ ],
+ premultiplied_alpha: if self.surface_config.alpha_mode
+ == wgpu::CompositeAlphaMode::PreMultiplied
+ {
+ 1
+ } else {
+ 0
+ },
+ pad: 0,
+ };
+
+ let path_globals = GlobalParams {
+ premultiplied_alpha: 0,
+ ..globals
+ };
+
+ self.queue
+ .write_buffer(&self.globals_buffer, 0, bytemuck::bytes_of(&globals));
+ self.queue.write_buffer(
+ &self.globals_buffer,
+ self.path_globals_offset,
+ bytemuck::bytes_of(&path_globals),
+ );
+ self.queue.write_buffer(
+ &self.globals_buffer,
+ self.gamma_offset,
+ bytemuck::bytes_of(&gamma_params),
+ );
+
+ loop {
+ let mut instance_offset: u64 = 0;
+ let mut overflow = false;
+
+ let mut encoder = self
+ .device
+ .create_command_encoder(&wgpu::CommandEncoderDescriptor {
+ label: Some("main_encoder"),
+ });
+
+ {
+ let mut pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
+ label: Some("main_pass"),
+ color_attachments: &[Some(wgpu::RenderPassColorAttachment {
+ view: &frame_view,
+ resolve_target: None,
+ ops: wgpu::Operations {
+ load: wgpu::LoadOp::Clear(wgpu::Color::TRANSPARENT),
+ store: wgpu::StoreOp::Store,
+ },
+ depth_slice: None,
+ })],
+ depth_stencil_attachment: None,
+ ..Default::default()
+ });
+
+ for batch in scene.batches() {
+ let ok = match batch {
+ PrimitiveBatch::Quads(range) => {
+ self.draw_quads(&scene.quads[range], &mut instance_offset, &mut pass)
+ }
+ PrimitiveBatch::Shadows(range) => self.draw_shadows(
+ &scene.shadows[range],
+ &mut instance_offset,
+ &mut pass,
+ ),
+ PrimitiveBatch::Paths(range) => {
+ let paths = &scene.paths[range];
+ if paths.is_empty() {
+ continue;
+ }
+
+ drop(pass);
+
+ let did_draw = self.draw_paths_to_intermediate(
+ &mut encoder,
+ paths,
+ &mut instance_offset,
+ );
+
+ pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
+ label: Some("main_pass_continued"),
+ color_attachments: &[Some(wgpu::RenderPassColorAttachment {
+ view: &frame_view,
+ resolve_target: None,
+ ops: wgpu::Operations {
+ load: wgpu::LoadOp::Load,
+ store: wgpu::StoreOp::Store,
+ },
+ depth_slice: None,
+ })],
+ depth_stencil_attachment: None,
+ ..Default::default()
+ });
+
+ if did_draw {
+ self.draw_paths_from_intermediate(
+ paths,
+ &mut instance_offset,
+ &mut pass,
+ )
+ } else {
+ false
+ }
+ }
+ PrimitiveBatch::Underlines(range) => self.draw_underlines(
+ &scene.underlines[range],
+ &mut instance_offset,
+ &mut pass,
+ ),
+ PrimitiveBatch::MonochromeSprites { texture_id, range } => self
+ .draw_monochrome_sprites(
+ &scene.monochrome_sprites[range],
+ texture_id,
+ &mut instance_offset,
+ &mut pass,
+ ),
+ PrimitiveBatch::SubpixelSprites { texture_id, range } => self
+ .draw_subpixel_sprites(
+ &scene.subpixel_sprites[range],
+ texture_id,
+ &mut instance_offset,
+ &mut pass,
+ ),
+ PrimitiveBatch::PolychromeSprites { texture_id, range } => self
+ .draw_polychrome_sprites(
+ &scene.polychrome_sprites[range],
+ texture_id,
+ &mut instance_offset,
+ &mut pass,
+ ),
+ PrimitiveBatch::Surfaces(_surfaces) => {
+ // Surfaces are macOS-only for video playback
+ // Not implemented for Linux/wgpu
+ true
+ }
+ };
+ if !ok {
+ overflow = true;
+ break;
+ }
+ }
+ }
+
+ if overflow {
+ drop(encoder);
+ if self.instance_buffer_capacity >= 256 * 1024 * 1024 {
+ log::error!(
+ "instance buffer size grew too large: {}",
+ self.instance_buffer_capacity
+ );
+ frame.present();
+ return;
+ }
+ self.grow_instance_buffer();
+ continue;
+ }
+
+ self.queue.submit(std::iter::once(encoder.finish()));
+ frame.present();
+ return;
+ }
+ }
+
+ fn draw_quads(
+ &self,
+ quads: &[Quad],
+ instance_offset: &mut u64,
+ pass: &mut wgpu::RenderPass<'_>,
+ ) -> bool {
+ let data = unsafe { Self::instance_bytes(quads) };
+ self.draw_instances(
+ data,
+ quads.len() as u32,
+ &self.pipelines.quads,
+ instance_offset,
+ pass,
+ )
+ }
+
+ fn draw_shadows(
+ &self,
+ shadows: &[Shadow],
+ instance_offset: &mut u64,
+ pass: &mut wgpu::RenderPass<'_>,
+ ) -> bool {
+ let data = unsafe { Self::instance_bytes(shadows) };
+ self.draw_instances(
+ data,
+ shadows.len() as u32,
+ &self.pipelines.shadows,
+ instance_offset,
+ pass,
+ )
+ }
+
+ fn draw_underlines(
+ &self,
+ underlines: &[Underline],
+ instance_offset: &mut u64,
+ pass: &mut wgpu::RenderPass<'_>,
+ ) -> bool {
+ let data = unsafe { Self::instance_bytes(underlines) };
+ self.draw_instances(
+ data,
+ underlines.len() as u32,
+ &self.pipelines.underlines,
+ instance_offset,
+ pass,
+ )
+ }
+
+ fn draw_monochrome_sprites(
+ &self,
+ sprites: &[MonochromeSprite],
+ texture_id: AtlasTextureId,
+ instance_offset: &mut u64,
+ pass: &mut wgpu::RenderPass<'_>,
+ ) -> bool {
+ let tex_info = self.atlas.get_texture_info(texture_id);
+ let data = unsafe { Self::instance_bytes(sprites) };
+ self.draw_instances_with_texture(
+ data,
+ sprites.len() as u32,
+ &tex_info.view,
+ &self.pipelines.mono_sprites,
+ instance_offset,
+ pass,
+ )
+ }
+
+ fn draw_subpixel_sprites(
+ &self,
+ sprites: &[SubpixelSprite],
+ texture_id: AtlasTextureId,
+ instance_offset: &mut u64,
+ pass: &mut wgpu::RenderPass<'_>,
+ ) -> bool {
+ let tex_info = self.atlas.get_texture_info(texture_id);
+ let data = unsafe { Self::instance_bytes(sprites) };
+ let pipeline = self
+ .pipelines
+ .subpixel_sprites
+ .as_ref()
+ .unwrap_or(&self.pipelines.mono_sprites);
+ self.draw_instances_with_texture(
+ data,
+ sprites.len() as u32,
+ &tex_info.view,
+ pipeline,
+ instance_offset,
+ pass,
+ )
+ }
+
+ fn draw_polychrome_sprites(
+ &self,
+ sprites: &[PolychromeSprite],
+ texture_id: AtlasTextureId,
+ instance_offset: &mut u64,
+ pass: &mut wgpu::RenderPass<'_>,
+ ) -> bool {
+ let tex_info = self.atlas.get_texture_info(texture_id);
+ let data = unsafe { Self::instance_bytes(sprites) };
+ self.draw_instances_with_texture(
+ data,
+ sprites.len() as u32,
+ &tex_info.view,
+ &self.pipelines.poly_sprites,
+ instance_offset,
+ pass,
+ )
+ }
+
+ fn draw_instances(
+ &self,
+ data: &[u8],
+ instance_count: u32,
+ pipeline: &wgpu::RenderPipeline,
+ instance_offset: &mut u64,
+ pass: &mut wgpu::RenderPass<'_>,
+ ) -> bool {
+ if instance_count == 0 {
+ return true;
+ }
+ let Some((offset, size)) = self.write_to_instance_buffer(instance_offset, data) else {
+ return false;
+ };
+ let bind_group = self.device.create_bind_group(&wgpu::BindGroupDescriptor {
+ label: None,
+ layout: &self.bind_group_layouts.instances,
+ entries: &[wgpu::BindGroupEntry {
+ binding: 0,
+ resource: self.instance_binding(offset, size),
+ }],
+ });
+ pass.set_pipeline(pipeline);
+ pass.set_bind_group(0, &self.globals_bind_group, &[]);
+ pass.set_bind_group(1, &bind_group, &[]);
+ pass.draw(0..4, 0..instance_count);
+ true
+ }
+
+ fn draw_instances_with_texture(
+ &self,
+ data: &[u8],
+ instance_count: u32,
+ texture_view: &wgpu::TextureView,
+ pipeline: &wgpu::RenderPipeline,
+ instance_offset: &mut u64,
+ pass: &mut wgpu::RenderPass<'_>,
+ ) -> bool {
+ if instance_count == 0 {
+ return true;
+ }
+ let Some((offset, size)) = self.write_to_instance_buffer(instance_offset, data) else {
+ return false;
+ };
+ let bind_group = self.device.create_bind_group(&wgpu::BindGroupDescriptor {
+ label: None,
+ layout: &self.bind_group_layouts.instances_with_texture,
+ entries: &[
+ wgpu::BindGroupEntry {
+ binding: 0,
+ resource: self.instance_binding(offset, size),
+ },
+ wgpu::BindGroupEntry {
+ binding: 1,
+ resource: wgpu::BindingResource::TextureView(texture_view),
+ },
+ wgpu::BindGroupEntry {
+ binding: 2,
+ resource: wgpu::BindingResource::Sampler(&self.atlas_sampler),
+ },
+ ],
+ });
+ pass.set_pipeline(pipeline);
+ pass.set_bind_group(0, &self.globals_bind_group, &[]);
+ pass.set_bind_group(1, &bind_group, &[]);
+ pass.draw(0..4, 0..instance_count);
+ true
+ }
+
+ /// # Safety
+ ///
+ /// `T` must be plain-old-data (no uninitialized padding, no pointers),
+ /// since every byte of the slice is reinterpreted as `u8`.
+ unsafe fn instance_bytes<T>(instances: &[T]) -> &[u8] {
+ unsafe {
+ std::slice::from_raw_parts(
+ instances.as_ptr() as *const u8,
+ std::mem::size_of_val(instances),
+ )
+ }
+ }
+
+ fn draw_paths_from_intermediate(
+ &self,
+ paths: &[Path<ScaledPixels>],
+ instance_offset: &mut u64,
+ pass: &mut wgpu::RenderPass<'_>,
+ ) -> bool {
+ let first_path = &paths[0];
+ // Paths in a batch are sorted by z-order: when the first and last share
+ // an order, each path can be drawn as its own sprite; otherwise draw the
+ // union of their bounds as a single sprite.
+ let sprites: Vec<PathSprite> = if paths.last().map(|p| &p.order) == Some(&first_path.order)
+ {
+ paths
+ .iter()
+ .map(|p| PathSprite {
+ bounds: p.clipped_bounds(),
+ })
+ .collect()
+ } else {
+ let mut bounds = first_path.clipped_bounds();
+ for path in paths.iter().skip(1) {
+ bounds = bounds.union(&path.clipped_bounds());
+ }
+ vec![PathSprite { bounds }]
+ };
+
+ let sprite_data = unsafe { Self::instance_bytes(&sprites) };
+ self.draw_instances_with_texture(
+ sprite_data,
+ sprites.len() as u32,
+ &self.path_intermediate_view,
+ &self.pipelines.paths,
+ instance_offset,
+ pass,
+ )
+ }
+
+ fn draw_paths_to_intermediate(
+ &self,
+ encoder: &mut wgpu::CommandEncoder,
+ paths: &[Path<ScaledPixels>],
+ instance_offset: &mut u64,
+ ) -> bool {
+ let mut vertices = Vec::new();
+ for path in paths {
+ let bounds = path.clipped_bounds();
+ vertices.extend(path.vertices.iter().map(|v| PathRasterizationVertex {
+ xy_position: v.xy_position,
+ st_position: v.st_position,
+ color: path.color,
+ bounds,
+ }));
+ }
+
+ if vertices.is_empty() {
+ return true;
+ }
+
+ let vertex_data = unsafe { Self::instance_bytes(&vertices) };
+ let Some((vertex_offset, vertex_size)) =
+ self.write_to_instance_buffer(instance_offset, vertex_data)
+ else {
+ return false;
+ };
+
+ let data_bind_group = self.device.create_bind_group(&wgpu::BindGroupDescriptor {
+ label: Some("path_rasterization_bind_group"),
+ layout: &self.bind_group_layouts.instances,
+ entries: &[wgpu::BindGroupEntry {
+ binding: 0,
+ resource: self.instance_binding(vertex_offset, vertex_size),
+ }],
+ });
+
+ let (target_view, resolve_target) = if let Some(ref msaa_view) = self.path_msaa_view {
+ (msaa_view, Some(&self.path_intermediate_view))
+ } else {
+ (&self.path_intermediate_view, None)
+ };
+
+ {
+ let mut pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
+ label: Some("path_rasterization_pass"),
+ color_attachments: &[Some(wgpu::RenderPassColorAttachment {
+ view: target_view,
+ resolve_target,
+ ops: wgpu::Operations {
+ load: wgpu::LoadOp::Clear(wgpu::Color::TRANSPARENT),
+ store: wgpu::StoreOp::Store,
+ },
+ depth_slice: None,
+ })],
+ depth_stencil_attachment: None,
+ ..Default::default()
+ });
+
+ pass.set_pipeline(&self.pipelines.path_rasterization);
+ pass.set_bind_group(0, &self.path_globals_bind_group, &[]);
+ pass.set_bind_group(1, &data_bind_group, &[]);
+ pass.draw(0..vertices.len() as u32, 0..1);
+ }
+
+ true
+ }
+
+ fn grow_instance_buffer(&mut self) {
+ let new_capacity = self.instance_buffer_capacity * 2;
+ log::info!("increased instance buffer size to {}", new_capacity);
+ self.instance_buffer = self.device.create_buffer(&wgpu::BufferDescriptor {
+ label: Some("instance_buffer"),
+ size: new_capacity,
+ usage: wgpu::BufferUsages::STORAGE | wgpu::BufferUsages::COPY_DST,
+ mapped_at_creation: false,
+ });
+ self.instance_buffer_capacity = new_capacity;
+ }
+
+ fn write_to_instance_buffer(
+ &self,
+ instance_offset: &mut u64,
+ data: &[u8],
+ ) -> Option<(u64, NonZeroU64)> {
+ let offset = (*instance_offset).next_multiple_of(self.storage_buffer_alignment);
+ // Reserve at least 16 bytes so the storage-buffer binding size is
+ // never zero (see the `NonZeroU64` returned below).
+ let size = (data.len() as u64).max(16);
+ if offset + size > self.instance_buffer_capacity {
+ return None;
+ }
+ self.queue.write_buffer(&self.instance_buffer, offset, data);
+ *instance_offset = offset + size;
+ Some((offset, NonZeroU64::new(size).expect("size is at least 16")))
+ }
+
+ fn instance_binding(&self, offset: u64, size: NonZeroU64) -> wgpu::BindingResource<'_> {
+ wgpu::BindingResource::Buffer(wgpu::BufferBinding {
+ buffer: &self.instance_buffer,
+ offset,
+ size: Some(size),
+ })
+ }
+
+ pub fn destroy(&mut self) {
+ // wgpu resources are automatically cleaned up when dropped
+ }
+}
+
+struct RenderingParameters {
+ path_sample_count: u32,
+ gamma_ratios: [f32; 4],
+ grayscale_enhanced_contrast: f32,
+ subpixel_enhanced_contrast: f32,
+}
+
+impl RenderingParameters {
+ fn new(adapter: &wgpu::Adapter, surface_format: wgpu::TextureFormat) -> Self {
+ use std::env;
+
+ let format_features = adapter.get_texture_format_features(surface_format);
+ let path_sample_count = [4, 2, 1]
+ .into_iter()
+ .find(|&n| format_features.flags.sample_count_supported(n))
+ .unwrap_or(1);
+
+ let gamma = env::var("ZED_FONTS_GAMMA")
+ .ok()
+ .and_then(|v| v.parse().ok())
+ .unwrap_or(1.8_f32)
+ .clamp(1.0, 2.2);
+ let gamma_ratios = get_gamma_correction_ratios(gamma);
+
+ let grayscale_enhanced_contrast = env::var("ZED_FONTS_GRAYSCALE_ENHANCED_CONTRAST")
+ .ok()
+ .and_then(|v| v.parse().ok())
+ .unwrap_or(1.0_f32)
+ .max(0.0);
+
+ let subpixel_enhanced_contrast = env::var("ZED_FONTS_SUBPIXEL_ENHANCED_CONTRAST")
+ .ok()
+ .and_then(|v| v.parse().ok())
+ .unwrap_or(0.5_f32)
+ .max(0.0);
+
+ Self {
+ path_sample_count,
+ gamma_ratios,
+ grayscale_enhanced_contrast,
+ subpixel_enhanced_contrast,
+ }
+ }
+}
@@ -205,6 +205,23 @@ impl TextSystem {
Ok(result * font_size)
}
+ // Consider removing this?
+ /// Returns the shaped layout width for the given character, in the given font and size.
+ pub fn layout_width(&self, font_id: FontId, font_size: Pixels, ch: char) -> Pixels {
+ let mut buffer = [0; 4];
+ let buffer = ch.encode_utf8(&mut buffer);
+ self.platform_text_system
+ .layout_line(
+ buffer,
+ font_size,
+ &[FontRun {
+ len: buffer.len(),
+ font_id,
+ }],
+ )
+ .width
+ }
+
/// Returns the width of an `em`.
///
/// Uses the width of the `m` character in the given font and size.
@@ -219,6 +236,12 @@ impl TextSystem {
Ok(self.advance(font_id, font_size, 'm')?.width)
}
+ // Consider removing this?
+ /// Returns the shaped layout width of an `em`.
+ pub fn em_layout_width(&self, font_id: FontId, font_size: Pixels) -> Pixels {
+ self.layout_width(font_id, font_size, 'm')
+ }
+
/// Returns the width of a `ch`.
///
/// Uses the width of the `0` character in the given font and size.
@@ -295,9 +318,9 @@ impl TextSystem {
let wrappers = lock
.entry(FontIdWithSize { font_id, font_size })
.or_default();
- let wrapper = wrappers.pop().unwrap_or_else(|| {
- LineWrapper::new(font_id, font_size, self.platform_text_system.clone())
- });
+ let wrapper = wrappers
+ .pop()
+ .unwrap_or_else(|| LineWrapper::new(font_id, font_size, self.clone()));
LineWrapperHandle {
wrapper: Some(wrapper),
@@ -1,4 +1,4 @@
-use crate::{FontId, FontRun, Pixels, PlatformTextSystem, SharedString, TextRun, px};
+use crate::{FontId, Pixels, SharedString, TextRun, TextSystem, px};
use collections::HashMap;
use std::{borrow::Cow, iter, sync::Arc};
@@ -13,7 +13,7 @@ pub enum TruncateFrom {
/// The GPUI line wrapper, used to wrap lines of text to a given width.
pub struct LineWrapper {
- platform_text_system: Arc<dyn PlatformTextSystem>,
+ text_system: Arc<TextSystem>,
pub(crate) font_id: FontId,
pub(crate) font_size: Pixels,
cached_ascii_char_widths: [Option<Pixels>; 128],
@@ -24,13 +24,9 @@ impl LineWrapper {
/// The maximum indent that can be applied to a line.
pub const MAX_INDENT: u32 = 256;
- pub(crate) fn new(
- font_id: FontId,
- font_size: Pixels,
- text_system: Arc<dyn PlatformTextSystem>,
- ) -> Self {
+ pub(crate) fn new(font_id: FontId, font_size: Pixels, text_system: Arc<TextSystem>) -> Self {
Self {
- platform_text_system: text_system,
+ text_system,
font_id,
font_size,
cached_ascii_char_widths: [None; 128],
@@ -254,33 +250,22 @@ impl LineWrapper {
if let Some(cached_width) = self.cached_ascii_char_widths[c as usize] {
cached_width
} else {
- let width = self.compute_width_for_char(c);
+ let width = self
+ .text_system
+ .layout_width(self.font_id, self.font_size, c);
self.cached_ascii_char_widths[c as usize] = Some(width);
width
}
} else if let Some(cached_width) = self.cached_other_char_widths.get(&c) {
*cached_width
} else {
- let width = self.compute_width_for_char(c);
+ let width = self
+ .text_system
+ .layout_width(self.font_id, self.font_size, c);
self.cached_other_char_widths.insert(c, width);
width
}
}
-
- fn compute_width_for_char(&self, c: char) -> Pixels {
- let mut buffer = [0; 4];
- let buffer = c.encode_utf8(&mut buffer);
- self.platform_text_system
- .layout_line(
- buffer,
- self.font_size,
- &[FontRun {
- len: buffer.len(),
- font_id: self.font_id,
- }],
- )
- .width
- }
}
fn update_runs_after_truncation(
@@ -401,7 +386,7 @@ mod tests {
let dispatcher = TestDispatcher::new(0);
let cx = TestAppContext::build(dispatcher, None);
let id = cx.text_system().resolve_font(&font(".ZedMono"));
- LineWrapper::new(id, px(16.), cx.text_system().platform_text_system.clone())
+ LineWrapper::new(id, px(16.), cx.text_system().clone())
}
fn generate_test_runs(input_run_len: &[usize]) -> Vec<TextRun> {
@@ -134,6 +134,7 @@ pub enum IconName {
FontSize,
FontWeight,
ForwardArrow,
+ ForwardArrowUp,
GenericClose,
GenericMaximize,
GenericMinimize,
@@ -3,7 +3,10 @@ use std::sync::{Arc, LazyLock};
use anyhow::{Context as _, Result};
use collections::HashMap;
use gpui::{App, AsyncApp, BorrowAppContext as _, Entity, Task, WeakEntity};
-use language::{LanguageRegistry, LspAdapterDelegate, language_settings::AllLanguageSettings};
+use language::{
+ LanguageRegistry, LanguageServerName, LspAdapterDelegate,
+ language_settings::AllLanguageSettings,
+};
use parking_lot::RwLock;
use project::{LspStore, lsp_store::LocalLspAdapterDelegate};
use settings::{LSP_SETTINGS_SCHEMA_URL_PREFIX, Settings as _, SettingsLocation};
@@ -244,6 +247,9 @@ async fn resolve_dynamic_schema(
.all_lsp_adapters()
.into_iter()
.find(|adapter| adapter.name().as_ref() as &str == lsp_name)
+ .or_else(|| {
+ languages.load_available_lsp_adapter(&LanguageServerName::from(lsp_name))
+ })
.with_context(|| format!("LSP adapter not found: {}", lsp_name))?;
let delegate: Arc<dyn LspAdapterDelegate> = cx
@@ -281,11 +287,26 @@ async fn resolve_dynamic_schema(
})
}
"settings" => {
- let lsp_adapter_names = languages
+ let mut lsp_adapter_names: Vec<String> = languages
.all_lsp_adapters()
.into_iter()
- .map(|adapter| adapter.name().to_string())
- .collect::<Vec<_>>();
+ .map(|adapter| adapter.name())
+ .chain(languages.available_lsp_adapter_names().into_iter())
+ .map(|name| name.to_string())
+ .collect();
+
+ // Drop duplicate adapter names (an adapter may be both loaded and available).
+ lsp_adapter_names.sort_unstable();
+ lsp_adapter_names.dedup();
cx.update(|cx| {
let font_names = &cx.text_system().all_font_names();
@@ -1716,28 +1716,14 @@ impl Buffer {
/// Returns the [`Language`] at the given location.
pub fn language_at<D: ToOffset>(&self, position: D) -> Option<Arc<Language>> {
let offset = position.to_offset(self);
- let mut is_first = true;
- let start_anchor = self.anchor_before(offset);
- let end_anchor = self.anchor_after(offset);
+ let text: &TextBufferSnapshot = &self.text;
self.syntax_map
.lock()
- .layers_for_range(offset..offset, &self.text, false)
+ .layers_for_range(offset..offset, text, false)
.filter(|layer| {
- if is_first {
- is_first = false;
- return true;
- }
-
layer
.included_sub_ranges
- .map(|sub_ranges| {
- sub_ranges.iter().any(|sub_range| {
- let is_before_start = sub_range.end.cmp(&start_anchor, self).is_lt();
- let is_after_end = sub_range.start.cmp(&end_anchor, self).is_gt();
- !is_before_start && !is_after_end
- })
- })
- .unwrap_or(true)
+ .is_none_or(|ranges| offset_in_sub_ranges(ranges, offset, text))
})
.last()
.map(|info| info.language.clone())
@@ -1747,10 +1733,17 @@ impl Buffer {
/// Returns each [`Language`] for the active syntax layers at the given location.
pub fn languages_at<D: ToOffset>(&self, position: D) -> Vec<Arc<Language>> {
let offset = position.to_offset(self);
+ let text: &TextBufferSnapshot = &self.text;
let mut languages: Vec<Arc<Language>> = self
.syntax_map
.lock()
- .layers_for_range(offset..offset, &self.text, false)
+ .layers_for_range(offset..offset, text, false)
+ .filter(|layer| {
+ // For combined injections, check if offset is within the actual sub-ranges.
+ layer
+ .included_sub_ranges
+ .is_none_or(|ranges| offset_in_sub_ranges(ranges, offset, text))
+ })
.map(|info| info.language.clone())
.collect();
@@ -3340,6 +3333,21 @@ impl Buffer {
impl EventEmitter<BufferEvent> for Buffer {}
+fn offset_in_sub_ranges(
+ sub_ranges: &[Range<Anchor>],
+ offset: usize,
+ snapshot: &TextBufferSnapshot,
+) -> bool {
+ let start_anchor = snapshot.anchor_before(offset);
+ let end_anchor = snapshot.anchor_after(offset);
+
+ sub_ranges.iter().any(|sub_range| {
+ let is_before_start = sub_range.end.cmp(&start_anchor, snapshot).is_lt();
+ let is_after_end = sub_range.start.cmp(&end_anchor, snapshot).is_gt();
+ !is_before_start && !is_after_end
+ })
+}
+
impl Deref for Buffer {
type Target = TextBuffer;
@@ -3854,12 +3862,19 @@ impl BufferSnapshot {
let offset = position.to_offset(self);
let mut scope = None;
let mut smallest_range_and_depth: Option<(Range<usize>, usize)> = None;
+ let text: &TextBufferSnapshot = self;
// Use the layer that has the smallest node intersecting the given point.
for layer in self
.syntax
.layers_for_range(offset..offset, &self.text, false)
{
+ if let Some(ranges) = layer.included_sub_ranges
+ && !offset_in_sub_ranges(ranges, offset, text)
+ {
+ continue;
+ }
+
let mut cursor = layer.node().walk();
let mut range = None;
@@ -2771,14 +2771,11 @@ fn test_language_scope_at_with_combined_injections(cx: &mut App) {
let mut buffer = Buffer::local(text, cx);
buffer.set_language_registry(language_registry.clone());
- buffer.set_language(
- language_registry
- .language_for_name("HTML+ERB")
- .now_or_never()
- .unwrap()
- .ok(),
- cx,
- );
+ let language = language_registry
+ .language_for_name("HTML+ERB")
+ .now_or_never()
+ .and_then(Result::ok);
+ buffer.set_language(language, cx);
let snapshot = buffer.snapshot();
let html_config = snapshot.language_scope_at(Point::new(2, 4)).unwrap();
@@ -2894,15 +2891,80 @@ fn test_language_at_for_markdown_code_block(cx: &mut App) {
}
#[gpui::test]
-fn test_syntax_layer_at_for_injected_languages(cx: &mut App) {
+fn test_syntax_layer_at_for_combined_injections(cx: &mut App) {
init_settings(cx, |_| {});
cx.new(|cx| {
+ // ERB template with HTML and Ruby content
let text = r#"
- ```html+erb
- <div>Hello</div>
- <%= link_to "Some", "https://zed.dev" %>
- ```
+<div>Hello</div>
+<%= link_to "Click", url %>
+<p>World</p>
+ "#
+ .unindent();
+
+ let language_registry = Arc::new(LanguageRegistry::test(cx.background_executor().clone()));
+ language_registry.add(Arc::new(erb_lang()));
+ language_registry.add(Arc::new(html_lang()));
+ language_registry.add(Arc::new(ruby_lang()));
+
+ let mut buffer = Buffer::local(text, cx);
+ buffer.set_language_registry(language_registry.clone());
+ let language = language_registry
+ .language_for_name("HTML+ERB")
+ .now_or_never()
+ .and_then(Result::ok);
+ buffer.set_language(language, cx);
+
+ let snapshot = buffer.snapshot();
+
+ // Test language_at for HTML content (line 0: "<div>Hello</div>")
+ let html_point = Point::new(0, 4);
+ let language = snapshot.language_at(html_point).unwrap();
+ assert_eq!(
+ language.name().as_ref(),
+ "HTML",
+ "Expected HTML at {:?}, got {}",
+ html_point,
+ language.name()
+ );
+
+ // Test language_at for Ruby code (line 1: "<%= link_to ... %>")
+ let ruby_point = Point::new(1, 6);
+ let language = snapshot.language_at(ruby_point).unwrap();
+ assert_eq!(
+ language.name().as_ref(),
+ "Ruby",
+ "Expected Ruby at {:?}, got {}",
+ ruby_point,
+ language.name()
+ );
+
+ // Test language_at for HTML after Ruby (line 2: "<p>World</p>")
+ let html_after_ruby = Point::new(2, 2);
+ let language = snapshot.language_at(html_after_ruby).unwrap();
+ assert_eq!(
+ language.name().as_ref(),
+ "HTML",
+ "Expected HTML at {:?}, got {}",
+ html_after_ruby,
+ language.name()
+ );
+
+ buffer
+ });
+}
+
+#[gpui::test]
+fn test_languages_at_for_combined_injections(cx: &mut App) {
+ init_settings(cx, |_| {});
+
+ cx.new(|cx| {
+ // ERB template with HTML and Ruby content
+ let text = r#"
+<div>Hello</div>
+<%= yield %>
+<p>World</p>
"#
.unindent();
@@ -2922,16 +2984,47 @@ fn test_syntax_layer_at_for_injected_languages(cx: &mut App) {
cx,
);
- let snapshot = buffer.snapshot();
-
- // Test points in the code line
- let html_point = Point::new(1, 4);
- let language = snapshot.language_at(html_point).unwrap();
- assert_eq!(language.name().as_ref(), "HTML");
+ // Test languages_at for HTML content - should NOT include Ruby
+ let html_point = Point::new(0, 4);
+ let languages = buffer.languages_at(html_point);
+ let language_names: Vec<_> = languages.iter().map(|language| language.name()).collect();
+ assert!(
+ language_names
+ .iter()
+ .any(|language_name| language_name.as_ref() == "HTML"),
+ "Expected HTML in languages at {:?}, got {:?}",
+ html_point,
+ language_names
+ );
+ assert!(
+ !language_names
+ .iter()
+ .any(|language_name| language_name.as_ref() == "Ruby"),
+ "Did not expect Ruby in languages at {:?}, got {:?}",
+ html_point,
+ language_names
+ );
- let ruby_point = Point::new(2, 6);
- let language = snapshot.language_at(ruby_point).unwrap();
- assert_eq!(language.name().as_ref(), "Ruby");
+ // Test languages_at for Ruby code - should NOT include HTML
+ let ruby_point = Point::new(1, 6);
+ let languages = buffer.languages_at(ruby_point);
+ let language_names: Vec<_> = languages.iter().map(|language| language.name()).collect();
+ assert!(
+ language_names
+ .iter()
+ .any(|language_name| language_name.as_ref() == "Ruby"),
+ "Expected Ruby in languages at {:?}, got {:?}",
+ ruby_point,
+ language_names
+ );
+ assert!(
+ !language_names
+ .iter()
+ .any(|language_name| language_name.as_ref() == "HTML"),
+ "Did not expect HTML in languages at {:?}, got {:?}",
+ ruby_point,
+ language_names
+ );
buffer
});
@@ -414,6 +414,17 @@ impl LanguageRegistry {
state.available_lsp_adapters.contains_key(name)
}
+ /// Returns the names of all available LSP adapters (registered via `register_available_lsp_adapter`).
+ /// These are adapters that are not bound to a specific language but can be enabled via settings.
+ pub fn available_lsp_adapter_names(&self) -> Vec<LanguageServerName> {
+ self.state
+ .read()
+ .available_lsp_adapters
+ .keys()
+ .cloned()
+ .collect()
+ }
+
pub fn register_lsp_adapter(&self, language_name: LanguageName, adapter: Arc<dyn LspAdapter>) {
let mut state = self.state.write();
@@ -18,6 +18,7 @@ anyhow.workspace = true
aws-config = { workspace = true, features = ["behavior-version-latest"] }
aws-credential-types = { workspace = true, features = ["hardcoded-credentials"] }
aws_http_client.workspace = true
+base64.workspace = true
bedrock = { workspace = true, features = ["schemars"] }
chrono.workspace = true
client.workspace = true
@@ -10,5 +10,6 @@ pub mod ollama;
pub mod open_ai;
pub mod open_ai_compatible;
pub mod open_router;
+mod util;
pub mod vercel;
pub mod x_ai;
@@ -24,6 +24,8 @@ use ui::{ButtonLink, ConfiguredApiCard, List, ListBulletItem, prelude::*};
use ui_input::InputField;
use util::ResultExt;
+use crate::provider::util::parse_tool_arguments;
+
pub use settings::AnthropicAvailableModel as AvailableModel;
const PROVIDER_ID: LanguageModelProviderId = language_model::ANTHROPIC_PROVIDER_ID;
@@ -829,12 +831,7 @@ impl AnthropicEventMapper {
Event::ContentBlockStop { index } => {
if let Some(tool_use) = self.tool_uses_by_index.remove(&index) {
let input_json = tool_use.input_json.trim();
- let input_value = if input_json.is_empty() {
- Ok(serde_json::Value::Object(serde_json::Map::default()))
- } else {
- serde_json::Value::from_str(input_json)
- };
- let event_result = match input_value {
+ let event_result = match parse_tool_arguments(input_json) {
Ok(input) => Ok(LanguageModelCompletionEvent::ToolUse(
LanguageModelToolUse {
id: tool_use.id.into(),
@@ -1,5 +1,4 @@
use std::pin::Pin;
-use std::str::FromStr;
use std::sync::Arc;
use anyhow::{Context as _, Result, anyhow};
@@ -14,11 +13,12 @@ use bedrock::bedrock_client::types::{
ReasoningContentBlockDelta, StopReason,
};
use bedrock::{
- BedrockAnyToolChoice, BedrockAutoToolChoice, BedrockBlob, BedrockError, BedrockInnerContent,
- BedrockMessage, BedrockModelMode, BedrockStreamingResponse, BedrockThinkingBlock,
- BedrockThinkingTextBlock, BedrockTool, BedrockToolChoice, BedrockToolConfig,
- BedrockToolInputSchema, BedrockToolResultBlock, BedrockToolResultContentBlock,
- BedrockToolResultStatus, BedrockToolSpec, BedrockToolUseBlock, Model, value_to_aws_document,
+ BedrockAnyToolChoice, BedrockAutoToolChoice, BedrockBlob, BedrockError, BedrockImageBlock,
+ BedrockImageFormat, BedrockImageSource, BedrockInnerContent, BedrockMessage, BedrockModelMode,
+ BedrockStreamingResponse, BedrockThinkingBlock, BedrockThinkingTextBlock, BedrockTool,
+ BedrockToolChoice, BedrockToolConfig, BedrockToolInputSchema, BedrockToolResultBlock,
+ BedrockToolResultContentBlock, BedrockToolResultStatus, BedrockToolSpec, BedrockToolUseBlock,
+ Model, value_to_aws_document,
};
use collections::{BTreeMap, HashMap};
use credentials_provider::CredentialsProvider;
@@ -48,6 +48,7 @@ use ui_input::InputField;
use util::ResultExt;
use crate::AllLanguageModelSettings;
+use crate::provider::util::parse_tool_arguments;
actions!(bedrock, [Tab, TabPrev]);
@@ -111,6 +112,7 @@ pub struct AmazonBedrockSettings {
pub role_arn: Option<String>,
pub authentication_method: Option<BedrockAuthMethod>,
pub allow_global: Option<bool>,
+ pub allow_extended_context: Option<bool>,
}
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, EnumIter, IntoStaticStr, JsonSchema)]
@@ -382,6 +384,13 @@ impl State {
.and_then(|s| s.allow_global)
.unwrap_or(false)
}
+
+ fn get_allow_extended_context(&self) -> bool {
+ self.settings
+ .as_ref()
+ .and_then(|s| s.allow_extended_context)
+ .unwrap_or(false)
+ }
}
pub struct BedrockLanguageModelProvider {
@@ -628,7 +637,7 @@ impl LanguageModel for BedrockModel {
}
fn supports_images(&self) -> bool {
- false
+ self.model.supports_images()
}
fn supports_tool_choice(&self, choice: LanguageModelToolChoice) -> bool {
@@ -672,9 +681,14 @@ impl LanguageModel for BedrockModel {
LanguageModelCompletionError,
>,
> {
- let (region, allow_global) = cx.read_entity(&self.state, |state, _cx| {
- (state.get_region(), state.get_allow_global())
- });
+ let (region, allow_global, allow_extended_context) =
+ cx.read_entity(&self.state, |state, _cx| {
+ (
+ state.get_region(),
+ state.get_allow_global(),
+ state.get_allow_extended_context(),
+ )
+ });
        let model_id = match self.model.cross_region_inference_id(&region, allow_global) {
Ok(s) => s,
@@ -685,6 +699,8 @@ impl LanguageModel for BedrockModel {
let deny_tool_calls = request.tool_choice == Some(LanguageModelToolChoice::None);
+ let use_extended_context = allow_extended_context && self.model.supports_extended_context();
+
let request = match into_bedrock(
request,
model_id,
@@ -692,6 +708,7 @@ impl LanguageModel for BedrockModel {
self.model.max_output_tokens(),
self.model.mode(),
self.model.supports_caching(),
+ use_extended_context,
) {
Ok(request) => request,
Err(err) => return futures::future::ready(Err(err.into())).boxed(),
@@ -747,6 +764,7 @@ pub fn into_bedrock(
max_output_tokens: u64,
mode: BedrockModelMode,
supports_caching: bool,
+ allow_extended_context: bool,
) -> Result<bedrock::Request> {
let mut new_messages: Vec<BedrockMessage> = Vec::new();
let mut system_message = String::new();
@@ -818,7 +836,7 @@ pub fn into_bedrock(
.context("failed to build Bedrock tool use block")
.log_err()
.map(BedrockInnerContent::ToolUse)
- },
+ }
MessageContent::ToolResult(tool_result) => {
BedrockToolResultBlock::builder()
.tool_use_id(tool_result.tool_use_id.to_string())
@@ -826,11 +844,42 @@ pub fn into_bedrock(
LanguageModelToolResultContent::Text(text) => {
BedrockToolResultContentBlock::Text(text.to_string())
}
- LanguageModelToolResultContent::Image(_) => {
- BedrockToolResultContentBlock::Text(
- // TODO: Bedrock image support
- "[Tool responded with an image, but Zed doesn't support these in Bedrock models yet]".to_string()
- )
+ LanguageModelToolResultContent::Image(image) => {
+ use base64::Engine;
+
+ match base64::engine::general_purpose::STANDARD
+ .decode(image.source.as_bytes())
+ {
+ Ok(image_bytes) => {
+ match BedrockImageBlock::builder()
+ .format(BedrockImageFormat::Png)
+ .source(BedrockImageSource::Bytes(
+ BedrockBlob::new(image_bytes),
+ ))
+ .build()
+ {
+ Ok(image_block) => {
+ BedrockToolResultContentBlock::Image(
+ image_block,
+ )
+ }
+ Err(err) => {
+ BedrockToolResultContentBlock::Text(
+ format!(
+ "[Failed to build image block: {}]",
+ err
+ ),
+ )
+ }
+ }
+ }
+ Err(err) => {
+ BedrockToolResultContentBlock::Text(format!(
+ "[Failed to decode tool result image: {}]",
+ err
+ ))
+ }
+ }
}
})
.status({
@@ -845,7 +894,22 @@ pub fn into_bedrock(
.log_err()
.map(BedrockInnerContent::ToolResult)
}
- _ => None,
+ MessageContent::Image(image) => {
+ use base64::Engine;
+
+ let image_bytes = base64::engine::general_purpose::STANDARD
+ .decode(image.source.as_bytes())
+ .context("failed to decode base64 image data")
+ .log_err()?;
+
+ BedrockImageBlock::builder()
+ .format(BedrockImageFormat::Png)
+ .source(BedrockImageSource::Bytes(BedrockBlob::new(image_bytes)))
+ .build()
+ .context("failed to build Bedrock image block")
+ .log_err()
+ .map(BedrockInnerContent::Image)
+ }
})
.collect();
if message.cache && supports_caching {
@@ -955,6 +1019,7 @@ pub fn into_bedrock(
temperature: request.temperature.or(Some(default_temperature)),
top_k: None,
top_p: None,
+ allow_extended_context,
})
}
@@ -1099,12 +1164,8 @@ pub fn map_to_language_model_completion_events(
.tool_uses_by_index
.remove(&cb_stop.content_block_index)
.map(|tool_use| {
- let input = if tool_use.input_json.is_empty() {
- Value::Null
- } else {
- serde_json::Value::from_str(&tool_use.input_json)
- .unwrap_or(Value::Null)
- };
+ let input = parse_tool_arguments(&tool_use.input_json)
+ .unwrap_or_else(|_| Value::Object(Default::default()));
Ok(LanguageModelCompletionEvent::ToolUse(
LanguageModelToolUse {
@@ -30,6 +30,8 @@ use settings::SettingsStore;
use ui::prelude::*;
use util::debug_panic;
+use crate::provider::util::parse_tool_arguments;
+
const PROVIDER_ID: LanguageModelProviderId = LanguageModelProviderId::new("copilot_chat");
const PROVIDER_NAME: LanguageModelProviderName =
LanguageModelProviderName::new("GitHub Copilot Chat");
@@ -493,17 +495,9 @@ pub fn map_to_language_model_completion_events(
}
events.extend(state.tool_calls_by_index.drain().map(
- |(_, tool_call)| {
- // The model can output an empty string
- // to indicate the absence of arguments.
- // When that happens, create an empty
- // object instead.
- let arguments = if tool_call.arguments.is_empty() {
- Ok(serde_json::Value::Object(Default::default()))
- } else {
- serde_json::Value::from_str(&tool_call.arguments)
- };
- match arguments {
+ |(_, tool_call)| match parse_tool_arguments(
+ &tool_call.arguments,
+ ) {
Ok(input) => Ok(LanguageModelCompletionEvent::ToolUse(
LanguageModelToolUse {
id: tool_call.id.into(),
@@ -522,7 +516,6 @@ pub fn map_to_language_model_completion_events(
json_parse_error: error.to_string(),
},
),
- }
},
));
@@ -607,7 +600,7 @@ impl CopilotResponsesEventMapper {
..
} => {
let mut events = Vec::new();
- match serde_json::from_str::<serde_json::Value>(&arguments) {
+ match parse_tool_arguments(&arguments) {
Ok(input) => events.push(Ok(LanguageModelCompletionEvent::ToolUse(
LanguageModelToolUse {
id: call_id.into(),
@@ -16,13 +16,14 @@ use language_model::{
pub use settings::DeepseekAvailableModel as AvailableModel;
use settings::{Settings, SettingsStore};
use std::pin::Pin;
-use std::str::FromStr;
use std::sync::{Arc, LazyLock};
use ui::{ButtonLink, ConfiguredApiCard, List, ListBulletItem, prelude::*};
use ui_input::InputField;
use util::ResultExt;
+use crate::provider::util::parse_tool_arguments;
+
const PROVIDER_ID: LanguageModelProviderId = LanguageModelProviderId::new("deepseek");
const PROVIDER_NAME: LanguageModelProviderName = LanguageModelProviderName::new("DeepSeek");
@@ -486,7 +487,7 @@ impl DeepSeekEventMapper {
}
Some("tool_calls") => {
events.extend(self.tool_calls_by_index.drain().map(|(_, tool_call)| {
- match serde_json::Value::from_str(&tool_call.arguments) {
+ match parse_tool_arguments(&tool_call.arguments) {
Ok(input) => Ok(LanguageModelCompletionEvent::ToolUse(
LanguageModelToolUse {
id: tool_call.id.clone().into(),
@@ -18,12 +18,12 @@ use lmstudio::{ModelType, get_models};
pub use settings::LmStudioAvailableModel as AvailableModel;
use settings::{Settings, SettingsStore};
use std::pin::Pin;
-use std::str::FromStr;
use std::{collections::BTreeMap, sync::Arc};
use ui::{ButtonLike, Indicator, List, ListBulletItem, prelude::*};
use util::ResultExt;
use crate::AllLanguageModelSettings;
+use crate::provider::util::parse_tool_arguments;
const LMSTUDIO_DOWNLOAD_URL: &str = "https://lmstudio.ai/download";
const LMSTUDIO_CATALOG_URL: &str = "https://lmstudio.ai/models";
@@ -558,7 +558,7 @@ impl LmStudioEventMapper {
}
Some("tool_calls") => {
events.extend(self.tool_calls_by_index.drain().map(|(_, tool_call)| {
- match serde_json::Value::from_str(&tool_call.arguments) {
+ match parse_tool_arguments(&tool_call.arguments) {
Ok(input) => Ok(LanguageModelCompletionEvent::ToolUse(
LanguageModelToolUse {
id: tool_call.id.into(),
@@ -16,13 +16,14 @@ pub use settings::MistralAvailableModel as AvailableModel;
use settings::{Settings, SettingsStore};
use std::collections::HashMap;
use std::pin::Pin;
-use std::str::FromStr;
use std::sync::{Arc, LazyLock};
use strum::IntoEnumIterator;
use ui::{ButtonLink, ConfiguredApiCard, List, ListBulletItem, prelude::*};
use ui_input::InputField;
use util::ResultExt;
+use crate::provider::util::parse_tool_arguments;
+
const PROVIDER_ID: LanguageModelProviderId = LanguageModelProviderId::new("mistral");
const PROVIDER_NAME: LanguageModelProviderName = LanguageModelProviderName::new("Mistral");
@@ -404,6 +405,9 @@ pub fn into_mistral(
Role::Assistant => {
for content in &message.content {
match content {
+ MessageContent::Text(text) if text.is_empty() => {
+ // Mistral API returns a 400 if there's neither content nor tool_calls
+ }
MessageContent::Text(text) => {
messages.push(mistral::RequestMessage::Assistant {
content: Some(mistral::MessageContent::Plain {
@@ -659,7 +663,7 @@ impl MistralEventMapper {
continue;
}
- match serde_json::Value::from_str(&tool_call.arguments) {
+ match parse_tool_arguments(&tool_call.arguments) {
Ok(input) => results.push(Ok(LanguageModelCompletionEvent::ToolUse(
LanguageModelToolUse {
id: tool_call.id.into(),
@@ -852,6 +856,13 @@ mod tests {
cache: false,
reasoning_details: None,
},
+ // should skip empty assistant messages
+ LanguageModelRequestMessage {
+ role: Role::Assistant,
+ content: vec![MessageContent::Text("".into())],
+ cache: false,
+ reasoning_details: None,
+ },
],
temperature: Some(0.5),
tools: vec![],
@@ -45,6 +45,7 @@ pub struct OllamaSettings {
pub api_url: String,
pub auto_discover: bool,
pub available_models: Vec<AvailableModel>,
+ pub context_window: Option<u64>,
}
pub struct OllamaLanguageModelProvider {
@@ -246,14 +247,20 @@ impl LanguageModelProvider for OllamaLanguageModelProvider {
let settings = OllamaLanguageModelProvider::settings(cx);
// Add models from the Ollama API
- if settings.auto_discover {
- for model in self.state.read(cx).fetched_models.iter() {
- models.insert(model.name.clone(), model.clone());
+ for model in self.state.read(cx).fetched_models.iter() {
+ let mut model = model.clone();
+ if let Some(context_window) = settings.context_window {
+ model.max_tokens = context_window;
}
+ models.insert(model.name.clone(), model);
}
// Override with available models from settings
- merge_settings_into_models(&mut models, &settings.available_models);
+ merge_settings_into_models(
+ &mut models,
+ &settings.available_models,
+ settings.context_window,
+ );
let mut models = models
.into_values()
@@ -604,6 +611,7 @@ fn map_to_language_model_completion_events(
struct ConfigurationView {
api_key_editor: Entity<InputField>,
api_url_editor: Entity<InputField>,
+ context_window_editor: Entity<InputField>,
state: Entity<State>,
}
@@ -617,6 +625,14 @@ impl ConfigurationView {
input
});
+ let context_window_editor = cx.new(|cx| {
+ let input = InputField::new(window, cx, "8192").label("Context Window");
+ if let Some(context_window) = OllamaLanguageModelProvider::settings(cx).context_window {
+ input.set_text(&context_window.to_string(), window, cx);
+ }
+ input
+ });
+
cx.observe(&state, |_, _, cx| {
cx.notify();
})
@@ -625,6 +641,7 @@ impl ConfigurationView {
Self {
api_key_editor,
api_url_editor,
+ context_window_editor,
state,
}
}
@@ -712,7 +729,57 @@ impl ConfigurationView {
cx.notify();
}
- fn render_instructions(cx: &mut Context<Self>) -> Div {
+ fn save_context_window(&mut self, cx: &mut Context<Self>) {
+ let context_window_str = self
+ .context_window_editor
+ .read(cx)
+ .text(cx)
+ .trim()
+ .to_string();
+ let current_context_window = OllamaLanguageModelProvider::settings(cx).context_window;
+
+ if let Ok(context_window) = context_window_str.parse::<u64>() {
+ if Some(context_window) != current_context_window {
+ let fs = <dyn Fs>::global(cx);
+ update_settings_file(fs, cx, move |settings, _| {
+ settings
+ .language_models
+ .get_or_insert_default()
+ .ollama
+ .get_or_insert_default()
+ .context_window = Some(context_window);
+ });
+ }
+ } else if context_window_str.is_empty() && current_context_window.is_some() {
+ let fs = <dyn Fs>::global(cx);
+ update_settings_file(fs, cx, move |settings, _| {
+ settings
+ .language_models
+ .get_or_insert_default()
+ .ollama
+ .get_or_insert_default()
+ .context_window = None;
+ });
+ }
+ }
+
+ fn reset_context_window(&mut self, window: &mut Window, cx: &mut Context<Self>) {
+ self.context_window_editor
+ .update(cx, |input, cx| input.set_text("", window, cx));
+ let fs = <dyn Fs>::global(cx);
+ update_settings_file(fs, cx, |settings, _cx| {
+ if let Some(settings) = settings
+ .language_models
+ .as_mut()
+ .and_then(|models| models.ollama.as_mut())
+ {
+ settings.context_window = None;
+ }
+ });
+ cx.notify();
+ }
+
+ fn render_instructions(cx: &App) -> Div {
v_flex()
.gap_2()
.child(Label::new(
@@ -774,6 +841,56 @@ impl ConfigurationView {
}
}
+ fn render_context_window_editor(&self, cx: &Context<Self>) -> Div {
+ let settings = OllamaLanguageModelProvider::settings(cx);
+ let custom_context_window_set = settings.context_window.is_some();
+
+ if custom_context_window_set {
+ h_flex()
+ .p_3()
+ .justify_between()
+ .rounded_md()
+ .border_1()
+ .border_color(cx.theme().colors().border)
+ .bg(cx.theme().colors().elevated_surface_background)
+ .child(
+ h_flex()
+ .gap_2()
+ .child(Icon::new(IconName::Check).color(Color::Success))
+ .child(v_flex().gap_1().child(Label::new(format!(
+ "Context Window: {}",
+ settings.context_window.unwrap()
+ )))),
+ )
+ .child(
+ Button::new("reset-context-window", "Reset")
+ .label_size(LabelSize::Small)
+ .icon(IconName::Undo)
+ .icon_size(IconSize::Small)
+ .icon_position(IconPosition::Start)
+ .layer(ElevationIndex::ModalSurface)
+ .on_click(
+ cx.listener(|this, _, window, cx| {
+ this.reset_context_window(window, cx)
+ }),
+ ),
+ )
+ } else {
+ v_flex()
+ .on_action(
+ cx.listener(|this, _: &menu::Confirm, _window, cx| {
+ this.save_context_window(cx)
+ }),
+ )
+ .child(self.context_window_editor.clone())
+ .child(
+ Label::new("Default: Model specific")
+ .size(LabelSize::Small)
+ .color(Color::Muted),
+ )
+ }
+ }
+
fn render_api_url_editor(&self, cx: &Context<Self>) -> Div {
let api_url = OllamaLanguageModelProvider::api_url(cx);
let custom_api_url_set = api_url != OLLAMA_API_URL;
@@ -823,6 +940,7 @@ impl Render for ConfigurationView {
.gap_2()
.child(Self::render_instructions(cx))
.child(self.render_api_url_editor(cx))
+ .child(self.render_context_window_editor(cx))
.child(self.render_api_key_editor(cx))
.child(
h_flex()
@@ -910,10 +1028,13 @@ impl Render for ConfigurationView {
fn merge_settings_into_models(
models: &mut HashMap<String, ollama::Model>,
available_models: &[AvailableModel],
+ context_window: Option<u64>,
) {
for setting_model in available_models {
if let Some(model) = models.get_mut(&setting_model.name) {
- model.max_tokens = setting_model.max_tokens;
+ if context_window.is_none() {
+ model.max_tokens = setting_model.max_tokens;
+ }
model.display_name = setting_model.display_name.clone();
model.keep_alive = setting_model.keep_alive.clone();
model.supports_tools = setting_model.supports_tools;
@@ -925,7 +1046,7 @@ fn merge_settings_into_models(
ollama::Model {
name: setting_model.name.clone(),
display_name: setting_model.display_name.clone(),
- max_tokens: setting_model.max_tokens,
+ max_tokens: context_window.unwrap_or(setting_model.max_tokens),
keep_alive: setting_model.keep_alive.clone(),
supports_tools: setting_model.supports_tools,
supports_vision: setting_model.supports_images,
@@ -1003,7 +1124,7 @@ mod tests {
},
];
- merge_settings_into_models(&mut models, &available_models);
+ merge_settings_into_models(&mut models, &available_models, None);
let model_1_5b = models
.get("qwen2.5-coder:1.5b")
@@ -28,13 +28,14 @@ use open_ai::{
};
use settings::{OpenAiAvailableModel as AvailableModel, Settings, SettingsStore};
use std::pin::Pin;
-use std::str::FromStr as _;
use std::sync::{Arc, LazyLock};
use strum::IntoEnumIterator;
use ui::{ButtonLink, ConfiguredApiCard, List, ListBulletItem, prelude::*};
use ui_input::InputField;
use util::ResultExt;
+use crate::provider::util::parse_tool_arguments;
+
const PROVIDER_ID: LanguageModelProviderId = language_model::OPEN_AI_PROVIDER_ID;
const PROVIDER_NAME: LanguageModelProviderName = language_model::OPEN_AI_PROVIDER_NAME;
@@ -299,10 +300,7 @@ impl LanguageModel for OpenAiLanguageModel {
fn supports_images(&self) -> bool {
use open_ai::Model;
match &self.model {
- Model::FourOmni
- | Model::FourOmniMini
- | Model::FourPointOne
- | Model::FourPointOneMini
+ Model::FourOmniMini
| Model::FourPointOneNano
| Model::Five
| Model::FiveCodex
@@ -312,8 +310,7 @@ impl LanguageModel for OpenAiLanguageModel {
| Model::FivePointTwo
| Model::FivePointTwoCodex
| Model::O1
- | Model::O3
- | Model::O4Mini => true,
+ | Model::O3 => true,
Model::ThreePointFiveTurbo
| Model::Four
| Model::FourTurbo
@@ -831,7 +828,7 @@ impl OpenAiEventMapper {
}
Some("tool_calls") => {
events.extend(self.tool_calls_by_index.drain().map(|(_, tool_call)| {
- match serde_json::Value::from_str(&tool_call.arguments) {
+ match parse_tool_arguments(&tool_call.arguments) {
Ok(input) => Ok(LanguageModelCompletionEvent::ToolUse(
LanguageModelToolUse {
id: tool_call.id.clone().into(),
@@ -963,7 +960,7 @@ impl OpenAiResponseEventMapper {
}
let raw_input = entry.arguments.clone();
self.pending_stop_reason = Some(StopReason::ToolUse);
- match serde_json::from_str::<serde_json::Value>(&entry.arguments) {
+ match parse_tool_arguments(&entry.arguments) {
Ok(input) => {
vec![Ok(LanguageModelCompletionEvent::ToolUse(
LanguageModelToolUse {
@@ -1087,29 +1084,27 @@ impl OpenAiResponseEventMapper {
};
let name: Arc<str> = Arc::from(function_call.name.clone().unwrap_or_default());
let arguments = &function_call.arguments;
- if !arguments.is_empty() {
- self.pending_stop_reason = Some(StopReason::ToolUse);
- match serde_json::from_str::<serde_json::Value>(arguments) {
- Ok(input) => {
- events.push(Ok(LanguageModelCompletionEvent::ToolUse(
- LanguageModelToolUse {
- id: LanguageModelToolUseId::from(call_id.clone()),
- name: name.clone(),
- is_input_complete: true,
- input,
- raw_input: arguments.clone(),
- thought_signature: None,
- },
- )));
- }
- Err(error) => {
- events.push(Ok(LanguageModelCompletionEvent::ToolUseJsonParseError {
+ self.pending_stop_reason = Some(StopReason::ToolUse);
+ match parse_tool_arguments(arguments) {
+ Ok(input) => {
+ events.push(Ok(LanguageModelCompletionEvent::ToolUse(
+ LanguageModelToolUse {
id: LanguageModelToolUseId::from(call_id.clone()),
- tool_name: name.clone(),
- raw_input: Arc::<str>::from(arguments.clone()),
- json_parse_error: error.to_string(),
- }));
- }
+ name: name.clone(),
+ is_input_complete: true,
+ input,
+ raw_input: arguments.clone(),
+ thought_signature: None,
+ },
+ )));
+ }
+ Err(error) => {
+ events.push(Ok(LanguageModelCompletionEvent::ToolUseJsonParseError {
+ id: LanguageModelToolUseId::from(call_id.clone()),
+ tool_name: name.clone(),
+ raw_input: Arc::<str>::from(arguments.clone()),
+ json_parse_error: error.to_string(),
+ }));
}
}
}
@@ -1156,7 +1151,7 @@ pub fn count_open_ai_tokens(
match model {
Model::Custom { max_tokens, .. } => {
let model = if max_tokens >= 100_000 {
- // If the max tokens is 100k or more, it is likely the o200k_base tokenizer from gpt4o
+ // If the max tokens is 100k or more, it likely uses the o200k_base tokenizer
"gpt-4o"
} else {
// Otherwise fallback to gpt-4, since only cl100k_base and o200k_base are
@@ -1172,15 +1167,11 @@ pub fn count_open_ai_tokens(
Model::ThreePointFiveTurbo
| Model::Four
| Model::FourTurbo
- | Model::FourOmni
| Model::FourOmniMini
- | Model::FourPointOne
- | Model::FourPointOneMini
| Model::FourPointOneNano
| Model::O1
| Model::O3
| Model::O3Mini
- | Model::O4Mini
| Model::Five
| Model::FiveCodex
| Model::FiveMini
@@ -1928,4 +1919,49 @@ mod tests {
LanguageModelCompletionEvent::Stop(StopReason::MaxTokens)
));
}
+
+ #[test]
+ fn responses_stream_handles_empty_tool_arguments() {
+ // Test that tools with no arguments (empty string) are handled correctly
+ let events = vec![
+ ResponsesStreamEvent::OutputItemAdded {
+ output_index: 0,
+ sequence_number: None,
+ item: response_item_function_call("item_fn", Some("")),
+ },
+ ResponsesStreamEvent::FunctionCallArgumentsDone {
+ item_id: "item_fn".into(),
+ output_index: 0,
+ arguments: "".into(),
+ sequence_number: None,
+ },
+ ResponsesStreamEvent::Completed {
+ response: ResponseSummary::default(),
+ },
+ ];
+
+ let mapped = map_response_events(events);
+ assert_eq!(mapped.len(), 2);
+
+ // Should produce a ToolUse event with an empty object
+ assert!(matches!(
+ &mapped[0],
+ LanguageModelCompletionEvent::ToolUse(LanguageModelToolUse {
+ id,
+ name,
+ raw_input,
+ input,
+ ..
+ }) if id.to_string() == "call_123"
+ && name.as_ref() == "get_weather"
+ && raw_input == ""
+ && input.is_object()
+ && input.as_object().unwrap().is_empty()
+ ));
+
+ assert!(matches!(
+ mapped[1],
+ LanguageModelCompletionEvent::Stop(StopReason::ToolUse)
+ ));
+ }
}
@@ -16,12 +16,13 @@ use open_router::{
};
use settings::{OpenRouterAvailableModel as AvailableModel, Settings, SettingsStore};
use std::pin::Pin;
-use std::str::FromStr as _;
use std::sync::{Arc, LazyLock};
use ui::{ButtonLink, ConfiguredApiCard, List, ListBulletItem, prelude::*};
use ui_input::InputField;
use util::ResultExt;
+use crate::provider::util::parse_tool_arguments;
+
const PROVIDER_ID: LanguageModelProviderId = LanguageModelProviderId::new("openrouter");
const PROVIDER_NAME: LanguageModelProviderName = LanguageModelProviderName::new("OpenRouter");
@@ -189,7 +190,7 @@ impl LanguageModelProvider for OpenRouterLanguageModelProvider {
}
fn default_fast_model(&self, _cx: &App) -> Option<Arc<dyn LanguageModel>> {
- Some(self.create_language_model(open_router::Model::default_fast()))
+ None
}
fn provided_models(&self, cx: &App) -> Vec<Arc<dyn LanguageModel>> {
@@ -657,7 +658,7 @@ impl OpenRouterEventMapper {
}
Some("tool_calls") => {
events.extend(self.tool_calls_by_index.drain().map(|(_, tool_call)| {
- match serde_json::Value::from_str(&tool_call.arguments) {
+ match parse_tool_arguments(&tool_call.arguments) {
Ok(input) => Ok(LanguageModelCompletionEvent::ToolUse(
LanguageModelToolUse {
id: tool_call.id.clone().into(),
@@ -0,0 +1,13 @@
+use std::str::FromStr;
+
+/// Parses tool call arguments JSON, treating empty strings as empty objects.
+///
+/// Many LLM providers return empty strings for tool calls with no arguments.
+/// This helper normalizes that behavior by converting empty strings to `{}`.
+pub fn parse_tool_arguments(arguments: &str) -> Result<serde_json::Value, serde_json::Error> {
+ if arguments.is_empty() {
+ Ok(serde_json::Value::Object(Default::default()))
+ } else {
+ serde_json::Value::from_str(arguments)
+ }
+}
@@ -59,6 +59,7 @@ impl settings::Settings for AllLanguageModelSettings {
role_arn: None, // todo(was never a setting for this...)
authentication_method: bedrock.authentication_method.map(Into::into),
allow_global: bedrock.allow_global,
+ allow_extended_context: bedrock.allow_extended_context,
},
deepseek: DeepSeekSettings {
api_url: deepseek.api_url.unwrap(),
@@ -80,6 +81,7 @@ impl settings::Settings for AllLanguageModelSettings {
api_url: ollama.api_url.unwrap(),
auto_discover: ollama.auto_discover.unwrap_or(true),
available_models: ollama.available_models.unwrap_or_default(),
+ context_window: ollama.context_window,
},
open_router: OpenRouterSettings {
api_url: open_router.api_url.unwrap(),
@@ -125,7 +125,11 @@ impl ProcessMemoryCache {
fn is_descendant_of(&self, pid: Pid, root_pid: Pid, parent_map: &HashMap<Pid, Pid>) -> bool {
let mut current = pid;
+ let mut visited = HashSet::default();
while current != root_pid {
+ if !visited.insert(current) {
+ return false;
+ }
            match parent_map.get(&current) {
Some(&parent) => current = parent,
None => return false,
@@ -11,6 +11,7 @@ pub(super) fn bash_task_context() -> ContextProviderWithTasks {
TaskTemplate {
label: format!("run '{}'", VariableName::File.template_value()),
command: VariableName::File.template_value(),
+ tags: vec!["bash-script".to_owned()],
..TaskTemplate::default()
},
]))
@@ -0,0 +1,5 @@
+; Run bash scripts
+(
+ (program . (_) @run) @_bash-script
+ (#set! tag bash-script)
+)
@@ -35,6 +35,7 @@ urlencoding.workspace = true
util.workspace = true
workspace.workspace = true
zed_actions.workspace = true
+mermaid-rs-renderer.workspace = true
[dev-dependencies]
editor = { workspace = true, features = ["test-support"] }
@@ -14,6 +14,7 @@ pub enum ParsedMarkdownElement {
Table(ParsedMarkdownTable),
BlockQuote(ParsedMarkdownBlockQuote),
CodeBlock(ParsedMarkdownCodeBlock),
+ MermaidDiagram(ParsedMarkdownMermaidDiagram),
/// A paragraph of text and other inline elements.
Paragraph(MarkdownParagraph),
HorizontalRule(Range<usize>),
@@ -28,6 +29,7 @@ impl ParsedMarkdownElement {
Self::Table(table) => table.source_range.clone(),
Self::BlockQuote(block_quote) => block_quote.source_range.clone(),
Self::CodeBlock(code_block) => code_block.source_range.clone(),
+ Self::MermaidDiagram(mermaid) => mermaid.source_range.clone(),
Self::Paragraph(text) => match text.get(0)? {
MarkdownParagraphChunk::Text(t) => t.source_range.clone(),
MarkdownParagraphChunk::Image(image) => image.source_range.clone(),
@@ -86,6 +88,19 @@ pub struct ParsedMarkdownCodeBlock {
pub highlights: Option<Vec<(Range<usize>, HighlightId)>>,
}
+#[derive(Debug)]
+#[cfg_attr(test, derive(PartialEq))]
+pub struct ParsedMarkdownMermaidDiagram {
+ pub source_range: Range<usize>,
+ pub contents: ParsedMarkdownMermaidDiagramContents,
+}
+
+#[derive(Clone, Debug, PartialEq, Eq, Hash)]
+pub struct ParsedMarkdownMermaidDiagramContents {
+ pub contents: SharedString,
+ pub scale: u32,
+}
+
#[derive(Debug)]
#[cfg_attr(test, derive(PartialEq))]
pub struct ParsedMarkdownHeading {
@@ -196,21 +196,29 @@ impl<'a> MarkdownParser<'a> {
Some(vec![ParsedMarkdownElement::BlockQuote(block_quote)])
}
Tag::CodeBlock(kind) => {
- let language = match kind {
- pulldown_cmark::CodeBlockKind::Indented => None,
+ let (language, scale) = match kind {
+ pulldown_cmark::CodeBlockKind::Indented => (None, None),
pulldown_cmark::CodeBlockKind::Fenced(language) => {
if language.is_empty() {
- None
+ (None, None)
} else {
- Some(language.to_string())
+ // The fence info string may carry an optional scale after the language
+ // tag, e.g. a fence opened with "mermaid 150".
+ let parts: Vec<&str> = language.split_whitespace().collect();
+ let lang = parts.first().map(|s| s.to_string());
+ let scale = parts.get(1).and_then(|s| s.parse::<u32>().ok());
+ (lang, scale)
}
}
};
self.cursor += 1;
- let code_block = self.parse_code_block(language).await?;
- Some(vec![ParsedMarkdownElement::CodeBlock(code_block)])
+ if language.as_deref() == Some("mermaid") {
+ let mermaid_diagram = self.parse_mermaid_diagram(scale).await?;
+ Some(vec![ParsedMarkdownElement::MermaidDiagram(mermaid_diagram)])
+ } else {
+ let code_block = self.parse_code_block(language).await?;
+ Some(vec![ParsedMarkdownElement::CodeBlock(code_block)])
+ }
}
Tag::HtmlBlock => {
self.cursor += 1;
@@ -806,6 +814,50 @@ impl<'a> MarkdownParser<'a> {
})
}
+ async fn parse_mermaid_diagram(
+ &mut self,
+ scale: Option<u32>,
+ ) -> Option<ParsedMarkdownMermaidDiagram> {
+ let Some((_event, source_range)) = self.previous() else {
+ return None;
+ };
+
+ let source_range = source_range.clone();
+ let mut code = String::new();
+
+ while !self.eof() {
+ let Some((current, _source_range)) = self.current() else {
+ break;
+ };
+
+ match current {
+ Event::Text(text) => {
+ code.push_str(text);
+ self.cursor += 1;
+ }
+ Event::End(TagEnd::CodeBlock) => {
+ self.cursor += 1;
+ break;
+ }
+ _ => {
+ break;
+ }
+ }
+ }
+
+ // Trim at most one trailing newline from the collected source, without
+ // reallocating the string.
+ if code.ends_with('\n') {
+ code.pop();
+ }
+
+ // Scale is a percentage; default to 100% and clamp to a sane range.
+ let scale = scale.unwrap_or(100).clamp(10, 500);
+
+ Some(ParsedMarkdownMermaidDiagram {
+ source_range,
+ contents: ParsedMarkdownMermaidDiagramContents {
+ contents: code.into(),
+ scale,
+ },
+ })
+ }
+
async fn parse_html_block(&mut self) -> Vec<ParsedMarkdownElement> {
let mut elements = Vec::new();
let Some((_event, _source_range)) = self.previous() else {
@@ -19,7 +19,7 @@ use workspace::item::{Item, ItemHandle};
use workspace::{Pane, Workspace};
use crate::markdown_elements::ParsedMarkdownElement;
-use crate::markdown_renderer::CheckboxClickedEvent;
+use crate::markdown_renderer::{CheckboxClickedEvent, MermaidState};
use crate::{
OpenFollowingPreview, OpenPreview, OpenPreviewToTheSide, ScrollPageDown, ScrollPageUp,
markdown_elements::ParsedMarkdown,
@@ -39,6 +39,7 @@ pub struct MarkdownPreviewView {
selected_block: usize,
list_state: ListState,
language_registry: Arc<LanguageRegistry>,
+ mermaid_state: MermaidState,
parsing_markdown_task: Option<Task<Result<()>>>,
mode: MarkdownPreviewMode,
}
@@ -214,6 +215,7 @@ impl MarkdownPreviewView {
contents: None,
list_state,
language_registry,
+ mermaid_state: Default::default(),
parsing_markdown_task: None,
image_cache: RetainAllImageCache::new(cx),
mode,
@@ -345,7 +347,9 @@ impl MarkdownPreviewView {
parse_markdown(&contents, file_location, Some(language_registry)).await
});
let contents = parsing_task.await;
+
view.update(cx, move |view, cx| {
+ view.mermaid_state.update(&contents, cx);
let markdown_blocks_count = contents.children.len();
view.contents = Some(contents);
let scroll_top = view.list_state.logical_scroll_top();
@@ -571,39 +575,35 @@ impl Render for MarkdownPreviewView {
return div().into_any();
};
- let mut render_cx =
- RenderContext::new(Some(this.workspace.clone()), window, cx)
- .with_checkbox_clicked_callback(cx.listener(
- move |this, e: &CheckboxClickedEvent, window, cx| {
- if let Some(editor) = this
- .active_editor
- .as_ref()
- .map(|s| s.editor.clone())
- {
- editor.update(cx, |editor, cx| {
- let task_marker =
- if e.checked() { "[x]" } else { "[ ]" };
-
- editor.edit(
- [(
- MultiBufferOffset(
- e.source_range().start,
- )
- ..MultiBufferOffset(
- e.source_range().end,
- ),
- task_marker,
- )],
- cx,
- );
- });
- this.parse_markdown_from_active_editor(
- false, window, cx,
- );
- cx.notify();
- }
- },
- ));
+ let mut render_cx = RenderContext::new(
+ Some(this.workspace.clone()),
+ &this.mermaid_state,
+ window,
+ cx,
+ )
+ .with_checkbox_clicked_callback(cx.listener(
+ move |this, e: &CheckboxClickedEvent, window, cx| {
+ if let Some(editor) =
+ this.active_editor.as_ref().map(|s| s.editor.clone())
+ {
+ editor.update(cx, |editor, cx| {
+ let task_marker =
+ if e.checked() { "[x]" } else { "[ ]" };
+
+ editor.edit(
+ [(
+ MultiBufferOffset(e.source_range().start)
+ ..MultiBufferOffset(e.source_range().end),
+ task_marker,
+ )],
+ cx,
+ );
+ });
+ this.parse_markdown_from_active_editor(false, window, cx);
+ cx.notify();
+ }
+ },
+ ));
let block = contents.children.get(ix).unwrap();
let rendered_block = render_markdown_block(block, &mut render_cx);
@@ -613,6 +613,8 @@ impl Render for MarkdownPreviewView {
contents.children.get(ix + 1),
);
+ let selected_block = this.selected_block;
+ let scaled_rems = render_cx.scaled_rems(1.0);
div()
.id(ix)
.when(should_apply_padding, |this| {
@@ -643,11 +645,11 @@ impl Render for MarkdownPreviewView {
let indicator = div()
.h_full()
.w(px(4.0))
- .when(ix == this.selected_block, |this| {
+ .when(ix == selected_block, |this| {
this.bg(cx.theme().colors().border)
})
.group_hover("markdown-block", |s| {
- if ix == this.selected_block {
+ if ix == selected_block {
s
} else {
s.bg(cx.theme().colors().border_variant)
@@ -658,11 +660,7 @@ impl Render for MarkdownPreviewView {
container.child(
div()
.relative()
- .child(
- div()
- .pl(render_cx.scaled_rems(1.0))
- .child(rendered_block),
- )
+ .child(div().pl(scaled_rems).child(rendered_block))
.child(indicator.absolute().left_0().top_0()),
)
})
@@ -1,20 +1,26 @@
-use crate::markdown_elements::{
- HeadingLevel, Image, Link, MarkdownParagraph, MarkdownParagraphChunk, ParsedMarkdown,
- ParsedMarkdownBlockQuote, ParsedMarkdownCodeBlock, ParsedMarkdownElement,
- ParsedMarkdownHeading, ParsedMarkdownListItem, ParsedMarkdownListItemType, ParsedMarkdownTable,
- ParsedMarkdownTableAlignment, ParsedMarkdownTableRow,
+use crate::{
+ markdown_elements::{
+ HeadingLevel, Image, Link, MarkdownParagraph, MarkdownParagraphChunk, ParsedMarkdown,
+ ParsedMarkdownBlockQuote, ParsedMarkdownCodeBlock, ParsedMarkdownElement,
+ ParsedMarkdownHeading, ParsedMarkdownListItem, ParsedMarkdownListItemType,
+ ParsedMarkdownMermaidDiagram, ParsedMarkdownMermaidDiagramContents, ParsedMarkdownTable,
+ ParsedMarkdownTableAlignment, ParsedMarkdownTableRow,
+ },
+ markdown_preview_view::MarkdownPreviewView,
};
+use collections::HashMap;
use fs::normalize_path;
use gpui::{
- AbsoluteLength, AnyElement, App, AppContext as _, Context, Div, Element, ElementId, Entity,
- HighlightStyle, Hsla, ImageSource, InteractiveText, IntoElement, Keystroke, Modifiers,
- ParentElement, Render, Resource, SharedString, Styled, StyledText, TextStyle, WeakEntity,
- Window, div, img, rems,
+ AbsoluteLength, Animation, AnimationExt, AnyElement, App, AppContext as _, Context, Div,
+ Element, ElementId, Entity, HighlightStyle, Hsla, ImageSource, InteractiveText, IntoElement,
+ Keystroke, Modifiers, ParentElement, Render, RenderImage, Resource, SharedString, Styled,
+ StyledText, Task, TextStyle, WeakEntity, Window, div, img, pulsating_between, rems,
};
use settings::Settings;
use std::{
ops::{Mul, Range},
- sync::Arc,
+ sync::{Arc, OnceLock},
+ time::Duration,
vec,
};
use theme::{ActiveTheme, SyntaxTheme, ThemeSettings};
@@ -38,8 +44,134 @@ impl CheckboxClickedEvent {
type CheckboxClickedCallback = Arc<Box<dyn Fn(&CheckboxClickedEvent, &mut Window, &mut App)>>;
+type MermaidDiagramCache = HashMap<ParsedMarkdownMermaidDiagramContents, CachedMermaidDiagram>;
+
+/// Caches rendered Mermaid diagrams across reparses, keyed by diagram
+/// contents, and tracks their document order so an edited diagram can keep
+/// showing its previous render while the replacement is still rendering.
+#[derive(Default)]
+pub(crate) struct MermaidState {
+ cache: MermaidDiagramCache,
+ order: Vec<ParsedMarkdownMermaidDiagramContents>,
+}
+
+impl MermaidState {
+ fn get_fallback_image(
+ idx: usize,
+ old_order: &[ParsedMarkdownMermaidDiagramContents],
+ new_order_len: usize,
+ cache: &MermaidDiagramCache,
+ ) -> Option<Arc<RenderImage>> {
+ // When the diagram count changes (e.g. an addition or removal), positional
+ // matching is unreliable: a new diagram at index i likely doesn't correspond
+ // to the old diagram at index i. We only allow fallbacks when the counts
+ // match, which covers the common case of editing a diagram in place.
+ //
+ // Swapping two diagrams would briefly show the stale fallback, but that's an edge
+ // case we don't handle.
+ if old_order.len() != new_order_len {
+ return None;
+ }
+ old_order.get(idx).and_then(|old_content| {
+ cache.get(old_content).and_then(|old_cached| {
+ old_cached
+ .render_image
+ .get()
+ .and_then(|result| result.as_ref().ok().cloned())
+ // Chain fallbacks for rapid edits.
+ .or_else(|| old_cached.fallback_image.clone())
+ })
+ })
+ }
+
+ pub(crate) fn update(
+ &mut self,
+ parsed: &ParsedMarkdown,
+ cx: &mut Context<MarkdownPreviewView>,
+ ) {
+ use std::collections::HashSet;
+
+ let mut new_order = Vec::new();
+ for element in parsed.children.iter() {
+ if let ParsedMarkdownElement::MermaidDiagram(mermaid_diagram) = element {
+ new_order.push(mermaid_diagram.contents.clone());
+ }
+ }
+
+ for (idx, new_content) in new_order.iter().enumerate() {
+ if !self.cache.contains_key(new_content) {
+ let fallback =
+ Self::get_fallback_image(idx, &self.order, new_order.len(), &self.cache);
+ self.cache.insert(
+ new_content.clone(),
+ CachedMermaidDiagram::new(new_content.clone(), fallback, cx),
+ );
+ }
+ }
+
+ let new_order_set: HashSet<_> = new_order.iter().cloned().collect();
+ self.cache
+ .retain(|content, _| new_order_set.contains(content));
+ self.order = new_order;
+ }
+}
+
+pub(crate) struct CachedMermaidDiagram {
+ pub(crate) render_image: Arc<OnceLock<anyhow::Result<Arc<RenderImage>>>>,
+ pub(crate) fallback_image: Option<Arc<RenderImage>>,
+ _task: Task<()>,
+}
+
+impl CachedMermaidDiagram {
+ pub(crate) fn new(
+ contents: ParsedMarkdownMermaidDiagramContents,
+ fallback_image: Option<Arc<RenderImage>>,
+ cx: &mut Context<MarkdownPreviewView>,
+ ) -> Self {
+ let result = Arc::new(OnceLock::<anyhow::Result<Arc<RenderImage>>>::new());
+ let result_clone = result.clone();
+ let svg_renderer = cx.svg_renderer();
+
+ let _task = cx.spawn(async move |this, cx| {
+ let value = cx
+ .background_spawn(async move {
+ let svg_string = mermaid_rs_renderer::render(&contents.contents)?;
+ let scale = contents.scale as f32 / 100.0;
+ svg_renderer
+ .render_single_frame(svg_string.as_bytes(), scale, true)
+ .map_err(|e| anyhow::anyhow!("{}", e))
+ })
+ .await;
+ let _ = result_clone.set(value);
+ this.update(cx, |_, cx| {
+ cx.notify();
+ })
+ .ok();
+ });
+
+ Self {
+ render_image: result,
+ fallback_image,
+ _task,
+ }
+ }
+
+ #[cfg(test)]
+ fn new_for_test(
+ render_image: Option<Arc<RenderImage>>,
+ fallback_image: Option<Arc<RenderImage>>,
+ ) -> Self {
+ let result = Arc::new(OnceLock::new());
+ if let Some(img) = render_image {
+ let _ = result.set(Ok(img));
+ }
+ Self {
+ render_image: result,
+ fallback_image,
+ _task: Task::ready(()),
+ }
+ }
+}
#[derive(Clone)]
-pub struct RenderContext {
+pub struct RenderContext<'a> {
workspace: Option<WeakEntity<Workspace>>,
next_id: usize,
buffer_font_family: SharedString,
@@ -58,14 +190,16 @@ pub struct RenderContext {
indent: usize,
checkbox_clicked_callback: Option<CheckboxClickedCallback>,
is_last_child: bool,
+ mermaid_state: &'a MermaidState,
}
-impl RenderContext {
- pub fn new(
+impl<'a> RenderContext<'a> {
+ pub(crate) fn new(
workspace: Option<WeakEntity<Workspace>>,
+ mermaid_state: &'a MermaidState,
window: &mut Window,
cx: &mut App,
- ) -> RenderContext {
+ ) -> Self {
let theme = cx.theme().clone();
let settings = ThemeSettings::get_global(cx);
@@ -95,6 +229,7 @@ impl RenderContext {
code_span_background_color: theme.colors().editor_document_highlight_read_background,
checkbox_clicked_callback: None,
is_last_child: false,
+ mermaid_state,
}
}
@@ -163,7 +298,8 @@ pub fn render_parsed_markdown(
window: &mut Window,
cx: &mut App,
) -> Div {
- let mut cx = RenderContext::new(workspace, window, cx);
+ let mermaid_state = MermaidState::default();
+ let mut cx = RenderContext::new(workspace, &mermaid_state, window, cx);
v_flex().gap_3().children(
parsed
@@ -181,6 +317,7 @@ pub fn render_markdown_block(block: &ParsedMarkdownElement, cx: &mut RenderConte
Table(table) => render_markdown_table(table, cx),
BlockQuote(block_quote) => render_markdown_block_quote(block_quote, cx),
CodeBlock(code_block) => render_markdown_code_block(code_block, cx),
+ MermaidDiagram(mermaid) => render_mermaid_diagram(mermaid, cx),
HorizontalRule(_) => render_markdown_rule(cx),
Image(image) => render_markdown_image(image, cx),
}
@@ -320,7 +457,7 @@ struct MarkdownCheckbox {
style: ui::ToggleStyle,
tooltip: Option<Box<dyn Fn(&mut Window, &mut App) -> gpui::AnyView>>,
label: Option<SharedString>,
- render_cx: RenderContext,
+ base_rem: Rems,
}
impl MarkdownCheckbox {
@@ -336,7 +473,7 @@ impl MarkdownCheckbox {
tooltip: None,
label: None,
placeholder: false,
- render_cx,
+ base_rem: render_cx.scaled_rems(1.0),
}
}
@@ -379,7 +516,7 @@ impl gpui::RenderOnce for MarkdownCheckbox {
} else {
Color::Selected
};
- let icon_size_small = IconSize::Custom(self.render_cx.scaled_rems(14. / 16.)); // was IconSize::Small
+ let icon_size_small = IconSize::Custom(self.base_rem.mul(14. / 16.)); // was IconSize::Small
let icon = match self.toggle_state {
ToggleState::Selected => {
if self.placeholder {
@@ -404,7 +541,7 @@ impl gpui::RenderOnce for MarkdownCheckbox {
let border_color = self.border_color(cx);
let hover_border_color = border_color.alpha(0.7);
- let size = self.render_cx.scaled_rems(1.25); // was Self::container_size(); (20px)
+ let size = self.base_rem.mul(1.25); // was Self::container_size(); (20px)
let checkbox = h_flex()
.id(self.id.clone())
@@ -418,9 +555,9 @@ impl gpui::RenderOnce for MarkdownCheckbox {
.flex_none()
.justify_center()
.items_center()
- .m(self.render_cx.scaled_rems(0.25)) // was .m_1
- .size(self.render_cx.scaled_rems(1.0)) // was .size_4
- .rounded(self.render_cx.scaled_rems(0.125)) // was .rounded_xs
+ .m(self.base_rem.mul(0.25)) // was .m_1
+ .size(self.base_rem.mul(1.0)) // was .size_4
+ .rounded(self.base_rem.mul(0.125)) // was .rounded_xs
.border_1()
.bg(bg_color)
.border_color(border_color)
@@ -437,7 +574,7 @@ impl gpui::RenderOnce for MarkdownCheckbox {
.flex_none()
.rounded_full()
.bg(color.color(cx).alpha(0.5))
- .size(self.render_cx.scaled_rems(0.25)), // was .size_1
+ .size(self.base_rem.mul(0.25)), // was .size_1
)
})
.children(icon),
@@ -651,6 +788,89 @@ fn render_markdown_code_block(
.into_any()
}
+fn render_mermaid_diagram(
+ parsed: &ParsedMarkdownMermaidDiagram,
+ cx: &mut RenderContext,
+) -> AnyElement {
+ let cached = cx.mermaid_state.cache.get(&parsed.contents);
+
+ if let Some(result) = cached.and_then(|c| c.render_image.get()) {
+ match result {
+ Ok(render_image) => cx
+ .with_common_p(div())
+ .px_3()
+ .py_3()
+ .bg(cx.code_block_background_color)
+ .rounded_sm()
+ .child(
+ div().w_full().child(
+ img(ImageSource::Render(render_image.clone()))
+ .max_w_full()
+ .with_fallback(|| {
+ div()
+ .child(Label::new("Failed to load Mermaid diagram"))
+ .into_any_element()
+ }),
+ ),
+ )
+ .into_any(),
+ Err(_) => cx
+ .with_common_p(div())
+ .px_3()
+ .py_3()
+ .bg(cx.code_block_background_color)
+ .rounded_sm()
+ .child(StyledText::new(parsed.contents.contents.clone()))
+ .into_any(),
+ }
+ } else if let Some(fallback) = cached.and_then(|c| c.fallback_image.as_ref()) {
+ cx.with_common_p(div())
+ .px_3()
+ .py_3()
+ .bg(cx.code_block_background_color)
+ .rounded_sm()
+ .child(
+ div()
+ .w_full()
+ .child(
+ img(ImageSource::Render(fallback.clone()))
+ .max_w_full()
+ .with_fallback(|| {
+ div()
+ .child(Label::new("Failed to load Mermaid diagram"))
+ .into_any_element()
+ }),
+ )
+ .with_animation(
+ "mermaid-fallback-pulse",
+ Animation::new(Duration::from_secs(2))
+ .repeat()
+ .with_easing(pulsating_between(0.6, 1.0)),
+ |el, delta| el.opacity(delta),
+ ),
+ )
+ .into_any()
+ } else {
+ cx.with_common_p(div())
+ .px_3()
+ .py_3()
+ .bg(cx.code_block_background_color)
+ .rounded_sm()
+ .child(
+ Label::new("Rendering Mermaid diagram...")
+ .color(Color::Muted)
+ .with_animation(
+ "mermaid-loading-pulse",
+ Animation::new(Duration::from_secs(2))
+ .repeat()
+ .with_easing(pulsating_between(0.4, 0.8)),
+ |label, delta| label.alpha(delta),
+ ),
+ )
+ .into_any()
+ }
+}
+
fn render_markdown_paragraph(parsed: &MarkdownParagraph, cx: &mut RenderContext) -> AnyElement {
cx.with_common_p(div())
.children(render_markdown_text(parsed, cx))
@@ -917,6 +1137,7 @@ fn list_item_prefix(order: usize, ordered: bool, depth: usize) -> String {
#[cfg(test)]
mod tests {
use super::*;
+ use crate::markdown_elements::ParsedMarkdownMermaidDiagramContents;
use crate::markdown_elements::ParsedMarkdownTableColumn;
use crate::markdown_elements::ParsedMarkdownText;
@@ -1074,4 +1295,204 @@ mod tests {
assert_eq!(list_item_prefix(1, false, 3), "‣ ");
assert_eq!(list_item_prefix(1, false, 4), "⁃ ");
}
+
+ fn mermaid_contents(s: &str) -> ParsedMarkdownMermaidDiagramContents {
+ ParsedMarkdownMermaidDiagramContents {
+ contents: SharedString::from(s.to_string()),
+ scale: 1,
+ }
+ }
+
+ fn mermaid_sequence(diagrams: &[&str]) -> Vec<ParsedMarkdownMermaidDiagramContents> {
+ diagrams
+ .iter()
+ .map(|diagram| mermaid_contents(diagram))
+ .collect()
+ }
+
+ fn mermaid_fallback(
+ new_diagram: &str,
+ new_full_order: &[ParsedMarkdownMermaidDiagramContents],
+ old_full_order: &[ParsedMarkdownMermaidDiagramContents],
+ cache: &MermaidDiagramCache,
+ ) -> Option<Arc<RenderImage>> {
+ let new_content = mermaid_contents(new_diagram);
+ let idx = new_full_order
+ .iter()
+ .position(|content| content == &new_content)?;
+ MermaidState::get_fallback_image(idx, old_full_order, new_full_order.len(), cache)
+ }
+
+ fn mock_render_image() -> Arc<RenderImage> {
+ Arc::new(RenderImage::new(Vec::new()))
+ }
+
+ #[test]
+ fn test_mermaid_fallback_on_edit() {
+ let old_full_order = mermaid_sequence(&["graph A", "graph B", "graph C"]);
+ let new_full_order = mermaid_sequence(&["graph A", "graph B modified", "graph C"]);
+
+ let svg_b = mock_render_image();
+ let mut cache: MermaidDiagramCache = HashMap::default();
+ cache.insert(
+ mermaid_contents("graph A"),
+ CachedMermaidDiagram::new_for_test(Some(mock_render_image()), None),
+ );
+ cache.insert(
+ mermaid_contents("graph B"),
+ CachedMermaidDiagram::new_for_test(Some(svg_b.clone()), None),
+ );
+ cache.insert(
+ mermaid_contents("graph C"),
+ CachedMermaidDiagram::new_for_test(Some(mock_render_image()), None),
+ );
+
+ let fallback =
+ mermaid_fallback("graph B modified", &new_full_order, &old_full_order, &cache);
+
+ assert!(
+ fallback.is_some(),
+ "Should use old diagram as fallback when editing"
+ );
+ assert!(
+ Arc::ptr_eq(&fallback.unwrap(), &svg_b),
+ "Fallback should be the old diagram's SVG"
+ );
+ }
+
+ #[test]
+ fn test_mermaid_no_fallback_on_add_in_middle() {
+ let old_full_order = mermaid_sequence(&["graph A", "graph C"]);
+ let new_full_order = mermaid_sequence(&["graph A", "graph NEW", "graph C"]);
+
+ let mut cache: MermaidDiagramCache = HashMap::default();
+ cache.insert(
+ mermaid_contents("graph A"),
+ CachedMermaidDiagram::new_for_test(Some(mock_render_image()), None),
+ );
+ cache.insert(
+ mermaid_contents("graph C"),
+ CachedMermaidDiagram::new_for_test(Some(mock_render_image()), None),
+ );
+
+ let fallback = mermaid_fallback("graph NEW", &new_full_order, &old_full_order, &cache);
+
+ assert!(
+ fallback.is_none(),
+ "Should NOT use fallback when adding new diagram"
+ );
+ }
+
+ #[test]
+ fn test_mermaid_fallback_chains_on_rapid_edits() {
+ let old_full_order = mermaid_sequence(&["graph A", "graph B modified", "graph C"]);
+ let new_full_order = mermaid_sequence(&["graph A", "graph B modified again", "graph C"]);
+
+ let original_svg = mock_render_image();
+ let mut cache: MermaidDiagramCache = HashMap::default();
+ cache.insert(
+ mermaid_contents("graph A"),
+ CachedMermaidDiagram::new_for_test(Some(mock_render_image()), None),
+ );
+ cache.insert(
+ mermaid_contents("graph B modified"),
+ // Still rendering, but has fallback from original "graph B"
+ CachedMermaidDiagram::new_for_test(None, Some(original_svg.clone())),
+ );
+ cache.insert(
+ mermaid_contents("graph C"),
+ CachedMermaidDiagram::new_for_test(Some(mock_render_image()), None),
+ );
+
+ let fallback = mermaid_fallback(
+ "graph B modified again",
+ &new_full_order,
+ &old_full_order,
+ &cache,
+ );
+
+ assert!(
+ fallback.is_some(),
+ "Should chain fallback when previous render not complete"
+ );
+ assert!(
+ Arc::ptr_eq(&fallback.unwrap(), &original_svg),
+ "Fallback should chain through to the original SVG"
+ );
+ }
+
+ #[test]
+ fn test_mermaid_no_fallback_when_no_old_diagram_at_index() {
+ let old_full_order = mermaid_sequence(&["graph A"]);
+ let new_full_order = mermaid_sequence(&["graph A", "graph B"]);
+
+ let mut cache: MermaidDiagramCache = HashMap::default();
+ cache.insert(
+ mermaid_contents("graph A"),
+ CachedMermaidDiagram::new_for_test(Some(mock_render_image()), None),
+ );
+
+ let fallback = mermaid_fallback("graph B", &new_full_order, &old_full_order, &cache);
+
+ assert!(
+ fallback.is_none(),
+ "Should NOT have fallback when adding diagram at end"
+ );
+ }
+
+ #[test]
+ fn test_mermaid_fallback_with_duplicate_blocks_edit_first() {
+ let old_full_order = mermaid_sequence(&["graph A", "graph A", "graph B"]);
+ let new_full_order = mermaid_sequence(&["graph A edited", "graph A", "graph B"]);
+
+ let svg_a = mock_render_image();
+ let mut cache: MermaidDiagramCache = HashMap::default();
+ cache.insert(
+ mermaid_contents("graph A"),
+ CachedMermaidDiagram::new_for_test(Some(svg_a.clone()), None),
+ );
+ cache.insert(
+ mermaid_contents("graph B"),
+ CachedMermaidDiagram::new_for_test(Some(mock_render_image()), None),
+ );
+
+ let fallback = mermaid_fallback("graph A edited", &new_full_order, &old_full_order, &cache);
+
+ assert!(
+ fallback.is_some(),
+ "Should use old diagram as fallback when editing one of duplicate blocks"
+ );
+ assert!(
+ Arc::ptr_eq(&fallback.unwrap(), &svg_a),
+ "Fallback should be the old duplicate diagram's image"
+ );
+ }
+
+ #[test]
+ fn test_mermaid_fallback_with_duplicate_blocks_edit_second() {
+ let old_full_order = mermaid_sequence(&["graph A", "graph A", "graph B"]);
+ let new_full_order = mermaid_sequence(&["graph A", "graph A edited", "graph B"]);
+
+ let svg_a = mock_render_image();
+ let mut cache: MermaidDiagramCache = HashMap::default();
+ cache.insert(
+ mermaid_contents("graph A"),
+ CachedMermaidDiagram::new_for_test(Some(svg_a.clone()), None),
+ );
+ cache.insert(
+ mermaid_contents("graph B"),
+ CachedMermaidDiagram::new_for_test(Some(mock_render_image()), None),
+ );
+
+ let fallback = mermaid_fallback("graph A edited", &new_full_order, &old_full_order, &cache);
+
+ assert!(
+ fallback.is_some(),
+ "Should use old diagram as fallback when editing the second duplicate block"
+ );
+ assert!(
+ Arc::ptr_eq(&fallback.unwrap(), &svg_a),
+ "Fallback should be the old duplicate diagram's image"
+ );
+ }
}
@@ -20,27 +20,9 @@ pub struct Model {
pub supports_thinking: Option<bool>,
}
-fn get_max_tokens(name: &str) -> u64 {
- /// Default context length for unknown models.
+fn get_max_tokens(_name: &str) -> u64 {
const DEFAULT_TOKENS: u64 = 4096;
- /// Magic number. Lets many Ollama models work with ~16GB of ram.
- /// Models that support context beyond 16k such as codestral (32k) or devstral (128k) will be clamped down to 16k
- const MAXIMUM_TOKENS: u64 = 16384;
-
- match name.split(':').next().unwrap() {
- "granite-code" | "phi" | "tinyllama" => 2048,
- "llama2" | "stablelm2" | "vicuna" | "yi" => 4096,
- "aya" | "codegemma" | "gemma" | "gemma2" | "llama3" | "starcoder" => 8192,
- "codellama" | "starcoder2" => 16384,
- "codestral" | "dolphin-mixtral" | "llava" | "magistral" | "mistral" | "mixstral"
- | "qwen2" | "qwen2.5-coder" => 32768,
- "cogito" | "command-r" | "deepseek-coder-v2" | "deepseek-r1" | "deepseek-v3"
- | "devstral" | "gemma3" | "gpt-oss" | "granite3.3" | "llama3.1" | "llama3.2"
- | "llama3.3" | "mistral-nemo" | "phi3" | "phi3.5" | "phi4" | "qwen3" | "yi-coder" => 128000,
- "qwen3-coder" => 256000,
- _ => DEFAULT_TOKENS,
- }
- .clamp(1, MAXIMUM_TOKENS)
+ DEFAULT_TOKENS
}
impl Model {
@@ -63,15 +63,8 @@ pub enum Model {
Four,
#[serde(rename = "gpt-4-turbo")]
FourTurbo,
- #[serde(rename = "gpt-4o")]
- #[default]
- FourOmni,
#[serde(rename = "gpt-4o-mini")]
FourOmniMini,
- #[serde(rename = "gpt-4.1")]
- FourPointOne,
- #[serde(rename = "gpt-4.1-mini")]
- FourPointOneMini,
#[serde(rename = "gpt-4.1-nano")]
FourPointOneNano,
#[serde(rename = "o1")]
@@ -80,13 +73,12 @@ pub enum Model {
O3Mini,
#[serde(rename = "o3")]
O3,
- #[serde(rename = "o4-mini")]
- O4Mini,
#[serde(rename = "gpt-5")]
Five,
#[serde(rename = "gpt-5-codex")]
FiveCodex,
#[serde(rename = "gpt-5-mini")]
+ #[default]
FiveMini,
#[serde(rename = "gpt-5-nano")]
FiveNano,
@@ -116,8 +108,7 @@ const fn default_supports_chat_completions() -> bool {
impl Model {
pub fn default_fast() -> Self {
- // TODO: Replace with FiveMini since all other models are deprecated
- Self::FourPointOneMini
+ Self::FiveMini
}
pub fn from_id(id: &str) -> Result<Self> {
@@ -125,15 +116,11 @@ impl Model {
"gpt-3.5-turbo" => Ok(Self::ThreePointFiveTurbo),
"gpt-4" => Ok(Self::Four),
"gpt-4-turbo-preview" => Ok(Self::FourTurbo),
- "gpt-4o" => Ok(Self::FourOmni),
"gpt-4o-mini" => Ok(Self::FourOmniMini),
- "gpt-4.1" => Ok(Self::FourPointOne),
- "gpt-4.1-mini" => Ok(Self::FourPointOneMini),
"gpt-4.1-nano" => Ok(Self::FourPointOneNano),
"o1" => Ok(Self::O1),
"o3-mini" => Ok(Self::O3Mini),
"o3" => Ok(Self::O3),
- "o4-mini" => Ok(Self::O4Mini),
"gpt-5" => Ok(Self::Five),
"gpt-5-codex" => Ok(Self::FiveCodex),
"gpt-5-mini" => Ok(Self::FiveMini),
@@ -150,15 +137,11 @@ impl Model {
Self::ThreePointFiveTurbo => "gpt-3.5-turbo",
Self::Four => "gpt-4",
Self::FourTurbo => "gpt-4-turbo",
- Self::FourOmni => "gpt-4o",
Self::FourOmniMini => "gpt-4o-mini",
- Self::FourPointOne => "gpt-4.1",
- Self::FourPointOneMini => "gpt-4.1-mini",
Self::FourPointOneNano => "gpt-4.1-nano",
Self::O1 => "o1",
Self::O3Mini => "o3-mini",
Self::O3 => "o3",
- Self::O4Mini => "o4-mini",
Self::Five => "gpt-5",
Self::FiveCodex => "gpt-5-codex",
Self::FiveMini => "gpt-5-mini",
@@ -175,15 +158,11 @@ impl Model {
Self::ThreePointFiveTurbo => "gpt-3.5-turbo",
Self::Four => "gpt-4",
Self::FourTurbo => "gpt-4-turbo",
- Self::FourOmni => "gpt-4o",
Self::FourOmniMini => "gpt-4o-mini",
- Self::FourPointOne => "gpt-4.1",
- Self::FourPointOneMini => "gpt-4.1-mini",
Self::FourPointOneNano => "gpt-4.1-nano",
Self::O1 => "o1",
Self::O3Mini => "o3-mini",
Self::O3 => "o3",
- Self::O4Mini => "o4-mini",
Self::Five => "gpt-5",
Self::FiveCodex => "gpt-5-codex",
Self::FiveMini => "gpt-5-mini",
@@ -191,9 +170,7 @@ impl Model {
Self::FivePointOne => "gpt-5.1",
Self::FivePointTwo => "gpt-5.2",
Self::FivePointTwoCodex => "gpt-5.2-codex",
- Self::Custom {
- name, display_name, ..
- } => display_name.as_ref().unwrap_or(name),
+ Self::Custom { display_name, .. } => display_name.as_deref().unwrap_or(&self.id()),
}
}
@@ -202,15 +179,11 @@ impl Model {
Self::ThreePointFiveTurbo => 16_385,
Self::Four => 8_192,
Self::FourTurbo => 128_000,
- Self::FourOmni => 128_000,
Self::FourOmniMini => 128_000,
- Self::FourPointOne => 1_047_576,
- Self::FourPointOneMini => 1_047_576,
Self::FourPointOneNano => 1_047_576,
Self::O1 => 200_000,
Self::O3Mini => 200_000,
Self::O3 => 200_000,
- Self::O4Mini => 200_000,
Self::Five => 272_000,
Self::FiveCodex => 272_000,
Self::FiveMini => 272_000,
@@ -230,15 +203,11 @@ impl Model {
Self::ThreePointFiveTurbo => Some(4_096),
Self::Four => Some(8_192),
Self::FourTurbo => Some(4_096),
- Self::FourOmni => Some(16_384),
Self::FourOmniMini => Some(16_384),
- Self::FourPointOne => Some(32_768),
- Self::FourPointOneMini => Some(32_768),
Self::FourPointOneNano => Some(32_768),
Self::O1 => Some(100_000),
Self::O3Mini => Some(100_000),
Self::O3 => Some(100_000),
- Self::O4Mini => Some(100_000),
Self::Five => Some(128_000),
Self::FiveCodex => Some(128_000),
Self::FiveMini => Some(128_000),
@@ -277,10 +246,7 @@ impl Model {
Self::ThreePointFiveTurbo
| Self::Four
| Self::FourTurbo
- | Self::FourOmni
| Self::FourOmniMini
- | Self::FourPointOne
- | Self::FourPointOneMini
| Self::FourPointOneNano
| Self::Five
| Self::FiveCodex
@@ -289,7 +255,7 @@ impl Model {
| Self::FivePointTwo
| Self::FivePointTwoCodex
| Self::FiveNano => true,
- Self::O1 | Self::O3 | Self::O3Mini | Self::O4Mini | Model::Custom { .. } => false,
+ Self::O1 | Self::O3 | Self::O3Mini | Model::Custom { .. } => false,
}
}
@@ -82,7 +82,7 @@ pub struct Model {
}
impl Model {
- pub fn default_fast() -> Self {
+ pub fn default() -> Self {
Self::new(
"openrouter/auto",
Some("Auto Router"),
@@ -94,10 +94,6 @@ impl Model {
)
}
- pub fn default() -> Self {
- Self::default_fast()
- }
-
pub fn new(
name: &str,
display_name: Option<&str>,
@@ -75,6 +75,7 @@ impl LspStore {
.symbols
.values()
.flatten()
+ .unique()
.cloned()
.sorted_by(|a, b| a.range.start.cmp(&b.range.start, &snapshot))
.collect(),
@@ -156,6 +157,7 @@ impl LspStore {
.symbols
.values()
.flatten()
+ .unique()
.cloned()
.sorted_by(|a, b| a.range.start.cmp(&b.range.start, &snapshot))
.collect()
@@ -181,7 +181,7 @@ impl Project {
let to_run = format_to_run();
let arg = format!("{activation_script}{separator} {to_run}");
- let args = shell_kind.args_for_shell(false, arg);
+ let args = shell_kind.args_for_shell(true, arg);
let shell = remote_client
.read(cx)
.shell()
@@ -214,7 +214,7 @@ impl Project {
let to_run = format_to_run();
let arg = format!("{activation_script}{separator} {to_run}");
- let args = shell_kind.args_for_shell(false, arg);
+ let args = shell_kind.args_for_shell(true, arg);
(
Shell::WithArguments {
@@ -34,6 +34,7 @@ pub use remote_connections::RemoteSettings;
pub use remote_servers::RemoteServerProjects;
use settings::{Settings, WorktreeId};
+use dev_container::{DevContainerContext, find_devcontainer_configs};
use ui::{
ContextMenu, Divider, KeyBinding, ListItem, ListItemSpacing, ListSubHeader, PopoverMenu,
PopoverMenuHandle, TintColor, Tooltip, prelude::*,
@@ -352,9 +353,20 @@ pub fn init(cx: &mut App) {
}
let fs = workspace.project().read(cx).fs().clone();
+ let configs = find_devcontainer_configs(workspace, cx);
+ let app_state = workspace.app_state().clone();
+ let dev_container_context = DevContainerContext::from_workspace(workspace, cx);
let handle = cx.entity().downgrade();
workspace.toggle_modal(window, cx, |window, cx| {
- RemoteServerProjects::new_dev_container(fs, window, handle, cx)
+ RemoteServerProjects::new_dev_container(
+ fs,
+ configs,
+ app_state,
+ dev_container_context,
+ window,
+ handle,
+ cx,
+ )
});
});
});
@@ -1621,6 +1633,121 @@ mod tests {
.unwrap()
}
+ #[gpui::test]
+ async fn test_open_dev_container_action_with_single_config(cx: &mut TestAppContext) {
+ let app_state = init_test(cx);
+
+ app_state
+ .fs
+ .as_fake()
+ .insert_tree(
+ path!("/project"),
+ json!({
+ ".devcontainer": {
+ "devcontainer.json": "{}"
+ },
+ "src": {
+ "main.rs": "fn main() {}"
+ }
+ }),
+ )
+ .await;
+
+ cx.update(|cx| {
+ open_paths(
+ &[PathBuf::from(path!("/project"))],
+ app_state,
+ workspace::OpenOptions::default(),
+ cx,
+ )
+ })
+ .await
+ .unwrap();
+
+ assert_eq!(cx.update(|cx| cx.windows().len()), 1);
+ let multi_workspace = cx.update(|cx| cx.windows()[0].downcast::<MultiWorkspace>().unwrap());
+
+ cx.run_until_parked();
+
+ // This dispatch triggers with_active_or_new_workspace -> MultiWorkspace::update
+ // -> Workspace::update -> toggle_modal -> new_dev_container.
+ // Before the fix, this panicked with "cannot read workspace::Workspace while
+ // it is already being updated" because new_dev_container and open_dev_container
+ // tried to read the Workspace entity through a WeakEntity handle while it was
+ // already leased by the outer update.
+ cx.dispatch_action(*multi_workspace, OpenDevContainer);
+
+ multi_workspace
+ .update(cx, |multi_workspace, _, cx| {
+ let modal = multi_workspace
+ .workspace()
+ .read(cx)
+ .active_modal::<RemoteServerProjects>(cx);
+ assert!(
+ modal.is_some(),
+ "Dev container modal should be open after dispatching OpenDevContainer"
+ );
+ })
+ .unwrap();
+ }
+
+ #[gpui::test]
+ async fn test_open_dev_container_action_with_multiple_configs(cx: &mut TestAppContext) {
+ let app_state = init_test(cx);
+
+ app_state
+ .fs
+ .as_fake()
+ .insert_tree(
+ path!("/project"),
+ json!({
+ ".devcontainer": {
+ "rust": {
+ "devcontainer.json": "{}"
+ },
+ "python": {
+ "devcontainer.json": "{}"
+ }
+ },
+ "src": {
+ "main.rs": "fn main() {}"
+ }
+ }),
+ )
+ .await;
+
+ cx.update(|cx| {
+ open_paths(
+ &[PathBuf::from(path!("/project"))],
+ app_state,
+ workspace::OpenOptions::default(),
+ cx,
+ )
+ })
+ .await
+ .unwrap();
+
+ assert_eq!(cx.update(|cx| cx.windows().len()), 1);
+ let multi_workspace = cx.update(|cx| cx.windows()[0].downcast::<MultiWorkspace>().unwrap());
+
+ cx.run_until_parked();
+
+ cx.dispatch_action(*multi_workspace, OpenDevContainer);
+
+ multi_workspace
+ .update(cx, |multi_workspace, _, cx| {
+ let modal = multi_workspace
+ .workspace()
+ .read(cx)
+ .active_modal::<RemoteServerProjects>(cx);
+ assert!(
+ modal.is_some(),
+ "Dev container modal should be open after dispatching OpenDevContainer with multiple configs"
+ );
+ })
+ .unwrap();
+ }
+
fn init_test(cx: &mut TestAppContext) -> Arc<AppState> {
cx.update(|cx| {
let state = AppState::test(cx);
@@ -53,7 +53,7 @@ use util::{
rel_path::RelPath,
};
use workspace::{
- ModalView, MultiWorkspace, OpenLog, OpenOptions, Toast, Workspace,
+ AppState, ModalView, MultiWorkspace, OpenLog, OpenOptions, Toast, Workspace,
notifications::{DetachAndPromptErr, NotificationId},
open_remote_project_with_existing_connection,
};
@@ -258,9 +258,20 @@ impl PickerDelegate for DevContainerPickerDelegate {
.update(cx, move |modal, cx| {
if secondary {
modal.edit_in_dev_container_json(selected_config.clone(), window, cx);
- } else {
- modal.open_dev_container(selected_config, window, cx);
+ } else if let Some((app_state, context)) = modal
+ .workspace
+ .read_with(cx, |workspace, cx| {
+ let app_state = workspace.app_state().clone();
+ let context = DevContainerContext::from_workspace(workspace, cx)?;
+ Some((app_state, context))
+ })
+ .ok()
+ .flatten()
+ {
+ modal.open_dev_container(selected_config, app_state, context, window, cx);
modal.view_in_progress_dev_container(window, cx);
+ } else {
+ log::error!("No active project directory for Dev Container");
}
})
.ok();
@@ -807,14 +818,13 @@ impl RemoteServerProjects {
/// Used when suggesting dev container connection from toast notification.
pub fn new_dev_container(
fs: Arc<dyn Fs>,
+ configs: Vec<DevContainerConfig>,
+ app_state: Arc<AppState>,
+ dev_container_context: Option<DevContainerContext>,
window: &mut Window,
workspace: WeakEntity<Workspace>,
cx: &mut Context<Self>,
) -> Self {
- let configs = workspace
- .read_with(cx, |workspace, cx| find_devcontainer_configs(workspace, cx))
- .unwrap_or_default();
-
let initial_mode = if configs.len() > 1 {
DevContainerCreationProgress::SelectingConfig
} else {
@@ -834,10 +844,12 @@ impl RemoteServerProjects {
let delegate = DevContainerPickerDelegate::new(configs, cx.weak_entity());
this.dev_container_picker =
Some(cx.new(|cx| Picker::uniform_list(delegate, window, cx).modal(false)));
- } else {
+ } else if let Some(context) = dev_container_context {
let config = configs.into_iter().next();
- this.open_dev_container(config, window, cx);
+ this.open_dev_container(config, app_state, context, window, cx);
this.view_in_progress_dev_container(window, cx);
+ } else {
+ log::error!("No active project directory for Dev Container");
}
this
@@ -1809,33 +1821,32 @@ impl RemoteServerProjects {
CreateRemoteDevContainer::new(DevContainerCreationProgress::SelectingConfig, cx);
self.mode = Mode::CreateRemoteDevContainer(state);
cx.notify();
- } else {
+ } else if let Some((app_state, context)) = self
+ .workspace
+ .read_with(cx, |workspace, cx| {
+ let app_state = workspace.app_state().clone();
+ let context = DevContainerContext::from_workspace(workspace, cx)?;
+ Some((app_state, context))
+ })
+ .ok()
+ .flatten()
+ {
let config = configs.into_iter().next();
- self.open_dev_container(config, window, cx);
+ self.open_dev_container(config, app_state, context, window, cx);
self.view_in_progress_dev_container(window, cx);
+ } else {
+ log::error!("No active project directory for Dev Container");
}
}
fn open_dev_container(
&self,
config: Option<DevContainerConfig>,
+ app_state: Arc<AppState>,
+ context: DevContainerContext,
window: &mut Window,
cx: &mut Context<Self>,
) {
- let Some((app_state, context)) = self
- .workspace
- .read_with(cx, |workspace, cx| {
- let app_state = workspace.app_state().clone();
- let context = DevContainerContext::from_workspace(workspace, cx)?;
- Some((app_state, context))
- })
- .log_err()
- .flatten()
- else {
- log::error!("No active project directory for Dev Container");
- return;
- };
-
let replace_window = window.window_handle().downcast::<MultiWorkspace>();
cx.spawn_in(window, async move |entity, cx| {
@@ -230,6 +230,7 @@ pub fn python_env_kernel_specifications(
pub trait RunningKernel: Send + Debug {
fn request_tx(&self) -> mpsc::Sender<JupyterMessage>;
+ fn stdin_tx(&self) -> mpsc::Sender<JupyterMessage>;
fn working_directory(&self) -> &PathBuf;
fn execution_state(&self) -> &ExecutionState;
fn set_execution_state(&mut self, state: ExecutionState);
@@ -90,6 +90,7 @@ pub struct NativeRunningKernel {
_process_status_task: Option<Task<()>>,
pub working_directory: PathBuf,
pub request_tx: mpsc::Sender<JupyterMessage>,
+ pub stdin_tx: mpsc::Sender<JupyterMessage>,
pub execution_state: ExecutionState,
pub kernel_info: Option<KernelInfoReply>,
}
@@ -154,22 +155,39 @@ impl NativeRunningKernel {
let iopub_socket =
runtimelib::create_client_iopub_connection(&connection_info, "", &session_id)
.await?;
- let shell_socket =
- runtimelib::create_client_shell_connection(&connection_info, &session_id).await?;
let control_socket =
runtimelib::create_client_control_connection(&connection_info, &session_id).await?;
+ let peer_identity = runtimelib::peer_identity_for_session(&session_id)?;
+ let shell_socket =
+ runtimelib::create_client_shell_connection_with_identity(
+ &connection_info,
+ &session_id,
+ peer_identity.clone(),
+ )
+ .await?;
+ let stdin_socket = runtimelib::create_client_stdin_connection_with_identity(
+ &connection_info,
+ &session_id,
+ peer_identity,
+ )
+ .await?;
+
let (mut shell_send, shell_recv) = shell_socket.split();
let (mut control_send, control_recv) = control_socket.split();
+ let (mut stdin_send, stdin_recv) = stdin_socket.split();
let (request_tx, mut request_rx) =
futures::channel::mpsc::channel::<JupyterMessage>(100);
+ let (stdin_tx, mut stdin_rx) =
+ futures::channel::mpsc::channel::<JupyterMessage>(100);
let recv_task = cx.spawn({
let session = session.clone();
let mut iopub = iopub_socket;
let mut shell = shell_recv;
let mut control = control_recv;
+ let mut stdin = stdin_recv;
async move |cx| -> anyhow::Result<()> {
loop {
@@ -177,6 +195,7 @@ impl NativeRunningKernel {
msg = iopub.read().fuse() => ("iopub", msg),
msg = shell.read().fuse() => ("shell", msg),
msg = control.read().fuse() => ("control", msg),
+ msg = stdin.read().fuse() => ("stdin", msg),
};
match result {
Ok(message) => {
@@ -252,6 +271,15 @@ impl NativeRunningKernel {
}
});
+ let stdin_routing_task = cx.background_spawn({
+ async move {
+ while let Some(message) = stdin_rx.next().await {
+ stdin_send.send(message).await?;
+ }
+ anyhow::Ok(())
+ }
+ });
+
let stderr = process.stderr.take();
let stdout = process.stdout.take();
@@ -294,6 +322,7 @@ impl NativeRunningKernel {
let mut tasks = FuturesUnordered::new();
tasks.push(with_name("recv task", recv_task));
tasks.push(with_name("routing task", routing_task));
+ tasks.push(with_name("stdin routing task", stdin_routing_task));
while let Some((name, result)) = tasks.next().await {
if let Err(err) = result {
@@ -341,6 +370,7 @@ impl NativeRunningKernel {
anyhow::Ok(Box::new(Self {
process,
request_tx,
+ stdin_tx,
working_directory,
_process_status_task: Some(process_status_task),
connection_path,
@@ -356,6 +386,10 @@ impl RunningKernel for NativeRunningKernel {
self.request_tx.clone()
}
+ fn stdin_tx(&self) -> mpsc::Sender<JupyterMessage> {
+ self.stdin_tx.clone()
+ }
+
fn working_directory(&self) -> &PathBuf {
&self.working_directory
}
@@ -384,6 +418,7 @@ impl RunningKernel for NativeRunningKernel {
fn kill(&mut self) {
self._process_status_task.take();
self.request_tx.close_channel();
+ self.stdin_tx.close_channel();
self.process.kill().ok();
}
}
@@ -119,6 +119,7 @@ pub struct RemoteRunningKernel {
http_client: Arc<dyn HttpClient>,
pub working_directory: std::path::PathBuf,
pub request_tx: mpsc::Sender<JupyterMessage>,
+ pub stdin_tx: mpsc::Sender<JupyterMessage>,
pub execution_state: ExecutionState,
pub kernel_info: Option<KernelInfoReply>,
pub kernel_id: String,
@@ -211,12 +212,15 @@ impl RemoteRunningKernel {
}
});
+ let stdin_tx = request_tx.clone();
+
anyhow::Ok(Box::new(Self {
_routing_task: routing_task,
_receiving_task: receiving_task,
remote_server,
working_directory,
request_tx,
+ stdin_tx,
// todo(kyle): pull this from the kernel API to start with
execution_state: ExecutionState::Idle,
kernel_info: None,
@@ -245,6 +249,10 @@ impl RunningKernel for RemoteRunningKernel {
self.request_tx.clone()
}
+ fn stdin_tx(&self) -> futures::channel::mpsc::Sender<runtimelib::JupyterMessage> {
+ self.stdin_tx.clone()
+ }
+
fn working_directory(&self) -> &std::path::PathBuf {
&self.working_directory
}
@@ -292,5 +300,6 @@ impl RunningKernel for RemoteRunningKernel {
fn kill(&mut self) {
self.request_tx.close_channel();
+ self.stdin_tx.close_channel();
}
}
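The stdin plumbing added above follows a simple forwarding pattern: `stdin_tx` feeds a bounded channel, a dedicated routing task drains it into the stdin socket, and `kill()` closes the channel to end the loop. A thread-based sketch of the same pattern using only the standard library (no ZeroMQ or futures; the string messages are stand-ins for `JupyterMessage`):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // stdin_tx / stdin_rx mirror the futures::channel::mpsc pair in the diff.
    let (stdin_tx, stdin_rx) = mpsc::channel::<String>();

    // The thread plays the role of the stdin routing task: forward every
    // queued reply to the "socket" until the channel closes.
    let routing_task = thread::spawn(move || {
        let mut sent = Vec::new();
        while let Ok(message) = stdin_rx.recv() {
            sent.push(message);
        }
        sent // loop ends once all senders are dropped
    });

    stdin_tx.send("input_reply: test_user".to_string()).unwrap();
    drop(stdin_tx); // equivalent of close_channel() in kill()

    let sent = routing_task.join().unwrap();
    assert_eq!(sent, vec!["input_reply: test_user".to_string()]);
}
```

Note the remote kernel takes a shortcut: its `stdin_tx` is just a clone of `request_tx`, since the websocket multiplexes all channels over one connection.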
@@ -36,7 +36,8 @@
use editor::{Editor, MultiBuffer};
use gpui::{AnyElement, ClipboardItem, Entity, EventEmitter, Render, WeakEntity};
use language::Buffer;
-use runtimelib::{ExecutionState, JupyterMessageContent, MimeBundle, MimeType};
+use menu;
+use runtimelib::{ExecutionState, JupyterMessage, JupyterMessageContent, MimeBundle, MimeType};
use ui::{CommonAnimationExt, CopyButton, IconButton, Tooltip, prelude::*};
mod image;
@@ -441,6 +442,18 @@ pub enum ExecutionStatus {
pub struct ExecutionViewFinishedEmpty;
pub struct ExecutionViewFinishedSmall(pub String);
+pub struct InputReplyEvent {
+ pub value: String,
+ pub parent_message: JupyterMessage,
+}
+
+struct PendingInput {
+ prompt: String,
+ password: bool,
+ editor: Entity<Editor>,
+ parent_message: JupyterMessage,
+}
+
/// An ExecutionView shows the outputs of an execution.
/// It can hold zero or more outputs, which the user
/// sees as "the output" for a single execution.
@@ -449,10 +462,12 @@ pub struct ExecutionView {
workspace: WeakEntity<Workspace>,
pub outputs: Vec<Output>,
pub status: ExecutionStatus,
+ pending_input: Option<PendingInput>,
}
impl EventEmitter<ExecutionViewFinishedEmpty> for ExecutionView {}
impl EventEmitter<ExecutionViewFinishedSmall> for ExecutionView {}
+impl EventEmitter<InputReplyEvent> for ExecutionView {}
impl ExecutionView {
pub fn new(
@@ -464,6 +479,56 @@ impl ExecutionView {
workspace,
outputs: Default::default(),
status,
+ pending_input: None,
+ }
+ }
+
+ fn submit_input(&mut self, _window: &mut Window, cx: &mut Context<Self>) {
+ if let Some(pending_input) = self.pending_input.take() {
+ let value = pending_input.editor.read(cx).text(cx);
+
+ let display_text = if pending_input.password {
+ format!("{}{}", pending_input.prompt, "*".repeat(value.len()))
+ } else {
+ format!("{}{}", pending_input.prompt, value)
+ };
+ self.outputs.push(Output::Message(display_text));
+
+ cx.emit(InputReplyEvent {
+ value,
+ parent_message: pending_input.parent_message,
+ });
+ cx.notify();
+ }
+ }
+
+ /// Handle an InputRequest message, storing the full message for replying
+ pub fn handle_input_request(
+ &mut self,
+ message: &JupyterMessage,
+ window: &mut Window,
+ cx: &mut Context<Self>,
+ ) {
+ if let JupyterMessageContent::InputRequest(input_request) = &message.content {
+ let prompt = input_request.prompt.clone();
+ let password = input_request.password;
+
+ let editor = cx.new(|cx| {
+ let mut editor = Editor::single_line(window, cx);
+ editor.set_placeholder_text("Type here and press Enter", window, cx);
+ if password {
+ editor.set_masked(true, cx);
+ }
+ editor
+ });
+
+ self.pending_input = Some(PendingInput {
+ prompt,
+ password,
+ editor,
+ parent_message: message.clone(),
+ });
+ cx.notify();
}
}
@@ -525,6 +590,10 @@ impl ExecutionView {
// Create a marker to clear the output after we get in a new output
Output::ClearOutputWaitMarker
}
+ JupyterMessageContent::InputRequest(_) => {
+ // InputRequest is handled by handle_input_request, which needs the full message
+ return;
+ }
JupyterMessageContent::Status(status) => {
match status.execution_state {
ExecutionState::Busy => {
@@ -532,6 +601,7 @@ impl ExecutionView {
}
ExecutionState::Idle => {
self.status = ExecutionStatus::Finished;
+ self.pending_input = None;
if self.outputs.is_empty() {
cx.emit(ExecutionViewFinishedEmpty);
} else if ReplSettings::get_global(cx).inline_output {
@@ -698,7 +768,35 @@ impl Render for ExecutionView {
.into_any_element(),
};
- if self.outputs.is_empty() {
+ let pending_input_element = self.pending_input.as_ref().map(|pending_input| {
+ let prompt_label = if pending_input.prompt.is_empty() {
+ "Input:".to_string()
+ } else {
+ pending_input.prompt.clone()
+ };
+
+ div()
+ .on_action(cx.listener(|this, _: &menu::Confirm, window, cx| {
+ this.submit_input(window, cx);
+ }))
+ .w_full()
+ .child(
+ v_flex()
+ .gap_1()
+ .child(Label::new(prompt_label).color(Color::Muted))
+ .child(
+ div()
+ .px_2()
+ .py_1()
+ .border_1()
+ .border_color(cx.theme().colors().border)
+ .rounded_md()
+ .child(pending_input.editor.clone()),
+ ),
+ )
+ });
+
+ if self.outputs.is_empty() && pending_input_element.is_none() {
return v_flex()
.min_h(window.line_height())
.justify_center()
@@ -713,6 +811,7 @@ impl Render for ExecutionView {
.iter()
.map(|output| output.render(self.workspace.clone(), window, cx)),
)
+ .children(pending_input_element)
.children(match self.status {
ExecutionStatus::Executing => vec![status],
ExecutionStatus::Queued => vec![status],
@@ -727,8 +826,8 @@ mod tests {
use super::*;
use gpui::TestAppContext;
use runtimelib::{
- ClearOutput, ErrorOutput, ExecutionState, JupyterMessageContent, MimeType, Status, Stdio,
- StreamContent,
+ ClearOutput, ErrorOutput, ExecutionState, InputRequest, JupyterMessage,
+ JupyterMessageContent, MimeType, Status, Stdio, StreamContent,
};
use settings::SettingsStore;
use std::path::Path;
@@ -1027,4 +1126,143 @@ mod tests {
"should emit ExecutionViewFinishedEmpty when idle with no outputs"
);
}
+
+ #[gpui::test]
+ async fn test_handle_input_request_creates_pending_input(cx: &mut TestAppContext) {
+ let (mut cx, workspace) = init_test(cx).await;
+ let execution_view = create_execution_view(&mut cx, workspace);
+
+ cx.update(|window, cx| {
+ execution_view.update(cx, |view, cx| {
+ assert!(view.pending_input.is_none());
+
+ let message = JupyterMessage::new(
+ InputRequest {
+ prompt: "Enter name: ".to_string(),
+ password: false,
+ },
+ None,
+ );
+ view.handle_input_request(&message, window, cx);
+ });
+ });
+
+ cx.update(|_, cx| {
+ let view = execution_view.read(cx);
+ assert!(view.pending_input.is_some());
+ let pending = view.pending_input.as_ref().unwrap();
+ assert_eq!(pending.prompt, "Enter name: ");
+ assert!(!pending.password);
+ });
+ }
+
+ #[gpui::test]
+ async fn test_handle_input_request_with_password(cx: &mut TestAppContext) {
+ let (mut cx, workspace) = init_test(cx).await;
+ let execution_view = create_execution_view(&mut cx, workspace);
+
+ cx.update(|window, cx| {
+ execution_view.update(cx, |view, cx| {
+ let message = JupyterMessage::new(
+ InputRequest {
+ prompt: "Password: ".to_string(),
+ password: true,
+ },
+ None,
+ );
+ view.handle_input_request(&message, window, cx);
+ });
+ });
+
+ cx.update(|_, cx| {
+ let view = execution_view.read(cx);
+ assert!(view.pending_input.is_some());
+ let pending = view.pending_input.as_ref().unwrap();
+ assert_eq!(pending.prompt, "Password: ");
+ assert!(pending.password);
+ });
+ }
+
+ #[gpui::test]
+ async fn test_submit_input_emits_reply_event(cx: &mut TestAppContext) {
+ let (mut cx, workspace) = init_test(cx).await;
+ let execution_view = create_execution_view(&mut cx, workspace);
+
+ let received_value = Arc::new(std::sync::Mutex::new(None::<String>));
+ let received_clone = received_value.clone();
+
+ cx.update(|_, cx| {
+ cx.subscribe(&execution_view, move |_, event: &InputReplyEvent, _cx| {
+ *received_clone.lock().unwrap() = Some(event.value.clone());
+ })
+ .detach();
+ });
+
+ cx.update(|window, cx| {
+ execution_view.update(cx, |view, cx| {
+ let message = JupyterMessage::new(
+ InputRequest {
+ prompt: "Name: ".to_string(),
+ password: false,
+ },
+ None,
+ );
+ view.handle_input_request(&message, window, cx);
+
+ // Type into the editor
+ if let Some(ref pending) = view.pending_input {
+ pending.editor.update(cx, |editor, cx| {
+ editor.set_text("test_user", window, cx);
+ });
+ }
+
+ view.submit_input(window, cx);
+ });
+ });
+
+ let value = received_value.lock().unwrap().clone();
+ assert_eq!(value, Some("test_user".to_string()));
+
+ cx.update(|_, cx| {
+ let view = execution_view.read(cx);
+ assert!(
+ view.pending_input.is_none(),
+ "pending_input should be cleared after submit"
+ );
+ });
+ }
+
+ #[gpui::test]
+ async fn test_status_idle_clears_pending_input(cx: &mut TestAppContext) {
+ let (mut cx, workspace) = init_test(cx).await;
+ let execution_view = create_execution_view(&mut cx, workspace);
+
+ cx.update(|window, cx| {
+ execution_view.update(cx, |view, cx| {
+ let message = JupyterMessage::new(
+ InputRequest {
+ prompt: "Input: ".to_string(),
+ password: false,
+ },
+ None,
+ );
+ view.handle_input_request(&message, window, cx);
+ assert!(view.pending_input.is_some());
+
+ // Simulate kernel going idle (e.g., execution interrupted)
+ let idle = JupyterMessageContent::Status(Status {
+ execution_state: ExecutionState::Idle,
+ });
+ view.push_message(&idle, window, cx);
+ });
+ });
+
+ cx.update(|_, cx| {
+ let view = execution_view.read(cx);
+ assert!(
+ view.pending_input.is_none(),
+ "pending_input should be cleared when kernel goes idle"
+ );
+ });
+ }
}
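The round-trip tested above hinges on replying as a child of the original `input_request`: `PendingInput` keeps the full parent `JupyterMessage` so that `send_stdin_reply` can call `as_child_of` and the kernel can match the `input_reply` to its prompt via the parent header. A minimal sketch of that parent/child bookkeeping, with hypothetical stand-in types rather than the real `runtimelib` structs:

```rust
#[derive(Clone, Debug, PartialEq)]
struct Header {
    msg_id: String,
    msg_type: String,
}

struct Message {
    header: Header,
    parent_header: Option<Header>,
    content: String,
}

// Mirrors runtimelib's `as_child_of`: the reply carries the request's header
// as its parent, which is how Jupyter correlates replies with prompts.
fn as_child_of(content: String, msg_type: &str, parent: &Message) -> Message {
    Message {
        header: Header {
            msg_id: "reply-1".into(),
            msg_type: msg_type.into(),
        },
        parent_header: Some(parent.header.clone()),
        content,
    }
}

fn main() {
    let request = Message {
        header: Header {
            msg_id: "req-42".into(),
            msg_type: "input_request".into(),
        },
        parent_header: None,
        content: "Enter name: ".into(),
    };
    let reply = as_child_of("test_user".into(), "input_reply", &request);
    assert_eq!(reply.parent_header.as_ref().unwrap().msg_id, "req-42");
    assert_eq!(reply.header.msg_type, "input_reply");
}
```

Dropping the parent message (as the `Status: Idle` handler does by clearing `pending_input`) makes a stale reply impossible, which is why an interrupted execution cleanly discards its prompt.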
@@ -6,6 +6,7 @@ use crate::{
kernels::{Kernel, KernelSession, KernelSpecification, NativeRunningKernel},
outputs::{
ExecutionStatus, ExecutionView, ExecutionViewFinishedEmpty, ExecutionViewFinishedSmall,
+ InputReplyEvent,
},
repl_settings::ReplSettings,
};
@@ -32,8 +33,8 @@ use gpui::{
use language::Point;
use project::Fs;
use runtimelib::{
- ExecuteRequest, ExecutionState, InterruptRequest, JupyterMessage, JupyterMessageContent,
- ShutdownRequest,
+ ExecuteRequest, ExecutionState, InputReply, InterruptRequest, JupyterMessage,
+ JupyterMessageContent, ReplyStatus, ShutdownRequest,
};
use settings::Settings as _;
use std::{env::temp_dir, ops::Range, sync::Arc, time::Duration};
@@ -129,7 +130,11 @@ impl EditorBlock {
cx: &mut Context<Session>,
) {
self.execution_view.update(cx, |execution_view, cx| {
- execution_view.push_message(&message.content, window, cx);
+ if matches!(&message.content, JupyterMessageContent::InputRequest(_)) {
+ execution_view.handle_input_request(message, window, cx);
+ } else {
+ execution_view.push_message(&message.content, window, cx);
+ }
});
}
@@ -424,6 +429,23 @@ impl Session {
anyhow::Ok(())
}
+ fn send_stdin_reply(
+ &mut self,
+ value: String,
+ parent_message: &JupyterMessage,
+ _cx: &mut Context<Self>,
+ ) {
+ if let Kernel::RunningKernel(kernel) = &mut self.kernel {
+ let reply = InputReply {
+ value,
+ status: ReplyStatus::Ok,
+ error: None,
+ };
+ let message = reply.as_child_of(parent_message);
+ kernel.stdin_tx().try_send(message).log_err();
+ }
+ }
+
fn replace_block_with_inlay(&mut self, message_id: &str, text: &str, cx: &mut Context<Self>) {
let Some(block) = self.blocks.remove(message_id) else {
return;
@@ -511,6 +533,7 @@ impl Session {
let execute_request = ExecuteRequest {
code,
+ allow_stdin: true,
..ExecuteRequest::default()
};
@@ -636,6 +659,14 @@ impl Session {
);
self._subscriptions.push(subscription);
+ let subscription = cx.subscribe(
+ &editor_block.execution_view,
+ |session, _execution_view, event: &InputReplyEvent, cx| {
+ session.send_stdin_reply(event.value.clone(), &event.parent_message, cx);
+ },
+ );
+ self._subscriptions.push(subscription);
+
self.blocks
.insert(message.header.msg_id.clone(), editor_block);
@@ -7,7 +7,7 @@ use std::{borrow::Cow, path::PathBuf};
use crate::ExtendingVec;
-use crate::{DockPosition, DockSide};
+use crate::DockPosition;
#[with_fallible_options]
#[derive(Clone, PartialEq, Serialize, Deserialize, JsonSchema, MergeFrom, Debug, Default)]
@@ -24,10 +24,6 @@ pub struct AgentSettingsContent {
///
/// Default: right
pub dock: Option<DockPosition>,
- /// Where to dock the utility pane (the thread view pane).
- ///
- /// Default: left
- pub agents_panel_dock: Option<DockSide>,
/// Default width in pixels when the agent panel is docked to the left or right.
///
/// Default: 640
@@ -63,6 +63,8 @@ pub struct AmazonBedrockSettingsContent {
pub profile: Option<String>,
pub authentication_method: Option<BedrockAuthMethodContent>,
pub allow_global: Option<bool>,
+ /// Enable the 1M token extended context window beta for supported Anthropic models.
+ pub allow_extended_context: Option<bool>,
}
#[with_fallible_options]
@@ -97,6 +99,7 @@ pub struct OllamaSettingsContent {
pub api_url: Option<String>,
pub auto_discover: Option<bool>,
pub available_models: Option<Vec<OllamaAvailableModel>>,
+ pub context_window: Option<u64>,
}
#[with_fallible_options]
@@ -17,7 +17,7 @@ use std::path::{Path, PathBuf};
use std::sync::Arc;
use theme::ActiveTheme;
use ui::utils::TRAFFIC_LIGHT_PADDING;
-use ui::{Divider, KeyBinding, ListItem, Tab, ThreadItem, Tooltip, prelude::*};
+use ui::{Divider, DividerColor, KeyBinding, ListSubHeader, Tab, ThreadItem, Tooltip, prelude::*};
use ui_input::ErasedEditor;
use util::ResultExt as _;
use workspace::{
@@ -519,18 +519,13 @@ impl PickerDelegate for WorkspacePickerDelegate {
match entry {
SidebarEntry::Separator(title) => Some(
- div()
- .px_0p5()
- .when(index > 0, |this| this.mt_1().child(Divider::horizontal()))
- .child(
- ListItem::new("section_header").selectable(false).child(
- Label::new(title.clone())
- .size(LabelSize::XSmall)
- .color(Color::Muted)
- .when(index > 0, |this| this.mt_1p5())
- .mb_1(),
- ),
- )
+ v_flex()
+ .when(index > 0, |this| {
+ this.mt_1()
+ .gap_2()
+ .child(Divider::horizontal().color(DividerColor::BorderFaded))
+ })
+ .child(ListSubHeader::new(title.clone()).inset(true))
.into_any_element(),
),
SidebarEntry::WorkspaceThread(thread_entry) => {
@@ -13,17 +13,46 @@ struct TaskOptions {
env: HashMap<String, String>,
}
-#[derive(Clone, Debug, Deserialize, PartialEq)]
-#[serde(rename_all = "camelCase")]
+#[derive(Clone, Debug, PartialEq)]
struct VsCodeTaskDefinition {
label: String,
- #[serde(flatten)]
command: Option<Command>,
- #[serde(flatten)]
other_attributes: HashMap<String, serde_json_lenient::Value>,
options: Option<TaskOptions>,
}
+impl<'de> serde::Deserialize<'de> for VsCodeTaskDefinition {
+ fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
+ where
+ D: serde::Deserializer<'de>,
+ {
+ #[derive(Deserialize)]
+ #[serde(rename_all = "camelCase")]
+ struct TaskHelper {
+ #[serde(default)]
+ label: Option<String>,
+ #[serde(flatten)]
+ command: Option<Command>,
+ #[serde(flatten)]
+ other_attributes: HashMap<String, serde_json_lenient::Value>,
+ options: Option<TaskOptions>,
+ }
+
+ let helper = TaskHelper::deserialize(deserializer)?;
+
+ let label = helper
+ .label
+ .unwrap_or_else(|| generate_label(&helper.command));
+
+ Ok(VsCodeTaskDefinition {
+ label,
+ command: helper.command,
+ other_attributes: helper.other_attributes,
+ options: helper.options,
+ })
+ }
+}
+
#[derive(Clone, Deserialize, PartialEq, Debug)]
#[serde(tag = "type")]
#[serde(rename_all = "camelCase")]
@@ -41,6 +70,21 @@ enum Command {
},
}
+fn generate_label(command: &Option<Command>) -> String {
+ match command {
+ Some(Command::Npm { script }) => format!("npm: {}", script),
+ Some(Command::Gulp { task }) => format!("gulp: {}", task),
+ Some(Command::Shell { command, .. }) => {
+ if command.trim().is_empty() {
+ "shell".to_string()
+ } else {
+ command.clone()
+ }
+ }
+ None => "Untitled Task".to_string(),
+ }
+}
+
impl VsCodeTaskDefinition {
fn into_zed_format(
self,
@@ -128,7 +172,7 @@ mod tests {
vscode_format::{Command, VsCodeTaskDefinition},
};
- use super::EnvVariableReplacer;
+ use super::{EnvVariableReplacer, generate_label};
fn compare_without_other_attributes(lhs: VsCodeTaskDefinition, rhs: VsCodeTaskDefinition) {
assert_eq!(
@@ -358,4 +402,62 @@ mod tests {
let tasks: TaskTemplates = vscode_definitions.try_into().unwrap();
assert_eq!(tasks.0, expected);
}
+
+ #[test]
+ fn can_deserialize_tasks_without_labels() {
+ const TASKS_WITHOUT_LABELS: &str = include_str!("../test_data/tasks-without-labels.json");
+ let vscode_definitions: VsCodeTaskFile =
+ serde_json_lenient::from_str(TASKS_WITHOUT_LABELS).unwrap();
+
+ assert_eq!(vscode_definitions.tasks.len(), 4);
+ assert_eq!(vscode_definitions.tasks[0].label, "npm: start");
+ assert_eq!(vscode_definitions.tasks[1].label, "Explicit Label");
+ assert_eq!(vscode_definitions.tasks[2].label, "gulp: build");
+ assert_eq!(vscode_definitions.tasks[3].label, "echo hello");
+ }
+
+ #[test]
+ fn test_generate_label() {
+ assert_eq!(
+ generate_label(&Some(Command::Npm {
+ script: "start".to_string()
+ })),
+ "npm: start"
+ );
+ assert_eq!(
+ generate_label(&Some(Command::Gulp {
+ task: "build".to_string()
+ })),
+ "gulp: build"
+ );
+ assert_eq!(
+ generate_label(&Some(Command::Shell {
+ command: "echo hello".to_string(),
+ args: vec![]
+ })),
+ "echo hello"
+ );
+ assert_eq!(
+ generate_label(&Some(Command::Shell {
+ command: "cargo build --release".to_string(),
+ args: vec![]
+ })),
+ "cargo build --release"
+ );
+ assert_eq!(
+ generate_label(&Some(Command::Shell {
+ command: " ".to_string(),
+ args: vec![]
+ })),
+ "shell"
+ );
+ assert_eq!(
+ generate_label(&Some(Command::Shell {
+ command: "".to_string(),
+ args: vec![]
+ })),
+ "shell"
+ );
+ assert_eq!(generate_label(&None), "Untitled Task");
+ }
}
@@ -0,0 +1,22 @@
+{
+ "version": "2.0.0",
+ "tasks": [
+ {
+ "type": "npm",
+ "script": "start"
+ },
+ {
+ "label": "Explicit Label",
+ "type": "npm",
+ "script": "test"
+ },
+ {
+ "type": "gulp",
+ "task": "build"
+ },
+ {
+ "type": "shell",
+ "command": "echo hello"
+ }
+ ]
+}
@@ -10,7 +10,6 @@ pub struct TabBar {
start_children: SmallVec<[AnyElement; 2]>,
children: SmallVec<[AnyElement; 2]>,
end_children: SmallVec<[AnyElement; 2]>,
- pre_end_children: SmallVec<[AnyElement; 2]>,
scroll_handle: Option<ScrollHandle>,
}
@@ -21,7 +20,6 @@ impl TabBar {
start_children: SmallVec::new(),
children: SmallVec::new(),
end_children: SmallVec::new(),
- pre_end_children: SmallVec::new(),
scroll_handle: None,
}
}
@@ -72,15 +70,6 @@ impl TabBar {
self
}
- pub fn pre_end_child(mut self, end_child: impl IntoElement) -> Self
- where
- Self: Sized,
- {
- self.pre_end_children
- .push(end_child.into_element().into_any());
- self
- }
-
pub fn end_children(mut self, end_children: impl IntoIterator<Item = impl IntoElement>) -> Self
where
Self: Sized,
@@ -148,32 +137,17 @@ impl RenderOnce for TabBar {
.children(self.children),
),
)
- .when(
- !self.end_children.is_empty() || !self.pre_end_children.is_empty(),
- |this| {
- this.child(
- h_flex()
- .flex_none()
- .gap(DynamicSpacing::Base04.rems(cx))
- .px(DynamicSpacing::Base06.rems(cx))
- .children(self.pre_end_children)
- .border_color(cx.theme().colors().border)
- .border_b_1()
- .when(!self.end_children.is_empty(), |div| {
- div.child(
- h_flex()
- .h_full()
- .flex_none()
- .pl(DynamicSpacing::Base04.rems(cx))
- .gap(DynamicSpacing::Base04.rems(cx))
- .border_l_1()
- .border_color(cx.theme().colors().border)
- .children(self.end_children),
- )
- }),
- )
- },
- )
+ .when(!self.end_children.is_empty(), |this| {
+ this.child(
+ h_flex()
+ .flex_none()
+ .gap(DynamicSpacing::Base04.rems(cx))
+ .px(DynamicSpacing::Base06.rems(cx))
+ .border_color(cx.theme().colors().border)
+ .border_b_1()
+ .children(self.end_children),
+ )
+ })
}
}
@@ -27,72 +27,6 @@ pub enum PanelEvent {
pub use proto::PanelId;
-pub struct MinimizePane;
-pub struct ClosePane;
-
-pub trait UtilityPane: EventEmitter<MinimizePane> + EventEmitter<ClosePane> + Render {
- fn position(&self, window: &Window, cx: &App) -> UtilityPanePosition;
- /// The icon to render in the adjacent pane's tab bar for toggling this utility pane
- fn toggle_icon(&self, cx: &App) -> IconName;
- fn expanded(&self, cx: &App) -> bool;
- fn set_expanded(&mut self, expanded: bool, cx: &mut Context<Self>);
- fn width(&self, cx: &App) -> Pixels;
- fn set_width(&mut self, width: Option<Pixels>, cx: &mut Context<Self>);
-}
-
-pub trait UtilityPaneHandle: 'static + Send + Sync {
- fn position(&self, window: &Window, cx: &App) -> UtilityPanePosition;
- fn toggle_icon(&self, cx: &App) -> IconName;
- fn expanded(&self, cx: &App) -> bool;
- fn set_expanded(&self, expanded: bool, cx: &mut App);
- fn width(&self, cx: &App) -> Pixels;
- fn set_width(&self, width: Option<Pixels>, cx: &mut App);
- fn to_any(&self) -> AnyView;
- fn box_clone(&self) -> Box<dyn UtilityPaneHandle>;
-}
-
-impl<T> UtilityPaneHandle for Entity<T>
-where
- T: UtilityPane,
-{
- fn position(&self, window: &Window, cx: &App) -> UtilityPanePosition {
- self.read(cx).position(window, cx)
- }
-
- fn toggle_icon(&self, cx: &App) -> IconName {
- self.read(cx).toggle_icon(cx)
- }
-
- fn expanded(&self, cx: &App) -> bool {
- self.read(cx).expanded(cx)
- }
-
- fn set_expanded(&self, expanded: bool, cx: &mut App) {
- self.update(cx, |this, cx| this.set_expanded(expanded, cx))
- }
-
- fn width(&self, cx: &App) -> Pixels {
- self.read(cx).width(cx)
- }
-
- fn set_width(&self, width: Option<Pixels>, cx: &mut App) {
- self.update(cx, |this, cx| this.set_width(width, cx))
- }
-
- fn to_any(&self) -> AnyView {
- self.clone().into()
- }
-
- fn box_clone(&self) -> Box<dyn UtilityPaneHandle> {
- Box::new(self.clone())
- }
-}
-
-pub enum UtilityPanePosition {
- Left,
- Right,
-}
-
pub trait Panel: Focusable + EventEmitter<PanelEvent> + Render + Sized {
fn persistent_name() -> &'static str;
fn panel_key() -> &'static str;
@@ -302,8 +302,28 @@ impl MultiWorkspace {
}
fn focus_active_workspace(&self, window: &mut Window, cx: &mut App) {
- let pane = self.workspace().read(cx).active_pane().clone();
- let focus_handle = pane.read(cx).focus_handle(cx);
+ // If a dock panel is zoomed, focus it instead of the center pane.
+ // Otherwise, focusing the center pane triggers dismiss_zoomed_items_to_reveal,
+ // which closes the zoomed dock.
+ let focus_handle = {
+ let workspace = self.workspace().read(cx);
+ let mut target = None;
+ for dock in workspace.all_docks() {
+ let dock = dock.read(cx);
+ if dock.is_open() {
+ if let Some(panel) = dock.active_panel() {
+ if panel.is_zoomed(window, cx) {
+ target = Some(panel.panel_focus_handle(cx));
+ break;
+ }
+ }
+ }
+ }
+ target.unwrap_or_else(|| {
+ let pane = workspace.active_pane().clone();
+ pane.read(cx).focus_handle(cx)
+ })
+ };
window.focus(&focus_handle, cx);
}
@@ -11,12 +11,10 @@ use crate::{
move_item,
notifications::NotifyResultExt,
toolbar::Toolbar,
- utility_pane::UtilityPaneSlot,
workspace_settings::{AutosaveSetting, TabBarSettings, WorkspaceSettings},
};
use anyhow::Result;
use collections::{BTreeSet, HashMap, HashSet, VecDeque};
-use feature_flags::{AgentV2FeatureFlag, FeatureFlagAppExt};
use futures::{StreamExt, stream::FuturesUnordered};
use gpui::{
Action, AnyElement, App, AsyncWindowContext, ClickEvent, ClipboardItem, Context, Corner, Div,
@@ -425,8 +423,6 @@ pub struct Pane {
welcome_page: Option<Entity<crate::welcome::WelcomePage>>,
pub in_center_group: bool,
- pub is_upper_left: bool,
- pub is_upper_right: bool,
}
pub struct ActivationHistoryEntry {
@@ -595,8 +591,6 @@ impl Pane {
project_item_restoration_data: HashMap::default(),
welcome_page: None,
in_center_group: false,
- is_upper_left: false,
- is_upper_right: false,
}
}
@@ -2208,6 +2202,14 @@ impl Pane {
let path_style = project.read_with(cx, |project, cx| project.path_style(cx));
if save_intent == SaveIntent::Skip {
+ let is_saveable_singleton = cx.update(|_window, cx| {
+ item.can_save(cx) && item.buffer_kind(cx) == ItemBufferKind::Singleton
+ })?;
+ if is_saveable_singleton {
+ pane.update_in(cx, |_, window, cx| item.reload(project, window, cx))?
+ .await
+ .log_err();
+ }
return Ok(true);
};
let Some(item_ix) = pane
@@ -2356,13 +2358,20 @@ impl Pane {
match answer {
Ok(0) => {}
Ok(1) => {
- // Don't save this file
+                            // Don't save this file; reload it from disk to discard changes
pane.update_in(cx, |pane, _, cx| {
if pane.is_tab_pinned(item_ix) && !item.can_save(cx) {
pane.pinned_tab_count -= 1;
}
})
.log_err();
+ if can_save && is_singleton {
+ pane.update_in(cx, |_, window, cx| {
+ item.reload(project.clone(), window, cx)
+ })?
+ .await
+ .log_err();
+ }
return Ok(true);
}
_ => return Ok(false), // Cancel
@@ -3280,12 +3289,11 @@ impl Pane {
}
fn render_tab_bar(&mut self, window: &mut Window, cx: &mut Context<Pane>) -> AnyElement {
- let Some(workspace) = self.workspace.upgrade() else {
+ if self.workspace.upgrade().is_none() {
return gpui::Empty.into_any();
- };
+ }
let focus_handle = self.focus_handle.clone();
- let is_pane_focused = self.has_focus(window, cx);
let navigate_backward = IconButton::new("navigate_backward", IconName::ArrowLeft)
.icon_size(IconSize::Small)
@@ -3310,70 +3318,6 @@ impl Pane {
}
});
- let open_aside_left = {
- let workspace = workspace.read(cx);
- workspace.utility_pane(UtilityPaneSlot::Left).map(|pane| {
- let toggle_icon = pane.toggle_icon(cx);
- let workspace_handle = self.workspace.clone();
-
- h_flex()
- .h_full()
- .pr_1p5()
- .border_r_1()
- .border_color(cx.theme().colors().border)
- .child(
- IconButton::new("open_aside_left", toggle_icon)
- .icon_size(IconSize::Small)
- .tooltip(Tooltip::text("Toggle Agent Pane")) // TODO: Probably want to make this generic
- .on_click(move |_, window, cx| {
- workspace_handle
- .update(cx, |workspace, cx| {
- workspace.toggle_utility_pane(
- UtilityPaneSlot::Left,
- window,
- cx,
- )
- })
- .ok();
- }),
- )
- .into_any_element()
- })
- };
-
- let open_aside_right = {
- let workspace = workspace.read(cx);
- workspace.utility_pane(UtilityPaneSlot::Right).map(|pane| {
- let toggle_icon = pane.toggle_icon(cx);
- let workspace_handle = self.workspace.clone();
-
- h_flex()
- .h_full()
- .when(is_pane_focused, |this| {
- this.pl(DynamicSpacing::Base04.rems(cx))
- .border_l_1()
- .border_color(cx.theme().colors().border)
- })
- .child(
- IconButton::new("open_aside_right", toggle_icon)
- .icon_size(IconSize::Small)
- .tooltip(Tooltip::text("Toggle Agent Pane")) // TODO: Probably want to make this generic
- .on_click(move |_, window, cx| {
- workspace_handle
- .update(cx, |workspace, cx| {
- workspace.toggle_utility_pane(
- UtilityPaneSlot::Right,
- window,
- cx,
- )
- })
- .ok();
- }),
- )
- .into_any_element()
- })
- };
-
let navigate_forward = IconButton::new("navigate_forward", IconName::ArrowRight)
.icon_size(IconSize::Small)
.on_click({
@@ -3421,34 +3365,6 @@ impl Pane {
let unpinned_tabs = tab_items.split_off(self.pinned_tab_count);
let pinned_tabs = tab_items;
- let render_aside_toggle_left = cx.has_flag::<AgentV2FeatureFlag>()
- && self
- .is_upper_left
- .then(|| {
- self.workspace.upgrade().and_then(|entity| {
- let workspace = entity.read(cx);
- workspace
- .utility_pane(UtilityPaneSlot::Left)
- .map(|pane| !pane.expanded(cx))
- })
- })
- .flatten()
- .unwrap_or(false);
-
- let render_aside_toggle_right = cx.has_flag::<AgentV2FeatureFlag>()
- && self
- .is_upper_right
- .then(|| {
- self.workspace.upgrade().and_then(|entity| {
- let workspace = entity.read(cx);
- workspace
- .utility_pane(UtilityPaneSlot::Right)
- .map(|pane| !pane.expanded(cx))
- })
- })
- .flatten()
- .unwrap_or(false);
-
let tab_bar_settings = TabBarSettings::get_global(cx);
let use_separate_rows = tab_bar_settings.show_pinned_tabs_in_separate_row;
@@ -3459,10 +3375,6 @@ impl Pane {
tab_count,
navigate_backward,
navigate_forward,
- open_aside_left,
- open_aside_right,
- render_aside_toggle_left,
- render_aside_toggle_right,
window,
cx,
)
@@ -3473,10 +3385,6 @@ impl Pane {
tab_count,
navigate_backward,
navigate_forward,
- open_aside_left,
- open_aside_right,
- render_aside_toggle_left,
- render_aside_toggle_right,
window,
cx,
)
@@ -3488,21 +3396,10 @@ impl Pane {
tab_bar: TabBar,
navigate_backward: IconButton,
navigate_forward: IconButton,
- open_aside_left: Option<AnyElement>,
- render_aside_toggle_left: bool,
window: &mut Window,
cx: &mut Context<Pane>,
) -> TabBar {
tab_bar
- .map(|tab_bar| {
- if let Some(open_aside_left) = open_aside_left
- && render_aside_toggle_left
- {
- tab_bar.start_child(open_aside_left)
- } else {
- tab_bar
- }
- })
.when(
self.display_nav_history_buttons.unwrap_or_default(),
|tab_bar| {
@@ -3524,22 +3421,6 @@ impl Pane {
})
}
- fn configure_tab_bar_end(
- tab_bar: TabBar,
- open_aside_right: Option<AnyElement>,
- render_aside_toggle_right: bool,
- ) -> TabBar {
- tab_bar.map(|tab_bar| {
- if let Some(open_aside_right) = open_aside_right
- && render_aside_toggle_right
- {
- tab_bar.end_child(open_aside_right)
- } else {
- tab_bar
- }
- })
- }
-
fn render_single_row_tab_bar(
&mut self,
pinned_tabs: Vec<AnyElement>,
@@ -3547,10 +3428,6 @@ impl Pane {
tab_count: usize,
navigate_backward: IconButton,
navigate_forward: IconButton,
- open_aside_left: Option<AnyElement>,
- open_aside_right: Option<AnyElement>,
- render_aside_toggle_left: bool,
- render_aside_toggle_right: bool,
window: &mut Window,
cx: &mut Context<Pane>,
) -> AnyElement {
@@ -3559,8 +3436,6 @@ impl Pane {
TabBar::new("tab_bar"),
navigate_backward,
navigate_forward,
- open_aside_left,
- render_aside_toggle_left,
window,
cx,
)
@@ -3581,8 +3456,7 @@ impl Pane {
})
}))
.child(self.render_unpinned_tabs_container(unpinned_tabs, tab_count, cx));
- Self::configure_tab_bar_end(tab_bar, open_aside_right, render_aside_toggle_right)
- .into_any_element()
+ tab_bar.into_any_element()
}
fn render_two_row_tab_bar(
@@ -3592,10 +3466,6 @@ impl Pane {
tab_count: usize,
navigate_backward: IconButton,
navigate_forward: IconButton,
- open_aside_left: Option<AnyElement>,
- open_aside_right: Option<AnyElement>,
- render_aside_toggle_left: bool,
- render_aside_toggle_right: bool,
window: &mut Window,
cx: &mut Context<Pane>,
) -> AnyElement {
@@ -3604,8 +3474,6 @@ impl Pane {
TabBar::new("pinned_tab_bar"),
navigate_backward,
navigate_forward,
- open_aside_left,
- render_aside_toggle_left,
window,
cx,
)
@@ -3617,12 +3485,6 @@ impl Pane {
.w_full()
.children(pinned_tabs),
);
- let pinned_tab_bar = Self::configure_tab_bar_end(
- pinned_tab_bar,
- open_aside_right,
- render_aside_toggle_right,
- );
-
v_flex()
.w_full()
.flex_none()
@@ -7357,6 +7219,166 @@ mod tests {
assert_item_labels(&pane, ["Dirty*^"], cx);
}
+ #[gpui::test]
+ async fn test_discard_all_reloads_from_disk(cx: &mut TestAppContext) {
+ init_test(cx);
+ let fs = FakeFs::new(cx.executor());
+
+ let project = Project::test(fs, None, cx).await;
+ let (workspace, cx) =
+ cx.add_window_view(|window, cx| Workspace::test_new(project.clone(), window, cx));
+ let pane = workspace.read_with(cx, |workspace, _| workspace.active_pane().clone());
+
+ let item_a = add_labeled_item(&pane, "A", true, cx);
+ item_a.update(cx, |item, cx| {
+ item.project_items
+ .push(TestProjectItem::new_dirty(1, "A.txt", cx))
+ });
+ let item_b = add_labeled_item(&pane, "B", true, cx);
+ item_b.update(cx, |item, cx| {
+ item.project_items
+ .push(TestProjectItem::new_dirty(2, "B.txt", cx))
+ });
+ assert_item_labels(&pane, ["A^", "B*^"], cx);
+
+ let close_task = pane.update_in(cx, |pane, window, cx| {
+ pane.close_all_items(
+ &CloseAllItems {
+ save_intent: None,
+ close_pinned: false,
+ },
+ window,
+ cx,
+ )
+ });
+
+ cx.executor().run_until_parked();
+ cx.simulate_prompt_answer("Discard all");
+ close_task.await.unwrap();
+ assert_item_labels(&pane, [], cx);
+
+ item_a.read_with(cx, |item, _| {
+ assert_eq!(item.reload_count, 1, "item A should have been reloaded");
+ assert!(
+ !item.is_dirty,
+ "item A should no longer be dirty after reload"
+ );
+ });
+ item_b.read_with(cx, |item, _| {
+ assert_eq!(item.reload_count, 1, "item B should have been reloaded");
+ assert!(
+ !item.is_dirty,
+ "item B should no longer be dirty after reload"
+ );
+ });
+ }
+
+ #[gpui::test]
+ async fn test_dont_save_single_file_reloads_from_disk(cx: &mut TestAppContext) {
+ init_test(cx);
+ let fs = FakeFs::new(cx.executor());
+
+ let project = Project::test(fs, None, cx).await;
+ let (workspace, cx) =
+ cx.add_window_view(|window, cx| Workspace::test_new(project.clone(), window, cx));
+ let pane = workspace.read_with(cx, |workspace, _| workspace.active_pane().clone());
+
+ let item = add_labeled_item(&pane, "Dirty", true, cx);
+ item.update(cx, |item, cx| {
+ item.project_items
+ .push(TestProjectItem::new_dirty(1, "Dirty.txt", cx))
+ });
+ assert_item_labels(&pane, ["Dirty*^"], cx);
+
+ let close_task = pane.update_in(cx, |pane, window, cx| {
+ pane.close_item_by_id(item.item_id(), SaveIntent::Close, window, cx)
+ });
+
+ cx.executor().run_until_parked();
+ cx.simulate_prompt_answer("Don't Save");
+ close_task.await.unwrap();
+ assert_item_labels(&pane, [], cx);
+
+ item.read_with(cx, |item, _| {
+ assert_eq!(item.reload_count, 1, "item should have been reloaded");
+ assert!(
+ !item.is_dirty,
+ "item should no longer be dirty after reload"
+ );
+ });
+ }
+
+ #[gpui::test]
+ async fn test_discard_does_not_reload_multibuffer(cx: &mut TestAppContext) {
+ init_test(cx);
+ let fs = FakeFs::new(cx.executor());
+
+ let project = Project::test(fs, None, cx).await;
+ let (workspace, cx) =
+ cx.add_window_view(|window, cx| Workspace::test_new(project.clone(), window, cx));
+ let pane = workspace.read_with(cx, |workspace, _| workspace.active_pane().clone());
+
+ let singleton_item = pane.update_in(cx, |pane, window, cx| {
+ let item = Box::new(cx.new(|cx| {
+ TestItem::new(cx)
+ .with_label("Singleton")
+ .with_dirty(true)
+ .with_buffer_kind(ItemBufferKind::Singleton)
+ }));
+ pane.add_item(item.clone(), false, false, None, window, cx);
+ item
+ });
+ singleton_item.update(cx, |item, cx| {
+ item.project_items
+ .push(TestProjectItem::new_dirty(1, "Singleton.txt", cx))
+ });
+
+ let multi_item = pane.update_in(cx, |pane, window, cx| {
+ let item = Box::new(cx.new(|cx| {
+ TestItem::new(cx)
+ .with_label("Multi")
+ .with_dirty(true)
+ .with_buffer_kind(ItemBufferKind::Multibuffer)
+ }));
+ pane.add_item(item.clone(), false, false, None, window, cx);
+ item
+ });
+ multi_item.update(cx, |item, cx| {
+ item.project_items
+ .push(TestProjectItem::new_dirty(2, "Multi.txt", cx))
+ });
+
+ let close_task = pane.update_in(cx, |pane, window, cx| {
+ pane.close_all_items(
+ &CloseAllItems {
+ save_intent: None,
+ close_pinned: false,
+ },
+ window,
+ cx,
+ )
+ });
+
+ cx.executor().run_until_parked();
+ cx.simulate_prompt_answer("Discard all");
+ close_task.await.unwrap();
+ assert_item_labels(&pane, [], cx);
+
+ singleton_item.read_with(cx, |item, _| {
+ assert_eq!(item.reload_count, 1, "singleton should have been reloaded");
+ assert!(
+ !item.is_dirty,
+ "singleton should no longer be dirty after reload"
+ );
+ });
+ multi_item.read_with(cx, |item, _| {
+ assert_eq!(
+ item.reload_count, 0,
+ "multibuffer should not have been reloaded"
+ );
+ });
+ }
+
#[gpui::test]
async fn test_close_multibuffer_items(cx: &mut TestAppContext) {
init_test(cx);
@@ -7561,8 +7583,8 @@ mod tests {
let scroll_bounds = tab_bar_scroll_handle.bounds();
let scroll_offset = tab_bar_scroll_handle.offset();
assert!(tab_bounds.right() <= scroll_bounds.right());
- // -43.0 is the magic number for this setup
- assert_eq!(scroll_offset.x, px(-43.0));
+ // -38.5 is the magic number for this setup
+ assert_eq!(scroll_offset.x, px(-38.5));
assert!(
!tab_bounds.intersects(&new_tab_button_bounds),
"Tab should not overlap with the new tab button, if this is failing check if there's been a redesign!"
@@ -206,7 +206,7 @@ impl PaneGroup {
}
pub fn mark_positions(&mut self, cx: &mut App) {
- self.root.mark_positions(self.is_center, true, true, cx);
+ self.root.mark_positions(self.is_center, cx);
}
pub fn render(
@@ -278,37 +278,15 @@ pub enum Member {
}
impl Member {
- pub fn mark_positions(
- &mut self,
- in_center_group: bool,
- is_upper_left: bool,
- is_upper_right: bool,
- cx: &mut App,
- ) {
+ pub fn mark_positions(&mut self, in_center_group: bool, cx: &mut App) {
match self {
Member::Axis(pane_axis) => {
- let len = pane_axis.members.len();
- for (idx, member) in pane_axis.members.iter_mut().enumerate() {
- let member_upper_left = match pane_axis.axis {
- Axis::Vertical => is_upper_left && idx == 0,
- Axis::Horizontal => is_upper_left && idx == 0,
- };
- let member_upper_right = match pane_axis.axis {
- Axis::Vertical => is_upper_right && idx == 0,
- Axis::Horizontal => is_upper_right && idx == len - 1,
- };
- member.mark_positions(
- in_center_group,
- member_upper_left,
- member_upper_right,
- cx,
- );
+ for member in pane_axis.members.iter_mut() {
+ member.mark_positions(in_center_group, cx);
}
}
Member::Pane(entity) => entity.update(cx, |pane, _| {
pane.in_center_group = in_center_group;
- pane.is_upper_left = is_upper_left;
- pane.is_upper_right = is_upper_right;
}),
}
}
@@ -1,282 +0,0 @@
-use gpui::{
- AppContext as _, EntityId, MouseButton, Pixels, Render, StatefulInteractiveElement,
- Subscription, WeakEntity, deferred, px,
-};
-use ui::{
- ActiveTheme as _, Context, FluentBuilder as _, InteractiveElement as _, IntoElement,
- ParentElement as _, RenderOnce, Styled as _, Window, div,
-};
-
-use crate::{
- DockPosition, Workspace,
- dock::{ClosePane, MinimizePane, UtilityPane, UtilityPaneHandle},
-};
-
-pub(crate) const UTILITY_PANE_RESIZE_HANDLE_SIZE: Pixels = px(6.0);
-pub(crate) const UTILITY_PANE_MIN_WIDTH: Pixels = px(20.0);
-
-#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
-pub enum UtilityPaneSlot {
- Left,
- Right,
-}
-
-struct UtilityPaneSlotState {
- panel_id: EntityId,
- utility_pane: Box<dyn UtilityPaneHandle>,
- _subscriptions: Vec<Subscription>,
-}
-
-#[derive(Default)]
-pub struct UtilityPaneState {
- left_slot: Option<UtilityPaneSlotState>,
- right_slot: Option<UtilityPaneSlotState>,
-}
-
-#[derive(Clone)]
-pub struct DraggedUtilityPane(pub UtilityPaneSlot);
-
-impl Render for DraggedUtilityPane {
- fn render(&mut self, _window: &mut Window, _cx: &mut Context<Self>) -> impl IntoElement {
- gpui::Empty
- }
-}
-
-pub fn utility_slot_for_dock_position(position: DockPosition) -> UtilityPaneSlot {
- match position {
- DockPosition::Left => UtilityPaneSlot::Left,
- DockPosition::Right => UtilityPaneSlot::Right,
- DockPosition::Bottom => UtilityPaneSlot::Left,
- }
-}
-
-impl Workspace {
- pub fn utility_pane(&self, slot: UtilityPaneSlot) -> Option<&dyn UtilityPaneHandle> {
- match slot {
- UtilityPaneSlot::Left => self
- .utility_panes
- .left_slot
- .as_ref()
- .map(|s| s.utility_pane.as_ref()),
- UtilityPaneSlot::Right => self
- .utility_panes
- .right_slot
- .as_ref()
- .map(|s| s.utility_pane.as_ref()),
- }
- }
-
- pub fn toggle_utility_pane(
- &mut self,
- slot: UtilityPaneSlot,
- window: &mut Window,
- cx: &mut Context<Self>,
- ) {
- if let Some(handle) = self.utility_pane(slot) {
- let current = handle.expanded(cx);
- handle.set_expanded(!current, cx);
- }
- cx.notify();
- self.serialize_workspace(window, cx);
- }
-
- pub fn register_utility_pane<T: UtilityPane>(
- &mut self,
- slot: UtilityPaneSlot,
- panel_id: EntityId,
- handle: gpui::Entity<T>,
- cx: &mut Context<Self>,
- ) {
- let minimize_subscription =
- cx.subscribe(&handle, move |this, _, _event: &MinimizePane, cx| {
- if let Some(handle) = this.utility_pane(slot) {
- handle.set_expanded(false, cx);
- }
- cx.notify();
- });
-
- let close_subscription = cx.subscribe(&handle, move |this, _, _event: &ClosePane, cx| {
- this.clear_utility_pane(slot, cx);
- });
-
- let subscriptions = vec![minimize_subscription, close_subscription];
- let boxed_handle: Box<dyn UtilityPaneHandle> = Box::new(handle);
-
- match slot {
- UtilityPaneSlot::Left => {
- self.utility_panes.left_slot = Some(UtilityPaneSlotState {
- panel_id,
- utility_pane: boxed_handle,
- _subscriptions: subscriptions,
- });
- }
- UtilityPaneSlot::Right => {
- self.utility_panes.right_slot = Some(UtilityPaneSlotState {
- panel_id,
- utility_pane: boxed_handle,
- _subscriptions: subscriptions,
- });
- }
- }
- cx.notify();
- }
-
- pub fn clear_utility_pane(&mut self, slot: UtilityPaneSlot, cx: &mut Context<Self>) {
- match slot {
- UtilityPaneSlot::Left => {
- self.utility_panes.left_slot = None;
- }
- UtilityPaneSlot::Right => {
- self.utility_panes.right_slot = None;
- }
- }
- cx.notify();
- }
-
- pub fn clear_utility_pane_if_provider(
- &mut self,
- slot: UtilityPaneSlot,
- provider_panel_id: EntityId,
- cx: &mut Context<Self>,
- ) {
- let should_clear = match slot {
- UtilityPaneSlot::Left => self
- .utility_panes
- .left_slot
- .as_ref()
- .is_some_and(|slot| slot.panel_id == provider_panel_id),
- UtilityPaneSlot::Right => self
- .utility_panes
- .right_slot
- .as_ref()
- .is_some_and(|slot| slot.panel_id == provider_panel_id),
- };
-
- if should_clear {
- self.clear_utility_pane(slot, cx);
- }
- }
-
- pub fn resize_utility_pane(
- &mut self,
- slot: UtilityPaneSlot,
- new_width: Pixels,
- window: &mut Window,
- cx: &mut Context<Self>,
- ) {
- if let Some(handle) = self.utility_pane(slot) {
- let max_width = self.max_utility_pane_width(window, cx);
- let width = new_width.max(UTILITY_PANE_MIN_WIDTH).min(max_width);
- handle.set_width(Some(width), cx);
- cx.notify();
- self.serialize_workspace(window, cx);
- }
- }
-
- pub fn reset_utility_pane_width(
- &mut self,
- slot: UtilityPaneSlot,
- window: &mut Window,
- cx: &mut Context<Self>,
- ) {
- if let Some(handle) = self.utility_pane(slot) {
- handle.set_width(None, cx);
- cx.notify();
- self.serialize_workspace(window, cx);
- }
- }
-}
-
-#[derive(IntoElement)]
-pub struct UtilityPaneFrame {
- workspace: WeakEntity<Workspace>,
- slot: UtilityPaneSlot,
- handle: Box<dyn UtilityPaneHandle>,
-}
-
-impl UtilityPaneFrame {
- pub fn new(
- slot: UtilityPaneSlot,
- handle: Box<dyn UtilityPaneHandle>,
- cx: &mut Context<Workspace>,
- ) -> Self {
- let workspace = cx.weak_entity();
- Self {
- workspace,
- slot,
- handle,
- }
- }
-}
-
-impl RenderOnce for UtilityPaneFrame {
- fn render(self, _window: &mut Window, cx: &mut ui::App) -> impl IntoElement {
- let workspace = self.workspace.clone();
- let slot = self.slot;
- let width = self.handle.width(cx);
-
- let create_resize_handle = || {
- let workspace_handle = workspace.clone();
- let handle = div()
- .id(match slot {
- UtilityPaneSlot::Left => "utility-pane-resize-handle-left",
- UtilityPaneSlot::Right => "utility-pane-resize-handle-right",
- })
- .on_drag(DraggedUtilityPane(slot), move |pane, _, _, cx| {
- cx.stop_propagation();
- cx.new(|_| pane.clone())
- })
- .on_mouse_down(MouseButton::Left, move |_, _, cx| {
- cx.stop_propagation();
- })
- .on_mouse_up(
- MouseButton::Left,
- move |e: &gpui::MouseUpEvent, window, cx| {
- if e.click_count == 2 {
- workspace_handle
- .update(cx, |workspace, cx| {
- workspace.reset_utility_pane_width(slot, window, cx);
- })
- .ok();
- cx.stop_propagation();
- }
- },
- )
- .occlude();
-
- match slot {
- UtilityPaneSlot::Left => deferred(
- handle
- .absolute()
- .right(-UTILITY_PANE_RESIZE_HANDLE_SIZE / 2.)
- .top(px(0.))
- .h_full()
- .w(UTILITY_PANE_RESIZE_HANDLE_SIZE)
- .cursor_col_resize(),
- ),
- UtilityPaneSlot::Right => deferred(
- handle
- .absolute()
- .left(-UTILITY_PANE_RESIZE_HANDLE_SIZE / 2.)
- .top(px(0.))
- .h_full()
- .w(UTILITY_PANE_RESIZE_HANDLE_SIZE)
- .cursor_col_resize(),
- ),
- }
- };
-
- div()
- .h_full()
- .bg(cx.theme().colors().tab_bar_background)
- .w(width)
- .border_color(cx.theme().colors().border)
- .when(self.slot == UtilityPaneSlot::Left, |this| this.border_r_1())
- .when(self.slot == UtilityPaneSlot::Right, |this| {
- this.border_l_1()
- })
- .child(create_resize_handle())
- .child(self.handle.to_any())
- .into_any_element()
- }
-}
@@ -17,7 +17,6 @@ pub mod tasks;
mod theme_preview;
mod toast_layer;
mod toolbar;
-pub mod utility_pane;
pub mod welcome;
mod workspace_settings;
@@ -39,7 +38,6 @@ use client::{
};
use collections::{HashMap, HashSet, hash_map};
use dock::{Dock, DockPosition, PanelButtons, PanelHandle, RESIZE_HANDLE_SIZE};
-use feature_flags::{AgentV2FeatureFlag, FeatureFlagAppExt};
use futures::{
Future, FutureExt, StreamExt,
channel::{
@@ -143,18 +141,13 @@ pub use workspace_settings::{
};
use zed_actions::{Spawn, feedback::FileBugReport};
-use crate::{
- item::ItemBufferKind,
- notifications::NotificationId,
- utility_pane::{UTILITY_PANE_MIN_WIDTH, utility_slot_for_dock_position},
-};
+use crate::{item::ItemBufferKind, notifications::NotificationId};
use crate::{
persistence::{
SerializedAxis,
model::{DockData, DockStructure, SerializedItem, SerializedPane, SerializedPaneGroup},
},
security_modal::SecurityModal,
- utility_pane::{DraggedUtilityPane, UtilityPaneFrame, UtilityPaneSlot, UtilityPaneState},
};
pub const SERIALIZATION_THROTTLE_TIME: Duration = Duration::from_millis(200);
@@ -1266,7 +1259,6 @@ pub struct Workspace {
scheduled_tasks: Vec<Task<()>>,
last_open_dock_positions: Vec<DockPosition>,
removing: bool,
- utility_panes: UtilityPaneState,
}
impl EventEmitter<Event> for Workspace {}
@@ -1695,7 +1687,6 @@ impl Workspace {
scheduled_tasks: Vec::new(),
last_open_dock_positions: Vec::new(),
removing: false,
- utility_panes: UtilityPaneState::default(),
}
}
@@ -2022,18 +2013,8 @@ impl Workspace {
window: &mut Window,
cx: &mut Context<Self>,
) {
- let mut found_in_dock = None;
for dock in [&self.left_dock, &self.bottom_dock, &self.right_dock] {
- let found = dock.update(cx, |dock, cx| dock.remove_panel(panel, window, cx));
-
- if found {
- found_in_dock = Some(dock.clone());
- }
- }
- if let Some(found_in_dock) = found_in_dock {
- let position = found_in_dock.read(cx).position();
- let slot = utility_slot_for_dock_position(position);
- self.clear_utility_pane_if_provider(slot, Entity::entity_id(panel), cx);
+ dock.update(cx, |dock, cx| dock.remove_panel(panel, window, cx));
}
}
@@ -4559,16 +4540,18 @@ impl Workspace {
// If this pane is in a dock, preserve that dock when dismissing zoomed items.
// This prevents the dock from closing when focus events fire during window activation.
+        // We also preserve any dock whose active panel itself has focus; this covers
+        // panels like AgentPanel that don't implement `pane()` but can still be zoomed.
let dock_to_preserve = self.all_docks().iter().find_map(|dock| {
let dock_read = dock.read(cx);
- if let Some(panel) = dock_read.active_panel()
- && let Some(dock_pane) = panel.pane(cx)
- && dock_pane == pane
- {
- Some(dock_read.position())
- } else {
- None
+ if let Some(panel) = dock_read.active_panel() {
+ if panel.pane(cx).is_some_and(|dock_pane| dock_pane == pane)
+ || panel.panel_focus_handle(cx).contains_focused(window, cx)
+ {
+ return Some(dock_read.position());
+ }
}
+ None
});
self.dismiss_zoomed_items_to_reveal(dock_to_preserve, window, cx);
@@ -6903,7 +6886,6 @@ impl Workspace {
left_dock.resize_active_panel(Some(size), window, cx);
}
});
- self.clamp_utility_pane_widths(window, cx);
}
fn resize_right_dock(&mut self, new_size: Pixels, window: &mut Window, cx: &mut App) {
@@ -6926,7 +6908,6 @@ impl Workspace {
right_dock.resize_active_panel(Some(size), window, cx);
}
});
- self.clamp_utility_pane_widths(window, cx);
}
fn resize_bottom_dock(&mut self, new_size: Pixels, window: &mut Window, cx: &mut App) {
@@ -6941,42 +6922,6 @@ impl Workspace {
bottom_dock.resize_active_panel(Some(size), window, cx);
}
});
- self.clamp_utility_pane_widths(window, cx);
- }
-
- fn max_utility_pane_width(&self, window: &Window, cx: &App) -> Pixels {
- let left_dock_width = self
- .left_dock
- .read(cx)
- .active_panel_size(window, cx)
- .unwrap_or(px(0.0));
- let right_dock_width = self
- .right_dock
- .read(cx)
- .active_panel_size(window, cx)
- .unwrap_or(px(0.0));
- let center_pane_width = self.bounds.size.width - left_dock_width - right_dock_width;
- center_pane_width - px(10.0)
- }
-
- fn clamp_utility_pane_widths(&mut self, window: &mut Window, cx: &mut App) {
- let max_width = self.max_utility_pane_width(window, cx);
-
- // Clamp left slot utility pane if it exists
- if let Some(handle) = self.utility_pane(UtilityPaneSlot::Left) {
- let current_width = handle.width(cx);
- if current_width > max_width {
- handle.set_width(Some(max_width.max(UTILITY_PANE_MIN_WIDTH)), cx);
- }
- }
-
- // Clamp right slot utility pane if it exists
- if let Some(handle) = self.utility_pane(UtilityPaneSlot::Right) {
- let current_width = handle.width(cx);
- if current_width > max_width {
- handle.set_width(Some(max_width.max(UTILITY_PANE_MIN_WIDTH)), cx);
- }
- }
}
fn toggle_edit_predictions_all_files(
@@ -7483,34 +7428,7 @@ impl Render for Workspace {
}
},
))
- .on_drag_move(cx.listener(
- move |workspace,
- e: &DragMoveEvent<DraggedUtilityPane>,
- window,
- cx| {
- let slot = e.drag(cx).0;
- match slot {
- UtilityPaneSlot::Left => {
- let left_dock_width = workspace.left_dock.read(cx)
- .active_panel_size(window, cx)
- .unwrap_or(gpui::px(0.0));
- let new_width = e.event.position.x
- - workspace.bounds.left()
- - left_dock_width;
- workspace.resize_utility_pane(slot, new_width, window, cx);
- }
- UtilityPaneSlot::Right => {
- let right_dock_width = workspace.right_dock.read(cx)
- .active_panel_size(window, cx)
- .unwrap_or(gpui::px(0.0));
- let new_width = workspace.bounds.right()
- - e.event.position.x
- - right_dock_width;
- workspace.resize_utility_pane(slot, new_width, window, cx);
- }
- }
- },
- ))
+
})
.child({
match bottom_dock_layout {
@@ -7530,15 +7448,7 @@ impl Render for Workspace {
window,
cx,
))
- .when(cx.has_flag::<AgentV2FeatureFlag>(), |this| {
- this.when_some(self.utility_pane(UtilityPaneSlot::Left), |this, pane| {
- this.when(pane.expanded(cx), |this| {
- this.child(
- UtilityPaneFrame::new(UtilityPaneSlot::Left, pane.box_clone(), cx)
- )
- })
- })
- })
+
.child(
div()
.flex()
@@ -7580,15 +7490,7 @@ impl Render for Workspace {
),
),
)
- .when(cx.has_flag::<AgentV2FeatureFlag>(), |this| {
- this.when_some(self.utility_pane(UtilityPaneSlot::Right), |this, pane| {
- this.when(pane.expanded(cx), |this| {
- this.child(
- UtilityPaneFrame::new(UtilityPaneSlot::Right, pane.box_clone(), cx)
- )
- })
- })
- })
+
.children(self.render_dock(
DockPosition::Right,
&self.right_dock,
@@ -7619,15 +7521,7 @@ impl Render for Workspace {
.flex_row()
.flex_1()
.children(self.render_dock(DockPosition::Left, &self.left_dock, window, cx))
- .when(cx.has_flag::<AgentV2FeatureFlag>(), |this| {
- this.when_some(self.utility_pane(UtilityPaneSlot::Left), |this, pane| {
- this.when(pane.expanded(cx), |this| {
- this.child(
- UtilityPaneFrame::new(UtilityPaneSlot::Left, pane.box_clone(), cx)
- )
- })
- })
- })
+
.child(
div()
.flex()
@@ -7655,13 +7549,7 @@ impl Render for Workspace {
.when_some(paddings.1, |this, p| this.child(p.border_l_1())),
)
)
- .when_some(self.utility_pane(UtilityPaneSlot::Right), |this, pane| {
- this.when(pane.expanded(cx), |this| {
- this.child(
- UtilityPaneFrame::new(UtilityPaneSlot::Right, pane.box_clone(), cx)
- )
- })
- })
+
)
.child(
div()
@@ -7686,15 +7574,7 @@ impl Render for Workspace {
window,
cx,
))
- .when(cx.has_flag::<AgentV2FeatureFlag>(), |this| {
- this.when_some(self.utility_pane(UtilityPaneSlot::Left), |this, pane| {
- this.when(pane.expanded(cx), |this| {
- this.child(
- UtilityPaneFrame::new(UtilityPaneSlot::Left, pane.box_clone(), cx)
- )
- })
- })
- })
+
.child(
div()
.flex()
@@ -7733,15 +7613,7 @@ impl Render for Workspace {
.when_some(paddings.1, |this, p| this.child(p.border_l_1())),
)
)
- .when(cx.has_flag::<AgentV2FeatureFlag>(), |this| {
- this.when_some(self.utility_pane(UtilityPaneSlot::Right), |this, pane| {
- this.when(pane.expanded(cx), |this| {
- this.child(
- UtilityPaneFrame::new(UtilityPaneSlot::Right, pane.box_clone(), cx)
- )
- })
- })
- })
+
.children(self.render_dock(DockPosition::Right, &self.right_dock, window, cx))
)
.child(
@@ -7761,13 +7633,7 @@ impl Render for Workspace {
window,
cx,
))
- .when_some(self.utility_pane(UtilityPaneSlot::Left), |this, pane| {
- this.when(pane.expanded(cx), |this| {
- this.child(
- UtilityPaneFrame::new(UtilityPaneSlot::Left, pane.box_clone(), cx)
- )
- })
- })
+
.child(
div()
.flex()
@@ -7805,15 +7671,7 @@ impl Render for Workspace {
cx,
)),
)
- .when(cx.has_flag::<AgentV2FeatureFlag>(), |this| {
- this.when_some(self.utility_pane(UtilityPaneSlot::Right), |this, pane| {
- this.when(pane.expanded(cx), |this| {
- this.child(
- UtilityPaneFrame::new(UtilityPaneSlot::Right, pane.box_clone(), cx)
- )
- })
- })
- })
+
.children(self.render_dock(
DockPosition::Right,
&self.right_dock,
@@ -12868,6 +12726,101 @@ mod tests {
});
}
+ #[gpui::test]
+ async fn test_panel_zoom_preserved_across_workspace_switch(cx: &mut TestAppContext) {
+ init_test(cx);
+ let fs = FakeFs::new(cx.executor());
+
+ let project_a = Project::test(fs.clone(), [], cx).await;
+ let project_b = Project::test(fs, [], cx).await;
+
+ let multi_workspace_handle =
+ cx.add_window(|window, cx| MultiWorkspace::test_new(project_a.clone(), window, cx));
+
+ let workspace_a = multi_workspace_handle
+ .read_with(cx, |mw, _| mw.workspace().clone())
+ .unwrap();
+
+ let _workspace_b = multi_workspace_handle
+ .update(cx, |mw, window, cx| {
+ mw.test_add_workspace(project_b, window, cx)
+ })
+ .unwrap();
+
+ // Switch to workspace A
+ multi_workspace_handle
+ .update(cx, |mw, window, cx| {
+ mw.activate_index(0, window, cx);
+ })
+ .unwrap();
+
+ let cx = &mut VisualTestContext::from_window(multi_workspace_handle.into(), cx);
+
+ // Add a panel to workspace A's right dock and open the dock
+ let panel = workspace_a.update_in(cx, |workspace, window, cx| {
+ let panel = cx.new(|cx| TestPanel::new(DockPosition::Right, 100, cx));
+ workspace.add_panel(panel.clone(), window, cx);
+ workspace
+ .right_dock()
+ .update(cx, |dock, cx| dock.set_open(true, window, cx));
+ panel
+ });
+
+ // Focus the panel through the workspace (matching existing test pattern)
+ workspace_a.update_in(cx, |workspace, window, cx| {
+ workspace.toggle_panel_focus::<TestPanel>(window, cx);
+ });
+
+ // Zoom the panel
+ panel.update_in(cx, |panel, window, cx| {
+ panel.set_zoomed(true, window, cx);
+ });
+
+ // Verify the panel is zoomed and the dock is open
+ workspace_a.update_in(cx, |workspace, window, cx| {
+ assert!(
+ workspace.right_dock().read(cx).is_open(),
+ "dock should be open before switch"
+ );
+ assert!(
+ panel.is_zoomed(window, cx),
+ "panel should be zoomed before switch"
+ );
+ assert!(
+ panel.read(cx).focus_handle(cx).contains_focused(window, cx),
+ "panel should be focused before switch"
+ );
+ });
+
+ // Switch to workspace B
+ multi_workspace_handle
+ .update(cx, |mw, window, cx| {
+ mw.activate_index(1, window, cx);
+ })
+ .unwrap();
+ cx.run_until_parked();
+
+ // Switch back to workspace A
+ multi_workspace_handle
+ .update(cx, |mw, window, cx| {
+ mw.activate_index(0, window, cx);
+ })
+ .unwrap();
+ cx.run_until_parked();
+
+ // Verify the panel is still zoomed and the dock is still open
+ workspace_a.update_in(cx, |workspace, window, cx| {
+ assert!(
+ workspace.right_dock().read(cx).is_open(),
+ "dock should still be open after switching back"
+ );
+ assert!(
+ panel.is_zoomed(window, cx),
+ "panel should still be zoomed after switching back"
+ );
+ });
+ }
+
fn pane_items_paths(pane: &Entity<Pane>, cx: &App) -> Vec<String> {
pane.read(cx)
.items()
@@ -12894,4 +12847,67 @@ mod tests {
});
item
}
+
+ #[gpui::test]
+ async fn test_zoomed_panel_without_pane_preserved_on_center_focus(
+ cx: &mut gpui::TestAppContext,
+ ) {
+ init_test(cx);
+ let fs = FakeFs::new(cx.executor());
+
+ let project = Project::test(fs, [], cx).await;
+ let (workspace, cx) =
+ cx.add_window_view(|window, cx| Workspace::test_new(project, window, cx));
+
+ let panel = workspace.update_in(cx, |workspace, window, cx| {
+ let panel = cx.new(|cx| TestPanel::new(DockPosition::Right, 100, cx));
+ workspace.add_panel(panel.clone(), window, cx);
+ workspace
+ .right_dock()
+ .update(cx, |dock, cx| dock.set_open(true, window, cx));
+ panel
+ });
+
+ let pane = workspace.read_with(cx, |workspace, _| workspace.active_pane().clone());
+ pane.update_in(cx, |pane, window, cx| {
+ let item = cx.new(TestItem::new);
+ pane.add_item(Box::new(item), true, true, None, window, cx);
+ });
+
+ // Transfer focus to the panel, then zoom it. Using toggle_panel_focus
+ // mirrors the real-world flow and avoids side effects from directly
+ // focusing the panel while the center pane is active.
+ workspace.update_in(cx, |workspace, window, cx| {
+ workspace.toggle_panel_focus::<TestPanel>(window, cx);
+ });
+
+ panel.update_in(cx, |panel, window, cx| {
+ panel.set_zoomed(true, window, cx);
+ });
+
+ workspace.update_in(cx, |workspace, window, cx| {
+ assert!(workspace.right_dock().read(cx).is_open());
+ assert!(panel.is_zoomed(window, cx));
+ assert!(panel.read(cx).focus_handle(cx).contains_focused(window, cx));
+ });
+
+ // Simulate a spurious pane::Event::Focus on the center pane while the
+ // panel still has focus. This mirrors what happens during macOS window
+ // activation: the center pane fires a focus event even though actual
+ // focus remains on the dock panel.
+ pane.update_in(cx, |_, _, cx| {
+ cx.emit(pane::Event::Focus);
+ });
+
+ // The dock must remain open because the panel had focus at the time the
+ // event was processed. Before the fix, dock_to_preserve was None for
+ // panels that don't implement pane(), causing the dock to close.
+ workspace.update_in(cx, |workspace, window, cx| {
+ assert!(
+ workspace.right_dock().read(cx).is_open(),
+ "Dock should stay open when its zoomed panel (without pane()) still has focus"
+ );
+ assert!(panel.is_zoomed(window, cx));
+ });
+ }
}
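
The workspace tests above exercise a single decision rule: a dock whose zoomed panel still holds focus must be preserved when a pane focus event arrives, even if the panel does not implement `pane()`. A standalone sketch of that rule, using hypothetical simplified types (the real `Workspace`/`Dock`/`Panel` APIs are much richer):

```rust
// Hypothetical, simplified model of the dock-preservation decision.
#[derive(Clone, Copy, Debug, PartialEq)]
enum DockPosition {
    Left,
    Bottom,
    Right,
}

struct PanelState {
    position: DockPosition,
    zoomed: bool,
    has_focus: bool,
    provides_pane: bool, // whether the panel implements `pane()`
}

// Buggy variant: only panels exposing a pane could be preserved, so a
// focused, zoomed panel without one caused its dock to close.
fn dock_to_preserve_before_fix(panels: &[PanelState]) -> Option<DockPosition> {
    panels
        .iter()
        .find(|p| p.zoomed && p.has_focus && p.provides_pane)
        .map(|p| p.position)
}

// Fixed variant: any focused, zoomed panel keeps its dock open.
fn dock_to_preserve(panels: &[PanelState]) -> Option<DockPosition> {
    panels
        .iter()
        .find(|p| p.zoomed && p.has_focus)
        .map(|p| p.position)
}
```

With a zoomed, focused panel that has `provides_pane: false` (like `TestPanel`), the buggy variant yields `None` and the dock closes on the spurious focus event; the fixed variant yields the panel's dock position.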
@@ -69,7 +69,6 @@ agent.workspace = true
agent-client-protocol.workspace = true
agent_settings.workspace = true
agent_ui.workspace = true
-agent_ui_v2.workspace = true
anyhow.workspace = true
askpass.workspace = true
assets.workspace = true
@@ -257,7 +256,6 @@ title_bar = { workspace = true, features = ["test-support"] }
workspace = { workspace = true, features = ["test-support"] }
image.workspace = true
agent_ui = { workspace = true, features = ["test-support"] }
-agent_ui_v2 = { workspace = true, features = ["test-support"] }
search = { workspace = true, features = ["test-support"] }
repl = { workspace = true, features = ["test-support"] }
@@ -27,7 +27,7 @@ parts:
stage-packages:
- libasound2t64
# snapcraft has a lint that this is unused, but without it Zed exits with
- # "Missing Vulkan entry points: LibraryLoadFailure" in blade_graphics.
+ # "Missing Vulkan entry points: LibraryLoadFailure" in wgpu.
- libvulkan1
# snapcraft has a lint that this is unused, but without it Zed exits with
# "NoWaylandLib" when run with Wayland.
@@ -634,7 +634,7 @@ fn main() {
false,
cx,
);
- agent_ui_v2::agents_panel::init(cx);
+
repl::init(app_state.fs.clone(), cx);
recent_projects::init(cx);
dev_container::init(cx);
@@ -14,7 +14,6 @@ pub mod visual_tests;
pub(crate) mod windows_only_instance;
use agent_ui::{AgentDiffToolbar, AgentPanelDelegate};
-use agent_ui_v2::agents_panel::AgentsPanel;
use anyhow::Context as _;
pub use app_menus::*;
use assets::Assets;
@@ -87,7 +86,7 @@ use vim_mode_setting::VimModeSetting;
use workspace::notifications::{
NotificationId, SuppressEvent, dismiss_app_notification, show_app_notification,
};
-use workspace::utility_pane::utility_slot_for_dock_position;
+
use workspace::{
AppState, MultiWorkspace, NewFile, NewWindow, OpenLog, Panel, Toast, Workspace,
WorkspaceSettings, create_and_open_local_file,
@@ -657,8 +656,7 @@ fn initialize_panels(
add_panel_when_ready(channels_panel, workspace_handle.clone(), cx.clone()),
add_panel_when_ready(notification_panel, workspace_handle.clone(), cx.clone()),
add_panel_when_ready(debug_panel, workspace_handle.clone(), cx.clone()),
- initialize_agent_panel(workspace_handle.clone(), prompt_builder, cx.clone()).map(|r| r.log_err()),
- initialize_agents_panel(workspace_handle, cx.clone()).map(|r| r.log_err())
+ initialize_agent_panel(workspace_handle, prompt_builder, cx.clone()).map(|r| r.log_err()),
);
anyhow::Ok(())
@@ -748,31 +746,6 @@ async fn initialize_agent_panel(
anyhow::Ok(())
}
-async fn initialize_agents_panel(
- workspace_handle: WeakEntity<Workspace>,
- mut cx: AsyncWindowContext,
-) -> anyhow::Result<()> {
- workspace_handle
- .update_in(&mut cx, |workspace, window, cx| {
- setup_or_teardown_ai_panel(workspace, window, cx, |workspace, cx| {
- AgentsPanel::load(workspace, cx)
- })
- })?
- .await?;
-
- workspace_handle.update_in(&mut cx, |_workspace, window, cx| {
- cx.observe_global_in::<SettingsStore>(window, move |workspace, window, cx| {
- setup_or_teardown_ai_panel(workspace, window, cx, |workspace, cx| {
- AgentsPanel::load(workspace, cx)
- })
- .detach_and_log_err(cx);
- })
- .detach();
- })?;
-
- anyhow::Ok(())
-}
-
fn register_actions(
app_state: Arc<AppState>,
workspace: &mut Workspace,
@@ -1067,18 +1040,6 @@ fn register_actions(
workspace.toggle_panel_focus::<TerminalPanel>(window, cx);
},
)
- .register_action(
- |workspace: &mut Workspace,
- _: &zed_actions::agent::ToggleAgentPane,
- window: &mut Window,
- cx: &mut Context<Workspace>| {
- if let Some(panel) = workspace.panel::<AgentsPanel>(cx) {
- let position = panel.read(cx).position(window, cx);
- let slot = utility_slot_for_dock_position(position);
- workspace.toggle_utility_pane(slot, window, cx);
- }
- },
- )
.register_action({
let app_state = Arc::downgrade(&app_state);
move |_, _: &NewWindow, _, cx| {
@@ -4826,7 +4787,6 @@ mod tests {
"action",
"activity_indicator",
"agent",
- "agents",
"app_menu",
"assistant",
"assistant2",
@@ -5071,7 +5031,7 @@ mod tests {
false,
cx,
);
- agent_ui_v2::agents_panel::init(cx);
+
repl::init(app_state.fs.clone(), cx);
repl::notebook::init(cx);
tasks_ui::init(cx);
@@ -450,8 +450,6 @@ pub mod agent {
AddSelectionToThread,
/// Resets the agent panel zoom levels (agent UI and buffer font sizes).
ResetAgentZoom,
- /// Toggles the utility/agent pane open/closed state.
- ToggleAgentPane,
/// Pastes clipboard content without any formatting.
PasteRaw,
]
@@ -18,6 +18,32 @@ fn estimate_tokens(bytes: usize) -> usize {
bytes / 3
}
+/// The client's preferred edit prediction model. The server may override this.
+#[derive(Clone, Copy, Debug, PartialEq, Eq, Serialize, Deserialize)]
+pub enum EditPredictionModelKind {
+ Zeta1,
+ Zeta2,
+}
+
+/// Pre-computed byte offset ranges within `cursor_excerpt` for different
+/// editable and context token budgets. Allows the server to select the
+/// appropriate ranges for whichever model it uses.
+#[derive(Clone, Debug, Serialize, Deserialize)]
+pub struct ExcerptRanges {
+ /// Editable region computed with a 150-token budget.
+ pub editable_150: Range<usize>,
+ /// Editable region computed with a 180-token budget.
+ pub editable_180: Range<usize>,
+ /// Editable region computed with a 350-token budget.
+ pub editable_350: Range<usize>,
+ /// Context boundary when using editable_150 with 350 tokens of additional context.
+ pub editable_150_context_350: Range<usize>,
+ /// Context boundary when using editable_180 with 350 tokens of additional context.
+ pub editable_180_context_350: Range<usize>,
+ /// Context boundary when using editable_350 with 150 tokens of additional context.
+ pub editable_350_context_150: Range<usize>,
+}
+
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct ZetaPromptInput {
pub cursor_path: Arc<Path>,
@@ -28,6 +54,17 @@ pub struct ZetaPromptInput {
pub excerpt_start_row: Option<u32>,
pub events: Vec<Arc<Event>>,
pub related_files: Vec<RelatedFile>,
+ /// When set, the excerpt was computed with a larger budget (~512 tokens)
+ /// and these ranges let the server select model-appropriate subsets.
+ /// When absent, the excerpt IS the context region and
+ /// `editable_range_in_excerpt` is the only editable range.
+ #[serde(default, skip_serializing_if = "Option::is_none")]
+ pub excerpt_ranges: Option<ExcerptRanges>,
+ /// Client's preferred model. The server may override.
+ #[serde(default, skip_serializing_if = "Option::is_none")]
+ pub preferred_model: Option<EditPredictionModelKind>,
+ #[serde(default)]
+ pub in_open_source_repo: bool,
}
#[derive(
@@ -89,6 +126,25 @@ impl ZetaFormat {
.collect::<Vec<_>>()
.concat()
}
+
+ pub fn special_tokens(&self) -> &'static [&'static str] {
+ match self {
+ ZetaFormat::V0112MiddleAtEnd
+ | ZetaFormat::V0113Ordered
+ | ZetaFormat::V0114180EditableRegion => &[
+ "<|fim_prefix|>",
+ "<|fim_suffix|>",
+ "<|fim_middle|>",
+ "<|file_sep|>",
+ CURSOR_MARKER,
+ ],
+ ZetaFormat::V0120GitMergeMarkers => v0120_git_merge_markers::special_tokens(),
+ ZetaFormat::V0131GitMergeMarkersPrefix | ZetaFormat::V0211Prefill => {
+ v0131_git_merge_markers_prefix::special_tokens()
+ }
+ ZetaFormat::V0211SeedCoder => seed_coder::special_tokens(),
+ }
+ }
}
#[derive(Clone, Debug, Serialize, Deserialize)]
@@ -103,6 +159,17 @@ pub enum Event {
},
}
+impl Event {
+ pub fn in_open_source_repo(&self) -> bool {
+ match self {
+ Event::BufferChange {
+ in_open_source_repo,
+ ..
+ } => *in_open_source_repo,
+ }
+ }
+}
+
pub fn write_event(prompt: &mut String, event: &Event) {
fn write_path_as_unix_str(prompt: &mut String, path: &Path) {
for component in path.components() {
@@ -136,6 +203,8 @@ pub struct RelatedFile {
pub path: Arc<Path>,
pub max_row: u32,
pub excerpts: Vec<RelatedExcerpt>,
+ #[serde(default)]
+ pub in_open_source_repo: bool,
}
#[derive(Clone, Debug, Serialize, Deserialize)]
@@ -144,6 +213,13 @@ pub struct RelatedExcerpt {
pub text: Arc<str>,
}
+pub fn prompt_input_contains_special_tokens(input: &ZetaPromptInput, format: ZetaFormat) -> bool {
+ format
+ .special_tokens()
+ .iter()
+ .any(|token| input.cursor_excerpt.contains(token))
+}
+
pub fn format_zeta_prompt(input: &ZetaPromptInput, format: ZetaFormat) -> String {
format_zeta_prompt_with_budget(input, format, MAX_PROMPT_TOKENS)
}
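
`prompt_input_contains_special_tokens` above guards against excerpts that already contain the format's control tokens, which would corrupt the prompt structure. The check itself is a plain substring scan; a standalone sketch:

```rust
// Reject excerpts that already contain any of the format's control tokens
// (standalone sketch of `prompt_input_contains_special_tokens`).
fn contains_special_tokens(excerpt: &str, special_tokens: &[&str]) -> bool {
    special_tokens.iter().any(|token| excerpt.contains(token))
}
```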
@@ -164,27 +240,96 @@ pub fn clean_zeta2_model_output(output: &str, format: ZetaFormat) -> &str {
}
}
+fn resolve_cursor_region(
+ input: &ZetaPromptInput,
+ format: ZetaFormat,
+) -> (&str, Range<usize>, usize) {
+ let Some(ranges) = &input.excerpt_ranges else {
+ return (
+ &input.cursor_excerpt,
+ input.editable_range_in_excerpt.clone(),
+ input.cursor_offset_in_excerpt,
+ );
+ };
+
+ let (editable_range, context_range) = match format {
+ ZetaFormat::V0112MiddleAtEnd | ZetaFormat::V0113Ordered => (
+ ranges.editable_150.clone(),
+ ranges.editable_150_context_350.clone(),
+ ),
+ ZetaFormat::V0114180EditableRegion
+ | ZetaFormat::V0120GitMergeMarkers
+ | ZetaFormat::V0131GitMergeMarkersPrefix
+ | ZetaFormat::V0211Prefill
+ | ZetaFormat::V0211SeedCoder => (
+ ranges.editable_180.clone(),
+ ranges.editable_180_context_350.clone(),
+ ),
+ };
+
+ let context_start = context_range.start;
+ let context_text = &input.cursor_excerpt[context_range];
+ let adjusted_editable =
+ (editable_range.start - context_start)..(editable_range.end - context_start);
+ let adjusted_cursor = input.cursor_offset_in_excerpt - context_start;
+
+ (context_text, adjusted_editable, adjusted_cursor)
+}
+
fn format_zeta_prompt_with_budget(
input: &ZetaPromptInput,
format: ZetaFormat,
max_tokens: usize,
) -> String {
+ let (context, editable_range, cursor_offset) = resolve_cursor_region(input, format);
+ let path = &*input.cursor_path;
+
let mut cursor_section = String::new();
match format {
ZetaFormat::V0112MiddleAtEnd => {
- v0112_middle_at_end::write_cursor_excerpt_section(&mut cursor_section, input);
+ v0112_middle_at_end::write_cursor_excerpt_section(
+ &mut cursor_section,
+ path,
+ context,
+ &editable_range,
+ cursor_offset,
+ );
}
ZetaFormat::V0113Ordered | ZetaFormat::V0114180EditableRegion => {
- v0113_ordered::write_cursor_excerpt_section(&mut cursor_section, input)
- }
- ZetaFormat::V0120GitMergeMarkers => {
- v0120_git_merge_markers::write_cursor_excerpt_section(&mut cursor_section, input)
+ v0113_ordered::write_cursor_excerpt_section(
+ &mut cursor_section,
+ path,
+ context,
+ &editable_range,
+ cursor_offset,
+ )
}
+ ZetaFormat::V0120GitMergeMarkers => v0120_git_merge_markers::write_cursor_excerpt_section(
+ &mut cursor_section,
+ path,
+ context,
+ &editable_range,
+ cursor_offset,
+ ),
ZetaFormat::V0131GitMergeMarkersPrefix | ZetaFormat::V0211Prefill => {
- v0131_git_merge_markers_prefix::write_cursor_excerpt_section(&mut cursor_section, input)
+ v0131_git_merge_markers_prefix::write_cursor_excerpt_section(
+ &mut cursor_section,
+ path,
+ context,
+ &editable_range,
+ cursor_offset,
+ )
}
ZetaFormat::V0211SeedCoder => {
- return seed_coder::format_prompt_with_budget(input, max_tokens);
+ return seed_coder::format_prompt_with_budget(
+ path,
+ context,
+ &editable_range,
+ cursor_offset,
+ &input.events,
+ &input.related_files,
+ max_tokens,
+ );
}
}
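
`resolve_cursor_region` above slices a context window out of the larger excerpt and rebases the editable range and cursor offset into the slice's coordinate space. The rebasing arithmetic in isolation (hypothetical helper; assumes the usual invariants that `context` contains `editable`, which contains the cursor):

```rust
use std::ops::Range;

// Slice a context window out of an excerpt and rebase the editable range
// and cursor offset so they are relative to the slice's start.
fn rebase_into_context(
    excerpt: &str,
    editable: Range<usize>,
    context: Range<usize>,
    cursor: usize,
) -> (String, Range<usize>, usize) {
    let start = context.start;
    (
        excerpt[context].to_string(),
        (editable.start - start)..(editable.end - start),
        cursor - start,
    )
}
```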
@@ -343,29 +488,29 @@ pub fn write_related_files(
mod v0112_middle_at_end {
use super::*;
- pub fn write_cursor_excerpt_section(prompt: &mut String, input: &ZetaPromptInput) {
- let path_str = input.cursor_path.to_string_lossy();
+ pub fn write_cursor_excerpt_section(
+ prompt: &mut String,
+ path: &Path,
+ context: &str,
+ editable_range: &Range<usize>,
+ cursor_offset: usize,
+ ) {
+ let path_str = path.to_string_lossy();
write!(prompt, "<|file_sep|>{}\n", path_str).ok();
prompt.push_str("<|fim_prefix|>\n");
- prompt.push_str(&input.cursor_excerpt[..input.editable_range_in_excerpt.start]);
+ prompt.push_str(&context[..editable_range.start]);
prompt.push_str("<|fim_suffix|>\n");
- prompt.push_str(&input.cursor_excerpt[input.editable_range_in_excerpt.end..]);
+ prompt.push_str(&context[editable_range.end..]);
if !prompt.ends_with('\n') {
prompt.push('\n');
}
prompt.push_str("<|fim_middle|>current\n");
- prompt.push_str(
- &input.cursor_excerpt
- [input.editable_range_in_excerpt.start..input.cursor_offset_in_excerpt],
- );
+ prompt.push_str(&context[editable_range.start..cursor_offset]);
prompt.push_str(CURSOR_MARKER);
- prompt.push_str(
- &input.cursor_excerpt
- [input.cursor_offset_in_excerpt..input.editable_range_in_excerpt.end],
- );
+ prompt.push_str(&context[cursor_offset..editable_range.end]);
if !prompt.ends_with('\n') {
prompt.push('\n');
}
@@ -377,32 +522,32 @@ mod v0112_middle_at_end {
mod v0113_ordered {
use super::*;
- pub fn write_cursor_excerpt_section(prompt: &mut String, input: &ZetaPromptInput) {
- let path_str = input.cursor_path.to_string_lossy();
+ pub fn write_cursor_excerpt_section(
+ prompt: &mut String,
+ path: &Path,
+ context: &str,
+ editable_range: &Range<usize>,
+ cursor_offset: usize,
+ ) {
+ let path_str = path.to_string_lossy();
write!(prompt, "<|file_sep|>{}\n", path_str).ok();
prompt.push_str("<|fim_prefix|>\n");
- prompt.push_str(&input.cursor_excerpt[..input.editable_range_in_excerpt.start]);
+ prompt.push_str(&context[..editable_range.start]);
if !prompt.ends_with('\n') {
prompt.push('\n');
}
prompt.push_str("<|fim_middle|>current\n");
- prompt.push_str(
- &input.cursor_excerpt
- [input.editable_range_in_excerpt.start..input.cursor_offset_in_excerpt],
- );
+ prompt.push_str(&context[editable_range.start..cursor_offset]);
prompt.push_str(CURSOR_MARKER);
- prompt.push_str(
- &input.cursor_excerpt
- [input.cursor_offset_in_excerpt..input.editable_range_in_excerpt.end],
- );
+ prompt.push_str(&context[cursor_offset..editable_range.end]);
if !prompt.ends_with('\n') {
prompt.push('\n');
}
prompt.push_str("<|fim_suffix|>\n");
- prompt.push_str(&input.cursor_excerpt[input.editable_range_in_excerpt.end..]);
+ prompt.push_str(&context[editable_range.end..]);
if !prompt.ends_with('\n') {
prompt.push('\n');
}
@@ -441,30 +586,43 @@ pub mod v0120_git_merge_markers {
pub const SEPARATOR: &str = "=======\n";
pub const END_MARKER: &str = ">>>>>>> UPDATED\n";
- pub fn write_cursor_excerpt_section(prompt: &mut String, input: &ZetaPromptInput) {
- let path_str = input.cursor_path.to_string_lossy();
+ pub fn special_tokens() -> &'static [&'static str] {
+ &[
+ "<|fim_prefix|>",
+ "<|fim_suffix|>",
+ "<|fim_middle|>",
+ "<|file_sep|>",
+ START_MARKER,
+ SEPARATOR,
+ END_MARKER,
+ CURSOR_MARKER,
+ ]
+ }
+
+ pub fn write_cursor_excerpt_section(
+ prompt: &mut String,
+ path: &Path,
+ context: &str,
+ editable_range: &Range<usize>,
+ cursor_offset: usize,
+ ) {
+ let path_str = path.to_string_lossy();
write!(prompt, "<|file_sep|>{}\n", path_str).ok();
prompt.push_str("<|fim_prefix|>");
- prompt.push_str(&input.cursor_excerpt[..input.editable_range_in_excerpt.start]);
+ prompt.push_str(&context[..editable_range.start]);
prompt.push_str("<|fim_suffix|>");
- prompt.push_str(&input.cursor_excerpt[input.editable_range_in_excerpt.end..]);
+ prompt.push_str(&context[editable_range.end..]);
if !prompt.ends_with('\n') {
prompt.push('\n');
}
prompt.push_str("<|fim_middle|>");
prompt.push_str(START_MARKER);
- prompt.push_str(
- &input.cursor_excerpt
- [input.editable_range_in_excerpt.start..input.cursor_offset_in_excerpt],
- );
+ prompt.push_str(&context[editable_range.start..cursor_offset]);
prompt.push_str(CURSOR_MARKER);
- prompt.push_str(
- &input.cursor_excerpt
- [input.cursor_offset_in_excerpt..input.editable_range_in_excerpt.end],
- );
+ prompt.push_str(&context[cursor_offset..editable_range.end]);
if !prompt.ends_with('\n') {
prompt.push('\n');
}
@@ -502,29 +660,42 @@ pub mod v0131_git_merge_markers_prefix {
pub const SEPARATOR: &str = "=======\n";
pub const END_MARKER: &str = ">>>>>>> UPDATED\n";
- pub fn write_cursor_excerpt_section(prompt: &mut String, input: &ZetaPromptInput) {
- let path_str = input.cursor_path.to_string_lossy();
+ pub fn special_tokens() -> &'static [&'static str] {
+ &[
+ "<|fim_prefix|>",
+ "<|fim_suffix|>",
+ "<|fim_middle|>",
+ "<|file_sep|>",
+ START_MARKER,
+ SEPARATOR,
+ END_MARKER,
+ CURSOR_MARKER,
+ ]
+ }
+
+ pub fn write_cursor_excerpt_section(
+ prompt: &mut String,
+ path: &Path,
+ context: &str,
+ editable_range: &Range<usize>,
+ cursor_offset: usize,
+ ) {
+ let path_str = path.to_string_lossy();
write!(prompt, "<|file_sep|>{}\n", path_str).ok();
prompt.push_str("<|fim_prefix|>");
- prompt.push_str(&input.cursor_excerpt[..input.editable_range_in_excerpt.start]);
+ prompt.push_str(&context[..editable_range.start]);
prompt.push_str(START_MARKER);
- prompt.push_str(
- &input.cursor_excerpt
- [input.editable_range_in_excerpt.start..input.cursor_offset_in_excerpt],
- );
+ prompt.push_str(&context[editable_range.start..cursor_offset]);
prompt.push_str(CURSOR_MARKER);
- prompt.push_str(
- &input.cursor_excerpt
- [input.cursor_offset_in_excerpt..input.editable_range_in_excerpt.end],
- );
+ prompt.push_str(&context[cursor_offset..editable_range.end]);
if !prompt.ends_with('\n') {
prompt.push('\n');
}
prompt.push_str(SEPARATOR);
prompt.push_str("<|fim_suffix|>");
- prompt.push_str(&input.cursor_excerpt[input.editable_range_in_excerpt.end..]);
+ prompt.push_str(&context[editable_range.end..]);
if !prompt.ends_with('\n') {
prompt.push('\n');
}
@@ -619,16 +790,38 @@ pub mod seed_coder {
pub const SEPARATOR: &str = "=======\n";
pub const END_MARKER: &str = ">>>>>>> UPDATED\n";
- pub fn format_prompt_with_budget(input: &ZetaPromptInput, max_tokens: usize) -> String {
- let suffix_section = build_suffix_section(input);
- let cursor_prefix_section = build_cursor_prefix_section(input);
+ pub fn special_tokens() -> &'static [&'static str] {
+ &[
+ FIM_SUFFIX,
+ FIM_PREFIX,
+ FIM_MIDDLE,
+ FILE_MARKER,
+ START_MARKER,
+ SEPARATOR,
+ END_MARKER,
+ CURSOR_MARKER,
+ ]
+ }
+
+ pub fn format_prompt_with_budget(
+ path: &Path,
+ context: &str,
+ editable_range: &Range<usize>,
+ cursor_offset: usize,
+ events: &[Arc<Event>],
+ related_files: &[RelatedFile],
+ max_tokens: usize,
+ ) -> String {
+ let suffix_section = build_suffix_section(context, editable_range);
+ let cursor_prefix_section =
+ build_cursor_prefix_section(path, context, editable_range, cursor_offset);
let suffix_tokens = estimate_tokens(suffix_section.len());
let cursor_prefix_tokens = estimate_tokens(cursor_prefix_section.len());
let budget_after_cursor = max_tokens.saturating_sub(suffix_tokens + cursor_prefix_tokens);
let edit_history_section = super::format_edit_history_within_budget(
- &input.events,
+ events,
FILE_MARKER,
"edit_history",
budget_after_cursor,
@@ -637,7 +830,7 @@ pub mod seed_coder {
let budget_after_edit_history = budget_after_cursor.saturating_sub(edit_history_tokens);
let related_files_section = super::format_related_files_within_budget(
- &input.related_files,
+ related_files,
FILE_MARKER,
budget_after_edit_history,
);
@@ -658,32 +851,31 @@ pub mod seed_coder {
prompt
}
- fn build_suffix_section(input: &ZetaPromptInput) -> String {
+ fn build_suffix_section(context: &str, editable_range: &Range<usize>) -> String {
let mut section = String::new();
section.push_str(FIM_SUFFIX);
- section.push_str(&input.cursor_excerpt[input.editable_range_in_excerpt.end..]);
+ section.push_str(&context[editable_range.end..]);
if !section.ends_with('\n') {
section.push('\n');
}
section
}
- fn build_cursor_prefix_section(input: &ZetaPromptInput) -> String {
+ fn build_cursor_prefix_section(
+ path: &Path,
+ context: &str,
+ editable_range: &Range<usize>,
+ cursor_offset: usize,
+ ) -> String {
let mut section = String::new();
- let path_str = input.cursor_path.to_string_lossy();
+ let path_str = path.to_string_lossy();
write!(section, "{}{}\n", FILE_MARKER, path_str).ok();
- section.push_str(&input.cursor_excerpt[..input.editable_range_in_excerpt.start]);
+ section.push_str(&context[..editable_range.start]);
section.push_str(START_MARKER);
- section.push_str(
- &input.cursor_excerpt
- [input.editable_range_in_excerpt.start..input.cursor_offset_in_excerpt],
- );
+ section.push_str(&context[editable_range.start..cursor_offset]);
section.push_str(CURSOR_MARKER);
- section.push_str(
- &input.cursor_excerpt
- [input.cursor_offset_in_excerpt..input.editable_range_in_excerpt.end],
- );
+ section.push_str(&context[cursor_offset..editable_range.end]);
if !section.ends_with('\n') {
section.push('\n');
}
@@ -694,6 +886,9 @@ pub mod seed_coder {
/// The zeta1 prompt format
pub mod zeta1 {
+ use super::*;
+ use std::fmt::Write;
+
pub const CURSOR_MARKER: &str = "<|user_cursor_is_here|>";
pub const START_OF_FILE_MARKER: &str = "<|start_of_file|>";
pub const EDITABLE_REGION_START_MARKER: &str = "<|editable_region_start|>";
@@ -725,6 +920,166 @@ pub mod zeta1 {
prompt.push_str(RESPONSE_HEADER);
prompt
}
+
+ /// Formats a complete zeta1 prompt from a `ZetaPromptInput` using the given
+ /// editable and context byte-offset ranges within `cursor_excerpt`.
+ pub fn format_zeta1_from_input(
+ input: &ZetaPromptInput,
+ editable_range: Range<usize>,
+ context_range: Range<usize>,
+ ) -> String {
+ let events = format_zeta1_events(&input.events);
+ let excerpt = format_zeta1_excerpt(input, editable_range, context_range);
+ format_zeta1_prompt(&events, &excerpt)
+ }
+
+ /// Formats events in zeta1 style (oldest first).
+ fn format_zeta1_events(events: &[Arc<Event>]) -> String {
+ let mut result = String::new();
+ for event in events {
+ let event_string = format_zeta1_event(event);
+ if event_string.is_empty() {
+ continue;
+ }
+ if !result.is_empty() {
+ result.push_str("\n\n");
+ }
+ result.push_str(&event_string);
+ }
+ result
+ }
+
+ fn format_zeta1_event(event: &Event) -> String {
+ match event {
+ Event::BufferChange {
+ path,
+ old_path,
+ diff,
+ ..
+ } => {
+ let mut prompt = String::new();
+ if old_path != path {
+ writeln!(
+ prompt,
+ "User renamed {} to {}\n",
+ old_path.display(),
+ path.display()
+ )
+ .ok();
+ }
+ if !diff.is_empty() {
+ write!(
+ prompt,
+ "User edited {}:\n```diff\n{}\n```",
+ path.display(),
+ diff
+ )
+ .ok();
+ }
+ prompt
+ }
+ }
+ }
+
+ /// Formats the excerpt section of a zeta1 prompt using byte-offset ranges
+ /// within `cursor_excerpt`.
+ fn format_zeta1_excerpt(
+ input: &ZetaPromptInput,
+ editable_range: Range<usize>,
+ context_range: Range<usize>,
+ ) -> String {
+ let path_str = input.cursor_path.to_string_lossy();
+ let excerpt = &*input.cursor_excerpt;
+ let cursor_offset = input.cursor_offset_in_excerpt;
+
+ let mut prompt = String::new();
+ writeln!(&mut prompt, "```{path_str}").ok();
+
+ let starts_at_file_beginning =
+ input.excerpt_start_row == Some(0) && context_range.start == 0;
+ if starts_at_file_beginning {
+ writeln!(&mut prompt, "{START_OF_FILE_MARKER}").ok();
+ }
+
+ prompt.push_str(&excerpt[context_range.start..editable_range.start]);
+
+ writeln!(&mut prompt, "{EDITABLE_REGION_START_MARKER}").ok();
+ prompt.push_str(&excerpt[editable_range.start..cursor_offset]);
+ prompt.push_str(CURSOR_MARKER);
+ prompt.push_str(&excerpt[cursor_offset..editable_range.end]);
+ write!(&mut prompt, "\n{EDITABLE_REGION_END_MARKER}").ok();
+
+ prompt.push_str(&excerpt[editable_range.end..context_range.end]);
+ write!(prompt, "\n```").ok();
+
+ prompt
+ }
+
+    /// Cleans zeta1 model output by extracting content between editable region
+    /// markers and converting the zeta1 cursor marker to the universal one.
+    /// If either marker is missing, the corresponding boundary falls back to
+    /// the start or end of the output.
+ pub fn clean_zeta1_model_output(output: &str) -> Option<String> {
+ let content = output.replace(CURSOR_MARKER, "");
+
+ let content_start = content
+ .find(EDITABLE_REGION_START_MARKER)
+ .map(|pos| pos + EDITABLE_REGION_START_MARKER.len())
+ .map(|pos| {
+ if content.as_bytes().get(pos) == Some(&b'\n') {
+ pos + 1
+ } else {
+ pos
+ }
+ })
+ .unwrap_or(0);
+
+ let content_end = content
+ .find(EDITABLE_REGION_END_MARKER)
+ .map(|pos| {
+ if pos > 0 && content.as_bytes().get(pos - 1) == Some(&b'\n') {
+ pos - 1
+ } else {
+ pos
+ }
+ })
+ .unwrap_or(content.len());
+
+ if content_start > content_end {
+ return Some(String::new());
+ }
+
+ let extracted = &content[content_start..content_end];
+
+ let cursor_offset = output.find(CURSOR_MARKER).map(|zeta1_cursor_pos| {
+ let text_before_cursor = output[..zeta1_cursor_pos].replace(CURSOR_MARKER, "");
+ let text_before_cursor = text_before_cursor
+ .find(EDITABLE_REGION_START_MARKER)
+ .map(|pos| {
+ let after_marker = pos + EDITABLE_REGION_START_MARKER.len();
+ if text_before_cursor.as_bytes().get(after_marker) == Some(&b'\n') {
+ after_marker + 1
+ } else {
+ after_marker
+ }
+ })
+ .unwrap_or(0);
+ let offset_in_extracted = zeta1_cursor_pos
+ .saturating_sub(text_before_cursor)
+ .min(extracted.len());
+ offset_in_extracted
+ });
+
+ let mut result = String::with_capacity(extracted.len() + super::CURSOR_MARKER.len());
+ if let Some(offset) = cursor_offset {
+ result.push_str(&extracted[..offset]);
+ result.push_str(super::CURSOR_MARKER);
+ result.push_str(&extracted[offset..]);
+ } else {
+ result.push_str(extracted);
+ }
+
+ Some(result)
+ }
}
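
The core of `clean_zeta1_model_output` is recovering the text between the editable-region markers while trimming the newline that adjoins each marker. That extraction, without the cursor-marker handling, can be sketched as:

```rust
// Extract the text between the editable-region markers, dropping the
// newline adjacent to each marker. A missing marker falls back to the
// start/end of the output (simplified from `clean_zeta1_model_output`).
fn between_region_markers(output: &str) -> &str {
    const START: &str = "<|editable_region_start|>";
    const END: &str = "<|editable_region_end|>";
    let start = output
        .find(START)
        .map(|pos| {
            let pos = pos + START.len();
            if output.as_bytes().get(pos) == Some(&b'\n') {
                pos + 1
            } else {
                pos
            }
        })
        .unwrap_or(0);
    let end = output
        .find(END)
        .map(|pos| {
            if pos > 0 && output.as_bytes()[pos - 1] == b'\n' {
                pos - 1
            } else {
                pos
            }
        })
        .unwrap_or(output.len());
    if start > end { "" } else { &output[start..end] }
}
```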
#[cfg(test)]
@@ -747,6 +1102,9 @@ mod tests {
excerpt_start_row: None,
events: events.into_iter().map(Arc::new).collect(),
related_files,
+ excerpt_ranges: None,
+ preferred_model: None,
+ in_open_source_repo: false,
}
}
@@ -768,6 +1126,7 @@ mod tests {
row_range: 0..content.lines().count() as u32,
text: content.into(),
}],
+ in_open_source_repo: false,
}
}
@@ -869,6 +1228,7 @@ mod tests {
vec![RelatedFile {
path: Path::new("big.rs").into(),
max_row: 30,
+ in_open_source_repo: false,
excerpts: vec![
RelatedExcerpt {
row_range: 0..10,
@@ -1106,4 +1466,201 @@ mod tests {
"new code\n"
);
}
+
+ #[test]
+ fn test_format_zeta1_from_input_basic() {
+ let excerpt = "fn before() {}\nfn foo() {\n let x = 1;\n}\nfn after() {}\n";
+ let input = ZetaPromptInput {
+ cursor_path: Path::new("src/main.rs").into(),
+ cursor_excerpt: excerpt.into(),
+ editable_range_in_excerpt: 15..41,
+ cursor_offset_in_excerpt: 30,
+ excerpt_start_row: Some(0),
+ events: vec![Arc::new(make_event("other.rs", "-old\n+new\n"))],
+ related_files: vec![],
+ excerpt_ranges: None,
+ preferred_model: None,
+ in_open_source_repo: false,
+ };
+
+ let prompt = zeta1::format_zeta1_from_input(&input, 15..41, 0..excerpt.len());
+
+ assert_eq!(
+ prompt,
+ concat!(
+ "### Instruction:\n",
+ "You are a code completion assistant and your task is to analyze user edits and then rewrite an ",
+ "excerpt that the user provides, suggesting the appropriate edits within the excerpt, taking ",
+ "into account the cursor location.\n",
+ "\n",
+ "### User Edits:\n",
+ "\n",
+ "User edited other.rs:\n",
+ "```diff\n",
+ "-old\n",
+ "+new\n",
+ "\n",
+ "```\n",
+ "\n",
+ "### User Excerpt:\n",
+ "\n",
+ "```src/main.rs\n",
+ "<|start_of_file|>\n",
+ "fn before() {}\n",
+ "<|editable_region_start|>\n",
+ "fn foo() {\n",
+ " <|user_cursor_is_here|>let x = 1;\n",
+ "\n",
+ "<|editable_region_end|>}\n",
+ "fn after() {}\n",
+ "\n",
+ "```\n",
+ "\n",
+ "### Response:\n",
+ ),
+ );
+ }
+
+ #[test]
+ fn test_format_zeta1_from_input_no_start_of_file() {
+ let excerpt = "fn foo() {\n let x = 1;\n}\n";
+ let input = ZetaPromptInput {
+ cursor_path: Path::new("src/main.rs").into(),
+ cursor_excerpt: excerpt.into(),
+ editable_range_in_excerpt: 0..28,
+ cursor_offset_in_excerpt: 15,
+ excerpt_start_row: Some(10),
+ events: vec![],
+ related_files: vec![],
+ excerpt_ranges: None,
+ preferred_model: None,
+ in_open_source_repo: false,
+ };
+
+ let prompt = zeta1::format_zeta1_from_input(&input, 0..28, 0..28);
+
+ assert_eq!(
+ prompt,
+ concat!(
+ "### Instruction:\n",
+ "You are a code completion assistant and your task is to analyze user edits and then rewrite an ",
+ "excerpt that the user provides, suggesting the appropriate edits within the excerpt, taking ",
+ "into account the cursor location.\n",
+ "\n",
+ "### User Edits:\n",
+ "\n",
+ "\n",
+ "\n",
+ "### User Excerpt:\n",
+ "\n",
+ "```src/main.rs\n",
+ "<|editable_region_start|>\n",
+ "fn foo() {\n",
+ " <|user_cursor_is_here|>let x = 1;\n",
+ "}\n",
+ "\n",
+ "<|editable_region_end|>\n",
+ "```\n",
+ "\n",
+ "### Response:\n",
+ ),
+ );
+ }
+
+ #[test]
+ fn test_format_zeta1_from_input_with_sub_ranges() {
+ let excerpt = "// prefix\nfn foo() {\n let x = 1;\n}\n// suffix\n";
+ let editable_range = 10..37;
+ let context_range = 0..excerpt.len();
+
+ let input = ZetaPromptInput {
+ cursor_path: Path::new("test.rs").into(),
+ cursor_excerpt: excerpt.into(),
+ editable_range_in_excerpt: editable_range.clone(),
+ cursor_offset_in_excerpt: 25,
+ excerpt_start_row: Some(0),
+ events: vec![],
+ related_files: vec![],
+ excerpt_ranges: None,
+ preferred_model: None,
+ in_open_source_repo: false,
+ };
+
+ let prompt = zeta1::format_zeta1_from_input(&input, editable_range, context_range);
+
+ assert_eq!(
+ prompt,
+ concat!(
+ "### Instruction:\n",
+ "You are a code completion assistant and your task is to analyze user edits and then rewrite an ",
+ "excerpt that the user provides, suggesting the appropriate edits within the excerpt, taking ",
+ "into account the cursor location.\n",
+ "\n",
+ "### User Edits:\n",
+ "\n",
+ "\n",
+ "\n",
+ "### User Excerpt:\n",
+ "\n",
+ "```test.rs\n",
+ "<|start_of_file|>\n",
+ "// prefix\n",
+ "<|editable_region_start|>\n",
+ "fn foo() {\n",
+ " <|user_cursor_is_here|>let x = 1;\n",
+ "}\n",
+ "<|editable_region_end|>\n",
+ "// suffix\n",
+ "\n",
+ "```\n",
+ "\n",
+ "### Response:\n",
+ ),
+ );
+ }
+
+ #[test]
+ fn test_clean_zeta1_model_output_basic() {
+ let output = indoc! {"
+ <|editable_region_start|>
+ fn main() {
+ println!(\"hello\");
+ }
+ <|editable_region_end|>
+ "};
+
+ let cleaned = zeta1::clean_zeta1_model_output(output).unwrap();
+ assert_eq!(cleaned, "fn main() {\n println!(\"hello\");\n}");
+ }
+
+ #[test]
+ fn test_clean_zeta1_model_output_with_cursor() {
+ let output = indoc! {"
+ <|editable_region_start|>
+ fn main() {
+ <|user_cursor_is_here|>println!(\"hello\");
+ }
+ <|editable_region_end|>
+ "};
+
+ let cleaned = zeta1::clean_zeta1_model_output(output).unwrap();
+ assert_eq!(
+ cleaned,
+ "fn main() {\n <|user_cursor|>println!(\"hello\");\n}"
+ );
+ }
+
+ #[test]
+ fn test_clean_zeta1_model_output_no_markers() {
+ let output = "fn main() {}\n";
+ let cleaned = zeta1::clean_zeta1_model_output(output).unwrap();
+ assert_eq!(cleaned, "fn main() {}\n");
+ }
+
+ #[test]
+ fn test_clean_zeta1_model_output_empty_region() {
+ let output = "<|editable_region_start|>\n<|editable_region_end|>\n";
+ let cleaned = zeta1::clean_zeta1_model_output(output).unwrap();
+ assert_eq!(cleaned, "");
+ }
}
@@ -38,7 +38,7 @@ const DEFAULT_FILTERS: &[(&str, log::LevelFilter)] = &[
#[cfg(any(target_os = "linux", target_os = "freebsd"))]
("zbus", log::LevelFilter::Warn),
#[cfg(any(target_os = "linux", target_os = "freebsd", target_os = "windows"))]
- ("blade_graphics", log::LevelFilter::Warn),
+ ("wgpu", log::LevelFilter::Warn),
#[cfg(any(target_os = "linux", target_os = "freebsd", target_os = "windows"))]
("naga::back::spv::writer", log::LevelFilter::Warn),
// usvg prints a lot of warnings on rendering an SVG with partial errors, which
@@ -149,6 +149,27 @@ We will support Cross-Region inference for each of the models on a best-effort b
For the most up-to-date supported regions and models, refer to the [Supported Models and Regions for Cross Region inference](https://docs.aws.amazon.com/bedrock/latest/userguide/inference-profiles-support.html).
+#### Extended Context Window {#bedrock-extended-context}
+
+Anthropic models on Bedrock support a [1M token extended context window](https://docs.anthropic.com/en/docs/build-with-claude/extended-context) beta. To enable this feature, add `"allow_extended_context": true` to your Bedrock configuration:
+
+```json [settings]
+{
+ "language_models": {
+ "bedrock": {
+ "authentication_method": "named_profile",
+ "region": "your-aws-region",
+ "profile": "your-profile-name",
+ "allow_extended_context": true
+ }
+ }
+}
+```
+
+When enabled, Zed will include the `anthropic_beta` field in requests to Bedrock, enabling the 1M token context window for supported Anthropic models such as Claude Sonnet 4.5 and Claude Opus 4.6.
+
+> **Note**: Extended context usage may incur additional API costs. Refer to your AWS Bedrock pricing for details.
+
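+Illustratively, the request body then carries a beta flag along these lines (the exact beta token value is an assumption and may change):
+
+```json
+{
+  "anthropic_beta": ["context-1m-2025-08-07"]
+}
+```
+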
### Anthropic {#anthropic}
You can use Anthropic models by choosing them via the model dropdown in the Agent Panel.
@@ -402,14 +423,23 @@ models are available.
#### Ollama Context Length {#ollama-context}
-Zed has pre-configured maximum context lengths (`max_tokens`) to match the capabilities of common models.
-Zed API requests to Ollama include this as the `num_ctx` parameter, but the default values do not exceed `16384` so users with ~16GB of RAM are able to use most models out of the box.
-
-See [get_max_tokens in ollama.rs](https://github.com/zed-industries/zed/blob/main/crates/ollama/src/ollama.rs) for a complete set of defaults.
+Zed API requests to Ollama include the context length as the `num_ctx` parameter. By default, Zed uses a context length of `4096` tokens for all Ollama models.
> **Note**: Token counts displayed in the Agent Panel are only estimates and will differ from the model's native tokenizer.
-Depending on your hardware or use-case you may wish to limit or increase the context length for a specific model via settings.json:
+You can set a context length for all Ollama models using the `context_window` setting. This can also be configured in the Ollama provider settings UI:
+
+```json [settings]
+{
+ "language_models": {
+ "ollama": {
+ "context_window": 8192
+ }
+ }
+}
+```
+
+Alternatively, you can configure the context length per-model using the `max_tokens` field in `available_models`:
```json [settings]
{
@@ -431,6 +461,8 @@ Depending on your hardware or use-case you may wish to limit or increase the con
}
```
+> **Note**: If `context_window` is set, it overrides any per-model `max_tokens` values.
+
If you specify a context length that is too large for your hardware, Ollama will log an error.
You can watch these logs by running: `tail -f ~/.ollama/logs/ollama.log` (macOS) or `journalctl -u ollama -f` (Linux).
Depending on the memory available on your machine, you may need to adjust the context length to a smaller value.
@@ -162,6 +162,8 @@ You can pass Ruby LSP configuration to `initialization_options`, e.g.
}
```
+For full configuration options, see the [Ruby LSP website](https://shopify.github.io/ruby-lsp/editors.html).
+
LSP `settings` and `initialization_options` can also be project-specific. For example to use [standardrb/standard](https://github.com/standardrb/standard) as a formatter and linter for a particular project, add this to a `.zed/settings.json` inside your project repo:
```json [settings]
@@ -160,8 +160,6 @@ On some systems the file `/etc/prime-discrete` can be used to enforce the use of
On others, you may be able to set the environment variable `DRI_PRIME=1` when running Zed to force the use of the discrete GPU.
-If you're using an AMD GPU and Zed crashes when selecting long lines, try setting the `ZED_PATH_SAMPLE_COUNT=0` environment variable. (See [#26143](https://github.com/zed-industries/zed/issues/26143))
-
If you're using an AMD GPU, you might get a 'Broken Pipe' error. Try using the RADV or Mesa drivers. (See [#13880](https://github.com/zed-industries/zed/issues/13880))
If you are using `amdvlk`, the default open-source AMD graphics driver, you may find that Zed consistently fails to launch. This is a known issue for some users, for example on Omarchy (see issue [#28851](https://github.com/zed-industries/zed/issues/28851)). To fix this, you will need to use a different driver. We recommend removing the `amdvlk` and `lib32-amdvlk` packages and installing `vulkan-radeon` instead (see issue [#14141](https://github.com/zed-industries/zed/issues/14141)).
@@ -216,7 +214,7 @@ Additionally, it is extremely beneficial to provide the contents of your Zed log
```sh
truncate -s 0 ~/.local/share/zed/logs/Zed.log # Clear the log file
-ZED_LOG=blade_graphics=info zed .
+ZED_LOG=wgpu=info zed .
cat ~/.local/share/zed/logs/Zed.log
# copy the output
```
@@ -224,7 +222,7 @@ cat ~/.local/share/zed/logs/Zed.log
Or, if you have the Zed cli setup, you can do
```sh
-ZED_LOG=blade_graphics=info /path/to/zed/cli --foreground .
+ZED_LOG=wgpu=info /path/to/zed/cli --foreground .
# copy the output
```
@@ -384,7 +382,7 @@ Replace `192` with your desired DPI value. This affects the system globally and
### Font rendering parameters
-When using Blade rendering (Linux platforms and self-compiled builds with the Blade renderer enabled), Zed reads `ZED_FONTS_GAMMA` and `ZED_FONTS_GRAYSCALE_ENHANCED_CONTRAST` environment variables for the values to use for font rendering.
+On Linux, Zed reads the `ZED_FONTS_GAMMA` and `ZED_FONTS_GRAYSCALE_ENHANCED_CONTRAST` environment variables to configure font rendering.
`ZED_FONTS_GAMMA` corresponds to [getgamma](https://learn.microsoft.com/en-us/windows/win32/api/dwrite/nf-dwrite-idwriterenderingparams-getgamma) values.
The allowed range is [1.0, 2.2]; values outside it are clipped.
@@ -18,6 +18,7 @@ The snippets are located in `~/.config/zed/snippets` directory to which you can
// Use placeholders like $1, $2 or ${1:defaultValue} to define tab stops.
// The $0 determines the final cursor position.
// Placeholders with the same value are linked.
+  // If the snippet contains a $ symbol outside of a placeholder, it must be escaped with two backslashes (e.g. \\$var).
"Log to console": {
"prefix": "log",
"body": ["console.info(\"Hello, ${1:World}!\")", "$0"],
@@ -0,0 +1,357 @@
+#!/usr/bin/env python3
+"""Fetch a crash report from Sentry and output formatted markdown.
+
+Usage:
+ script/sentry-fetch <issue-short-id-or-numeric-id>
+ script/sentry-fetch ZED-4VS
+ script/sentry-fetch 7243282041
+
+Authentication (checked in order):
+ 1. SENTRY_AUTH_TOKEN environment variable
+ 2. Token from ~/.sentryclirc (written by `sentry-cli login`)
+
+If neither is found, the script will print setup instructions and exit.
+"""
+
+import argparse
+import configparser
+import json
+import os
+import sys
+import urllib.error
+import urllib.request
+
+SENTRY_BASE_URL = "https://sentry.io/api/0"
+DEFAULT_SENTRY_ORG = "zed-dev"
+
+
+def main():
+ parser = argparse.ArgumentParser(
+ description="Fetch a crash report from Sentry and output formatted markdown."
+ )
+ parser.add_argument(
+ "issue",
+ help="Sentry issue short ID (e.g. ZED-4VS) or numeric issue ID",
+ )
+ args = parser.parse_args()
+
+ token = find_auth_token()
+ if not token:
+ print(
+ "Error: No Sentry auth token found.",
+ file=sys.stderr,
+ )
+ print(
+ "\nSet up authentication using one of these methods:\n"
+ " 1. Run `sentry-cli login` (stores token in ~/.sentryclirc)\n"
+ " 2. Set the SENTRY_AUTH_TOKEN environment variable\n"
+ "\nGet a token at https://sentry.io/settings/auth-tokens/",
+ file=sys.stderr,
+ )
+ sys.exit(1)
+
+ try:
+ issue_id, short_id, issue = resolve_issue(args.issue, token)
+ event = fetch_latest_event(issue_id, token)
+ except FetchError as err:
+ print(f"Error: {err}", file=sys.stderr)
+ sys.exit(1)
+
+ markdown = format_crash_report(issue, event, short_id)
+ print(markdown)
+
+
+class FetchError(Exception):
+ pass
+
+
+def find_auth_token():
+ """Find a Sentry auth token from environment or ~/.sentryclirc.
+
+ Checks in order:
+ 1. SENTRY_AUTH_TOKEN environment variable
+ 2. auth.token in ~/.sentryclirc (INI format, written by `sentry-cli login`)
+ """
+ token = os.environ.get("SENTRY_AUTH_TOKEN")
+ if token:
+ return token
+
+ sentryclirc_path = os.path.expanduser("~/.sentryclirc")
+ if os.path.isfile(sentryclirc_path):
+ config = configparser.ConfigParser()
+ try:
+ config.read(sentryclirc_path)
+ token = config.get("auth", "token", fallback=None)
+ if token:
+ return token
+ except configparser.Error:
+ pass
+
+ return None
+
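+# Example ~/.sentryclirc consumed by the INI fallback in find_auth_token above,
+# as written by `sentry-cli login` (the token value is a placeholder):
+#
+#   [auth]
+#   token = <your-auth-token>
+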
+
+def api_get(path, token):
+ """Make an authenticated GET request to the Sentry API."""
+ url = f"{SENTRY_BASE_URL}{path}"
+ req = urllib.request.Request(url)
+ req.add_header("Authorization", f"Bearer {token}")
+ req.add_header("Accept", "application/json")
+ try:
+ with urllib.request.urlopen(req) as response:
+ return json.loads(response.read().decode("utf-8"))
+ except urllib.error.HTTPError as err:
+ body = err.read().decode("utf-8", errors="replace")
+ try:
+ detail = json.loads(body).get("detail", body)
+ except (json.JSONDecodeError, AttributeError):
+ detail = body
+ raise FetchError(f"Sentry API returned HTTP {err.code} for {path}: {detail}")
+ except urllib.error.URLError as err:
+ raise FetchError(f"Failed to connect to Sentry API: {err.reason}")
+
+
+def resolve_issue(identifier, token):
+ """Resolve a Sentry issue by short ID or numeric ID.
+
+ Returns (issue_id, short_id, issue_data).
+ """
+ if identifier.isdigit():
+ issue = api_get(f"/issues/{identifier}/", token)
+ return identifier, issue.get("shortId", identifier), issue
+
+ result = api_get(f"/organizations/{DEFAULT_SENTRY_ORG}/shortids/{identifier}/", token)
+ group_id = str(result["groupId"])
+ issue = api_get(f"/issues/{group_id}/", token)
+ return group_id, identifier, issue
+
+
+def fetch_latest_event(issue_id, token):
+ """Fetch the latest event for an issue."""
+ return api_get(f"/issues/{issue_id}/events/latest/", token)
+
+
+def format_crash_report(issue, event, short_id):
+ """Format a Sentry issue and event as a markdown crash report."""
+ lines = []
+
+ title = issue.get("title", "Unknown Crash")
+ lines.append(f"# {title}")
+ lines.append("")
+
+ issue_id = issue.get("id", "unknown")
+ project = issue.get("project", {})
+ project_slug = (
+ project.get("slug", "unknown") if isinstance(project, dict) else str(project)
+ )
+ first_seen = issue.get("firstSeen", "unknown")
+ last_seen = issue.get("lastSeen", "unknown")
+ count = issue.get("count", "unknown")
+ sentry_url = f"https://sentry.io/organizations/{DEFAULT_SENTRY_ORG}/issues/{issue_id}/"
+
+ lines.append(f"**Short ID:** {short_id}")
+ lines.append(f"**Issue ID:** {issue_id}")
+ lines.append(f"**Project:** {project_slug}")
+ lines.append(f"**Sentry URL:** {sentry_url}")
+ lines.append(f"**First Seen:** {first_seen}")
+ lines.append(f"**Last Seen:** {last_seen}")
+ lines.append(f"**Event Count:** {count}")
+ lines.append("")
+
+ format_tags(lines, event)
+ format_entries(lines, event)
+
+ return "\n".join(lines)
+
+
+def format_tags(lines, event):
+ """Extract and format tags from the event."""
+ tags = event.get("tags", [])
+ if not tags:
+ return
+
+ lines.append("## Tags")
+ lines.append("")
+ for tag in tags:
+ key = tag.get("key", "") if isinstance(tag, dict) else ""
+ value = tag.get("value", "") if isinstance(tag, dict) else ""
+ if key:
+ lines.append(f"- **{key}:** {value}")
+ lines.append("")
+
+
+def format_entries(lines, event):
+ """Format exception and thread entries from the event."""
+ entries = event.get("entries", [])
+
+ for entry in entries:
+ entry_type = entry.get("type", "")
+
+ if entry_type == "exception":
+ format_exceptions(lines, entry)
+ elif entry_type == "threads":
+ format_threads(lines, entry)
+
+
+def format_exceptions(lines, entry):
+ """Format exception entries."""
+ exceptions = entry.get("data", {}).get("values", [])
+ if not exceptions:
+ return
+
+ lines.append("## Exceptions")
+ lines.append("")
+
+ for i, exc in enumerate(exceptions):
+ exc_type = exc.get("type", "Unknown")
+ exc_value = exc.get("value", "")
+ mechanism = exc.get("mechanism", {})
+
+ lines.append(f"### Exception {i + 1}")
+ lines.append(f"**Type:** {exc_type}")
+ if exc_value:
+ lines.append(f"**Value:** {exc_value}")
+ if mechanism:
+ mech_type = mechanism.get("type", "unknown")
+ handled = mechanism.get("handled")
+ if handled is not None:
+ lines.append(f"**Mechanism:** {mech_type} (handled: {handled})")
+ else:
+ lines.append(f"**Mechanism:** {mech_type}")
+ lines.append("")
+
+ stacktrace = exc.get("stacktrace")
+ if stacktrace:
+ frames = stacktrace.get("frames", [])
+ lines.append("#### Stacktrace")
+ lines.append("")
+ lines.append("```")
+ lines.append(format_frames(frames))
+ lines.append("```")
+ lines.append("")
+
+
+def format_threads(lines, entry):
+ """Format thread entries, focusing on crashed and current threads."""
+ threads = entry.get("data", {}).get("values", [])
+ if not threads:
+ return
+
+ crashed_threads = [t for t in threads if t.get("crashed", False)]
+ current_threads = [
+ t for t in threads if t.get("current", False) and not t.get("crashed", False)
+ ]
+ other_threads = [
+ t
+ for t in threads
+ if not t.get("crashed", False) and not t.get("current", False)
+ ]
+
+ lines.append("## Threads")
+ lines.append("")
+
+ for thread in crashed_threads + current_threads:
+ format_single_thread(lines, thread, show_frames=True)
+
+ if other_threads:
+ lines.append(f"*({len(other_threads)} other threads omitted)*")
+ lines.append("")
+
+
+def format_single_thread(lines, thread, show_frames=False):
+ """Format a single thread entry."""
+ thread_id = thread.get("id", "?")
+ thread_name = thread.get("name", "unnamed")
+ crashed = thread.get("crashed", False)
+ current = thread.get("current", False)
+
+ markers = []
+ if crashed:
+ markers.append("CRASHED")
+ if current:
+ markers.append("current")
+ marker_str = f" ({', '.join(markers)})" if markers else ""
+
+ lines.append(f"### Thread {thread_id}: {thread_name}{marker_str}")
+ lines.append("")
+
+ if not show_frames:
+ return
+
+ stacktrace = thread.get("stacktrace")
+ if not stacktrace:
+ return
+
+ frames = stacktrace.get("frames", [])
+ if frames:
+ lines.append("```")
+ lines.append(format_frames(frames))
+ lines.append("```")
+ lines.append("")
+
+
+def format_frames(frames):
+ """Format stack trace frames for display.
+
+ Sentry provides frames from outermost caller to innermost callee,
+ so we reverse them to show the most recent (crashing) call first,
+ matching the convention used in most crash report displays.
+ """
+ output_lines = []
+
+ for frame in reversed(frames):
+ func = frame.get("function") or frame.get("symbol") or "unknown"
+ filename = (
+ frame.get("filename")
+ or frame.get("absPath")
+ or frame.get("abs_path")
+ or "unknown file"
+ )
+ line_no = frame.get("lineNo") or frame.get("lineno")
+ in_app = frame.get("inApp", frame.get("in_app", False))
+
+ app_marker = "(In app)" if in_app else "(Not in app)"
+ line_info = f"Line {line_no}" if line_no else "Line null"
+
+        output_lines.append(f"  {func} in {filename} [{line_info}] {app_marker}")
+
+ context_lines = build_context_lines(frame, line_no)
+ output_lines.extend(context_lines)
+
+ return "\n".join(output_lines)
+
+
+def build_context_lines(frame, suspect_line_no):
+ """Build context code lines for a single frame.
+
+ Handles both Sentry response formats:
+ - preContext/contextLine/postContext (separate fields)
+ - context as an array of [line_no, code] tuples
+ """
+ output = []
+
+ pre_context = frame.get("preContext") or frame.get("pre_context") or []
+ context_line = frame.get("contextLine") or frame.get("context_line")
+ post_context = frame.get("postContext") or frame.get("post_context") or []
+
+ if context_line is not None or pre_context or post_context:
+ for code_line in pre_context:
+ output.append(f" {code_line}")
+ if context_line is not None:
+ output.append(f" {context_line} <-- SUSPECT LINE")
+ for code_line in post_context:
+ output.append(f" {code_line}")
+ return output
+
+ context = frame.get("context") or []
+ for ctx_entry in context:
+ if isinstance(ctx_entry, list) and len(ctx_entry) >= 2:
+ ctx_line_no = ctx_entry[0]
+ ctx_code = ctx_entry[1]
+ suspect = " <-- SUSPECT LINE" if ctx_line_no == suspect_line_no else ""
+ output.append(f" {ctx_code}{suspect}")
+
+ return output
+
+
+if __name__ == "__main__":
+ main()
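The two Sentry context formats handled by `build_context_lines` can be sanity-checked with a standalone sketch (the frame dicts below are illustrative, not real Sentry payloads):

```python
# Standalone copy of build_context_lines from script/sentry-fetch,
# exercising both context formats it supports.

def build_context_lines(frame, suspect_line_no):
    output = []

    # Format 1: separate preContext / contextLine / postContext fields.
    pre_context = frame.get("preContext") or frame.get("pre_context") or []
    context_line = frame.get("contextLine") or frame.get("context_line")
    post_context = frame.get("postContext") or frame.get("post_context") or []

    if context_line is not None or pre_context or post_context:
        for code_line in pre_context:
            output.append(f"      {code_line}")
        if context_line is not None:
            output.append(f"      {context_line} <-- SUSPECT LINE")
        for code_line in post_context:
            output.append(f"      {code_line}")
        return output

    # Format 2: `context` as an array of [line_no, code] pairs.
    for ctx_entry in frame.get("context") or []:
        if isinstance(ctx_entry, list) and len(ctx_entry) >= 2:
            suspect = " <-- SUSPECT LINE" if ctx_entry[0] == suspect_line_no else ""
            output.append(f"      {ctx_entry[1]}{suspect}")
    return output

# Separate-fields format: the context line is always marked as suspect.
separate = build_context_lines(
    {"preContext": ["fn f() {"], "contextLine": "let x = v[i];", "postContext": ["}"]},
    10,
)

# Tuple-array format: only the entry matching suspect_line_no is marked.
tuples = build_context_lines(
    {"context": [[9, "fn f() {"], [10, "let x = v[i];"], [11, "}"]]},
    10,
)
```

Both calls produce the same three indented lines, with the suspect marker on the middle one.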
@@ -140,9 +140,9 @@ fn cron_unit_evals_job() -> Job {
.add_step(steps::setup_cargo_config(Platform::Linux))
.add_step(steps::cache_rust_dependencies_namespace())
.map(steps::install_linux_dependencies)
- .add_step(steps::setup_sccache(Platform::Linux))
.add_step(steps::cargo_install_nextest())
.add_step(steps::clear_target_dir_if_large(Platform::Linux))
+ .add_step(steps::setup_sccache(Platform::Linux))
.add_step(script_step)
.add_step(steps::show_sccache_stats(Platform::Linux))
.add_step(steps::cleanup_cargo_config(Platform::Linux))
@@ -157,9 +157,9 @@ fn unit_evals(commit: Option<&WorkflowInput>) -> Job {
.add_step(steps::setup_cargo_config(Platform::Linux))
.add_step(steps::cache_rust_dependencies_namespace())
.map(steps::install_linux_dependencies)
- .add_step(steps::setup_sccache(Platform::Linux))
.add_step(steps::cargo_install_nextest())
.add_step(steps::clear_target_dir_if_large(Platform::Linux))
+ .add_step(steps::setup_sccache(Platform::Linux))
.add_step(match commit {
Some(commit) => script_step.add_env(("UNIT_EVAL_COMMIT", commit)),
None => script_step,
@@ -421,13 +421,13 @@ fn run_platform_tests_impl(platform: Platform, filter_packages: bool) -> NamedJo
platform == Platform::Linux,
steps::install_linux_dependencies,
)
- .add_step(steps::setup_sccache(platform))
.add_step(steps::setup_node())
.when(
platform == Platform::Linux || platform == Platform::Mac,
|job| job.add_step(steps::cargo_install_nextest()),
)
.add_step(steps::clear_target_dir_if_large(platform))
+ .add_step(steps::setup_sccache(platform))
.when(filter_packages, |job| {
job.add_step(
steps::cargo_nextest(platform).with_changed_packages_filter("orchestrate"),