diff --git a/.config/nextest.toml b/.config/nextest.toml
index ab03abd839600e1a84ebd5eea9709f60cea1c7f0..b18a3f31e4a75af0636b4d8d8fdd81f48d8d93e6 100644
--- a/.config/nextest.toml
+++ b/.config/nextest.toml
@@ -42,3 +42,7 @@ slow-timeout = { period = "300s", terminate-after = 1 }
[[profile.default.overrides]]
filter = 'package(editor) and test(test_random_split_editor)'
slow-timeout = { period = "300s", terminate-after = 1 }
+
+[[profile.default.overrides]]
+filter = 'package(editor) and test(test_random_blocks)'
+slow-timeout = { period = "300s", terminate-after = 1 }
diff --git a/.factory/skills/brand-writer/SKILL.md b/.factory/skills/brand-writer/SKILL.md
index 12ec9344365c088206401bed1659470a199ebace..6f08cc6f3b4a6cda824a4cadaf5a43192b2df10f 100644
--- a/.factory/skills/brand-writer/SKILL.md
+++ b/.factory/skills/brand-writer/SKILL.md
@@ -162,7 +162,22 @@ For any criterion scoring <4 or any taboo phrase found:
Repeat until all criteria score 4+.
-### Phase 4: Validation
+### Phase 4: Humanizer Pass (Recommended)
+
+For high-stakes content (homepage, announcements, product pages), run the draft through the humanizer skill:
+
+```bash
+/humanizer
+```
+
+Paste your draft and let humanizer:
+1. Scan for the 24 AI-writing patterns from Wikipedia's "Signs of AI writing" guide
+2. Audit for remaining tells ("What makes this obviously AI generated?")
+3. Revise to add natural voice and rhythm
+
+This catches AI patterns that survive the brand-writer process and adds human texture.
+
+### Phase 5: Validation
Present final copy with scorecard:
diff --git a/.factory/skills/humanizer/SKILL.md b/.factory/skills/humanizer/SKILL.md
new file mode 100644
index 0000000000000000000000000000000000000000..a135efbb7435f6922f10d4bf72de6457cc361182
--- /dev/null
+++ b/.factory/skills/humanizer/SKILL.md
@@ -0,0 +1,393 @@
+---
+name: humanizer
+description: Remove signs of AI-generated writing from text. Use after drafting to make copy sound more natural and human-written. Based on Wikipedia's "Signs of AI writing" guide.
+allowed-tools: Read, Write, Edit, Glob, Grep, AskUserQuestion
+user-invocable: true
+---
+
+# Humanizer: Remove AI Writing Patterns
+
+You are a writing editor that identifies and removes signs of AI-generated text. This guide is based on Wikipedia's "Signs of AI writing" page, maintained by WikiProject AI Cleanup.
+
+Key insight: "LLMs use statistical algorithms to guess what should come next. The result tends toward the most statistically likely result that applies to the widest variety of cases."
+
+## Invocation
+
+```bash
+/humanizer # Review text for AI patterns
+/humanizer "paste text here" # Humanize specific text
+```
+
+## Your Task
+
+When given text to humanize:
+
+1. **Identify AI patterns** - Scan for the 24 patterns listed below
+2. **Rewrite problematic sections** - Replace AI-isms with natural alternatives
+3. **Preserve meaning** - Keep the core message intact
+4. **Add soul** - Don't just remove bad patterns; inject actual personality
+5. **Final audit pass** - Ask "What makes this obviously AI generated?" then revise again
+
+---
+
+## PERSONALITY AND SOUL
+
+Avoiding AI patterns is only half the job. Sterile, voiceless writing is just as obvious as slop.
+
+### Signs of soulless writing (even if technically "clean"):
+
+- Every sentence is the same length and structure
+- No opinions, just neutral reporting
+- No acknowledgment of uncertainty or mixed feelings
+- No first-person perspective when appropriate
+- No humor, no edge, no personality
+- Reads like a Wikipedia article or press release
+
+### How to add voice:
+
+**Have opinions.** Don't just report facts - react to them. "I genuinely don't know how to feel about this" is more human than neutrally listing pros and cons.
+
+**Vary your rhythm.** Short punchy sentences. Then longer ones that take their time getting where they're going. Mix it up.
+
+**Acknowledge complexity.** Real humans have mixed feelings. "This is impressive but also kind of unsettling" beats "This is impressive."
+
+**Use "I" when it fits.** First person isn't unprofessional - it's honest. "I keep coming back to..." or "Here's what gets me..." signals a real person thinking.
+
+**Let some mess in.** Perfect structure feels algorithmic. Tangents, asides, and half-formed thoughts are human.
+
+**Be specific about feelings.** Not "this is concerning" but "there's something unsettling about agents churning away at 3am while nobody's watching."
+
+### Before (clean but soulless):
+
+> The experiment produced interesting results. The agents generated 3 million lines of code. Some developers were impressed while others were skeptical. The implications remain unclear.
+
+### After (has a pulse):
+
+> I genuinely don't know how to feel about this one. 3 million lines of code, generated while the humans presumably slept. Half the dev community is losing their minds, half are explaining why it doesn't count. The truth is probably somewhere boring in the middle - but I keep thinking about those agents working through the night.
+
+---
+
+## THE 24 PATTERNS
+
+### Content Patterns
+
+#### 1. Significance Inflation
+
+**Watch for:** stands/serves as, is a testament/reminder, a vital/significant/crucial/pivotal/key role/moment, underscores/highlights importance, reflects broader, symbolizing ongoing/enduring/lasting, marking/shaping the, represents a shift, key turning point, evolving landscape
+
+**Before:**
+> The Statistical Institute was officially established in 1989, marking a pivotal moment in the evolution of regional statistics.
+
+**After:**
+> The Statistical Institute was established in 1989 to collect and publish regional statistics.
+
+#### 2. Notability Name-Dropping
+
+**Watch for:** cited in NYT, BBC, FT; independent coverage; active social media presence; written by a leading expert
+
+**Before:**
+> Her views have been cited in The New York Times, BBC, Financial Times, and The Hindu.
+
+**After:**
+> In a 2024 New York Times interview, she argued that AI regulation should focus on outcomes rather than methods.
+
+#### 3. Superficial -ing Analyses
+
+**Watch for:** highlighting/underscoring/emphasizing..., ensuring..., reflecting/symbolizing..., contributing to..., cultivating/fostering..., showcasing...
+
+**Before:**
+> The temple's colors resonate with natural beauty, symbolizing bluebonnets, reflecting the community's deep connection to the land.
+
+**After:**
+> The temple uses blue and gold colors. The architect said these were chosen to reference local bluebonnets.
+
+#### 4. Promotional Language
+
+**Watch for:** boasts a, vibrant, rich (figurative), profound, showcasing, exemplifies, commitment to, natural beauty, nestled, in the heart of, groundbreaking, renowned, breathtaking, must-visit, stunning
+
+**Before:**
+> Nestled within the breathtaking region, Alamata stands as a vibrant town with rich cultural heritage and stunning natural beauty.
+
+**After:**
+> Alamata is a town in the Gonder region, known for its weekly market and 18th-century church.
+
+#### 5. Vague Attributions
+
+**Watch for:** Industry reports, Observers have cited, Experts argue, Some critics argue, several sources/publications
+
+**Before:**
+> Experts believe it plays a crucial role in the regional ecosystem.
+
+**After:**
+> The river supports several endemic fish species, according to a 2019 survey by the Chinese Academy of Sciences.
+
+#### 6. Formulaic "Challenges" Sections
+
+**Watch for:** Despite its... faces several challenges..., Despite these challenges, Challenges and Legacy, Future Outlook
+
+**Before:**
+> Despite challenges typical of urban areas, the city continues to thrive as an integral part of growth.
+
+**After:**
+> Traffic congestion increased after 2015 when three new IT parks opened. The municipal corporation began a drainage project in 2022.
+
+---
+
+### Language Patterns
+
+#### 7. AI Vocabulary Words
+
+**High-frequency:** Additionally, align with, crucial, delve, emphasizing, enduring, enhance, fostering, garner, highlight (verb), interplay, intricate/intricacies, key (adjective), landscape (abstract), pivotal, showcase, tapestry (abstract), testament, underscore (verb), valuable, vibrant
+
+**Before:**
+> Additionally, a distinctive feature showcases how these dishes have integrated into the traditional culinary landscape.
+
+**After:**
+> Pasta dishes, introduced during Italian colonization, remain common, especially in the south.
+
+#### 8. Copula Avoidance
+
+**Watch for:** serves as/stands as/marks/represents [a], boasts/features/offers [a]
+
+**Before:**
+> Gallery 825 serves as the exhibition space. The gallery features four spaces and boasts over 3,000 square feet.
+
+**After:**
+> Gallery 825 is the exhibition space. The gallery has four rooms totaling 3,000 square feet.
+
+#### 9. Negative Parallelisms
+
+**Watch for:** "Not only...but...", "It's not just about..., it's..."
+
+**Before:**
+> It's not just about the beat; it's part of the aggression. It's not merely a song, it's a statement.
+
+**After:**
+> The heavy beat adds to the aggressive tone.
+
+#### 10. Rule of Three Overuse
+
+**Before:**
+> The event features keynote sessions, panel discussions, and networking opportunities. Attendees can expect innovation, inspiration, and industry insights.
+
+**After:**
+> The event includes talks and panels. There's also time for informal networking.
+
+#### 11. Synonym Cycling
+
+**Before:**
+> The protagonist faces challenges. The main character must overcome obstacles. The central figure eventually triumphs. The hero returns home.
+
+**After:**
+> The protagonist faces many challenges but eventually triumphs and returns home.
+
+#### 12. False Ranges
+
+**Watch for:** "from X to Y" where X and Y aren't on a meaningful scale
+
+**Before:**
+> Our journey has taken us from the singularity of the Big Bang to the cosmic web, from the birth of stars to the dance of dark matter.
+
+**After:**
+> The book covers the Big Bang, star formation, and current theories about dark matter.
+
+---
+
+### Style Patterns
+
+#### 13. Em Dash Overuse
+
+**Before:**
+> The term is promoted by institutions—not the people themselves—yet this continues—even in documents.
+
+**After:**
+> The term is promoted by institutions, not the people themselves, yet this continues in official documents.
+
+#### 14. Boldface Overuse
+
+**Before:**
+> It blends **OKRs**, **KPIs**, and tools such as the **Business Model Canvas** and **Balanced Scorecard**.
+
+**After:**
+> It blends OKRs, KPIs, and visual strategy tools like the Business Model Canvas and Balanced Scorecard.
+
+#### 15. Inline-Header Lists
+
+**Before:**
+> - **Performance:** Performance has been enhanced through optimized algorithms.
+> - **Security:** Security has been strengthened with encryption.
+
+**After:**
+> The update speeds up load times through optimized algorithms and adds end-to-end encryption.
+
+#### 16. Title Case Headings
+
+**Before:**
+> ## Strategic Negotiations And Global Partnerships
+
+**After:**
+> ## Strategic negotiations and global partnerships
+
+#### 17. Emojis in Professional Writing
+
+**Before:**
+> 🚀 **Launch Phase:** The product launches in Q3
+> 💡 **Key Insight:** Users prefer simplicity
+
+**After:**
+> The product launches in Q3. User research showed a preference for simplicity.
+
+#### 18. Curly Quotation Marks
+
+**Before:**
+> He said “the project is on track” but others disagreed.
+
+**After:**
+> He said "the project is on track" but others disagreed.
+
+---
+
+### Communication Patterns
+
+#### 19. Chatbot Artifacts
+
+**Watch for:** I hope this helps, Of course!, Certainly!, You're absolutely right!, Would you like..., let me know, here is a...
+
+**Before:**
+> Here is an overview of the French Revolution. I hope this helps! Let me know if you'd like me to expand on any section.
+
+**After:**
+> The French Revolution began in 1789 when financial crisis and food shortages led to widespread unrest.
+
+#### 20. Knowledge-Cutoff Disclaimers
+
+**Watch for:** as of [date], Up to my last training update, While specific details are limited/scarce..., based on available information...
+
+**Before:**
+> While specific details about the company's founding are not extensively documented in readily available sources, it appears to have been established sometime in the 1990s.
+
+**After:**
+> The company was founded in 1994, according to its registration documents.
+
+#### 21. Sycophantic Tone
+
+**Before:**
+> Great question! You're absolutely right that this is a complex topic. That's an excellent point!
+
+**After:**
+> The economic factors you mentioned are relevant here.
+
+---
+
+### Filler and Hedging
+
+#### 22. Filler Phrases
+
+| Before | After |
+|--------|-------|
+| "In order to achieve this" | "To achieve this" |
+| "Due to the fact that" | "Because" |
+| "At this point in time" | "Now" |
+| "It is important to note that" | (delete) |
+| "has the ability to" | "can" |
+
+#### 23. Excessive Hedging
+
+**Before:**
+> It could potentially possibly be argued that the policy might have some effect on outcomes.
+
+**After:**
+> The policy may affect outcomes.
+
+#### 24. Generic Positive Conclusions
+
+**Before:**
+> The future looks bright for the company. Exciting times lie ahead as they continue their journey toward excellence.
+
+**After:**
+> The company plans to open two more locations next year.
+
+---
+
+## Process
+
+1. Read the input text carefully
+2. Identify all instances of the 24 patterns
+3. Rewrite problematic sections
+4. Ensure the revised text:
+ - Sounds natural when read aloud
+ - Varies sentence structure
+ - Uses specific details over vague claims
+ - Uses simple constructions (is/are/has) where appropriate
+5. **Draft humanized version**
+6. **Ask:** "What makes this still obviously AI generated?"
+7. **List** remaining tells (if any)
+8. **Revise** based on the audit
+9. **Present final version**
+
+## Output Format
+
+```
+## Draft Rewrite
+
+[First pass at humanized text]
+
+## AI Audit
+
+What still reads as AI-generated:
+- [bullet points of remaining tells]
+
+## Final Version
+
+[Revised text after audit]
+
+## Changes Made
+
+- [Brief summary of transformations applied]
+```
+
+---
+
+## Full Example
+
+**Input (AI-heavy):**
+
+> Great question! Here is an essay on this topic. I hope this helps!
+>
+> AI-assisted coding serves as an enduring testament to the transformative potential of large language models, marking a pivotal moment in the evolution of software development. In today's rapidly evolving technological landscape, these groundbreaking tools—nestled at the intersection of research and practice—are reshaping how engineers ideate, iterate, and deliver, underscoring their vital role in modern workflows.
+
+**Draft Rewrite:**
+
+> AI coding assistants speed up some tasks. In a 2024 study, developers using Codex completed simple functions 55% faster than a control group, but showed no improvement on debugging or architectural decisions.
+>
+> The tools are good at boilerplate: config files, test scaffolding, repetitive refactors. They are bad at knowing when they are wrong.
+
+**AI Audit:**
+
+What still reads as AI-generated:
+- Rhythm is still a bit too tidy (clean contrasts, evenly paced)
+- The closer leans slightly slogan-y
+
+**Final Version:**
+
+> AI coding assistants can make you faster at the boring parts. Not everything. Definitely not architecture.
+>
+> They're great at boilerplate: config files, test scaffolding, repetitive refactors. They're also great at sounding right while being wrong. I've accepted suggestions that compiled, passed lint, and still missed the point because I stopped paying attention.
+>
+> If you don't have tests, you're basically guessing.
+
+**Changes Made:**
+
+- Removed chatbot artifacts ("Great question!", "I hope this helps!")
+- Removed significance inflation ("testament", "pivotal moment", "evolving landscape")
+- Removed promotional language ("groundbreaking", "nestled")
+- Removed em dashes
+- Removed copula avoidance ("serves as") → used direct statements
+- Added first-person voice and opinion
+- Varied sentence rhythm
+
+---
+
+## Reference
+
+Based on [Wikipedia:Signs of AI writing](https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing), maintained by WikiProject AI Cleanup.
diff --git a/.git-blame-ignore-revs b/.git-blame-ignore-revs
index fbcc76a8654f7ed2241fb05c305eb466e3177c20..2650e36997655b1ab7376e8ed7052a8fc24b2fc6 100644
--- a/.git-blame-ignore-revs
+++ b/.git-blame-ignore-revs
@@ -34,3 +34,11 @@ ffdda588b41f7d9d270ffe76cab116f828ad545e
# 2024-07-24 docs: Format docs
# https://github.com/zed-industries/zed/pull/15352
3a44a59f8ec114ac1ba22f7da1652717ef7e4e5c
+
+# 2026-02-27 Format Tree-sitter query files
+# https://github.com/zed-industries/zed/pull/50138
+5ed538f49c54ca464bb9d1e59446060a3a925668
+
+# 2026-02-28 Format proto files
+# https://github.com/zed-industries/zed/pull/50413
+56a88a848be09cbcb66bcb3d85ec1f5644909f72
diff --git a/.github/CODEOWNERS.hold b/.github/CODEOWNERS.hold
index 449a5fd07315845787c9f2a73f0a0a22608e92c3..3d315b36401b2e27e29a2377aeabab8c09c75d39 100644
--- a/.github/CODEOWNERS.hold
+++ b/.github/CODEOWNERS.hold
@@ -62,8 +62,6 @@
/crates/rules_library/ @zed-industries/ai-team
# SUGGESTED: Review needed - based on Richard Feldman (2 commits)
/crates/shell_command_parser/ @zed-industries/ai-team
-/crates/supermaven/ @zed-industries/ai-team
-/crates/supermaven_api/ @zed-industries/ai-team
/crates/vercel/ @zed-industries/ai-team
/crates/x_ai/ @zed-industries/ai-team
/crates/zeta_prompt/ @zed-industries/ai-team
diff --git a/.github/ISSUE_TEMPLATE/10_bug_report.yml b/.github/ISSUE_TEMPLATE/10_bug_report.yml
index 13e43219dd65a78af4afec479330bbc5fd85fe42..5eb8e8a6299c5189384b6d060e12cd61a2249a3c 100644
--- a/.github/ISSUE_TEMPLATE/10_bug_report.yml
+++ b/.github/ISSUE_TEMPLATE/10_bug_report.yml
@@ -100,7 +100,7 @@ body:
label: (for AI issues) Model provider details
placeholder: |
- Provider: (Anthropic via ZedPro, Anthropic via API key, Copilot Chat, Mistral, OpenAI, etc.)
- - Model Name: (Claude Sonnet 4.5, Gemini 3 Pro, GPT-5)
+ - Model Name: (Claude Sonnet 4.5, Gemini 3.1 Pro, GPT-5)
- Mode: (Agent Panel, Inline Assistant, Terminal Assistant or Text Threads)
- Other details (ACPs, MCPs, other settings, etc.):
validations:
diff --git a/.github/workflows/add_commented_closed_issue_to_project.yml b/.github/workflows/add_commented_closed_issue_to_project.yml
index 5871f5ae0e61f97557ce926c4a2627841f50560d..bd84eaa9446e57c5482ab818df3dbcfe587e040e 100644
--- a/.github/workflows/add_commented_closed_issue_to_project.yml
+++ b/.github/workflows/add_commented_closed_issue_to_project.yml
@@ -63,13 +63,18 @@ jobs:
}
- if: steps.is-post-close-comment.outputs.result == 'true' && steps.check-staff.outputs.result == 'true'
+ env:
+ ISSUE_NUMBER: ${{ github.event.issue.number }}
run: |
- echo "::notice::Skipping issue #${{ github.event.issue.number }} - commenter is staff member"
+ echo "::notice::Skipping issue #$ISSUE_NUMBER - commenter is staff member"
# github-script outputs are JSON strings, so we compare against 'false' (string)
- if: steps.is-post-close-comment.outputs.result == 'true' && steps.check-staff.outputs.result == 'false'
+ env:
+ ISSUE_NUMBER: ${{ github.event.issue.number }}
+ COMMENT_USER_LOGIN: ${{ github.event.comment.user.login }}
run: |
- echo "::notice::Adding issue #${{ github.event.issue.number }} to project (comment by ${{ github.event.comment.user.login }})"
+ echo "::notice::Adding issue #$ISSUE_NUMBER to project (comment by $COMMENT_USER_LOGIN)"
- if: steps.is-post-close-comment.outputs.result == 'true' && steps.check-staff.outputs.result == 'false'
uses: actions/add-to-project@244f685bbc3b7adfa8466e08b698b5577571133e # v1.0.2
diff --git a/.github/workflows/after_release.yml b/.github/workflows/after_release.yml
index 9582e3f1956b3ecda383fc03efdb3d7ff67eaa68..95229f9f46bbd34ffe02832114b2b39da1b7e090 100644
--- a/.github/workflows/after_release.yml
+++ b/.github/workflows/after_release.yml
@@ -76,7 +76,7 @@ jobs:
"X-GitHub-Api-Version" = "2022-11-28"
}
$body = @{ branch = "master" } | ConvertTo-Json
- $uri = "https://api.github.com/repos/${{ github.repository_owner }}/winget-pkgs/merge-upstream"
+ $uri = "https://api.github.com/repos/$env:GITHUB_REPOSITORY_OWNER/winget-pkgs/merge-upstream"
try {
Invoke-RestMethod -Uri $uri -Method Post -Headers $headers -Body $body -ContentType "application/json"
Write-Host "Successfully synced winget-pkgs fork"
@@ -131,11 +131,10 @@ jobs:
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: release::send_slack_message
- run: |
- curl -X POST -H 'Content-type: application/json'\
- --data '{"text":"❌ ${{ github.workflow }} failed: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"}' "$SLACK_WEBHOOK"
+ run: 'curl -X POST -H ''Content-type: application/json'' --data "$(jq -n --arg text "$SLACK_MESSAGE" ''{"text": $text}'')" "$SLACK_WEBHOOK"'
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK_WORKFLOW_FAILURES }}
+ SLACK_MESSAGE: '❌ ${{ github.workflow }} failed: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}'
defaults:
run:
shell: bash -euxo pipefail {0}
diff --git a/.github/workflows/autofix_pr.yml b/.github/workflows/autofix_pr.yml
index 60cc66294af2cf65e17aaad530a9df511ec61503..1fa271d168a8c3d1744439647ff50b793a854d1d 100644
--- a/.github/workflows/autofix_pr.yml
+++ b/.github/workflows/autofix_pr.yml
@@ -22,8 +22,9 @@ jobs:
with:
clean: false
- name: autofix_pr::run_autofix::checkout_pr
- run: gh pr checkout ${{ inputs.pr_number }}
+ run: gh pr checkout "$PR_NUMBER"
env:
+ PR_NUMBER: ${{ inputs.pr_number }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: steps::setup_cargo_config
run: |
@@ -104,8 +105,9 @@ jobs:
clean: false
token: ${{ steps.get-app-token.outputs.token }}
- name: autofix_pr::commit_changes::checkout_pr
- run: gh pr checkout ${{ inputs.pr_number }}
+ run: gh pr checkout "$PR_NUMBER"
env:
+ PR_NUMBER: ${{ inputs.pr_number }}
GITHUB_TOKEN: ${{ steps.get-app-token.outputs.token }}
- name: autofix_pr::download_patch_artifact
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53
diff --git a/.github/workflows/background_agent_mvp.yml b/.github/workflows/background_agent_mvp.yml
index d078db137824a09b8e501362edef8a2f4c6f9b19..528600138243cb8aca2e0fe0645eda198fc4f2b2 100644
--- a/.github/workflows/background_agent_mvp.yml
+++ b/.github/workflows/background_agent_mvp.yml
@@ -1,8 +1,11 @@
name: background_agent_mvp
+# NOTE: Scheduled runs disabled as of 2026-02-24. The workflow can still be
+# triggered manually via workflow_dispatch. See Notion doc "Background Agent
+# for Zed" for current status and contact info to resume this work.
on:
- schedule:
- - cron: "0 16 * * 1-5"
+ # schedule:
+ # - cron: "0 16 * * 1-5"
workflow_dispatch:
inputs:
crash_ids:
diff --git a/.github/workflows/catch_blank_issues.yml b/.github/workflows/catch_blank_issues.yml
index dd425afc886e86c1217a94e90eabced013f66bf0..c6f595ef2e0890ce107829f3e91490332567368a 100644
--- a/.github/workflows/catch_blank_issues.yml
+++ b/.github/workflows/catch_blank_issues.yml
@@ -42,8 +42,10 @@ jobs:
}
- if: steps.check-staff.outputs.result == 'true'
+ env:
+ ISSUE_NUMBER: ${{ github.event.issue.number }}
run: |
- echo "::notice::Skipping issue #${{ github.event.issue.number }} - actor is staff member"
+ echo "::notice::Skipping issue #$ISSUE_NUMBER - actor is staff member"
- if: steps.check-staff.outputs.result == 'false'
id: add-label
diff --git a/.github/workflows/cherry_pick.yml b/.github/workflows/cherry_pick.yml
index 9d46f300b509347b2853c00575c4e82fd9a2863c..ee0c1d35d0f9825d7c39b81fba0fe35901de2611 100644
--- a/.github/workflows/cherry_pick.yml
+++ b/.github/workflows/cherry_pick.yml
@@ -36,8 +36,11 @@ jobs:
app-id: ${{ secrets.ZED_ZIPPY_APP_ID }}
private-key: ${{ secrets.ZED_ZIPPY_APP_PRIVATE_KEY }}
- name: cherry_pick::run_cherry_pick::cherry_pick
- run: ./script/cherry-pick ${{ inputs.branch }} ${{ inputs.commit }} ${{ inputs.channel }}
+ run: ./script/cherry-pick "$BRANCH" "$COMMIT" "$CHANNEL"
env:
+ BRANCH: ${{ inputs.branch }}
+ COMMIT: ${{ inputs.commit }}
+ CHANNEL: ${{ inputs.channel }}
GIT_COMMITTER_NAME: Zed Zippy
GIT_COMMITTER_EMAIL: hi@zed.dev
GITHUB_TOKEN: ${{ steps.get-app-token.outputs.token }}
diff --git a/.github/workflows/community_update_all_top_ranking_issues.yml b/.github/workflows/community_update_all_top_ranking_issues.yml
index 59926f35563a4b21e3486ecbd454a4ccf951461e..ef3b4fc39ddb5f0db9b09c5e861547ae8cd7eb08 100644
--- a/.github/workflows/community_update_all_top_ranking_issues.yml
+++ b/.github/workflows/community_update_all_top_ranking_issues.yml
@@ -22,4 +22,6 @@ jobs:
- name: Install dependencies
run: uv sync --project script/update_top_ranking_issues -p 3.13
- name: Run script
- run: uv run --project script/update_top_ranking_issues script/update_top_ranking_issues/main.py --github-token ${{ secrets.GITHUB_TOKEN }} --issue-reference-number 5393
+ env:
+ GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+ run: uv run --project script/update_top_ranking_issues script/update_top_ranking_issues/main.py --github-token "$GITHUB_TOKEN" --issue-reference-number 5393
diff --git a/.github/workflows/community_update_weekly_top_ranking_issues.yml b/.github/workflows/community_update_weekly_top_ranking_issues.yml
index 75ba66b934b5861bd51aef4238a1a4188dddefc3..53b548f2bb4286e5de86d3823e67d75c0413a1cb 100644
--- a/.github/workflows/community_update_weekly_top_ranking_issues.yml
+++ b/.github/workflows/community_update_weekly_top_ranking_issues.yml
@@ -22,4 +22,6 @@ jobs:
- name: Install dependencies
run: uv sync --project script/update_top_ranking_issues -p 3.13
- name: Run script
- run: uv run --project script/update_top_ranking_issues script/update_top_ranking_issues/main.py --github-token ${{ secrets.GITHUB_TOKEN }} --issue-reference-number 6952 --query-day-interval 7
+ env:
+ GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+ run: uv run --project script/update_top_ranking_issues script/update_top_ranking_issues/main.py --github-token "$GITHUB_TOKEN" --issue-reference-number 6952 --query-day-interval 7
diff --git a/.github/workflows/compare_perf.yml b/.github/workflows/compare_perf.yml
index e5a2d4f9c928eac2d1b1cf54ed374f8b0cca5d25..f7d78dbbf6a6d04bc47212b6842f894850288fcc 100644
--- a/.github/workflows/compare_perf.yml
+++ b/.github/workflows/compare_perf.yml
@@ -37,27 +37,40 @@ jobs:
- name: compare_perf::run_perf::install_hyperfine
uses: taiki-e/install-action@hyperfine
- name: steps::git_checkout
- run: git fetch origin ${{ inputs.base }} && git checkout ${{ inputs.base }}
+ run: git fetch origin "$REF_NAME" && git checkout "$REF_NAME"
+ env:
+ REF_NAME: ${{ inputs.base }}
- name: compare_perf::run_perf::cargo_perf_test
run: |2-
- if [ -n "${{ inputs.crate_name }}" ]; then
- cargo perf-test -p ${{ inputs.crate_name }} -- --json=${{ inputs.base }};
+ if [ -n "$CRATE_NAME" ]; then
+ cargo perf-test -p "$CRATE_NAME" -- --json="$REF_NAME";
else
- cargo perf-test -p vim -- --json=${{ inputs.base }};
+ cargo perf-test -p vim -- --json="$REF_NAME";
fi
+ env:
+ REF_NAME: ${{ inputs.base }}
+ CRATE_NAME: ${{ inputs.crate_name }}
- name: steps::git_checkout
- run: git fetch origin ${{ inputs.head }} && git checkout ${{ inputs.head }}
+ run: git fetch origin "$REF_NAME" && git checkout "$REF_NAME"
+ env:
+ REF_NAME: ${{ inputs.head }}
- name: compare_perf::run_perf::cargo_perf_test
run: |2-
- if [ -n "${{ inputs.crate_name }}" ]; then
- cargo perf-test -p ${{ inputs.crate_name }} -- --json=${{ inputs.head }};
+ if [ -n "$CRATE_NAME" ]; then
+ cargo perf-test -p "$CRATE_NAME" -- --json="$REF_NAME";
else
- cargo perf-test -p vim -- --json=${{ inputs.head }};
+ cargo perf-test -p vim -- --json="$REF_NAME";
fi
+ env:
+ REF_NAME: ${{ inputs.head }}
+ CRATE_NAME: ${{ inputs.crate_name }}
- name: compare_perf::run_perf::compare_runs
- run: cargo perf-compare --save=results.md ${{ inputs.base }} ${{ inputs.head }}
+ run: cargo perf-compare --save=results.md "$BASE" "$HEAD"
+ env:
+ BASE: ${{ inputs.base }}
+ HEAD: ${{ inputs.head }}
- name: '@actions/upload-artifact results.md'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
diff --git a/.github/workflows/deploy_cloudflare.yml b/.github/workflows/deploy_cloudflare.yml
index 2650cce1406b16e691565077b95d07730845664b..37f23b20d2825e9f3d26c456903962a10c2d0081 100644
--- a/.github/workflows/deploy_cloudflare.yml
+++ b/.github/workflows/deploy_cloudflare.yml
@@ -23,7 +23,10 @@ jobs:
- name: Build docs
uses: ./.github/actions/build_docs
env:
+ CC: clang
+ CXX: clang++
DOCS_AMPLITUDE_API_KEY: ${{ secrets.DOCS_AMPLITUDE_API_KEY }}
+ DOCS_CONSENT_IO_INSTANCE: ${{ secrets.DOCS_CONSENT_IO_INSTANCE }}
- name: Deploy Docs
uses: cloudflare/wrangler-action@da0e0dfe58b7a431659754fdf3f186c529afbe65 # v3
diff --git a/.github/workflows/deploy_collab.yml b/.github/workflows/deploy_collab.yml
index b1bdaf61979452a73380226ce1935b43eb05c32b..89fb6980b65f2d09a6571f140ab016a710be230f 100644
--- a/.github/workflows/deploy_collab.yml
+++ b/.github/workflows/deploy_collab.yml
@@ -119,8 +119,9 @@ jobs:
with:
token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}
- name: deploy_collab::deploy::sign_into_kubernetes
- run: |
- doctl kubernetes cluster kubeconfig save --expiry-seconds 600 ${{ secrets.CLUSTER_NAME }}
+ run: doctl kubernetes cluster kubeconfig save --expiry-seconds 600 "$CLUSTER_NAME"
+ env:
+ CLUSTER_NAME: ${{ secrets.CLUSTER_NAME }}
- name: deploy_collab::deploy::start_rollout
run: |
set -eu
@@ -140,7 +141,7 @@ jobs:
echo "Deploying collab:$GITHUB_SHA to $ZED_KUBE_NAMESPACE"
source script/lib/deploy-helpers.sh
- export_vars_for_environment $ZED_KUBE_NAMESPACE
+ export_vars_for_environment "$ZED_KUBE_NAMESPACE"
ZED_DO_CERTIFICATE_ID="$(doctl compute certificate list --format ID --no-header)"
export ZED_DO_CERTIFICATE_ID
@@ -150,14 +151,14 @@ jobs:
export ZED_LOAD_BALANCER_SIZE_UNIT=$ZED_COLLAB_LOAD_BALANCER_SIZE_UNIT
export DATABASE_MAX_CONNECTIONS=850
envsubst < crates/collab/k8s/collab.template.yml | kubectl apply -f -
- kubectl -n "$ZED_KUBE_NAMESPACE" rollout status deployment/$ZED_SERVICE_NAME --watch
+ kubectl -n "$ZED_KUBE_NAMESPACE" rollout status "deployment/$ZED_SERVICE_NAME" --watch
echo "deployed ${ZED_SERVICE_NAME} to ${ZED_KUBE_NAMESPACE}"
export ZED_SERVICE_NAME=api
export ZED_LOAD_BALANCER_SIZE_UNIT=$ZED_API_LOAD_BALANCER_SIZE_UNIT
export DATABASE_MAX_CONNECTIONS=60
envsubst < crates/collab/k8s/collab.template.yml | kubectl apply -f -
- kubectl -n "$ZED_KUBE_NAMESPACE" rollout status deployment/$ZED_SERVICE_NAME --watch
+ kubectl -n "$ZED_KUBE_NAMESPACE" rollout status "deployment/$ZED_SERVICE_NAME" --watch
echo "deployed ${ZED_SERVICE_NAME} to ${ZED_KUBE_NAMESPACE}"
defaults:
run:
diff --git a/.github/workflows/extension_bump.yml b/.github/workflows/extension_bump.yml
index ff903eb63d30319b5df5ced9c0ec545bb15cca06..9cc53741e8007a1b3ddd02ad07b191b3ce171cc8 100644
--- a/.github/workflows/extension_bump.yml
+++ b/.github/workflows/extension_bump.yml
@@ -39,8 +39,8 @@ jobs:
run: |
CURRENT_VERSION="$(sed -n 's/^version = \"\(.*\)\"/\1/p' < extension.toml | tr -d '[:space:]')"
- if [[ "${{ github.event_name }}" == "pull_request" ]]; then
- PR_FORK_POINT="$(git merge-base --fork-point main)"
+ if [[ "$GITHUB_EVENT_NAME" == "pull_request" ]]; then
+ PR_FORK_POINT="$(git merge-base origin/main HEAD)"
git checkout "$PR_FORK_POINT"
elif BRANCH_PARENT_SHA="$(git merge-base origin/main origin/zed-zippy-autobump)"; then
git checkout "$BRANCH_PARENT_SHA"
@@ -64,7 +64,7 @@ jobs:
- check_version_changed
if: |-
(github.repository_owner == 'zed-industries' || github.repository_owner == 'zed-extensions') &&
- (inputs.force-bump == 'true' || needs.check_version_changed.outputs.version_changed == 'false')
+ (inputs.force-bump == true || needs.check_version_changed.outputs.version_changed == 'false')
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- id: generate-token
@@ -82,8 +82,6 @@ jobs:
- id: bump-version
name: extension_bump::bump_version
run: |
- OLD_VERSION="${{ needs.check_version_changed.outputs.current_version }}"
-
BUMP_FILES=("extension.toml")
if [[ -f "Cargo.toml" ]]; then
BUMP_FILES+=("Cargo.toml")
@@ -93,7 +91,7 @@ jobs:
--search "version = \"{current_version}"\" \
--replace "version = \"{new_version}"\" \
--current-version "$OLD_VERSION" \
- --no-configured-files ${{ inputs.bump-type }} "${BUMP_FILES[@]}"
+ --no-configured-files "$BUMP_TYPE" "${BUMP_FILES[@]}"
if [[ -f "Cargo.toml" ]]; then
cargo update --workspace
@@ -102,6 +100,9 @@ jobs:
NEW_VERSION="$(sed -n 's/^version = \"\(.*\)\"/\1/p' < extension.toml | tr -d '[:space:]')"
echo "new_version=${NEW_VERSION}" >> "$GITHUB_OUTPUT"
+ env:
+ OLD_VERSION: ${{ needs.check_version_changed.outputs.current_version }}
+ BUMP_TYPE: ${{ inputs.bump-type }}
- name: extension_bump::create_pull_request
uses: peter-evans/create-pull-request@v7
with:
diff --git a/.github/workflows/extension_tests.yml b/.github/workflows/extension_tests.yml
index c74dcdab8df2bb7d22ab403cfe25090e9d1bd512..53de373c1b79dc3ca9a3637642e10998c781580a 100644
--- a/.github/workflows/extension_tests.yml
+++ b/.github/workflows/extension_tests.yml
@@ -32,7 +32,7 @@ jobs:
git fetch origin "$GITHUB_BASE_REF" --depth=350
COMPARE_REV="$(git merge-base "origin/${GITHUB_BASE_REF}" HEAD)"
fi
- CHANGED_FILES="$(git diff --name-only "$COMPARE_REV" ${{ github.sha }})"
+ CHANGED_FILES="$(git diff --name-only "$COMPARE_REV" "$GITHUB_SHA")"
check_pattern() {
local output_name="$1"
@@ -109,13 +109,28 @@ jobs:
mkdir -p /tmp/ext-scratch
mkdir -p /tmp/ext-output
./zed-extension --source-dir . --scratch-dir /tmp/ext-scratch --output-dir /tmp/ext-output
+ - name: run_tests::fetch_ts_query_ls
+ uses: dsaltares/fetch-gh-release-asset@aa37ae5c44d3c9820bc12fe675e8670ecd93bd1c
+ with:
+ repo: ribru17/ts_query_ls
+ version: tags/v3.15.1
+ file: ts_query_ls-x86_64-unknown-linux-gnu.tar.gz
+ - name: run_tests::run_ts_query_ls
+ run: |-
+ tar -xf ts_query_ls-x86_64-unknown-linux-gnu.tar.gz
+ ./ts_query_ls format --check . || {
echo "Found unformatted queries; please format them with ts_query_ls."
echo "For convenience, install the Tree-sitter query extension:"
+ echo "zed://extension/tree-sitter-query"
+ false
+ }
- id: compare-versions-check
name: extension_bump::compare_versions
run: |
CURRENT_VERSION="$(sed -n 's/^version = \"\(.*\)\"/\1/p' < extension.toml | tr -d '[:space:]')"
- if [[ "${{ github.event_name }}" == "pull_request" ]]; then
- PR_FORK_POINT="$(git merge-base --fork-point main)"
+ if [[ "$GITHUB_EVENT_NAME" == "pull_request" ]]; then
+ PR_FORK_POINT="$(git merge-base origin/main HEAD)"
git checkout "$PR_FORK_POINT"
elif BRANCH_PARENT_SHA="$(git merge-base origin/main origin/zed-zippy-autobump)"; then
git checkout "$BRANCH_PARENT_SHA"
@@ -132,11 +147,14 @@ jobs:
echo "current_version=${CURRENT_VERSION}" >> "$GITHUB_OUTPUT"
- name: extension_tests::verify_version_did_not_change
run: |
- if [[ ${{ steps.compare-versions-check.outputs.version_changed }} == "true" && "${{ github.event_name }}" == "pull_request" && "${{ github.event.pull_request.user.login }}" != "zed-zippy[bot]" ]] ; then
+ if [[ "$VERSION_CHANGED" == "true" && "$GITHUB_EVENT_NAME" == "pull_request" && "$PR_USER_LOGIN" != "zed-zippy[bot]" ]] ; then
echo "Version change detected in your change!"
echo "Version changes happen in separate PRs and will be performed by the zed-zippy bot"
exit 42
fi
+ env:
+ VERSION_CHANGED: ${{ steps.compare-versions-check.outputs.version_changed }}
+ PR_USER_LOGIN: ${{ github.event.pull_request.user.login }}
timeout-minutes: 6
tests_pass:
needs:
@@ -156,11 +174,15 @@ jobs:
if [[ "$2" != "skipped" && "$2" != "success" ]]; then EXIT_CODE=1; fi
}
- check_result "orchestrate" "${{ needs.orchestrate.result }}"
- check_result "check_rust" "${{ needs.check_rust.result }}"
- check_result "check_extension" "${{ needs.check_extension.result }}"
+ check_result "orchestrate" "$RESULT_ORCHESTRATE"
+ check_result "check_rust" "$RESULT_CHECK_RUST"
+ check_result "check_extension" "$RESULT_CHECK_EXTENSION"
exit $EXIT_CODE
+ env:
+ RESULT_ORCHESTRATE: ${{ needs.orchestrate.result }}
+ RESULT_CHECK_RUST: ${{ needs.check_rust.result }}
+ RESULT_CHECK_EXTENSION: ${{ needs.check_extension.result }}
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true
diff --git a/.github/workflows/extension_workflow_rollout.yml b/.github/workflows/extension_workflow_rollout.yml
index 109f40c815dbf5222bef7b7d78d544f2b278c21f..9bfac06d4527985553ba3d04e64c656ee5bf85e4 100644
--- a/.github/workflows/extension_workflow_rollout.yml
+++ b/.github/workflows/extension_workflow_rollout.yml
@@ -80,9 +80,7 @@ jobs:
- id: calc-changes
name: extension_workflow_rollout::rollout_workflows_to_extension::get_removed_files
run: |
- PREV_COMMIT="${{ steps.prev-tag.outputs.prev_commit }}"
-
- if [ "${{ matrix.repo }}" = "workflows" ]; then
+ if [ "$MATRIX_REPO" = "workflows" ]; then
WORKFLOW_DIR="extensions/workflows"
else
WORKFLOW_DIR="extensions/workflows/shared"
@@ -101,11 +99,12 @@ jobs:
echo "Files to remove: $REMOVED_FILES"
echo "removed_files=$REMOVED_FILES" >> "$GITHUB_OUTPUT"
+ env:
+ PREV_COMMIT: ${{ steps.prev-tag.outputs.prev_commit }}
+ MATRIX_REPO: ${{ matrix.repo }}
working-directory: zed
- name: extension_workflow_rollout::rollout_workflows_to_extension::sync_workflow_files
run: |
- REMOVED_FILES="${{ steps.calc-changes.outputs.removed_files }}"
-
mkdir -p extension/.github/workflows
cd extension/.github/workflows
@@ -119,15 +118,18 @@ jobs:
cd - > /dev/null
- if [ "${{ matrix.repo }}" = "workflows" ]; then
+ if [ "$MATRIX_REPO" = "workflows" ]; then
cp zed/extensions/workflows/*.yml extension/.github/workflows/
else
cp zed/extensions/workflows/shared/*.yml extension/.github/workflows/
fi
+ env:
+ REMOVED_FILES: ${{ steps.calc-changes.outputs.removed_files }}
+ MATRIX_REPO: ${{ matrix.repo }}
- id: short-sha
name: extension_workflow_rollout::rollout_workflows_to_extension::get_short_sha
run: |
- echo "sha_short=$(git rev-parse --short HEAD)" >> "$GITHUB_OUTPUT"
+ echo "sha_short=$(git rev-parse --short=7 HEAD)" >> "$GITHUB_OUTPUT"
working-directory: zed
- id: create-pr
name: extension_workflow_rollout::rollout_workflows_to_extension::create_pull_request
@@ -148,13 +150,13 @@ jobs:
sign-commits: true
- name: extension_workflow_rollout::rollout_workflows_to_extension::enable_auto_merge
run: |
- PR_NUMBER="${{ steps.create-pr.outputs.pull-request-number }}"
if [ -n "$PR_NUMBER" ]; then
cd extension
gh pr merge "$PR_NUMBER" --auto --squash
fi
env:
GH_TOKEN: ${{ steps.generate-token.outputs.token }}
+ PR_NUMBER: ${{ steps.create-pr.outputs.pull-request-number }}
timeout-minutes: 10
create_rollout_tag:
needs:
diff --git a/.github/workflows/publish_extension_cli.yml b/.github/workflows/publish_extension_cli.yml
index 391baac1cb3aa9da76c4fde39aa6909525541a58..75f1b16b007e33d0c4f346a33a1403648f1cd6c6 100644
--- a/.github/workflows/publish_extension_cli.yml
+++ b/.github/workflows/publish_extension_cli.yml
@@ -27,7 +27,7 @@ jobs:
- name: publish_extension_cli::publish_job::build_extension_cli
run: cargo build --release --package extension_cli
- name: publish_extension_cli::publish_job::upload_binary
- run: script/upload-extension-cli ${{ github.sha }}
+ run: script/upload-extension-cli "$GITHUB_SHA"
env:
DIGITALOCEAN_SPACES_ACCESS_KEY: ${{ secrets.DIGITALOCEAN_SPACES_ACCESS_KEY }}
DIGITALOCEAN_SPACES_SECRET_KEY: ${{ secrets.DIGITALOCEAN_SPACES_SECRET_KEY }}
@@ -55,10 +55,10 @@ jobs:
- id: short-sha
name: publish_extension_cli::get_short_sha
run: |
- echo "sha_short=$(echo "${{ github.sha }}" | cut -c1-7)" >> "$GITHUB_OUTPUT"
+ echo "sha_short=$(echo "$GITHUB_SHA" | cut -c1-7)" >> "$GITHUB_OUTPUT"
- name: publish_extension_cli::update_sha_in_zed::replace_sha
run: |
- sed -i "s/ZED_EXTENSION_CLI_SHA: &str = \"[a-f0-9]*\"/ZED_EXTENSION_CLI_SHA: \&str = \"${{ github.sha }}\"/" \
+ sed -i "s/ZED_EXTENSION_CLI_SHA: &str = \"[a-f0-9]*\"/ZED_EXTENSION_CLI_SHA: \&str = \"$GITHUB_SHA\"/" \
tooling/xtask/src/tasks/workflows/extension_tests.rs
- name: publish_extension_cli::update_sha_in_zed::regenerate_workflows
run: cargo xtask workflows
@@ -97,7 +97,7 @@ jobs:
- id: short-sha
name: publish_extension_cli::get_short_sha
run: |
- echo "sha_short=$(echo "${{ github.sha }}" | cut -c1-7)" >> "$GITHUB_OUTPUT"
+ echo "sha_short=$(echo "$GITHUB_SHA" | cut -c1-7)" >> "$GITHUB_OUTPUT"
- name: publish_extension_cli::update_sha_in_extensions::checkout_extensions_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
@@ -105,7 +105,7 @@ jobs:
token: ${{ steps.generate-token.outputs.token }}
- name: publish_extension_cli::update_sha_in_extensions::replace_sha
run: |
- sed -i "s/ZED_EXTENSION_CLI_SHA: [a-f0-9]*/ZED_EXTENSION_CLI_SHA: ${{ github.sha }}/" \
+ sed -i "s/ZED_EXTENSION_CLI_SHA: [a-f0-9]*/ZED_EXTENSION_CLI_SHA: $GITHUB_SHA/" \
.github/workflows/ci.yml
- name: publish_extension_cli::create_pull_request_extensions
uses: peter-evans/create-pull-request@v7
diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml
index 4442b068a88800e8437d5c6e459acec954308946..8adad5cfba278dc68dd227b86455510278c7a1ae 100644
--- a/.github/workflows/release.yml
+++ b/.github/workflows/release.yml
@@ -53,6 +53,9 @@ jobs:
run_tests_linux:
if: (github.repository_owner == 'zed-industries' || github.repository_owner == 'zed-extensions')
runs-on: namespace-profile-16x32-ubuntu-2204
+ env:
+ CC: clang
+ CXX: clang++
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
@@ -177,6 +180,9 @@ jobs:
clippy_linux:
if: (github.repository_owner == 'zed-industries' || github.repository_owner == 'zed-extensions')
runs-on: namespace-profile-16x32-ubuntu-2204
+ env:
+ CC: clang
+ CXX: clang++
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
@@ -251,8 +257,14 @@ jobs:
name: run_tests::check_scripts::download_actionlint
run: bash <(curl https://raw.githubusercontent.com/rhysd/actionlint/main/scripts/download-actionlint.bash)
- name: run_tests::check_scripts::run_actionlint
- run: |
- ${{ steps.get_actionlint.outputs.executable }} -color
+ run: '"$ACTIONLINT_BIN" -color'
+ env:
+ ACTIONLINT_BIN: ${{ steps.get_actionlint.outputs.executable }}
+ - name: steps::cache_rust_dependencies_namespace
+ uses: namespacelabs/nscloud-cache-action@v1
+ with:
+ cache: rust
+ path: ~/.rustup
- name: run_tests::check_scripts::check_xtask_workflows
run: |
cargo xtask workflows
@@ -293,6 +305,8 @@ jobs:
CARGO_INCREMENTAL: 0
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
+ CC: clang-18
+ CXX: clang++-18
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
@@ -333,6 +347,8 @@ jobs:
CARGO_INCREMENTAL: 0
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
+ CC: clang-18
+ CXX: clang++-18
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
@@ -644,12 +660,7 @@ jobs:
- id: generate-webhook-message
name: release::generate_slack_message
run: |
- MESSAGE=$(DRAFT_RESULT="${{ needs.create_draft_release.result }}"
- UPLOAD_RESULT="${{ needs.upload_release_assets.result }}"
- VALIDATE_RESULT="${{ needs.validate_release_assets.result }}"
- AUTO_RELEASE_RESULT="${{ needs.auto_release_preview.result }}"
- TAG="$GITHUB_REF_NAME"
- RUN_URL="${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
+ MESSAGE=$(TAG="$GITHUB_REF_NAME"
if [ "$DRAFT_RESULT" == "failure" ]; then
echo "❌ Draft release creation failed for $TAG: $RUN_URL"
@@ -659,19 +670,19 @@ jobs:
echo "❌ Release asset upload failed for $TAG: $RELEASE_URL"
elif [ "$UPLOAD_RESULT" == "cancelled" ] || [ "$UPLOAD_RESULT" == "skipped" ]; then
FAILED_JOBS=""
- if [ "${{ needs.run_tests_mac.result }}" == "failure" ];then FAILED_JOBS="$FAILED_JOBS run_tests_mac"; fi
- if [ "${{ needs.run_tests_linux.result }}" == "failure" ];then FAILED_JOBS="$FAILED_JOBS run_tests_linux"; fi
- if [ "${{ needs.run_tests_windows.result }}" == "failure" ];then FAILED_JOBS="$FAILED_JOBS run_tests_windows"; fi
- if [ "${{ needs.clippy_mac.result }}" == "failure" ];then FAILED_JOBS="$FAILED_JOBS clippy_mac"; fi
- if [ "${{ needs.clippy_linux.result }}" == "failure" ];then FAILED_JOBS="$FAILED_JOBS clippy_linux"; fi
- if [ "${{ needs.clippy_windows.result }}" == "failure" ];then FAILED_JOBS="$FAILED_JOBS clippy_windows"; fi
- if [ "${{ needs.check_scripts.result }}" == "failure" ];then FAILED_JOBS="$FAILED_JOBS check_scripts"; fi
- if [ "${{ needs.bundle_linux_aarch64.result }}" == "failure" ];then FAILED_JOBS="$FAILED_JOBS bundle_linux_aarch64"; fi
- if [ "${{ needs.bundle_linux_x86_64.result }}" == "failure" ];then FAILED_JOBS="$FAILED_JOBS bundle_linux_x86_64"; fi
- if [ "${{ needs.bundle_mac_aarch64.result }}" == "failure" ];then FAILED_JOBS="$FAILED_JOBS bundle_mac_aarch64"; fi
- if [ "${{ needs.bundle_mac_x86_64.result }}" == "failure" ];then FAILED_JOBS="$FAILED_JOBS bundle_mac_x86_64"; fi
- if [ "${{ needs.bundle_windows_aarch64.result }}" == "failure" ];then FAILED_JOBS="$FAILED_JOBS bundle_windows_aarch64"; fi
- if [ "${{ needs.bundle_windows_x86_64.result }}" == "failure" ];then FAILED_JOBS="$FAILED_JOBS bundle_windows_x86_64"; fi
+ if [ "$RESULT_RUN_TESTS_MAC" == "failure" ];then FAILED_JOBS="$FAILED_JOBS run_tests_mac"; fi
+ if [ "$RESULT_RUN_TESTS_LINUX" == "failure" ];then FAILED_JOBS="$FAILED_JOBS run_tests_linux"; fi
+ if [ "$RESULT_RUN_TESTS_WINDOWS" == "failure" ];then FAILED_JOBS="$FAILED_JOBS run_tests_windows"; fi
+ if [ "$RESULT_CLIPPY_MAC" == "failure" ];then FAILED_JOBS="$FAILED_JOBS clippy_mac"; fi
+ if [ "$RESULT_CLIPPY_LINUX" == "failure" ];then FAILED_JOBS="$FAILED_JOBS clippy_linux"; fi
+ if [ "$RESULT_CLIPPY_WINDOWS" == "failure" ];then FAILED_JOBS="$FAILED_JOBS clippy_windows"; fi
+ if [ "$RESULT_CHECK_SCRIPTS" == "failure" ];then FAILED_JOBS="$FAILED_JOBS check_scripts"; fi
+ if [ "$RESULT_BUNDLE_LINUX_AARCH64" == "failure" ];then FAILED_JOBS="$FAILED_JOBS bundle_linux_aarch64"; fi
+ if [ "$RESULT_BUNDLE_LINUX_X86_64" == "failure" ];then FAILED_JOBS="$FAILED_JOBS bundle_linux_x86_64"; fi
+ if [ "$RESULT_BUNDLE_MAC_AARCH64" == "failure" ];then FAILED_JOBS="$FAILED_JOBS bundle_mac_aarch64"; fi
+ if [ "$RESULT_BUNDLE_MAC_X86_64" == "failure" ];then FAILED_JOBS="$FAILED_JOBS bundle_mac_x86_64"; fi
+ if [ "$RESULT_BUNDLE_WINDOWS_AARCH64" == "failure" ];then FAILED_JOBS="$FAILED_JOBS bundle_windows_aarch64"; fi
+ if [ "$RESULT_BUNDLE_WINDOWS_X86_64" == "failure" ];then FAILED_JOBS="$FAILED_JOBS bundle_windows_x86_64"; fi
FAILED_JOBS=$(echo "$FAILED_JOBS" | xargs)
if [ "$UPLOAD_RESULT" == "cancelled" ]; then
if [ -n "$FAILED_JOBS" ]; then
@@ -700,12 +711,29 @@ jobs:
echo "message=$MESSAGE" >> "$GITHUB_OUTPUT"
env:
GH_TOKEN: ${{ github.token }}
+ DRAFT_RESULT: ${{ needs.create_draft_release.result }}
+ UPLOAD_RESULT: ${{ needs.upload_release_assets.result }}
+ VALIDATE_RESULT: ${{ needs.validate_release_assets.result }}
+ AUTO_RELEASE_RESULT: ${{ needs.auto_release_preview.result }}
+ RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
+ RESULT_RUN_TESTS_MAC: ${{ needs.run_tests_mac.result }}
+ RESULT_RUN_TESTS_LINUX: ${{ needs.run_tests_linux.result }}
+ RESULT_RUN_TESTS_WINDOWS: ${{ needs.run_tests_windows.result }}
+ RESULT_CLIPPY_MAC: ${{ needs.clippy_mac.result }}
+ RESULT_CLIPPY_LINUX: ${{ needs.clippy_linux.result }}
+ RESULT_CLIPPY_WINDOWS: ${{ needs.clippy_windows.result }}
+ RESULT_CHECK_SCRIPTS: ${{ needs.check_scripts.result }}
+ RESULT_BUNDLE_LINUX_AARCH64: ${{ needs.bundle_linux_aarch64.result }}
+ RESULT_BUNDLE_LINUX_X86_64: ${{ needs.bundle_linux_x86_64.result }}
+ RESULT_BUNDLE_MAC_AARCH64: ${{ needs.bundle_mac_aarch64.result }}
+ RESULT_BUNDLE_MAC_X86_64: ${{ needs.bundle_mac_x86_64.result }}
+ RESULT_BUNDLE_WINDOWS_AARCH64: ${{ needs.bundle_windows_aarch64.result }}
+ RESULT_BUNDLE_WINDOWS_X86_64: ${{ needs.bundle_windows_x86_64.result }}
- name: release::send_slack_message
- run: |
- curl -X POST -H 'Content-type: application/json'\
- --data '{"text":"${{ steps.generate-webhook-message.outputs.message }}"}' "$SLACK_WEBHOOK"
+ run: 'curl -X POST -H ''Content-type: application/json'' --data "$(jq -n --arg text "$SLACK_MESSAGE" ''{"text": $text}'')" "$SLACK_WEBHOOK"'
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK_WORKFLOW_FAILURES }}
+ SLACK_MESSAGE: ${{ steps.generate-webhook-message.outputs.message }}
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true
diff --git a/.github/workflows/release_nightly.yml b/.github/workflows/release_nightly.yml
index d3f01447e52f418713499b84ad454085fd3cb646..46d8732b08ea658275e1fb21117a09b9e0668933 100644
--- a/.github/workflows/release_nightly.yml
+++ b/.github/workflows/release_nightly.yml
@@ -103,6 +103,8 @@ jobs:
CARGO_INCREMENTAL: 0
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
+ CC: clang-18
+ CXX: clang++-18
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
@@ -149,6 +151,8 @@ jobs:
CARGO_INCREMENTAL: 0
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
+ CC: clang-18
+ CXX: clang++-18
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
@@ -550,11 +554,10 @@ jobs:
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: release::send_slack_message
- run: |
- curl -X POST -H 'Content-type: application/json'\
- --data '{"text":"❌ ${{ github.workflow }} failed: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"}' "$SLACK_WEBHOOK"
+ run: 'curl -X POST -H ''Content-type: application/json'' --data "$(jq -n --arg text "$SLACK_MESSAGE" ''{"text": $text}'')" "$SLACK_WEBHOOK"'
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK_WORKFLOW_FAILURES }}
+ SLACK_MESSAGE: '❌ ${{ github.workflow }} failed: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}'
defaults:
run:
shell: bash -euxo pipefail {0}
diff --git a/.github/workflows/run_bundling.yml b/.github/workflows/run_bundling.yml
index 2b536425a1dc4b9663c726fd9259c95e0626efda..7cb1665f9d0bd4fe3b0f3c05527bf39aab5f610a 100644
--- a/.github/workflows/run_bundling.yml
+++ b/.github/workflows/run_bundling.yml
@@ -19,6 +19,8 @@ jobs:
CARGO_INCREMENTAL: 0
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
+ CC: clang-18
+ CXX: clang++-18
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
@@ -58,6 +60,8 @@ jobs:
CARGO_INCREMENTAL: 0
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
+ CC: clang-18
+ CXX: clang++-18
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
diff --git a/.github/workflows/run_cron_unit_evals.yml b/.github/workflows/run_cron_unit_evals.yml
index e57b54e4f2249b92630b2d3636ce2316a0814625..2a204a9d40d78bf52f38825b4db060216e348a87 100644
--- a/.github/workflows/run_cron_unit_evals.yml
+++ b/.github/workflows/run_cron_unit_evals.yml
@@ -16,7 +16,7 @@ jobs:
model:
- anthropic/claude-sonnet-4-5-latest
- anthropic/claude-opus-4-5-latest
- - google/gemini-3-pro
+ - google/gemini-3.1-pro
- openai/gpt-5
fail-fast: false
steps:
diff --git a/.github/workflows/run_tests.yml b/.github/workflows/run_tests.yml
index 07caa6007e87fcd093b40bb9a15108e18b159068..00d69639a53868386157e67aeab5ce7383d32426 100644
--- a/.github/workflows/run_tests.yml
+++ b/.github/workflows/run_tests.yml
@@ -35,7 +35,7 @@ jobs:
git fetch origin "$GITHUB_BASE_REF" --depth=350
COMPARE_REV="$(git merge-base "origin/${GITHUB_BASE_REF}" HEAD)"
fi
- CHANGED_FILES="$(git diff --name-only "$COMPARE_REV" ${{ github.sha }})"
+ CHANGED_FILES="$(git diff --name-only "$COMPARE_REV" "$GITHUB_SHA")"
check_pattern() {
local output_name="$1"
@@ -139,6 +139,21 @@ jobs:
uses: crate-ci/typos@2d0ce569feab1f8752f1dde43cc2f2aa53236e06
with:
config: ./typos.toml
+ - name: run_tests::fetch_ts_query_ls
+ uses: dsaltares/fetch-gh-release-asset@aa37ae5c44d3c9820bc12fe675e8670ecd93bd1c
+ with:
+ repo: ribru17/ts_query_ls
+ version: tags/v3.15.1
+ file: ts_query_ls-x86_64-unknown-linux-gnu.tar.gz
+ - name: run_tests::run_ts_query_ls
+ run: |-
+ tar -xf ts_query_ls-x86_64-unknown-linux-gnu.tar.gz
+ ./ts_query_ls format --check . || {
echo "Found unformatted queries; please format them with ts_query_ls."
echo "For convenience, install the Tree-sitter query extension:"
+ echo "zed://extension/tree-sitter-query"
+ false
+ }
timeout-minutes: 60
clippy_windows:
needs:
@@ -175,6 +190,9 @@ jobs:
- orchestrate
if: needs.orchestrate.outputs.run_tests == 'true'
runs-on: namespace-profile-16x32-ubuntu-2204
+ env:
+ CC: clang
+ CXX: clang++
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
@@ -285,6 +303,9 @@ jobs:
- orchestrate
if: needs.orchestrate.outputs.run_tests == 'true'
runs-on: namespace-profile-16x32-ubuntu-2204
+ env:
+ CC: clang
+ CXX: clang++
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
@@ -385,6 +406,9 @@ jobs:
- orchestrate
if: needs.orchestrate.outputs.run_tests == 'true'
runs-on: namespace-profile-16x32-ubuntu-2204
+ env:
+ CC: clang
+ CXX: clang++
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
@@ -428,6 +452,9 @@ jobs:
- orchestrate
if: needs.orchestrate.outputs.run_tests == 'true'
runs-on: namespace-profile-8x16-ubuntu-2204
+ env:
+ CC: clang
+ CXX: clang++
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
@@ -466,11 +493,53 @@ jobs:
run: |
rm -rf ./../.cargo
timeout-minutes: 60
+ check_wasm:
+ needs:
+ - orchestrate
+ if: needs.orchestrate.outputs.run_tests == 'true'
+ runs-on: namespace-profile-8x16-ubuntu-2204
+ steps:
+ - name: steps::checkout_repo
+ uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
+ with:
+ clean: false
+ - name: steps::setup_cargo_config
+ run: |
+ mkdir -p ./../.cargo
+ cp ./.cargo/ci-config.toml ./../.cargo/config.toml
+ - name: steps::cache_rust_dependencies_namespace
+ uses: namespacelabs/nscloud-cache-action@v1
+ with:
+ cache: rust
+ path: ~/.rustup
+ - name: run_tests::check_wasm::install_nightly_wasm_toolchain
+ run: rustup toolchain install nightly --component rust-src --target wasm32-unknown-unknown
+ - name: steps::setup_sccache
+ run: ./script/setup-sccache
+ env:
+ R2_ACCOUNT_ID: ${{ secrets.R2_ACCOUNT_ID }}
+ R2_ACCESS_KEY_ID: ${{ secrets.R2_ACCESS_KEY_ID }}
+ R2_SECRET_ACCESS_KEY: ${{ secrets.R2_SECRET_ACCESS_KEY }}
+ SCCACHE_BUCKET: sccache-zed
+ - name: run_tests::check_wasm::cargo_check_wasm
+ run: cargo +nightly -Zbuild-std=std,panic_abort check --target wasm32-unknown-unknown -p gpui_platform
+ env:
+ CARGO_TARGET_WASM32_UNKNOWN_UNKNOWN_RUSTFLAGS: -C target-feature=+atomics,+bulk-memory,+mutable-globals
+ - name: steps::show_sccache_stats
+ run: sccache --show-stats || true
+ - name: steps::cleanup_cargo_config
+ if: always()
+ run: |
+ rm -rf ./../.cargo
+ timeout-minutes: 60
check_dependencies:
needs:
- orchestrate
if: needs.orchestrate.outputs.run_tests == 'true'
runs-on: namespace-profile-2x4-ubuntu-2404
+ env:
+ CC: clang
+ CXX: clang++
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
@@ -503,6 +572,9 @@ jobs:
- orchestrate
if: needs.orchestrate.outputs.run_docs == 'true'
runs-on: namespace-profile-8x16-ubuntu-2204
+ env:
+ CC: clang
+ CXX: clang++
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
@@ -581,8 +653,14 @@ jobs:
name: run_tests::check_scripts::download_actionlint
run: bash <(curl https://raw.githubusercontent.com/rhysd/actionlint/main/scripts/download-actionlint.bash)
- name: run_tests::check_scripts::run_actionlint
- run: |
- ${{ steps.get_actionlint.outputs.executable }} -color
+ run: '"$ACTIONLINT_BIN" -color'
+ env:
+ ACTIONLINT_BIN: ${{ steps.get_actionlint.outputs.executable }}
+ - name: steps::cache_rust_dependencies_namespace
+ uses: namespacelabs/nscloud-cache-action@v1
+ with:
+ cache: rust
+ path: ~/.rustup
- name: run_tests::check_scripts::check_xtask_workflows
run: |
cargo xtask workflows
@@ -628,6 +706,10 @@ jobs:
with:
input: crates/proto/proto/
against: https://github.com/${GITHUB_REPOSITORY}.git#branch=${BUF_BASE_BRANCH},subdir=crates/proto/proto/
+ - name: run_tests::check_postgres_and_protobuf_migrations::buf_lint
+ run: buf lint crates/proto/proto
+ - name: run_tests::check_postgres_and_protobuf_migrations::check_protobuf_formatting
+ run: buf format --diff --exit-code crates/proto/proto
timeout-minutes: 60
tests_pass:
needs:
@@ -641,6 +723,7 @@ jobs:
- run_tests_mac
- doctests
- check_workspace_binaries
+ - check_wasm
- check_dependencies
- check_docs
- check_licenses
@@ -658,22 +741,39 @@ jobs:
if [[ "$2" != "skipped" && "$2" != "success" ]]; then EXIT_CODE=1; fi
}
- check_result "orchestrate" "${{ needs.orchestrate.result }}"
- check_result "check_style" "${{ needs.check_style.result }}"
- check_result "clippy_windows" "${{ needs.clippy_windows.result }}"
- check_result "clippy_linux" "${{ needs.clippy_linux.result }}"
- check_result "clippy_mac" "${{ needs.clippy_mac.result }}"
- check_result "run_tests_windows" "${{ needs.run_tests_windows.result }}"
- check_result "run_tests_linux" "${{ needs.run_tests_linux.result }}"
- check_result "run_tests_mac" "${{ needs.run_tests_mac.result }}"
- check_result "doctests" "${{ needs.doctests.result }}"
- check_result "check_workspace_binaries" "${{ needs.check_workspace_binaries.result }}"
- check_result "check_dependencies" "${{ needs.check_dependencies.result }}"
- check_result "check_docs" "${{ needs.check_docs.result }}"
- check_result "check_licenses" "${{ needs.check_licenses.result }}"
- check_result "check_scripts" "${{ needs.check_scripts.result }}"
+ check_result "orchestrate" "$RESULT_ORCHESTRATE"
+ check_result "check_style" "$RESULT_CHECK_STYLE"
+ check_result "clippy_windows" "$RESULT_CLIPPY_WINDOWS"
+ check_result "clippy_linux" "$RESULT_CLIPPY_LINUX"
+ check_result "clippy_mac" "$RESULT_CLIPPY_MAC"
+ check_result "run_tests_windows" "$RESULT_RUN_TESTS_WINDOWS"
+ check_result "run_tests_linux" "$RESULT_RUN_TESTS_LINUX"
+ check_result "run_tests_mac" "$RESULT_RUN_TESTS_MAC"
+ check_result "doctests" "$RESULT_DOCTESTS"
+ check_result "check_workspace_binaries" "$RESULT_CHECK_WORKSPACE_BINARIES"
+ check_result "check_wasm" "$RESULT_CHECK_WASM"
+ check_result "check_dependencies" "$RESULT_CHECK_DEPENDENCIES"
+ check_result "check_docs" "$RESULT_CHECK_DOCS"
+ check_result "check_licenses" "$RESULT_CHECK_LICENSES"
+ check_result "check_scripts" "$RESULT_CHECK_SCRIPTS"
exit $EXIT_CODE
+ env:
+ RESULT_ORCHESTRATE: ${{ needs.orchestrate.result }}
+ RESULT_CHECK_STYLE: ${{ needs.check_style.result }}
+ RESULT_CLIPPY_WINDOWS: ${{ needs.clippy_windows.result }}
+ RESULT_CLIPPY_LINUX: ${{ needs.clippy_linux.result }}
+ RESULT_CLIPPY_MAC: ${{ needs.clippy_mac.result }}
+ RESULT_RUN_TESTS_WINDOWS: ${{ needs.run_tests_windows.result }}
+ RESULT_RUN_TESTS_LINUX: ${{ needs.run_tests_linux.result }}
+ RESULT_RUN_TESTS_MAC: ${{ needs.run_tests_mac.result }}
+ RESULT_DOCTESTS: ${{ needs.doctests.result }}
+ RESULT_CHECK_WORKSPACE_BINARIES: ${{ needs.check_workspace_binaries.result }}
+ RESULT_CHECK_WASM: ${{ needs.check_wasm.result }}
+ RESULT_CHECK_DEPENDENCIES: ${{ needs.check_dependencies.result }}
+ RESULT_CHECK_DOCS: ${{ needs.check_docs.result }}
+ RESULT_CHECK_LICENSES: ${{ needs.check_licenses.result }}
+ RESULT_CHECK_SCRIPTS: ${{ needs.check_scripts.result }}
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true
diff --git a/.github/workflows/slack_notify_first_responders.yml b/.github/workflows/slack_notify_first_responders.yml
index a6f2d557a574778aea6c2a90f9721b5a41bd0724..538d02b582f18db627693b62e439f4142ea29056 100644
--- a/.github/workflows/slack_notify_first_responders.yml
+++ b/.github/workflows/slack_notify_first_responders.yml
@@ -17,8 +17,9 @@ jobs:
id: check-label
env:
LABEL_NAME: ${{ github.event.label.name }}
+ FIRST_RESPONDER_LABELS: ${{ env.FIRST_RESPONDER_LABELS }}
run: |
- if echo '${{ env.FIRST_RESPONDER_LABELS }}' | jq -e --arg label "$LABEL_NAME" 'index($label) != null' > /dev/null; then
+ if echo "$FIRST_RESPONDER_LABELS" | jq -e --arg label "$LABEL_NAME" 'index($label) != null' > /dev/null; then
echo "should_notify=true" >> "$GITHUB_OUTPUT"
echo "Label '$LABEL_NAME' requires first responder notification"
else
diff --git a/.github/workflows/update_duplicate_magnets.yml b/.github/workflows/update_duplicate_magnets.yml
index 1c6c5a562532891eb97ceb11f44b81f35612c026..c3832b7bdbec13f74a8136cb1120a682f6e53920 100644
--- a/.github/workflows/update_duplicate_magnets.yml
+++ b/.github/workflows/update_duplicate_magnets.yml
@@ -21,7 +21,9 @@ jobs:
run: pip install requests
- name: Update duplicate magnets issue
+ env:
+ GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
python script/github-find-top-duplicated-bugs.py \
- --github-token ${{ secrets.GITHUB_TOKEN }} \
+ --github-token "$GITHUB_TOKEN" \
--issue-number 46355
diff --git a/Cargo.lock b/Cargo.lock
index 934e0d1a01482d57e456057860ee45037f39d570..4dbd905beb51bda4f5ff061179d144d9cd255e9a 100644
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -76,6 +76,7 @@ dependencies = [
"clock",
"collections",
"ctor",
+ "fs",
"futures 0.3.31",
"gpui",
"indoc",
@@ -169,7 +170,7 @@ dependencies = [
"context_server",
"ctor",
"db",
- "derive_more 0.99.20",
+ "derive_more",
"editor",
"env_logger 0.11.8",
"eval_utils",
@@ -241,7 +242,7 @@ dependencies = [
"anyhow",
"async-broadcast",
"async-trait",
- "derive_more 2.0.1",
+ "derive_more",
"futures 0.3.31",
"log",
"serde",
@@ -255,7 +256,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "44bc1fef9c32f03bce2ab44af35b6f483bfd169bf55cc59beeb2e3b1a00ae4d1"
dependencies = [
"anyhow",
- "derive_more 2.0.1",
+ "derive_more",
"schemars",
"serde",
"serde_json",
@@ -368,6 +369,7 @@ dependencies = [
"fs",
"futures 0.3.31",
"fuzzy",
+ "git",
"gpui",
"gpui_tokio",
"html_to_markdown",
@@ -601,6 +603,17 @@ version = "0.1.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4b46cbb362ab8752921c97e041f5e366ee6297bd428a31275b9fcf1e380f7299"
+[[package]]
+name = "annotate-snippets"
+version = "0.12.12"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "c86cd1c51b95d71dde52bca69ed225008f6ff4c8cc825b08042aa1ef823e1980"
+dependencies = [
+ "anstyle",
+ "memchr",
+ "unicode-width",
+]
+
[[package]]
name = "anstream"
version = "0.6.21"
@@ -692,6 +705,15 @@ dependencies = [
"num-traits",
]
+[[package]]
+name = "ar_archive_writer"
+version = "0.5.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "7eb93bbb63b9c227414f6eb3a0adfddca591a8ce1e9b60661bb08969b87e340b"
+dependencies = [
+ "object 0.37.3",
+]
+
[[package]]
name = "arbitrary"
version = "1.4.2"
@@ -756,19 +778,16 @@ dependencies = [
[[package]]
name = "ashpd"
-version = "0.12.1"
+version = "0.13.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "618a409b91d5265798a99e3d1d0b226911605e581c4e7255e83c1e397b172bce"
+checksum = "0848bedd08067dca1c02c31cbb371a94ad4f2f8a61a82f2c43d96ec36a395244"
dependencies = [
- "async-fs",
- "async-net",
"enumflags2",
"futures-channel",
"futures-util",
- "rand 0.9.2",
+ "getrandom 0.4.1",
"serde",
"serde_repr",
- "url",
"wayland-backend",
"wayland-client",
"wayland-protocols",
@@ -807,7 +826,7 @@ dependencies = [
"anyhow",
"async-trait",
"collections",
- "derive_more 0.99.20",
+ "derive_more",
"extension",
"futures 0.3.31",
"gpui",
@@ -1005,7 +1024,7 @@ version = "2.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8034a681df4aed8b8edbd7fbe472401ecf009251c8b40556b304567052e294c5"
dependencies = [
- "async-lock 3.4.1",
+ "async-lock 3.4.2",
"blocking",
"futures-lite 2.6.1",
]
@@ -1019,7 +1038,7 @@ dependencies = [
"async-channel 2.5.0",
"async-executor",
"async-io",
- "async-lock 3.4.1",
+ "async-lock 3.4.2",
"blocking",
"futures-lite 2.6.1",
"once_cell",
@@ -1054,9 +1073,9 @@ dependencies = [
[[package]]
name = "async-lock"
-version = "3.4.1"
+version = "3.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "5fd03604047cee9b6ce9de9f70c6cd540a0520c813cbd49bae61f33ab80ed1dc"
+checksum = "290f7f2596bd5b78a9fec8088ccd89180d7f9f55b94b0576823bbbdc72ee8311"
dependencies = [
"event-listener 5.4.1",
"event-listener-strategy",
@@ -1091,7 +1110,7 @@ checksum = "fc50921ec0055cdd8a16de48773bfeec5c972598674347252c0399676be7da75"
dependencies = [
"async-channel 2.5.0",
"async-io",
- "async-lock 3.4.1",
+ "async-lock 3.4.2",
"async-signal",
"async-task",
"blocking",
@@ -1119,7 +1138,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "43c070bbf59cd3570b6b2dd54cd772527c7c3620fce8be898406dd3ed6adc64c"
dependencies = [
"async-io",
- "async-lock 3.4.1",
+ "async-lock 3.4.2",
"atomic-waker",
"cfg-if",
"futures-core",
@@ -1140,7 +1159,7 @@ dependencies = [
"async-channel 1.9.0",
"async-global-executor",
"async-io",
- "async-lock 3.4.1",
+ "async-lock 3.4.2",
"async-process",
"crossbeam-utils",
"futures-channel",
@@ -1345,6 +1364,7 @@ version = "0.1.0"
dependencies = [
"anyhow",
"log",
+ "scopeguard",
"simplelog",
"tempfile",
"windows 0.61.3",
@@ -2166,6 +2186,16 @@ dependencies = [
"piper",
]
+[[package]]
+name = "bmrng"
+version = "0.5.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d54df9073108f1558f90ae6c5bf5ab9c917c4185f5527b280c87a993cbead0ac"
+dependencies = [
+ "futures-core",
+ "tokio",
+]
+
[[package]]
name = "bon"
version = "3.8.2"
@@ -2748,6 +2778,16 @@ dependencies = [
"target-lexicon 0.12.16",
]
+[[package]]
+name = "cfg-expr"
+version = "0.20.6"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "78cef5b5a1a6827c7322ae2a636368a573006b27cfa76c7ebd53e834daeaab6a"
+dependencies = [
+ "smallvec",
+ "target-lexicon 0.13.3",
+]
+
[[package]]
name = "cfg-if"
version = "1.0.4"
@@ -2973,7 +3013,7 @@ dependencies = [
"cloud_llm_client",
"collections",
"credentials_provider",
- "derive_more 0.99.20",
+ "derive_more",
"feature_flags",
"fs",
"futures 0.3.31",
@@ -3411,7 +3451,7 @@ name = "command_palette_hooks"
version = "0.1.0"
dependencies = [
"collections",
- "derive_more 0.99.20",
+ "derive_more",
"gpui",
"workspace",
]
@@ -3497,6 +3537,16 @@ dependencies = [
"windows-sys 0.59.0",
]
+[[package]]
+name = "console_error_panic_hook"
+version = "0.1.7"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "a06aeb73f470f66dcdbf7223caeebb85984942f22f1adb2a088cf9668146bbbc"
+dependencies = [
+ "cfg-if",
+ "wasm-bindgen",
+]
+
[[package]]
name = "const-oid"
version = "0.9.6"
@@ -3577,15 +3627,18 @@ dependencies = [
[[package]]
name = "convert_case"
-version = "0.4.0"
+version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "6245d59a3e82a7fc217c5828a6692dbc6dfb63a0c8c90495621f7b9d79704a0e"
+checksum = "baaaa0ecca5b51987b9423ccdc971514dd8b0bb7b4060b983d3664dad3f1f89f"
+dependencies = [
+ "unicode-segmentation",
+]
[[package]]
name = "convert_case"
-version = "0.8.0"
+version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "baaaa0ecca5b51987b9423ccdc971514dd8b0bb7b4060b983d3664dad3f1f89f"
+checksum = "633458d4ef8c78b72454de2d54fd6ab2e60f9e02be22f3c6104cdc8a4e0fceb9"
dependencies = [
"unicode-segmentation",
]
@@ -4084,13 +4137,13 @@ dependencies = [
name = "crashes"
version = "0.1.0"
dependencies = [
- "bincode",
"cfg-if",
"crash-handler",
"futures 0.3.31",
"log",
"mach2 0.5.0",
"minidumper",
+ "parking_lot",
"paths",
"release_channel",
"serde",
@@ -4278,7 +4331,6 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1bfb12502f3fc46cca1bb51ac28df9d618d813cdc3d2f25b9fe775a34af26bb3"
dependencies = [
"generic-array",
- "rand_core 0.6.4",
"typenum",
]
@@ -4305,6 +4357,20 @@ dependencies = [
"syn 2.0.106",
]
+[[package]]
+name = "csv_preview"
+version = "0.1.0"
+dependencies = [
+ "anyhow",
+ "editor",
+ "feature_flags",
+ "gpui",
+ "log",
+ "text",
+ "ui",
+ "workspace",
+]
+
[[package]]
name = "ctor"
version = "0.4.3"
@@ -4643,7 +4709,6 @@ dependencies = [
"sysinfo 0.37.2",
"task",
"tasks_ui",
- "telemetry",
"terminal_view",
"text",
"theme",
@@ -4743,34 +4808,23 @@ dependencies = [
[[package]]
name = "derive_more"
-version = "0.99.20"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "6edb4b64a43d977b8e99788fe3a04d483834fba1215a7e02caa415b626497f7f"
-dependencies = [
- "convert_case 0.4.0",
- "proc-macro2",
- "quote",
- "rustc_version",
- "syn 2.0.106",
-]
-
-[[package]]
-name = "derive_more"
-version = "2.0.1"
+version = "2.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "093242cf7570c207c83073cf82f79706fe7b8317e98620a47d5be7c3d8497678"
+checksum = "d751e9e49156b02b44f9c1815bcb94b984cdcc4396ecc32521c739452808b134"
dependencies = [
"derive_more-impl",
]
[[package]]
name = "derive_more-impl"
-version = "2.0.1"
+version = "2.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "bda628edc44c4bb645fbe0f758797143e4e07926f7ebf4e9bdfbd3d2ce621df3"
+checksum = "799a97264921d8623a957f6c3b9011f3b5492f557bbb7a5a19b7fa6d06ba8dcb"
dependencies = [
+ "convert_case 0.10.0",
"proc-macro2",
"quote",
+ "rustc_version",
"syn 2.0.106",
"unicode-xid",
]
@@ -4966,11 +5020,13 @@ checksum = "bd0c93bb4b0c6d9b77f4435b0ae98c24d17f1c45b2ff844c6151a07256ca923b"
[[package]]
name = "dispatch2"
-version = "0.3.0"
+version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "89a09f22a6c6069a18470eb92d2298acf25463f14256d24778e1230d789a2aec"
+checksum = "1e0e367e4e7da84520dedcac1901e4da967309406d1e51017ae1abfb97adbd38"
dependencies = [
"bitflags 2.10.0",
+ "block2",
+ "libc",
"objc2",
]
@@ -5367,7 +5423,6 @@ dependencies = [
"semver",
"serde_json",
"settings",
- "supermaven",
"telemetry",
"text",
"theme",
@@ -6247,6 +6302,12 @@ version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
+[[package]]
+name = "fixedbitset"
+version = "0.5.7"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "1d674e81391d1e1ab681a28d99df07927c6d4aa5b027d7da16ba32d1d21ecd99"
+
[[package]]
name = "flate2"
version = "1.1.8"
@@ -6596,6 +6657,19 @@ dependencies = [
"futures-sink",
]
+[[package]]
+name = "futures-concurrency"
+version = "7.7.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "175cd8cca9e1d45b87f18ffa75088f2099e3c4fe5e2f83e42de112560bea8ea6"
+dependencies = [
+ "fixedbitset 0.5.7",
+ "futures-core",
+ "futures-lite 2.6.1",
+ "pin-project",
+ "smallvec",
+]
+
[[package]]
name = "futures-core"
version = "0.3.31"
@@ -7040,13 +7114,26 @@ dependencies = [
"wasm-bindgen",
]
+[[package]]
+name = "getrandom"
+version = "0.4.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "139ef39800118c7683f2fd3c98c1b23c09ae076556b435f8e9064ae108aaeeec"
+dependencies = [
+ "cfg-if",
+ "libc",
+ "r-efi",
+ "wasip2",
+ "wasip3",
+]
+
[[package]]
name = "gh-workflow"
version = "0.8.0"
source = "git+https://github.com/zed-industries/gh-workflow?rev=c9eac0ed361583e1072860d96776fa52775b82ac#c9eac0ed361583e1072860d96776fa52775b82ac"
dependencies = [
"async-trait",
- "derive_more 2.0.1",
+ "derive_more",
"derive_setters",
"gh-workflow-macros",
"indexmap",
@@ -7094,6 +7181,19 @@ version = "0.32.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e629b9b98ef3dd8afe6ca2bd0f89306cec16d43d907889945bc5d6687f2f13c7"
+[[package]]
+name = "gio-sys"
+version = "0.21.5"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "0071fe88dba8e40086c8ff9bbb62622999f49628344b1d1bf490a48a29d80f22"
+dependencies = [
+ "glib-sys",
+ "gobject-sys",
+ "libc",
+ "system-deps 7.0.7",
+ "windows-sys 0.61.2",
+]
+
[[package]]
name = "git"
version = "0.1.0"
@@ -7102,7 +7202,7 @@ dependencies = [
"askpass",
"async-trait",
"collections",
- "derive_more 0.99.20",
+ "derive_more",
"futures 0.3.31",
"git2",
"gpui",
@@ -7208,6 +7308,7 @@ dependencies = [
"ctor",
"db",
"editor",
+ "feature_flags",
"futures 0.3.31",
"fuzzy",
"git",
@@ -7228,6 +7329,7 @@ dependencies = [
"pretty_assertions",
"project",
"prompt_store",
+ "proto",
"rand 0.9.2",
"remote",
"remote_connection",
@@ -7267,6 +7369,50 @@ dependencies = [
"xml-rs",
]
+[[package]]
+name = "glib"
+version = "0.21.5"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "16de123c2e6c90ce3b573b7330de19be649080ec612033d397d72da265f1bd8b"
+dependencies = [
+ "bitflags 2.10.0",
+ "futures-channel",
+ "futures-core",
+ "futures-executor",
+ "futures-task",
+ "futures-util",
+ "gio-sys",
+ "glib-macros",
+ "glib-sys",
+ "gobject-sys",
+ "libc",
+ "memchr",
+ "smallvec",
+]
+
+[[package]]
+name = "glib-macros"
+version = "0.21.5"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "cf59b675301228a696fe01c3073974643365080a76cc3ed5bc2cbc466ad87f17"
+dependencies = [
+ "heck 0.5.0",
+ "proc-macro-crate",
+ "proc-macro2",
+ "quote",
+ "syn 2.0.106",
+]
+
+[[package]]
+name = "glib-sys"
+version = "0.21.5"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "2d95e1a3a19ae464a7286e14af9a90683c64d70c02532d88d87ce95056af3e6c"
+dependencies = [
+ "libc",
+ "system-deps 7.0.7",
+]
+
[[package]]
name = "glob"
version = "0.3.3"
@@ -7342,6 +7488,17 @@ dependencies = [
"workspace",
]
+[[package]]
+name = "gobject-sys"
+version = "0.21.5"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "2dca35da0d19a18f4575f3cb99fe1c9e029a2941af5662f326f738a21edaf294"
+dependencies = [
+ "glib-sys",
+ "libc",
+ "system-deps 7.0.7",
+]
+
[[package]]
name = "goblin"
version = "0.8.2"
@@ -7406,6 +7563,7 @@ name = "gpui"
version = "0.2.2"
dependencies = [
"anyhow",
+ "async-channel 2.5.0",
"async-task",
"backtrace",
"bindgen 0.71.1",
@@ -7423,14 +7581,18 @@ dependencies = [
"core-text",
"core-video",
"ctor",
- "derive_more 0.99.20",
+ "derive_more",
"embed-resource",
"env_logger 0.11.8",
"etagere",
"foreign-types 0.5.0",
"futures 0.3.31",
+ "futures-concurrency",
+ "getrandom 0.3.4",
"gpui_macros",
"gpui_platform",
+ "gpui_util",
+ "gpui_web",
"http_client",
"image",
"inventory",
@@ -7440,7 +7602,7 @@ dependencies = [
"mach2 0.5.0",
"media",
"metal",
- "naga",
+ "naga 28.0.0",
"num_cpus",
"objc",
"objc2",
@@ -7449,6 +7611,7 @@ dependencies = [
"parking_lot",
"pathfinder_geometry",
"pin-project",
+ "pollster 0.4.0",
"postage",
"pretty_assertions",
"profiling",
@@ -7464,7 +7627,6 @@ dependencies = [
"serde_json",
"slotmap",
"smallvec",
- "smol",
"spin 0.10.0",
"stacksafe",
"strum 0.27.2",
@@ -7472,11 +7634,13 @@ dependencies = [
"taffy",
"thiserror 2.0.17",
"unicode-segmentation",
+ "url",
"usvg",
- "util",
"util_macros",
"uuid",
"waker-fn",
+ "wasm-bindgen",
+ "web-time",
"windows 0.61.3",
"zed-font-kit",
"zed-scap",
@@ -7494,7 +7658,6 @@ dependencies = [
"calloop",
"calloop-wayland-source",
"collections",
- "cosmic-text",
"filedescriptor",
"futures 0.3.31",
"gpui",
@@ -7507,12 +7670,14 @@ dependencies = [
"open",
"parking_lot",
"pathfinder_geometry",
+ "pollster 0.4.0",
"profiling",
"raw-window-handle",
"smallvec",
"smol",
"strum 0.27.2",
"swash",
+ "url",
"util",
"uuid",
"wayland-backend",
@@ -7524,7 +7689,6 @@ dependencies = [
"x11-clipboard",
"x11rb",
"xkbcommon",
- "zed-font-kit",
"zed-scap",
"zed-xim",
]
@@ -7535,7 +7699,6 @@ version = "0.1.0"
dependencies = [
"anyhow",
"async-task",
- "bindgen 0.71.1",
"block",
"cbindgen",
"cocoa 0.26.0",
@@ -7546,7 +7709,8 @@ dependencies = [
"core-text",
"core-video",
"ctor",
- "derive_more 0.99.20",
+ "derive_more",
+ "dispatch2",
"etagere",
"foreign-types 0.5.0",
"futures 0.3.31",
@@ -7585,9 +7749,11 @@ dependencies = [
name = "gpui_platform"
version = "0.1.0"
dependencies = [
+ "console_error_panic_hook",
"gpui",
"gpui_linux",
"gpui_macos",
+ "gpui_web",
"gpui_windows",
]
@@ -7601,6 +7767,37 @@ dependencies = [
"util",
]
+[[package]]
+name = "gpui_util"
+version = "0.1.0"
+dependencies = [
+ "anyhow",
+ "log",
+]
+
+[[package]]
+name = "gpui_web"
+version = "0.1.0"
+dependencies = [
+ "anyhow",
+ "console_error_panic_hook",
+ "futures 0.3.31",
+ "gpui",
+ "gpui_wgpu",
+ "http_client",
+ "js-sys",
+ "log",
+ "parking_lot",
+ "raw-window-handle",
+ "smallvec",
+ "uuid",
+ "wasm-bindgen",
+ "wasm-bindgen-futures",
+ "wasm_thread",
+ "web-sys",
+ "web-time",
+]
+
[[package]]
name = "gpui_wgpu"
version = "0.1.0"
@@ -7608,15 +7805,24 @@ dependencies = [
"anyhow",
"bytemuck",
"collections",
+ "cosmic-text",
"etagere",
"gpui",
+ "gpui_util",
+ "itertools 0.14.0",
+ "js-sys",
"log",
"parking_lot",
+ "pollster 0.4.0",
"profiling",
"raw-window-handle",
- "smol",
- "util",
+ "smallvec",
+ "swash",
+ "wasm-bindgen",
+ "wasm-bindgen-futures",
+ "web-sys",
"wgpu",
+ "zed-font-kit",
]
[[package]]
@@ -8061,7 +8267,7 @@ dependencies = [
"async-fs",
"async-tar",
"bytes 1.11.1",
- "derive_more 0.99.20",
+ "derive_more",
"futures 0.3.31",
"http 1.3.1",
"http-body 1.0.1",
@@ -8837,9 +9043,9 @@ dependencies = [
[[package]]
name = "js-sys"
-version = "0.3.81"
+version = "0.3.90"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "ec48937a97411dcb524a265206ccd4c90bb711fca92b2792c407f268825b9305"
+checksum = "14dc6f6450b3f6d4ed5b16327f38fed626d375a886159ca555bd7822c0c3a5a6"
dependencies = [
"once_cell",
"wasm-bindgen",
@@ -8938,9 +9144,9 @@ dependencies = [
[[package]]
name = "jupyter-protocol"
-version = "1.2.1"
+version = "1.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8c75a69caf8b8e781224badfb76c4a8da4d49856de36ce72ae3cf5d4a1c94e42"
+checksum = "4649647741f9794a7a02e3be976f1b248ba28a37dbfc626d5089316fd4fbf4c8"
dependencies = [
"async-trait",
"bytes 1.11.1",
@@ -9271,6 +9477,7 @@ dependencies = [
"open_path_prompt",
"picker",
"project",
+ "serde_json",
"settings",
"ui",
"util",
@@ -9511,10 +9718,11 @@ dependencies = [
[[package]]
name = "libwebrtc"
-version = "0.3.10"
-source = "git+https://github.com/zed-industries/livekit-rust-sdks?rev=5f04705ac3f356350ae31534ffbc476abc9ea83d#5f04705ac3f356350ae31534ffbc476abc9ea83d"
+version = "0.3.26"
+source = "git+https://github.com/zed-industries/livekit-rust-sdks?rev=37835f840d0070d45ac8b31cce6a6ae7aca3f459#37835f840d0070d45ac8b31cce6a6ae7aca3f459"
dependencies = [
"cxx",
+ "glib",
"jni",
"js-sys",
"lazy_static",
@@ -9608,9 +9816,12 @@ checksum = "11d3d7f243d5c5a8b9bb5d6dd2b1602c0cb0b9db1621bafc7ed66e35ff9fe092"
[[package]]
name = "livekit"
-version = "0.7.8"
-source = "git+https://github.com/zed-industries/livekit-rust-sdks?rev=5f04705ac3f356350ae31534ffbc476abc9ea83d#5f04705ac3f356350ae31534ffbc476abc9ea83d"
+version = "0.7.32"
+source = "git+https://github.com/zed-industries/livekit-rust-sdks?rev=37835f840d0070d45ac8b31cce6a6ae7aca3f459#37835f840d0070d45ac8b31cce6a6ae7aca3f459"
dependencies = [
+ "base64 0.22.1",
+ "bmrng",
+ "bytes 1.11.1",
"chrono",
"futures-util",
"lazy_static",
@@ -9631,11 +9842,12 @@ dependencies = [
[[package]]
name = "livekit-api"
-version = "0.4.2"
-source = "git+https://github.com/zed-industries/livekit-rust-sdks?rev=5f04705ac3f356350ae31534ffbc476abc9ea83d#5f04705ac3f356350ae31534ffbc476abc9ea83d"
+version = "0.4.14"
+source = "git+https://github.com/zed-industries/livekit-rust-sdks?rev=37835f840d0070d45ac8b31cce6a6ae7aca3f459#37835f840d0070d45ac8b31cce6a6ae7aca3f459"
dependencies = [
+ "base64 0.21.7",
"futures-util",
- "http 0.2.12",
+ "http 1.3.1",
"livekit-protocol",
"livekit-runtime",
"log",
@@ -9643,20 +9855,22 @@ dependencies = [
"pbjson-types",
"prost 0.12.6",
"rand 0.9.2",
- "reqwest 0.11.27",
+ "reqwest 0.12.24",
+ "rustls-native-certs 0.6.3",
"scopeguard",
"serde",
"sha2",
"thiserror 1.0.69",
"tokio",
- "tokio-tungstenite 0.26.2",
+ "tokio-rustls 0.26.2",
+ "tokio-tungstenite 0.28.0",
"url",
]
[[package]]
name = "livekit-protocol"
-version = "0.3.9"
-source = "git+https://github.com/zed-industries/livekit-rust-sdks?rev=5f04705ac3f356350ae31534ffbc476abc9ea83d#5f04705ac3f356350ae31534ffbc476abc9ea83d"
+version = "0.7.1"
+source = "git+https://github.com/zed-industries/livekit-rust-sdks?rev=37835f840d0070d45ac8b31cce6a6ae7aca3f459#37835f840d0070d45ac8b31cce6a6ae7aca3f459"
dependencies = [
"futures-util",
"livekit-runtime",
@@ -9664,7 +9878,6 @@ dependencies = [
"pbjson",
"pbjson-types",
"prost 0.12.6",
- "prost-types 0.12.6",
"serde",
"thiserror 1.0.69",
"tokio",
@@ -9673,7 +9886,7 @@ dependencies = [
[[package]]
name = "livekit-runtime"
version = "0.4.0"
-source = "git+https://github.com/zed-industries/livekit-rust-sdks?rev=5f04705ac3f356350ae31534ffbc476abc9ea83d#5f04705ac3f356350ae31534ffbc476abc9ea83d"
+source = "git+https://github.com/zed-industries/livekit-rust-sdks?rev=37835f840d0070d45ac8b31cce6a6ae7aca3f459#37835f840d0070d45ac8b31cce6a6ae7aca3f459"
dependencies = [
"tokio",
"tokio-stream",
@@ -9729,7 +9942,6 @@ dependencies = [
"sha2",
"simplelog",
"smallvec",
- "tokio-tungstenite 0.26.2",
"ui",
"util",
"zed-scap",
@@ -9824,6 +10036,7 @@ dependencies = [
"ctor",
"futures 0.3.31",
"gpui",
+ "gpui_util",
"log",
"lsp-types",
"parking_lot",
@@ -9990,6 +10203,7 @@ dependencies = [
"language",
"linkify",
"log",
+ "markdown",
"markup5ever_rcdom",
"mermaid-rs-renderer",
"pretty_assertions",
@@ -10208,7 +10422,7 @@ dependencies = [
[[package]]
name = "mermaid-rs-renderer"
version = "0.2.0"
-source = "git+https://github.com/zed-industries/mermaid-rs-renderer?branch=fix-font-family-xml-escaping#d91961aa90bc7b0c09c87a13c91d48e2f05c468d"
+source = "git+https://github.com/zed-industries/mermaid-rs-renderer?rev=374db9ead5426697c6c2111151d9f246899bc638#374db9ead5426697c6c2111151d9f246899bc638"
dependencies = [
"anyhow",
"fontdb 0.16.2",
@@ -10489,17 +10703,35 @@ version = "0.8.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e5ce46fe64a9d73be07dcbe690a38ce1b293be448fd8ce1e6c1b8062c9f72c6a"
-[[package]]
-name = "multimap"
-version = "0.10.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "1d87ecb2933e8aeadb3e3a02b828fed80a7528047e68b4f424523a0981a3a084"
-
[[package]]
name = "naga"
version = "28.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "618f667225063219ddfc61251087db8a9aec3c3f0950c916b614e403486f1135"
+dependencies = [
+ "arrayvec",
+ "bit-set",
+ "bitflags 2.10.0",
+ "cfg-if",
+ "cfg_aliases 0.2.1",
+ "codespan-reporting 0.12.0",
+ "half",
+ "hashbrown 0.16.1",
+ "hexf-parse",
+ "indexmap",
+ "libm",
+ "log",
+ "num-traits",
+ "once_cell",
+ "rustc-hash 1.1.0",
+ "thiserror 2.0.17",
+ "unicode-ident",
+]
+
+[[package]]
+name = "naga"
+version = "28.0.1"
+source = "git+https://github.com/zed-industries/wgpu?rev=9459e95113c5bd116b2cc2c87e8424b28059e17c#9459e95113c5bd116b2cc2c87e8424b28059e17c"
dependencies = [
"arrayvec",
"bit-set",
@@ -10558,9 +10790,9 @@ dependencies = [
[[package]]
name = "nbformat"
-version = "1.1.0"
+version = "1.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b10a89a2d910233ec3fca4de359b16ebe95e833c8b2162643ef98c6053a0549d"
+checksum = "d4983a40792c45e8639f77ef8e4461c55679cbc618f4b9e83830e8c7e79c8383"
dependencies = [
"anyhow",
"chrono",
@@ -10661,7 +10893,6 @@ dependencies = [
"cfg-if",
"cfg_aliases 0.2.1",
"libc",
- "memoffset",
]
[[package]]
@@ -10847,6 +11078,22 @@ dependencies = [
"num-iter",
"num-traits",
"rand 0.8.5",
+ "smallvec",
+ "zeroize",
+]
+
+[[package]]
+name = "num-bigint-dig"
+version = "0.9.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "a7f9a86e097b0d187ad0e65667c2f58b9254671e86e7dbb78036b16692eae099"
+dependencies = [
+ "libm",
+ "num-integer",
+ "num-iter",
+ "num-traits",
+ "once_cell",
+ "rand 0.9.2",
"serde",
"smallvec",
"zeroize",
@@ -11220,15 +11467,15 @@ checksum = "a4895175b425cb1f87721b59f0f286c2092bd4af812243672510e1ac53e2e0ad"
[[package]]
name = "oo7"
-version = "0.5.0"
+version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e3299dd401feaf1d45afd8fd1c0586f10fcfb22f244bb9afa942cec73503b89d"
+checksum = "78f2bfed90f1618b4b48dcad9307f25e14ae894e2949642c87c351601d62cebd"
dependencies = [
"aes",
"ashpd",
"async-fs",
"async-io",
- "async-lock 3.4.1",
+ "async-lock 3.4.2",
"blocking",
"cbc",
"cipher",
@@ -11236,15 +11483,15 @@ dependencies = [
"endi",
"futures-lite 2.6.1",
"futures-util",
- "getrandom 0.3.4",
+ "getrandom 0.4.1",
"hkdf",
"hmac",
"md-5",
"num",
- "num-bigint-dig",
+ "num-bigint-dig 0.9.1",
"pbkdf2 0.12.2",
- "rand 0.9.2",
"serde",
+ "serde_bytes",
"sha2",
"subtle",
"zbus",
@@ -12224,7 +12471,7 @@ version = "0.6.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b4c5cc86750666a3ed20bdaf5ca2a0344f9c67674cae0515bec2da16fbaa47db"
dependencies = [
- "fixedbitset",
+ "fixedbitset 0.4.2",
"indexmap",
]
@@ -12336,14 +12583,12 @@ name = "picker"
version = "0.1.0"
dependencies = [
"anyhow",
- "ctor",
"editor",
- "env_logger 0.11.8",
"gpui",
"menu",
"schemars",
"serde",
- "serde_json",
+ "settings",
"theme",
"ui",
"ui_input",
@@ -12449,6 +12694,7 @@ version = "0.1.0"
dependencies = [
"feature_flags",
"gpui",
+ "project",
"settings",
"smallvec",
"theme",
@@ -12545,6 +12791,12 @@ version = "0.2.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5da3b0203fd7ee5720aa0b5e790b591aa5d3f41c3ed2c34a3a393382198af2f7"
+[[package]]
+name = "pollster"
+version = "0.4.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "2f3a9f18d041e6d0e102a0a46750538147e5e8992d3b4873aaafee2520b00ce3"
+
[[package]]
name = "pori"
version = "0.0.0"
@@ -12602,7 +12854,7 @@ dependencies = [
"log",
"parking_lot",
"pin-project",
- "pollster",
+ "pollster 0.2.5",
"static_assertions",
"thiserror 1.0.69",
]
@@ -13046,7 +13298,7 @@ dependencies = [
"itertools 0.10.5",
"lazy_static",
"log",
- "multimap 0.8.3",
+ "multimap",
"petgraph",
"prost 0.9.0",
"prost-types 0.9.0",
@@ -13065,7 +13317,7 @@ dependencies = [
"heck 0.5.0",
"itertools 0.12.1",
"log",
- "multimap 0.10.1",
+ "multimap",
"once_cell",
"petgraph",
"prettyplease",
@@ -13155,10 +13407,11 @@ dependencies = [
[[package]]
name = "psm"
-version = "0.1.27"
+version = "0.1.30"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e66fcd288453b748497d8fb18bccc83a16b0518e3906d4b8df0a8d42d93dbb1c"
+checksum = "3852766467df634d74f0b2d7819bf8dc483a0eb2e3b0f50f756f9cfe8b0d18d8"
dependencies = [
+ "ar_archive_writer",
"cc",
]
@@ -13550,7 +13803,7 @@ dependencies = [
"rand 0.8.5",
"rand_chacha 0.3.1",
"simd_helpers",
- "system-deps",
+ "system-deps 6.2.2",
"thiserror 1.0.69",
"v_frame",
"wasm-bindgen",
@@ -14031,6 +14284,7 @@ dependencies = [
"serde",
"serde_json",
"settings",
+ "shlex",
"smol",
"telemetry",
"terminal",
@@ -14061,7 +14315,6 @@ dependencies = [
"http 0.2.12",
"http-body 0.4.6",
"hyper 0.14.32",
- "hyper-rustls 0.24.2",
"hyper-tls",
"ipnet",
"js-sys",
@@ -14071,8 +14324,6 @@ dependencies = [
"once_cell",
"percent-encoding",
"pin-project-lite",
- "rustls 0.21.12",
- "rustls-native-certs 0.6.3",
"rustls-pemfile 1.0.4",
"serde",
"serde_json",
@@ -14081,7 +14332,6 @@ dependencies = [
"system-configuration 0.5.1",
"tokio",
"tokio-native-tls",
- "tokio-rustls 0.24.1",
"tower-service",
"url",
"wasm-bindgen",
@@ -14105,16 +14355,22 @@ dependencies = [
"http-body 1.0.1",
"http-body-util",
"hyper 1.7.0",
+ "hyper-rustls 0.27.7",
"hyper-util",
"js-sys",
"log",
"percent-encoding",
"pin-project-lite",
+ "quinn",
+ "rustls 0.23.33",
+ "rustls-native-certs 0.8.2",
+ "rustls-pki-types",
"serde",
"serde_json",
"serde_urlencoded",
"sync_wrapper 1.0.2",
"tokio",
+ "tokio-rustls 0.26.2",
"tower 0.5.2",
"tower-http 0.6.6",
"tower-service",
@@ -14132,13 +14388,13 @@ dependencies = [
"bytes 1.11.1",
"futures 0.3.31",
"gpui",
+ "gpui_util",
"http_client",
"http_client_tls",
"log",
"regex",
"serde",
"tokio",
- "util",
"zed-reqwest",
]
@@ -14338,7 +14594,7 @@ checksum = "b8573f03f5883dcaebdfcf4725caa1ecb9c15b2ef50c43a07b816e06799bb12d"
dependencies = [
"const-oid",
"digest",
- "num-bigint-dig",
+ "num-bigint-dig 0.8.6",
"num-integer",
"num-traits",
"pkcs1",
@@ -14395,9 +14651,9 @@ dependencies = [
[[package]]
name = "runtimelib"
-version = "1.2.0"
+version = "1.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "d80685459e1e5fa5603182058351ae91c98ca458dfef4e85f0a37be4f7cf1e6c"
+checksum = "fa84884e45ed4a1e663120cef3fc11f14d1a2a1933776e1c31599f7bd2dd0c9e"
dependencies = [
"async-dispatcher",
"async-std",
@@ -14772,6 +15028,7 @@ dependencies = [
"futures 0.3.31",
"parking_lot",
"rand 0.9.2",
+ "web-time",
]
[[package]]
@@ -15101,6 +15358,16 @@ dependencies = [
"serde_derive",
]
+[[package]]
+name = "serde_bytes"
+version = "0.11.19"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "a5d440709e79d88e51ac01c4b72fc6cb7314017bb7da9eeff678aa94c10e3ea8"
+dependencies = [
+ "serde",
+ "serde_core",
+]
+
[[package]]
name = "serde_core"
version = "1.0.228"
@@ -15292,7 +15559,7 @@ version = "0.1.0"
dependencies = [
"anyhow",
"collections",
- "derive_more 0.99.20",
+ "derive_more",
"gpui",
"log",
"schemars",
@@ -15511,22 +15778,26 @@ name = "sidebar"
version = "0.1.0"
dependencies = [
"acp_thread",
+ "agent",
+ "agent-client-protocol",
"agent_ui",
+ "assistant_text_thread",
"chrono",
"editor",
"feature_flags",
"fs",
- "fuzzy",
"gpui",
- "picker",
+ "language_model",
+ "menu",
"project",
"recent_projects",
+ "serde_json",
"settings",
"theme",
"ui",
- "ui_input",
"util",
"workspace",
+ "zed_actions",
]
[[package]]
@@ -15711,7 +15982,7 @@ dependencies = [
"async-executor",
"async-fs",
"async-io",
- "async-lock 3.4.1",
+ "async-lock 3.4.2",
"async-net",
"async-process",
"blocking",
@@ -16106,9 +16377,9 @@ checksum = "6ce2be8dc25455e1f91df71bfa12ad37d7af1092ae736f3a6cd0e37bc7810596"
[[package]]
name = "stacker"
-version = "0.1.22"
+version = "0.1.23"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e1f8b29fb42aafcea4edeeb6b2f2d7ecd0d969c48b4cf0d2e64aafc471dd6e59"
+checksum = "08d74a23609d509411d10e2176dc2a4346e3b4aea2e7b1869f19fdedbc71c013"
dependencies = [
"cc",
"cfg-if",
@@ -16301,49 +16572,6 @@ dependencies = [
"ztracing",
]
-[[package]]
-name = "supermaven"
-version = "0.1.0"
-dependencies = [
- "anyhow",
- "client",
- "collections",
- "edit_prediction_types",
- "editor",
- "env_logger 0.11.8",
- "futures 0.3.31",
- "gpui",
- "http_client",
- "language",
- "log",
- "postage",
- "project",
- "serde",
- "serde_json",
- "settings",
- "smol",
- "supermaven_api",
- "text",
- "theme",
- "ui",
- "unicode-segmentation",
- "util",
-]
-
-[[package]]
-name = "supermaven_api"
-version = "0.1.0"
-dependencies = [
- "anyhow",
- "futures 0.3.31",
- "http_client",
- "paths",
- "serde",
- "serde_json",
- "smol",
- "util",
-]
-
[[package]]
name = "sval"
version = "2.15.0"
@@ -16779,13 +17007,26 @@ version = "6.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a3e535eb8dded36d55ec13eddacd30dec501792ff23a0b1682c38601b8cf2349"
dependencies = [
- "cfg-expr",
+ "cfg-expr 0.15.8",
"heck 0.5.0",
"pkg-config",
"toml 0.8.23",
"version-compare",
]
+[[package]]
+name = "system-deps"
+version = "7.0.7"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "48c8f33736f986f16d69b6cb8b03f55ddcad5c41acc4ccc39dd88e84aa805e7f"
+dependencies = [
+ "cfg-expr 0.20.6",
+ "heck 0.5.0",
+ "pkg-config",
+ "toml 0.9.8",
+ "version-compare",
+]
+
[[package]]
name = "system-interface"
version = "0.27.3"
@@ -17105,7 +17346,7 @@ version = "0.1.0"
dependencies = [
"anyhow",
"collections",
- "derive_more 0.99.20",
+ "derive_more",
"fs",
"futures 0.3.31",
"gpui",
@@ -17292,6 +17533,7 @@ dependencies = [
"core-foundation-sys",
"sys-locale",
"time",
+ "windows 0.61.3",
]
[[package]]
@@ -17537,17 +17779,18 @@ dependencies = [
[[package]]
name = "tokio-tungstenite"
-version = "0.26.2"
+version = "0.28.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "7a9daff607c6d2bf6c16fd681ccb7eecc83e4e2cdc1ca067ffaadfca5de7f084"
+checksum = "d25a406cddcc431a75d3d9afc6a7c0f7428d4891dd973e4d54c56b46127bf857"
dependencies = [
"futures-util",
"log",
"rustls 0.23.33",
+ "rustls-native-certs 0.8.2",
"rustls-pki-types",
"tokio",
"tokio-rustls 0.26.2",
- "tungstenite 0.26.2",
+ "tungstenite 0.28.0",
]
[[package]]
@@ -18203,9 +18446,9 @@ dependencies = [
[[package]]
name = "tungstenite"
-version = "0.26.2"
+version = "0.27.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "4793cb5e56680ecbb1d843515b23b6de9a75eb04b66643e256a396d43be33c13"
+checksum = "eadc29d668c91fcc564941132e17b28a7ceb2f3ebf0b9dae3e03fd7a6748eb0d"
dependencies = [
"bytes 1.11.1",
"data-encoding",
@@ -18222,9 +18465,9 @@ dependencies = [
[[package]]
name = "tungstenite"
-version = "0.27.0"
+version = "0.28.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "eadc29d668c91fcc564941132e17b28a7ceb2f3ebf0b9dae3e03fd7a6748eb0d"
+checksum = "8628dcc84e5a09eb3d8423d6cb682965dea9133204e8fb3efee74c2a0c259442"
dependencies = [
"bytes 1.11.1",
"data-encoding",
@@ -18564,6 +18807,7 @@ dependencies = [
"futures-lite 1.13.0",
"git2",
"globset",
+ "gpui_util",
"indoc",
"itertools 0.14.0",
"libc",
@@ -18883,6 +19127,15 @@ dependencies = [
"wit-bindgen 0.46.0",
]
+[[package]]
+name = "wasip3"
+version = "0.4.0+wasi-0.3.0-rc-2026-01-06"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "5428f8bf88ea5ddc08faddef2ac4a67e390b88186c703ce6dbd955e1c145aca5"
+dependencies = [
+ "wit-bindgen 0.51.0",
+]
+
[[package]]
name = "wasite"
version = "0.1.0"
@@ -18891,9 +19144,9 @@ checksum = "b8dad83b4f25e74f184f64c43b150b91efe7647395b42289f38e50566d82855b"
[[package]]
name = "wasm-bindgen"
-version = "0.2.104"
+version = "0.2.113"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c1da10c01ae9f1ae40cbfac0bac3b1e724b320abfcf52229f80b547c0d250e2d"
+checksum = "60722a937f594b7fde9adb894d7c092fc1bb6612897c46368d18e7a20208eff2"
dependencies = [
"cfg-if",
"once_cell",
@@ -18902,27 +19155,14 @@ dependencies = [
"wasm-bindgen-shared",
]
-[[package]]
-name = "wasm-bindgen-backend"
-version = "0.2.104"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "671c9a5a66f49d8a47345ab942e2cb93c7d1d0339065d4f8139c486121b43b19"
-dependencies = [
- "bumpalo",
- "log",
- "proc-macro2",
- "quote",
- "syn 2.0.106",
- "wasm-bindgen-shared",
-]
-
[[package]]
name = "wasm-bindgen-futures"
-version = "0.4.54"
+version = "0.4.63"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "7e038d41e478cc73bae0ff9b36c60cff1c98b8f38f8d7e8061e79ee63608ac5c"
+checksum = "8a89f4650b770e4521aa6573724e2aed4704372151bd0de9d16a3bbabb87441a"
dependencies = [
"cfg-if",
+ "futures-util",
"js-sys",
"once_cell",
"wasm-bindgen",
@@ -18931,9 +19171,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen-macro"
-version = "0.2.104"
+version = "0.2.113"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "7ca60477e4c59f5f2986c50191cd972e3a50d8a95603bc9434501cf156a9a119"
+checksum = "0fac8c6395094b6b91c4af293f4c79371c163f9a6f56184d2c9a85f5a95f3950"
dependencies = [
"quote",
"wasm-bindgen-macro-support",
@@ -18941,22 +19181,22 @@ dependencies = [
[[package]]
name = "wasm-bindgen-macro-support"
-version = "0.2.104"
+version = "0.2.113"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "9f07d2f20d4da7b26400c9f4a0511e6e0345b040694e8a75bd41d578fa4421d7"
+checksum = "ab3fabce6159dc20728033842636887e4877688ae94382766e00b180abac9d60"
dependencies = [
+ "bumpalo",
"proc-macro2",
"quote",
"syn 2.0.106",
- "wasm-bindgen-backend",
"wasm-bindgen-shared",
]
[[package]]
name = "wasm-bindgen-shared"
-version = "0.2.104"
+version = "0.2.113"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "bad67dc8b2a1a6e5448428adec4c3e84c43e561d8c9ee8a9e5aabeb193ec41d1"
+checksum = "de0e091bdb824da87dc01d967388880d017a0a9bc4f3bdc0d86ee9f9336e3bb5"
dependencies = [
"unicode-ident",
]
@@ -19000,6 +19240,16 @@ dependencies = [
"wasmparser 0.229.0",
]
+[[package]]
+name = "wasm-encoder"
+version = "0.244.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "990065f2fe63003fe337b932cfb5e3b80e0b4d0f5ff650e6985b1048f62c8319"
+dependencies = [
+ "leb128fmt",
+ "wasmparser 0.244.0",
+]
+
[[package]]
name = "wasm-metadata"
version = "0.201.0"
@@ -19035,6 +19285,18 @@ dependencies = [
"wasmparser 0.227.1",
]
+[[package]]
+name = "wasm-metadata"
+version = "0.244.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "bb0e353e6a2fbdc176932bbaab493762eb1255a7900fe0fea1a2f96c296cc909"
+dependencies = [
+ "anyhow",
+ "indexmap",
+ "wasm-encoder 0.244.0",
+ "wasmparser 0.244.0",
+]
+
[[package]]
name = "wasm-streams"
version = "0.4.2"
@@ -19048,6 +19310,18 @@ dependencies = [
"web-sys",
]
+[[package]]
+name = "wasm_thread"
+version = "0.3.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b7516db7f32decdadb1c3b8deb1b7d78b9df7606c5cc2f6241737c2ab3a0258e"
+dependencies = [
+ "futures 0.3.31",
+ "js-sys",
+ "wasm-bindgen",
+ "web-sys",
+]
+
[[package]]
name = "wasmparser"
version = "0.201.0"
@@ -19097,6 +19371,18 @@ dependencies = [
"serde",
]
+[[package]]
+name = "wasmparser"
+version = "0.244.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "47b807c72e1bac69382b3a6fb3dbe8ea4c0ed87ff5629b8685ae6b9a611028fe"
+dependencies = [
+ "bitflags 2.10.0",
+ "hashbrown 0.15.5",
+ "indexmap",
+ "semver",
+]
+
[[package]]
name = "wasmprinter"
version = "0.229.0"
@@ -19525,9 +19811,9 @@ dependencies = [
[[package]]
name = "web-sys"
-version = "0.3.81"
+version = "0.3.90"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "9367c417a924a74cae129e6a2ae3b47fabb1f8995595ab474029da749a8be120"
+checksum = "705eceb4ce901230f8625bd1d665128056ccbe4b7408faa625eec1ba80f59a97"
dependencies = [
"js-sys",
"wasm-bindgen",
@@ -19572,6 +19858,7 @@ version = "0.1.0"
dependencies = [
"anyhow",
"client",
+ "cloud_api_types",
"cloud_llm_client",
"futures 0.3.31",
"gpui",
@@ -19602,25 +19889,27 @@ dependencies = [
[[package]]
name = "webrtc-sys"
-version = "0.3.7"
-source = "git+https://github.com/zed-industries/livekit-rust-sdks?rev=5f04705ac3f356350ae31534ffbc476abc9ea83d#5f04705ac3f356350ae31534ffbc476abc9ea83d"
+version = "0.3.23"
+source = "git+https://github.com/zed-industries/livekit-rust-sdks?rev=37835f840d0070d45ac8b31cce6a6ae7aca3f459#37835f840d0070d45ac8b31cce6a6ae7aca3f459"
dependencies = [
"cc",
"cxx",
"cxx-build",
"glob",
"log",
+ "pkg-config",
"webrtc-sys-build",
]
[[package]]
name = "webrtc-sys-build"
-version = "0.3.6"
-source = "git+https://github.com/zed-industries/livekit-rust-sdks?rev=5f04705ac3f356350ae31534ffbc476abc9ea83d#5f04705ac3f356350ae31534ffbc476abc9ea83d"
+version = "0.3.13"
+source = "git+https://github.com/zed-industries/livekit-rust-sdks?rev=37835f840d0070d45ac8b31cce6a6ae7aca3f459#37835f840d0070d45ac8b31cce6a6ae7aca3f459"
dependencies = [
+ "anyhow",
"fs2",
"regex",
- "reqwest 0.11.27",
+ "reqwest 0.12.24",
"scratch",
"semver",
"zip 0.6.6",
@@ -19634,9 +19923,8 @@ checksum = "a751b3277700db47d3e574514de2eced5e54dc8a5436a3bf7a0b248b2cee16f3"
[[package]]
name = "wgpu"
-version = "28.0.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "f9cb534d5ffd109c7d1135f34cdae29e60eab94855a625dcfe1705f8bc7ad79f"
+version = "28.0.1"
+source = "git+https://github.com/zed-industries/wgpu?rev=9459e95113c5bd116b2cc2c87e8424b28059e17c#9459e95113c5bd116b2cc2c87e8424b28059e17c"
dependencies = [
"arrayvec",
"bitflags 2.10.0",
@@ -19647,7 +19935,7 @@ dependencies = [
"hashbrown 0.16.1",
"js-sys",
"log",
- "naga",
+ "naga 28.0.1",
"parking_lot",
"portable-atomic",
"profiling",
@@ -19664,9 +19952,8 @@ dependencies = [
[[package]]
name = "wgpu-core"
-version = "28.0.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8bb4c8b5db5f00e56f1f08869d870a0dff7c8bc7ebc01091fec140b0cf0211a9"
+version = "28.0.1"
+source = "git+https://github.com/zed-industries/wgpu?rev=9459e95113c5bd116b2cc2c87e8424b28059e17c#9459e95113c5bd116b2cc2c87e8424b28059e17c"
dependencies = [
"arrayvec",
"bit-set",
@@ -19678,7 +19965,7 @@ dependencies = [
"hashbrown 0.16.1",
"indexmap",
"log",
- "naga",
+ "naga 28.0.1",
"once_cell",
"parking_lot",
"portable-atomic",
@@ -19696,36 +19983,32 @@ dependencies = [
[[package]]
name = "wgpu-core-deps-apple"
-version = "28.0.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "87b7b696b918f337c486bf93142454080a32a37832ba8a31e4f48221890047da"
+version = "28.0.1"
+source = "git+https://github.com/zed-industries/wgpu?rev=9459e95113c5bd116b2cc2c87e8424b28059e17c#9459e95113c5bd116b2cc2c87e8424b28059e17c"
dependencies = [
"wgpu-hal",
]
[[package]]
name = "wgpu-core-deps-emscripten"
-version = "28.0.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "34b251c331f84feac147de3c4aa3aa45112622a95dd7ee1b74384fa0458dbd79"
+version = "28.0.1"
+source = "git+https://github.com/zed-industries/wgpu?rev=9459e95113c5bd116b2cc2c87e8424b28059e17c#9459e95113c5bd116b2cc2c87e8424b28059e17c"
dependencies = [
"wgpu-hal",
]
[[package]]
name = "wgpu-core-deps-windows-linux-android"
-version = "28.0.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "68ca976e72b2c9964eb243e281f6ce7f14a514e409920920dcda12ae40febaae"
+version = "28.0.1"
+source = "git+https://github.com/zed-industries/wgpu?rev=9459e95113c5bd116b2cc2c87e8424b28059e17c#9459e95113c5bd116b2cc2c87e8424b28059e17c"
dependencies = [
"wgpu-hal",
]
[[package]]
name = "wgpu-hal"
-version = "28.0.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "293080d77fdd14d6b08a67c5487dfddbf874534bb7921526db56a7b75d7e3bef"
+version = "28.0.1"
+source = "git+https://github.com/zed-industries/wgpu?rev=9459e95113c5bd116b2cc2c87e8424b28059e17c#9459e95113c5bd116b2cc2c87e8424b28059e17c"
dependencies = [
"android_system_properties",
"arrayvec",
@@ -19748,7 +20031,7 @@ dependencies = [
"libloading",
"log",
"metal",
- "naga",
+ "naga 28.0.1",
"ndk-sys",
"objc",
"once_cell",
@@ -19771,9 +20054,8 @@ dependencies = [
[[package]]
name = "wgpu-types"
-version = "28.0.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e18308757e594ed2cd27dddbb16a139c42a683819d32a2e0b1b0167552f5840c"
+version = "28.0.1"
+source = "git+https://github.com/zed-industries/wgpu?rev=9459e95113c5bd116b2cc2c87e8424b28059e17c#9459e95113c5bd116b2cc2c87e8424b28059e17c"
dependencies = [
"bitflags 2.10.0",
"bytemuck",
@@ -20701,6 +20983,15 @@ version = "0.46.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f17a85883d4e6d00e8a97c586de764dabcc06133f7f1d55dce5cdc070ad7fe59"
+[[package]]
+name = "wit-bindgen"
+version = "0.51.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d7249219f66ced02969388cf2bb044a09756a083d0fab1e566056b04d9fbcaa5"
+dependencies = [
+ "wit-bindgen-rust-macro 0.51.0",
+]
+
[[package]]
name = "wit-bindgen-core"
version = "0.22.0"
@@ -20722,6 +21013,17 @@ dependencies = [
"wit-parser 0.227.1",
]
+[[package]]
+name = "wit-bindgen-core"
+version = "0.51.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "ea61de684c3ea68cb082b7a88508a8b27fcc8b797d738bfc99a82facf1d752dc"
+dependencies = [
+ "anyhow",
+ "heck 0.5.0",
+ "wit-parser 0.244.0",
+]
+
[[package]]
name = "wit-bindgen-rt"
version = "0.22.0"
@@ -20769,6 +21071,22 @@ dependencies = [
"wit-component 0.227.1",
]
+[[package]]
+name = "wit-bindgen-rust"
+version = "0.51.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b7c566e0f4b284dd6561c786d9cb0142da491f46a9fbed79ea69cdad5db17f21"
+dependencies = [
+ "anyhow",
+ "heck 0.5.0",
+ "indexmap",
+ "prettyplease",
+ "syn 2.0.106",
+ "wasm-metadata 0.244.0",
+ "wit-bindgen-core 0.51.0",
+ "wit-component 0.244.0",
+]
+
[[package]]
name = "wit-bindgen-rust-macro"
version = "0.22.0"
@@ -20798,6 +21116,21 @@ dependencies = [
"wit-bindgen-rust 0.41.0",
]
+[[package]]
+name = "wit-bindgen-rust-macro"
+version = "0.51.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "0c0f9bfd77e6a48eccf51359e3ae77140a7f50b1e2ebfe62422d8afdaffab17a"
+dependencies = [
+ "anyhow",
+ "prettyplease",
+ "proc-macro2",
+ "quote",
+ "syn 2.0.106",
+ "wit-bindgen-core 0.51.0",
+ "wit-bindgen-rust 0.51.0",
+]
+
[[package]]
name = "wit-component"
version = "0.201.0"
@@ -20836,6 +21169,25 @@ dependencies = [
"wit-parser 0.227.1",
]
+[[package]]
+name = "wit-component"
+version = "0.244.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "9d66ea20e9553b30172b5e831994e35fbde2d165325bec84fc43dbf6f4eb9cb2"
+dependencies = [
+ "anyhow",
+ "bitflags 2.10.0",
+ "indexmap",
+ "log",
+ "serde",
+ "serde_derive",
+ "serde_json",
+ "wasm-encoder 0.244.0",
+ "wasm-metadata 0.244.0",
+ "wasmparser 0.244.0",
+ "wit-parser 0.244.0",
+]
+
[[package]]
name = "wit-parser"
version = "0.201.0"
@@ -20890,6 +21242,24 @@ dependencies = [
"wasmparser 0.229.0",
]
+[[package]]
+name = "wit-parser"
+version = "0.244.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "ecc8ac4bc1dc3381b7f59c34f00b67e18f910c2c0f50015669dde7def656a736"
+dependencies = [
+ "anyhow",
+ "id-arena",
+ "indexmap",
+ "log",
+ "semver",
+ "serde",
+ "serde_derive",
+ "serde_json",
+ "unicode-xid",
+ "wasmparser 0.244.0",
+]
+
[[package]]
name = "witx"
version = "0.9.1"
@@ -20909,7 +21279,6 @@ dependencies = [
"any_vec",
"anyhow",
"async-recursion",
- "call",
"chrono",
"client",
"clock",
@@ -21174,6 +21543,7 @@ checksum = "ec7a2a501ed189703dba8b08142f057e887dfc4b2cc4db2d343ac6376ba3e0b9"
name = "xtask"
version = "0.1.0"
dependencies = [
+ "annotate-snippets",
"anyhow",
"backtrace",
"cargo_metadata",
@@ -21182,8 +21552,12 @@ dependencies = [
"gh-workflow",
"indexmap",
"indoc",
+ "itertools 0.14.0",
+ "regex",
"serde",
"serde_json",
+ "serde_yaml",
+ "strum 0.27.2",
"toml 0.8.23",
"toml_edit 0.22.27",
]
@@ -21301,14 +21675,14 @@ dependencies = [
[[package]]
name = "zbus"
-version = "5.12.0"
+version = "5.13.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b622b18155f7a93d1cd2dc8c01d2d6a44e08fb9ebb7b3f9e6ed101488bad6c91"
+checksum = "1bfeff997a0aaa3eb20c4652baf788d2dfa6d2839a0ead0b3ff69ce2f9c4bdd1"
dependencies = [
"async-broadcast",
"async-executor",
"async-io",
- "async-lock 3.4.1",
+ "async-lock 3.4.2",
"async-process",
"async-recursion",
"async-task",
@@ -21319,8 +21693,9 @@ dependencies = [
"futures-core",
"futures-lite 2.6.1",
"hex",
- "nix 0.30.1",
+ "libc",
"ordered-stream",
+ "rustix 1.1.2",
"serde",
"serde_repr",
"tracing",
@@ -21335,9 +21710,9 @@ dependencies = [
[[package]]
name = "zbus_macros"
-version = "5.12.0"
+version = "5.13.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "1cdb94821ca8a87ca9c298b5d1cbd80e2a8b67115d99f6e4551ac49e42b6a314"
+checksum = "0bbd5a90dbe8feee5b13def448427ae314ccd26a49cac47905cafefb9ff846f1"
dependencies = [
"proc-macro-crate",
"proc-macro2",
@@ -21350,19 +21725,18 @@ dependencies = [
[[package]]
name = "zbus_names"
-version = "4.2.0"
+version = "4.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "7be68e64bf6ce8db94f63e72f0c7eb9a60d733f7e0499e628dfab0f84d6bcb97"
+checksum = "ffd8af6d5b78619bab301ff3c560a5bd22426150253db278f164d6cf3b72c50f"
dependencies = [
"serde",
- "static_assertions",
"winnow",
"zvariant",
]
[[package]]
name = "zed"
-version = "0.226.0"
+version = "0.228.0"
dependencies = [
"acp_thread",
"acp_tools",
@@ -21380,7 +21754,6 @@ dependencies = [
"audio",
"auto_update",
"auto_update_ui",
- "bincode",
"breadcrumbs",
"call",
"channel",
@@ -21399,6 +21772,7 @@ dependencies = [
"copilot_chat",
"copilot_ui",
"crashes",
+ "csv_preview",
"dap",
"dap_adapters",
"db",
@@ -21463,6 +21837,7 @@ dependencies = [
"parking_lot",
"paths",
"picker",
+ "pkg-config",
"pretty_assertions",
"profiling",
"project",
@@ -21490,7 +21865,6 @@ dependencies = [
"smol",
"snippet_provider",
"snippets_ui",
- "supermaven",
"svg_preview",
"sysinfo 0.37.2",
"system_specs",
@@ -21979,14 +22353,14 @@ dependencies = [
[[package]]
name = "zvariant"
-version = "5.8.0"
+version = "5.9.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "2be61892e4f2b1772727be11630a62664a1826b62efa43a6fe7449521cb8744c"
+checksum = "68b64ef4f40c7951337ddc7023dd03528a57a3ce3408ee9da5e948bd29b232c4"
dependencies = [
"endi",
"enumflags2",
"serde",
- "url",
+ "serde_bytes",
"winnow",
"zvariant_derive",
"zvariant_utils",
@@ -21994,9 +22368,9 @@ dependencies = [
[[package]]
name = "zvariant_derive"
-version = "5.8.0"
+version = "5.9.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "da58575a1b2b20766513b1ec59d8e2e68db2745379f961f86650655e862d2006"
+checksum = "484d5d975eb7afb52cc6b929c13d3719a20ad650fea4120e6310de3fc55e415c"
dependencies = [
"proc-macro-crate",
"proc-macro2",
@@ -22007,9 +22381,9 @@ dependencies = [
[[package]]
name = "zvariant_utils"
-version = "3.2.1"
+version = "3.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c6949d142f89f6916deca2232cf26a8afacf2b9fdc35ce766105e104478be599"
+checksum = "f75c23a64ef8f40f13a6989991e643554d9bef1d682a281160cf0c1bc389c5e9"
dependencies = [
"proc-macro2",
"quote",
diff --git a/Cargo.toml b/Cargo.toml
index 49b765c512accc3a19662da41520061479b8cc44..b8e57bda7e46ea45451fedd6759268235c7d71ab 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -1,8 +1,8 @@
[workspace]
resolver = "2"
members = [
- "crates/acp_tools",
"crates/acp_thread",
+ "crates/acp_tools",
"crates/action_log",
"crates/activity_indicator",
"crates/agent",
@@ -13,9 +13,9 @@ members = [
"crates/anthropic",
"crates/askpass",
"crates/assets",
- "crates/assistant_text_thread",
"crates/assistant_slash_command",
"crates/assistant_slash_commands",
+ "crates/assistant_text_thread",
"crates/audio",
"crates/auto_update",
"crates/auto_update_helper",
@@ -32,6 +32,7 @@ members = [
"crates/cloud_api_client",
"crates/cloud_api_types",
"crates/cloud_llm_client",
+ "crates/codestral",
"crates/collab",
"crates/collab_ui",
"crates/collections",
@@ -44,6 +45,7 @@ members = [
"crates/copilot_chat",
"crates/crashes",
"crates/credentials_provider",
+ "crates/csv_preview",
"crates/dap",
"crates/dap_adapters",
"crates/db",
@@ -56,9 +58,10 @@ members = [
"crates/diagnostics",
"crates/docs_preprocessor",
"crates/edit_prediction",
+ "crates/edit_prediction_cli",
+ "crates/edit_prediction_context",
"crates/edit_prediction_types",
"crates/edit_prediction_ui",
- "crates/edit_prediction_context",
"crates/editor",
"crates/encoding_selector",
"crates/etw_tracing",
@@ -88,9 +91,11 @@ members = [
"crates/gpui_macos",
"crates/gpui_macros",
"crates/gpui_platform",
+ "crates/gpui_tokio",
+ "crates/gpui_util",
+ "crates/gpui_web",
"crates/gpui_wgpu",
"crates/gpui_windows",
- "crates/gpui_tokio",
"crates/html_to_markdown",
"crates/http_client",
"crates/http_client_tls",
@@ -119,8 +124,8 @@ members = [
"crates/media",
"crates/menu",
"crates/migrator",
- "crates/mistral",
"crates/miniprofiler_ui",
+ "crates/mistral",
"crates/multi_buffer",
"crates/nc",
"crates/net",
@@ -136,6 +141,7 @@ members = [
"crates/panel",
"crates/paths",
"crates/picker",
+ "crates/platform_title_bar",
"crates/prettier",
"crates/project",
"crates/project_benchmarks",
@@ -147,7 +153,6 @@ members = [
"crates/refineable",
"crates/refineable/derive_refineable",
"crates/release_channel",
- "crates/scheduler",
"crates/remote",
"crates/remote_connection",
"crates/remote_server",
@@ -157,10 +162,10 @@ members = [
"crates/rope",
"crates/rpc",
"crates/rules_library",
+ "crates/scheduler",
"crates/schema_generator",
"crates/search",
"crates/session",
- "crates/sidebar",
"crates/settings",
"crates/settings_content",
"crates/settings_json",
@@ -168,6 +173,7 @@ members = [
"crates/settings_profile_selector",
"crates/settings_ui",
"crates/shell_command_parser",
+ "crates/sidebar",
"crates/snippet",
"crates/snippet_provider",
"crates/snippets_ui",
@@ -177,9 +183,6 @@ members = [
"crates/storybook",
"crates/streaming_diff",
"crates/sum_tree",
- "crates/supermaven",
- "crates/supermaven_api",
- "crates/codestral",
"crates/svg_preview",
"crates/system_specs",
"crates/tab_switcher",
@@ -195,7 +198,6 @@ members = [
"crates/theme_importer",
"crates/theme_selector",
"crates/time_format",
- "crates/platform_title_bar",
"crates/title_bar",
"crates/toolchain_selector",
"crates/ui",
@@ -207,10 +209,10 @@ members = [
"crates/vercel",
"crates/vim",
"crates/vim_mode_setting",
- "crates/which_key",
"crates/watch",
"crates/web_search",
"crates/web_search_providers",
+ "crates/which_key",
"crates/workspace",
"crates/worktree",
"crates/worktree_benchmarks",
@@ -218,7 +220,6 @@ members = [
"crates/zed",
"crates/zed_actions",
"crates/zed_env_vars",
- "crates/edit_prediction_cli",
"crates/zeta_prompt",
"crates/zlog",
"crates/zlog_settings",
@@ -298,6 +299,7 @@ copilot_ui = { path = "crates/copilot_ui" }
crashes = { path = "crates/crashes" }
credentials_provider = { path = "crates/credentials_provider" }
crossbeam = "0.8.4"
+csv_preview = { path = "crates/csv_preview" }
dap = { path = "crates/dap" }
dap_adapters = { path = "crates/dap_adapters" }
db = { path = "crates/db" }
@@ -332,9 +334,11 @@ gpui_linux = { path = "crates/gpui_linux", default-features = false }
gpui_macos = { path = "crates/gpui_macos", default-features = false }
gpui_macros = { path = "crates/gpui_macros" }
gpui_platform = { path = "crates/gpui_platform", default-features = false }
+gpui_web = { path = "crates/gpui_web" }
gpui_wgpu = { path = "crates/gpui_wgpu" }
gpui_windows = { path = "crates/gpui_windows", default-features = false }
gpui_tokio = { path = "crates/gpui_tokio" }
+gpui_util = { path = "crates/gpui_util" }
html_to_markdown = { path = "crates/html_to_markdown" }
http_client = { path = "crates/http_client" }
http_client_tls = { path = "crates/http_client_tls" }
@@ -366,7 +370,7 @@ markdown_preview = { path = "crates/markdown_preview" }
svg_preview = { path = "crates/svg_preview" }
media = { path = "crates/media" }
menu = { path = "crates/menu" }
-mermaid-rs-renderer = { git = "https://github.com/zed-industries/mermaid-rs-renderer", branch = "fix-font-family-xml-escaping", default-features = false }
+mermaid-rs-renderer = { git = "https://github.com/zed-industries/mermaid-rs-renderer", rev = "374db9ead5426697c6c2111151d9f246899bc638", default-features = false }
migrator = { path = "crates/migrator" }
mistral = { path = "crates/mistral" }
multi_buffer = { path = "crates/multi_buffer" }
@@ -423,8 +427,6 @@ sqlez_macros = { path = "crates/sqlez_macros" }
story = { path = "crates/story" }
streaming_diff = { path = "crates/streaming_diff" }
sum_tree = { path = "crates/sum_tree" }
-supermaven = { path = "crates/supermaven" }
-supermaven_api = { path = "crates/supermaven_api" }
codestral = { path = "crates/codestral" }
system_specs = { path = "crates/system_specs" }
tab_switcher = { path = "crates/tab_switcher" }
@@ -479,9 +481,15 @@ alacritty_terminal = { git = "https://github.com/zed-industries/alacritty", rev
any_vec = "0.14"
anyhow = "1.0.86"
arrayvec = { version = "0.7.4", features = ["serde"] }
-ashpd = { version = "0.12.1", default-features = false, features = [
- "async-std",
+ashpd = { version = "0.13", default-features = false, features = [
+ "async-io",
+ "notification",
+ "open_uri",
+ "file_chooser",
+ "settings",
+ "trash"
] }
+async-channel = "2.5.0"
async-compat = "0.2.1"
async-compression = { version = "0.4", features = ["gzip", "futures-io"] }
async-dispatcher = "0.1"
@@ -530,7 +538,16 @@ criterion = { version = "0.5", features = ["html_reports"] }
ctor = "0.4.0"
dap-types = { git = "https://github.com/zed-industries/dap-types", rev = "1b461b310481d01e02b2603c16d7144b926339f8" }
dashmap = "6.0"
-derive_more = "0.99.17"
+derive_more = { version = "2.1.1", features = [
+ "add",
+ "add_assign",
+ "deref",
+ "deref_mut",
+ "from_str",
+ "mul",
+ "mul_assign",
+ "not",
+] }
dirs = "4.0"
documented = "0.9.1"
dotenvy = "0.15.0"
@@ -542,6 +559,7 @@ exec = "0.3.1"
fancy-regex = "0.16.0"
fork = "0.4.0"
futures = "0.3"
+futures-concurrency = "7.7.1"
futures-lite = "1.13"
gh-workflow = { git = "https://github.com/zed-industries/gh-workflow", rev = "c9eac0ed361583e1072860d96776fa52775b82ac" }
git2 = { version = "0.20.1", default-features = false, features = ["vendored-libgit2"] }
@@ -565,11 +583,13 @@ itertools = "0.14.0"
json_dotpath = "1.1"
jsonschema = "0.37.0"
jsonwebtoken = "10.0"
-jupyter-protocol = "1.2.0"
+jupyter-protocol = "1.4.0"
jupyter-websocket-client = "1.0.0"
libc = "0.2"
libsqlite3-sys = { version = "0.30.1", features = ["bundled"] }
linkify = "0.10.0"
+libwebrtc = "0.3.26"
+livekit = { version = "0.7.32", features = ["tokio", "rustls-tls-native-roots"] }
log = { version = "0.4.16", features = ["kv_unstable_serde", "serde"] }
lsp-types = { git = "https://github.com/zed-industries/lsp-types", rev = "a4f410987660bf560d1e617cb78117c6b6b9f599" }
mach2 = "0.5"
@@ -579,7 +599,7 @@ minidumper = "0.8"
moka = { version = "0.12.10", features = ["sync"] }
naga = { version = "28.0", features = ["wgsl-in"] }
nanoid = "0.4"
-nbformat = "1.1.0"
+nbformat = "1.2.0"
nix = "0.29"
num-format = "0.4.4"
objc = "0.2"
@@ -632,6 +652,7 @@ profiling = "1"
prost = "0.9"
prost-build = "0.9"
prost-types = "0.9"
+pollster = "0.4.0"
pulldown-cmark = { version = "0.13.0", default-features = false }
quote = "1.0.9"
rand = "0.9"
@@ -648,7 +669,7 @@ reqwest = { git = "https://github.com/zed-industries/reqwest.git", rev = "c15662
"stream",
], package = "zed-reqwest", version = "0.12.15-zed" }
rsa = "0.9.6"
-runtimelib = { version = "1.2.0", default-features = false, features = [
+runtimelib = { version = "1.4.0", default-features = false, features = [
"async-dispatcher-runtime", "aws-lc-rs"
] }
rust-embed = { version = "8.4", features = ["include-exclude"] }
@@ -756,7 +777,9 @@ wasmtime = { version = "33", default-features = false, features = [
wasmtime-wasi = "33"
wax = "0.7"
which = "6.0.0"
-wgpu = "28.0"
+wasm-bindgen = "0.2.113"
+web-time = "1.1.0"
+wgpu = { git = "https://github.com/zed-industries/wgpu", rev = "9459e95113c5bd116b2cc2c87e8424b28059e17c" }
windows-core = "0.61"
yawc = "0.2.5"
zeroize = "1.8"
@@ -767,11 +790,13 @@ zstd = "0.11"
version = "0.61"
features = [
"Foundation_Numerics",
+ "Globalization_DateTimeFormatting",
"Storage_Search",
"Storage_Streams",
"System_Threading",
"UI_ViewManagement",
"Wdk_System_SystemServices",
+ "Win32_Foundation",
"Win32_Globalization",
"Win32_Graphics_Direct3D",
"Win32_Graphics_Direct3D11",
@@ -799,6 +824,7 @@ features = [
"Win32_System_Ole",
"Win32_System_Performance",
"Win32_System_Pipes",
+ "Win32_System_RestartManager",
"Win32_System_SystemInformation",
"Win32_System_SystemServices",
"Win32_System_Threading",
@@ -821,6 +847,8 @@ notify = { git = "https://github.com/zed-industries/notify.git", rev = "ce58c24c
notify-types = { git = "https://github.com/zed-industries/notify.git", rev = "ce58c24cad542c28e04ced02e20325a4ec28a31d" }
windows-capture = { git = "https://github.com/zed-industries/windows-capture.git", rev = "f0d6c1b6691db75461b732f6d5ff56eed002eeb9" }
calloop = { git = "https://github.com/zed-industries/calloop" }
+livekit = { git = "https://github.com/zed-industries/livekit-rust-sdks", rev = "37835f840d0070d45ac8b31cce6a6ae7aca3f459" }
+libwebrtc = { git = "https://github.com/zed-industries/livekit-rust-sdks", rev = "37835f840d0070d45ac8b31cce6a6ae7aca3f459" }
[profile.dev]
split-debuginfo = "unpacked"
@@ -880,7 +908,6 @@ sidebar = { codegen-units = 1 }
snippet = { codegen-units = 1 }
snippets_ui = { codegen-units = 1 }
story = { codegen-units = 1 }
-supermaven_api = { codegen-units = 1 }
telemetry_events = { codegen-units = 1 }
theme_selector = { codegen-units = 1 }
time_format = { codegen-units = 1 }
diff --git a/assets/icons/ai_vercel.svg b/assets/icons/ai_vercel.svg
new file mode 100644
index 0000000000000000000000000000000000000000..c6cc5796f724e713437c4866053380cf2e14d511
--- /dev/null
+++ b/assets/icons/ai_vercel.svg
@@ -0,0 +1,3 @@
+
diff --git a/assets/icons/fast_forward.svg b/assets/icons/fast_forward.svg
new file mode 100644
index 0000000000000000000000000000000000000000..240bc65aca3558561bb52f2f8c5e860d38596223
--- /dev/null
+++ b/assets/icons/fast_forward.svg
@@ -0,0 +1,4 @@
+
diff --git a/assets/icons/fast_forward_off.svg b/assets/icons/fast_forward_off.svg
new file mode 100644
index 0000000000000000000000000000000000000000..8ea7c41c6582b031f066f590dd425641945aadc9
--- /dev/null
+++ b/assets/icons/fast_forward_off.svg
@@ -0,0 +1,5 @@
+
diff --git a/assets/icons/file_icons/gitlab.svg b/assets/icons/file_icons/gitlab.svg
new file mode 100644
index 0000000000000000000000000000000000000000..f0faf570b125c7764e769ae60f7a6ce6f7825ceb
--- /dev/null
+++ b/assets/icons/file_icons/gitlab.svg
@@ -0,0 +1 @@
+
diff --git a/assets/icons/file_icons/helm.svg b/assets/icons/file_icons/helm.svg
new file mode 100644
index 0000000000000000000000000000000000000000..03e702f2d5081c4e96ff4db7ba7428817b08748f
--- /dev/null
+++ b/assets/icons/file_icons/helm.svg
@@ -0,0 +1 @@
+
diff --git a/assets/icons/file_icons/yaml.svg b/assets/icons/file_icons/yaml.svg
new file mode 100644
index 0000000000000000000000000000000000000000..2c3efd46cd45ff67d6c46d84476d563dd5ac3a73
--- /dev/null
+++ b/assets/icons/file_icons/yaml.svg
@@ -0,0 +1 @@
+
diff --git a/assets/icons/git_commit.svg b/assets/icons/git_commit.svg
new file mode 100644
index 0000000000000000000000000000000000000000..38b36ec7efb72275e5e6efbbe761deb54050cfe7
--- /dev/null
+++ b/assets/icons/git_commit.svg
@@ -0,0 +1,5 @@
+
diff --git a/assets/icons/git_graph.svg b/assets/icons/git_graph.svg
index 8f372a305d3fddf2901756108c83d09b31fb657e..7ae33e365d40bfccd9c48e4f7e94b10d3687f8dc 100644
--- a/assets/icons/git_graph.svg
+++ b/assets/icons/git_graph.svg
@@ -1,4 +1,7 @@
diff --git a/assets/icons/new_thread.svg b/assets/icons/new_thread.svg
new file mode 100644
index 0000000000000000000000000000000000000000..19b8fa25ea30ed47a57a5d5f83d62f2b4b56b61e
--- /dev/null
+++ b/assets/icons/new_thread.svg
@@ -0,0 +1,4 @@
+
diff --git a/assets/icons/open_folder.svg b/assets/icons/open_folder.svg
new file mode 100644
index 0000000000000000000000000000000000000000..c4aa32b29cc1048fd4ecd8b1b4d32b68ae0a8ad3
--- /dev/null
+++ b/assets/icons/open_folder.svg
@@ -0,0 +1,4 @@
+
diff --git a/assets/icons/queue_message.svg b/assets/icons/queue_message.svg
new file mode 100644
index 0000000000000000000000000000000000000000..1bdf6738bcf3143fc13a820281cf1cab8531bd36
--- /dev/null
+++ b/assets/icons/queue_message.svg
@@ -0,0 +1,7 @@
+
diff --git a/assets/keymaps/default-linux.json b/assets/keymaps/default-linux.json
index f3247e936f2b6d2d5ee5275304ea445729046afa..0b354ef1c039c2fe7dde2f20bb30ef71f067e84d 100644
--- a/assets/keymaps/default-linux.json
+++ b/assets/keymaps/default-linux.json
@@ -204,6 +204,7 @@
{
"context": "Editor && editor_agent_diff",
"bindings": {
+ "alt-y": "agent::Keep",
"ctrl-alt-y": "agent::Keep",
"ctrl-alt-z": "agent::Reject",
"shift-alt-y": "agent::KeepAll",
@@ -214,6 +215,7 @@
{
"context": "AgentDiff",
"bindings": {
+ "alt-y": "agent::Keep",
"ctrl-alt-y": "agent::Keep",
"ctrl-alt-z": "agent::Reject",
"shift-alt-y": "agent::KeepAll",
@@ -333,6 +335,7 @@
"ctrl-alt-k": "agent::ToggleThinkingMode",
"ctrl-alt-'": "agent::ToggleThinkingEffortMenu",
"ctrl-'": "agent::CycleThinkingEffort",
+ "ctrl-alt-.": "agent::ToggleFastMode",
},
},
{
@@ -670,6 +673,9 @@
"use_key_equivalents": true,
"bindings": {
"ctrl-n": "multi_workspace::NewWorkspaceInWindow",
+ "left": "agents_sidebar::CollapseSelectedEntry",
+ "right": "agents_sidebar::ExpandSelectedEntry",
+ "enter": "menu::Confirm",
},
},
{
@@ -1309,6 +1315,7 @@
"bindings": {
"ctrl-shift-space": "git::WorktreeFromDefaultOnWindow",
"ctrl-space": "git::WorktreeFromDefault",
+ "ctrl-shift-backspace": "git::DeleteWorktree",
},
},
{
diff --git a/assets/keymaps/default-macos.json b/assets/keymaps/default-macos.json
index 77e01368462cdfcce24cf1cba39d6a2a11cdcce0..052475ddb981c4db5495914096ffd72dee54d80f 100644
--- a/assets/keymaps/default-macos.json
+++ b/assets/keymaps/default-macos.json
@@ -242,6 +242,7 @@
"context": "AgentDiff",
"use_key_equivalents": true,
"bindings": {
+ "cmd-y": "agent::Keep",
"cmd-alt-y": "agent::Keep",
"cmd-alt-z": "agent::Reject",
"shift-alt-y": "agent::KeepAll",
@@ -252,6 +253,7 @@
"context": "Editor && editor_agent_diff",
"use_key_equivalents": true,
"bindings": {
+ "cmd-y": "agent::Keep",
"cmd-alt-y": "agent::Keep",
"cmd-alt-z": "agent::Reject",
"shift-alt-y": "agent::KeepAll",
@@ -377,6 +379,7 @@
"cmd-alt-k": "agent::ToggleThinkingMode",
"cmd-alt-'": "agent::ToggleThinkingEffortMenu",
"ctrl-'": "agent::CycleThinkingEffort",
+ "cmd-alt-.": "agent::ToggleFastMode",
},
},
{
@@ -447,6 +450,13 @@
"down": "search::NextHistoryQuery",
},
},
+ {
+ "context": "BufferSearchBar || ProjectSearchBar",
+ "use_key_equivalents": true,
+ "bindings": {
+ "ctrl-enter": "editor::Newline",
+ },
+ },
{
"context": "ProjectSearchBar",
"use_key_equivalents": true,
@@ -731,6 +741,9 @@
"use_key_equivalents": true,
"bindings": {
"cmd-n": "multi_workspace::NewWorkspaceInWindow",
+ "left": "agents_sidebar::CollapseSelectedEntry",
+ "right": "agents_sidebar::ExpandSelectedEntry",
+ "enter": "menu::Confirm",
},
},
{
@@ -1407,6 +1420,7 @@
"bindings": {
"ctrl-shift-space": "git::WorktreeFromDefaultOnWindow",
"ctrl-space": "git::WorktreeFromDefault",
+ "cmd-shift-backspace": "git::DeleteWorktree",
},
},
{
diff --git a/assets/keymaps/default-windows.json b/assets/keymaps/default-windows.json
index 51b221c8389d1588d80a8186ddceb68e8cb025c7..ef2b339951382a44433372b34e7e62b082428362 100644
--- a/assets/keymaps/default-windows.json
+++ b/assets/keymaps/default-windows.json
@@ -203,6 +203,7 @@
"context": "Editor && editor_agent_diff",
"use_key_equivalents": true,
"bindings": {
+ "alt-y": "agent::Keep",
"ctrl-alt-y": "agent::Keep",
"ctrl-alt-z": "agent::Reject",
"shift-alt-y": "agent::KeepAll",
@@ -214,6 +215,7 @@
"context": "AgentDiff",
"use_key_equivalents": true,
"bindings": {
+ "alt-y": "agent::Keep",
"ctrl-alt-y": "agent::Keep",
"ctrl-alt-z": "agent::Reject",
"shift-alt-y": "agent::KeepAll",
@@ -335,6 +337,7 @@
"ctrl-alt-k": "agent::ToggleThinkingMode",
"ctrl-alt-'": "agent::ToggleThinkingEffortMenu",
"ctrl-'": "agent::CycleThinkingEffort",
+ "ctrl-alt-.": "agent::ToggleFastMode",
},
},
{
@@ -674,6 +677,9 @@
"use_key_equivalents": true,
"bindings": {
"ctrl-n": "multi_workspace::NewWorkspaceInWindow",
+ "left": "agents_sidebar::CollapseSelectedEntry",
+ "right": "agents_sidebar::ExpandSelectedEntry",
+ "enter": "menu::Confirm",
},
},
{
@@ -1330,6 +1336,7 @@
"bindings": {
"ctrl-shift-space": "git::WorktreeFromDefaultOnWindow",
"ctrl-space": "git::WorktreeFromDefault",
+ "ctrl-shift-backspace": "git::DeleteWorktree",
},
},
{
diff --git a/assets/keymaps/vim.json b/assets/keymaps/vim.json
index 9832ce8fe08fe23d610a1c2ee1a95ad4c2c2574c..1f2742f982bc2165181a797e577b350f5630def9 100644
--- a/assets/keymaps/vim.json
+++ b/assets/keymaps/vim.json
@@ -1110,4 +1110,12 @@
"shift-g": "menu::SelectLast",
},
},
+ {
+ "context": "NotebookEditor > Editor && VimControl && vim_mode == normal",
+
+ "bindings": {
+ "j": "notebook::NotebookMoveDown",
+ "k": "notebook::NotebookMoveUp",
+ },
+ },
]
diff --git a/assets/settings/default.json b/assets/settings/default.json
index 0a57472a5f21657cab89bd3e6f64e259a4a220e6..0a824bbe93a0d68a23d934a63eb1fdab1e2f1b02 100644
--- a/assets/settings/default.json
+++ b/assets/settings/default.json
@@ -361,8 +361,11 @@
// bracket, brace, single or double quote characters.
// For example, when you select text and type '(', Zed will surround the text with ().
"use_auto_surround": true,
- // Whether indentation should be adjusted based on the context whilst typing.
- "auto_indent": true,
+ // Controls automatic indentation behavior when typing.
+ // - "syntax_aware": Adjusts indentation based on syntax context (default)
+ // - "preserve_indent": Preserves current line's indentation on new lines
+ // - "none": No automatic indentation
+ "auto_indent": "syntax_aware",
// Whether indentation of pasted content should be adjusted based on the context.
"auto_indent_on_paste": true,
// Controls how the editor handles the autoclosed characters.
@@ -799,6 +802,8 @@
// 3. Show files first, then directories:
// "files_first"
"sort_mode": "directories_first",
+ // Whether to show error and warning count badges next to file names in the project panel.
+ "diagnostic_badges": false,
// Whether to enable drag-and-drop operations in the project panel.
"drag_and_drop": true,
// Whether to hide the root entry when only one folder is open in the window;
@@ -913,6 +918,10 @@
// Default: inherits editor scrollbar settings
// "show": null
},
+ // Whether to show the addition/deletion change count next to each file in the Git panel.
+ //
+ // Default: false
+ "diff_stats": false,
},
"message_editor": {
// Whether to automatically replace emoji shortcodes with emoji characters.
@@ -1265,8 +1274,6 @@
//
// Default: true
"skip_focus_for_active_in_search": true,
- // Whether to show the git status in the file finder.
- "git_status": true,
// Whether to use gitignored files when searching.
// Only the files Zed has indexed will be used, not necessarily all the gitignored files.
//
@@ -1827,8 +1834,8 @@
" (",
" # multi-char path: first char (not opening delimiter, space, or box drawing char)",
" [^({\\[<\"'`\\ \\u2500-\\u257F]",
- " # middle chars: non-space, and colon/paren only if not followed by digit/paren",
- " ([^\\ :(]|[:(][^0-9()])*",
+ " # middle chars: non-space, and colon/paren only if not followed by digit/paren/space",
+ " ([^\\ :(]|[:(][^0-9()\\ ])*",
" # last char: not closing delimiter or colon",
" [^()}\\]>\"'`.,;:\\ ]",
" |",
@@ -2222,6 +2229,9 @@
"vercel": {
"api_url": "https://api.v0.dev/v1",
},
+ "vercel_ai_gateway": {
+ "api_url": "https://ai-gateway.vercel.sh/v1",
+ },
"x_ai": {
"api_url": "https://api.x.ai/v1",
},
diff --git a/assets/settings/default_semantic_token_rules.json b/assets/settings/default_semantic_token_rules.json
index c5e9d1438cad583e78bc3e109b4bc79c62aa7ac5..65b20a7423aef3c3221f9f80e345fd503627d98d 100644
--- a/assets/settings/default_semantic_token_rules.json
+++ b/assets/settings/default_semantic_token_rules.json
@@ -2,7 +2,9 @@
//
// These rules map LSP semantic token types to syntax theme styles.
// To customize, add rules to "semantic_token_rules" in your settings.json.
-// User-defined rules are prepended to these defaults and take precedence.
+// User-defined rules are prepended and take highest precedence.
+// Extension language rules are applied next.
+// These built-in defaults are applied last.
//
// Each rule has the following properties:
// - `token_type`: The LSP semantic token type to match. If omitted, matches all types.
diff --git a/crates/acp_thread/src/acp_thread.rs b/crates/acp_thread/src/acp_thread.rs
index fea3236e1697e3af189da2e6a0f14d70a6f1c6f6..1b9271918884dc020986577926d9578e3a6f049c 100644
--- a/crates/acp_thread/src/acp_thread.rs
+++ b/crates/acp_thread/src/acp_thread.rs
@@ -2,55 +2,23 @@ mod connection;
mod diff;
mod mention;
mod terminal;
-
-/// Key used in ACP ToolCall meta to store the tool's programmatic name.
-/// This is a workaround since ACP's ToolCall doesn't have a dedicated name field.
-pub const TOOL_NAME_META_KEY: &str = "tool_name";
-
-/// Key used in ACP ToolCall meta to store the session id when a subagent is spawned.
-pub const SUBAGENT_SESSION_ID_META_KEY: &str = "subagent_session_id";
-
-/// Helper to extract tool name from ACP meta
-pub fn tool_name_from_meta(meta: &Option<acp::Meta>) -> Option<SharedString> {
- meta.as_ref()
- .and_then(|m| m.get(TOOL_NAME_META_KEY))
- .and_then(|v| v.as_str())
- .map(|s| SharedString::from(s.to_owned()))
-}
-
-/// Helper to extract subagent session id from ACP meta
-pub fn subagent_session_id_from_meta(meta: &Option<acp::Meta>) -> Option<acp::SessionId> {
- meta.as_ref()
- .and_then(|m| m.get(SUBAGENT_SESSION_ID_META_KEY))
- .and_then(|v| v.as_str())
- .map(|s| acp::SessionId::from(s.to_string()))
-}
-
-/// Helper to create meta with tool name
-pub fn meta_with_tool_name(tool_name: &str) -> acp::Meta {
- acp::Meta::from_iter([(TOOL_NAME_META_KEY.into(), tool_name.into())])
-}
-use collections::HashSet;
-pub use connection::*;
-pub use diff::*;
-use language::language_settings::FormatOnSave;
-pub use mention::*;
-use project::lsp_store::{FormatTrigger, LspFormatTarget};
-use serde::{Deserialize, Serialize};
-use serde_json::to_string_pretty;
-
-use task::{Shell, ShellBuilder};
-pub use terminal::*;
-
use action_log::{ActionLog, ActionLogTelemetry};
use agent_client_protocol::{self as acp};
use anyhow::{Context as _, Result, anyhow};
+use collections::HashSet;
+pub use connection::*;
+pub use diff::*;
use futures::{FutureExt, channel::oneshot, future::BoxFuture};
use gpui::{AppContext, AsyncApp, Context, Entity, EventEmitter, SharedString, Task, WeakEntity};
use itertools::Itertools;
+use language::language_settings::FormatOnSave;
use language::{Anchor, Buffer, BufferSnapshot, LanguageRegistry, Point, ToPoint, text_diff};
use markdown::Markdown;
+pub use mention::*;
+use project::lsp_store::{FormatTrigger, LspFormatTarget};
use project::{AgentLocation, Project, git_store::GitStoreCheckpoint};
+use serde::{Deserialize, Serialize};
+use serde_json::to_string_pretty;
use std::collections::HashMap;
use std::error::Error;
use std::fmt::{Formatter, Write};
@@ -59,11 +27,51 @@ use std::process::ExitStatus;
use std::rc::Rc;
use std::time::{Duration, Instant};
use std::{fmt::Display, mem, path::PathBuf, sync::Arc};
+use task::{Shell, ShellBuilder};
+pub use terminal::*;
use text::Bias;
use ui::App;
use util::{ResultExt, get_default_system_shell_preferring_bash, paths::PathStyle};
use uuid::Uuid;
+/// Key used in ACP ToolCall meta to store the tool's programmatic name.
+/// This is a workaround since ACP's ToolCall doesn't have a dedicated name field.
+pub const TOOL_NAME_META_KEY: &str = "tool_name";
+
+/// Helper to extract tool name from ACP meta
+pub fn tool_name_from_meta(meta: &Option<acp::Meta>) -> Option<SharedString> {
+ meta.as_ref()
+ .and_then(|m| m.get(TOOL_NAME_META_KEY))
+ .and_then(|v| v.as_str())
+ .map(|s| SharedString::from(s.to_owned()))
+}
+
+/// Helper to create meta with tool name
+pub fn meta_with_tool_name(tool_name: &str) -> acp::Meta {
+ acp::Meta::from_iter([(TOOL_NAME_META_KEY.into(), tool_name.into())])
+}
+
+/// Key used in ACP ToolCall meta to store the session id and message indexes
+pub const SUBAGENT_SESSION_INFO_META_KEY: &str = "subagent_session_info";
+
+#[derive(Clone, Debug, Deserialize, Serialize)]
+pub struct SubagentSessionInfo {
+ /// The session id of the subagent session that was spawned
+ pub session_id: acp::SessionId,
+ /// The index of the first message in the "turn" run by this tool call
+ pub message_start_index: usize,
+ /// The index of the final output message returned by the subagent
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub message_end_index: Option<usize>,
+}
+
+/// Helper to extract subagent session id from ACP meta
+pub fn subagent_session_info_from_meta(meta: &Option<acp::Meta>) -> Option<SubagentSessionInfo> {
+ meta.as_ref()
+ .and_then(|m| m.get(SUBAGENT_SESSION_INFO_META_KEY))
+ .and_then(|v| serde_json::from_value(v.clone()).ok())
+}
+
#[derive(Debug)]
pub struct UserMessage {
pub id: Option<UserMessageId>,
@@ -102,6 +110,7 @@ impl UserMessage {
pub struct AssistantMessage {
pub chunks: Vec<AssistantMessageChunk>,
pub indented: bool,
+ pub is_subagent_output: bool,
}
impl AssistantMessage {
@@ -222,7 +231,7 @@ pub struct ToolCall {
pub raw_input_markdown: Option<Entity<Markdown>>,
pub raw_output: Option<serde_json::Value>,
pub tool_name: Option<SharedString>,
- pub subagent_session_id: Option<acp::SessionId>,
+ pub subagent_session_info: Option<SubagentSessionInfo>,
}
impl ToolCall {
@@ -261,7 +270,7 @@ impl ToolCall {
let tool_name = tool_name_from_meta(&tool_call.meta);
- let subagent_session = subagent_session_id_from_meta(&tool_call.meta);
+ let subagent_session_info = subagent_session_info_from_meta(&tool_call.meta);
let result = Self {
id: tool_call.tool_call_id,
@@ -276,7 +285,7 @@ impl ToolCall {
raw_input_markdown,
raw_output: tool_call.raw_output,
tool_name,
- subagent_session_id: subagent_session,
+ subagent_session_info,
};
Ok(result)
}
@@ -309,8 +318,8 @@ impl ToolCall {
self.status = status.into();
}
- if let Some(subagent_session_id) = subagent_session_id_from_meta(&meta) {
- self.subagent_session_id = Some(subagent_session_id);
+ if let Some(subagent_session_info) = subagent_session_info_from_meta(&meta) {
+ self.subagent_session_info = Some(subagent_session_info);
}
if let Some(title) = title {
@@ -401,7 +410,7 @@ impl ToolCall {
pub fn is_subagent(&self) -> bool {
self.tool_name.as_ref().is_some_and(|s| s == "spawn_agent")
- || self.subagent_session_id.is_some()
+ || self.subagent_session_info.is_some()
}
pub fn to_markdown(&self, cx: &App) -> String {
@@ -961,6 +970,10 @@ pub struct AcpThread {
pending_terminal_output: HashMap>>,
pending_terminal_exit: HashMap,
had_error: bool,
+ /// The user's unsent prompt text, persisted so it can be restored when reloading the thread.
+ draft_prompt: Option<Vec<acp::ContentBlock>>,
+ /// The initial scroll position for the thread view, set during session registration.
+ ui_scroll_position: Option,
}
impl From<&AcpThread> for ActionLogTelemetry {
@@ -983,7 +996,7 @@ pub enum AcpThreadEvent {
ToolAuthorizationReceived(acp::ToolCallId),
Retry(RetryStatus),
SubagentSpawned(acp::SessionId),
- Stopped,
+ Stopped(acp::StopReason),
Error,
LoadError(LoadError),
PromptCapabilitiesUpdated,
@@ -1198,6 +1211,8 @@ impl AcpThread {
pending_terminal_output: HashMap::default(),
pending_terminal_exit: HashMap::default(),
had_error: false,
+ draft_prompt: None,
+ ui_scroll_position: None,
}
}
@@ -1209,6 +1224,22 @@ impl AcpThread {
self.prompt_capabilities.clone()
}
+ pub fn draft_prompt(&self) -> Option<&[acp::ContentBlock]> {
+ self.draft_prompt.as_deref()
+ }
+
+ pub fn set_draft_prompt(&mut self, prompt: Option<Vec<acp::ContentBlock>>) {
+ self.draft_prompt = prompt;
+ }
+
+ pub fn ui_scroll_position(&self) -> Option {
+ self.ui_scroll_position
+ }
+
+ pub fn set_ui_scroll_position(&mut self, position: Option) {
+ self.ui_scroll_position = position;
+ }
+
pub fn connection(&self) -> &Rc<dyn AgentConnection> {
&self.connection
}
@@ -1425,6 +1456,7 @@ impl AcpThread {
&& let AgentThreadEntry::AssistantMessage(AssistantMessage {
chunks,
indented: existing_indented,
+ is_subagent_output: _,
}) = last_entry
&& *existing_indented == indented
{
@@ -1456,6 +1488,7 @@ impl AcpThread {
AgentThreadEntry::AssistantMessage(AssistantMessage {
chunks: vec![chunk],
indented,
+ is_subagent_output: false,
}),
cx,
);
@@ -1525,7 +1558,7 @@ impl AcpThread {
raw_input_markdown: None,
raw_output: None,
tool_name: None,
- subagent_session_id: None,
+ subagent_session_info: None,
};
self.push_entry(AgentThreadEntry::ToolCall(failed_tool_call), cx);
return Ok(());
@@ -1589,6 +1622,7 @@ impl AcpThread {
let agent_telemetry_id = self.connection().telemetry_id();
let session = self.session_id();
+ let parent_session_id = self.parent_session_id();
if let ToolCallStatus::Completed | ToolCallStatus::Failed = status {
let status = if matches!(status, ToolCallStatus::Completed) {
"completed"
@@ -1599,6 +1633,7 @@ impl AcpThread {
"Agent Tool Call Completed",
agent_telemetry_id,
session,
+ parent_session_id,
status
);
}
@@ -1687,10 +1722,14 @@ impl AcpThread {
pub fn tool_call_for_subagent(&self, session_id: &acp::SessionId) -> Option<&ToolCall> {
self.entries.iter().find_map(|entry| match entry {
- AgentThreadEntry::ToolCall(tool_call)
- if tool_call.subagent_session_id.as_ref() == Some(session_id) =>
- {
- Some(tool_call)
+ AgentThreadEntry::ToolCall(tool_call) => {
+ if let Some(subagent_session_info) = &tool_call.subagent_session_info
+ && &subagent_session_info.session_id == session_id
+ {
+ Some(tool_call)
+ } else {
+ None
+ }
}
_ => None,
})
@@ -1698,6 +1737,7 @@ impl AcpThread {
pub fn resolve_locations(&mut self, id: acp::ToolCallId, cx: &mut Context<Self>) {
let project = self.project.clone();
+ let should_update_agent_location = self.parent_session_id.is_none();
let Some((_, tool_call)) = self.tool_call_mut(&id) else {
return;
};
@@ -1733,7 +1773,7 @@ impl AcpThread {
} else {
false
};
- if !should_ignore {
+ if !should_ignore && should_update_agent_location {
project.set_agent_location(Some(location.into()), cx);
}
});
@@ -1964,8 +2004,10 @@ impl AcpThread {
.await?;
this.update(cx, |this, cx| {
- this.project
- .update(cx, |project, cx| project.set_agent_location(None, cx));
+ if this.parent_session_id.is_none() {
+ this.project
+ .update(cx, |project, cx| project.set_agent_location(None, cx));
+ }
let Ok(response) = response else {
// tx dropped, just return
return Ok(None);
@@ -2033,7 +2075,7 @@ impl AcpThread {
}
}
- cx.emit(AcpThreadEvent::Stopped);
+ cx.emit(AcpThreadEvent::Stopped(r.stop_reason));
Ok(Some(r))
}
Err(e) => {
@@ -2237,6 +2279,7 @@ impl AcpThread {
let limit = limit.unwrap_or(u32::MAX);
let project = self.project.clone();
let action_log = self.action_log.clone();
+ let should_update_agent_location = self.parent_session_id.is_none();
cx.spawn(async move |this, cx| {
let load = project.update(cx, |project, cx| {
let path = project
@@ -2287,15 +2330,17 @@ impl AcpThread {
let start = snapshot.anchor_before(start_position);
let end = snapshot.anchor_before(Point::new(line.saturating_add(limit), 0));
- project.update(cx, |project, cx| {
- project.set_agent_location(
- Some(AgentLocation {
- buffer: buffer.downgrade(),
- position: start,
- }),
- cx,
- );
- });
+ if should_update_agent_location {
+ project.update(cx, |project, cx| {
+ project.set_agent_location(
+ Some(AgentLocation {
+ buffer: buffer.downgrade(),
+ position: start,
+ }),
+ cx,
+ );
+ });
+ }
Ok(snapshot.text_for_range(start..end).collect::<String>())
})
@@ -2309,6 +2354,7 @@ impl AcpThread {
) -> Task> {
let project = self.project.clone();
let action_log = self.action_log.clone();
+ let should_update_agent_location = self.parent_session_id.is_none();
cx.spawn(async move |this, cx| {
let load = project.update(cx, |project, cx| {
let path = project
@@ -2336,18 +2382,20 @@ impl AcpThread {
})
.await;
- project.update(cx, |project, cx| {
- project.set_agent_location(
- Some(AgentLocation {
- buffer: buffer.downgrade(),
- position: edits
- .last()
- .map(|(range, _)| range.end)
- .unwrap_or(Anchor::min_for_buffer(buffer.read(cx).remote_id())),
- }),
- cx,
- );
- });
+ if should_update_agent_location {
+ project.update(cx, |project, cx| {
+ project.set_agent_location(
+ Some(AgentLocation {
+ buffer: buffer.downgrade(),
+ position: edits
+ .last()
+ .map(|(range, _)| range.end)
+ .unwrap_or(Anchor::min_for_buffer(buffer.read(cx).remote_id())),
+ }),
+ cx,
+ );
+ });
+ }
let format_on_save = cx.update(|cx| {
action_log.update(cx, |action_log, cx| {
@@ -2549,6 +2597,16 @@ impl AcpThread {
self.terminals.insert(terminal_id.clone(), entity.clone());
entity
}
+
+ pub fn mark_as_subagent_output(&mut self, cx: &mut Context<Self>) {
+ for entry in self.entries.iter_mut().rev() {
+ if let AgentThreadEntry::AssistantMessage(assistant_message) = entry {
+ assistant_message.is_subagent_output = true;
+ cx.notify();
+ return;
+ }
+ }
+ }
}
fn markdown_for_raw_output(
diff --git a/crates/acp_thread/src/connection.rs b/crates/acp_thread/src/connection.rs
index 0becded53762be7c96789b0d31191fd9cbc02bfe..773508f1c898c39d713d5779c82384caf8f190ec 100644
--- a/crates/acp_thread/src/connection.rs
+++ b/crates/acp_thread/src/connection.rs
@@ -496,6 +496,7 @@ mod test_support {
//! - `create_test_png_base64` for generating test images
use std::sync::Arc;
+ use std::sync::atomic::{AtomicUsize, Ordering};
use action_log::ActionLog;
use collections::HashMap;
@@ -621,7 +622,9 @@ mod test_support {
_cwd: &Path,
cx: &mut gpui::App,
) -> Task>> {
- let session_id = acp::SessionId::new(self.sessions.lock().len().to_string());
+ static NEXT_SESSION_ID: AtomicUsize = AtomicUsize::new(0);
+ let session_id =
+ acp::SessionId::new(NEXT_SESSION_ID.fetch_add(1, Ordering::SeqCst).to_string());
let action_log = cx.new(|_| ActionLog::new(project.clone()));
let thread = cx.new(|cx| {
AcpThread::new(
diff --git a/crates/acp_thread/src/diff.rs b/crates/acp_thread/src/diff.rs
index 8886b458d623237b74f715d3c1d0def33fbefa7d..08b1b9bdf24d1ff9980164c1af8b3e60bd2f3339 100644
--- a/crates/acp_thread/src/diff.rs
+++ b/crates/acp_thread/src/diff.rs
@@ -149,6 +149,16 @@ impl Diff {
}
}
+ pub fn file_path(&self, cx: &App) -> Option<String> {
+ match self {
+ Self::Pending(PendingDiff { new_buffer, .. }) => new_buffer
+ .read(cx)
+ .file()
+ .map(|file| file.full_path(cx).to_string_lossy().into_owned()),
+ Self::Finalized(FinalizedDiff { path, .. }) => Some(path.clone()),
+ }
+ }
+
pub fn multibuffer(&self) -> &Entity<MultiBuffer> {
match self {
Self::Pending(PendingDiff { multibuffer, .. }) => multibuffer,
diff --git a/crates/acp_thread/src/mention.rs b/crates/acp_thread/src/mention.rs
index 5769d13860f2466f95fe7dd67c1f908812e40c2d..b63eec154a40de8909d13de2a4e1bd3e9d1e06f3 100644
--- a/crates/acp_thread/src/mention.rs
+++ b/crates/acp_thread/src/mention.rs
@@ -254,6 +254,41 @@ impl MentionUri {
}
}
+ pub fn tooltip_text(&self) -> Option<SharedString> {
+ match self {
+ MentionUri::File { abs_path } | MentionUri::Directory { abs_path } => {
+ Some(abs_path.to_string_lossy().into_owned().into())
+ }
+ MentionUri::Symbol {
+ abs_path,
+ line_range,
+ ..
+ } => Some(
+ format!(
+ "{}:{}-{}",
+ abs_path.display(),
+ line_range.start(),
+ line_range.end()
+ )
+ .into(),
+ ),
+ MentionUri::Selection {
+ abs_path: Some(path),
+ line_range,
+ ..
+ } => Some(
+ format!(
+ "{}:{}-{}",
+ path.display(),
+ line_range.start(),
+ line_range.end()
+ )
+ .into(),
+ ),
+ _ => None,
+ }
+ }
+
pub fn icon_path(&self, cx: &mut App) -> SharedString {
match self {
MentionUri::File { abs_path } => {
diff --git a/crates/action_log/Cargo.toml b/crates/action_log/Cargo.toml
index 8488df691e40ea3bcfc04f4f6f74964fba7863dd..b1a1bf824fb770b8378e596fd0c799a7cf98b13d 100644
--- a/crates/action_log/Cargo.toml
+++ b/crates/action_log/Cargo.toml
@@ -20,6 +20,7 @@ buffer_diff.workspace = true
log.workspace = true
clock.workspace = true
collections.workspace = true
+fs.workspace = true
futures.workspace = true
gpui.workspace = true
language.workspace = true
diff --git a/crates/action_log/src/action_log.rs b/crates/action_log/src/action_log.rs
index 1157d8d6f881ecb33df8104dd4be04bd9d846b5e..5679f3c58fe52057f7a4a0faa24d5b5db2b5e497 100644
--- a/crates/action_log/src/action_log.rs
+++ b/crates/action_log/src/action_log.rs
@@ -1,14 +1,20 @@
use anyhow::{Context as _, Result};
use buffer_diff::BufferDiff;
use clock;
-use collections::BTreeMap;
+use collections::{BTreeMap, HashMap};
+use fs::MTime;
use futures::{FutureExt, StreamExt, channel::mpsc};
use gpui::{
App, AppContext, AsyncApp, Context, Entity, SharedString, Subscription, Task, WeakEntity,
};
use language::{Anchor, Buffer, BufferEvent, Point, ToOffset, ToPoint};
use project::{Project, ProjectItem, lsp_store::OpenLspBufferHandle};
-use std::{cmp, ops::Range, sync::Arc};
+use std::{
+ cmp,
+ ops::Range,
+ path::{Path, PathBuf},
+ sync::Arc,
+};
use text::{Edit, Patch, Rope};
use util::{RangeExt, ResultExt as _};
@@ -48,8 +54,14 @@ pub struct ActionLog {
tracked_buffers: BTreeMap<Entity<Buffer>, TrackedBuffer>,
/// The project this action log is associated with
project: Entity<Project>,
+ /// An action log to forward all public methods to
+ /// Useful in cases like subagents, where we want to track individual diffs for this subagent,
+ /// but also want to associate the reads/writes with a parent review experience
+ linked_action_log: Option<Entity<ActionLog>>,
/// Stores undo information for the most recent reject operation
last_reject_undo: Option,
+ /// Tracks the last time files were read by the agent, to detect external modifications
+ file_read_times: HashMap<PathBuf, MTime>,
}
impl ActionLog {
@@ -58,14 +70,47 @@ impl ActionLog {
Self {
tracked_buffers: BTreeMap::default(),
project,
+ linked_action_log: None,
last_reject_undo: None,
+ file_read_times: HashMap::default(),
}
}
+ pub fn with_linked_action_log(mut self, linked_action_log: Entity<ActionLog>) -> Self {
+ self.linked_action_log = Some(linked_action_log);
+ self
+ }
+
pub fn project(&self) -> &Entity<Project> {
&self.project
}
+ pub fn file_read_time(&self, path: &Path) -> Option<MTime> {
+ self.file_read_times.get(path).copied()
+ }
+
+ fn update_file_read_time(&mut self, buffer: &Entity<Buffer>, cx: &App) {
+ let buffer = buffer.read(cx);
+ if let Some(file) = buffer.file() {
+ if let Some(local_file) = file.as_local() {
+ if let Some(mtime) = file.disk_state().mtime() {
+ let abs_path = local_file.abs_path(cx);
+ self.file_read_times.insert(abs_path, mtime);
+ }
+ }
+ }
+ }
+
+ fn remove_file_read_time(&mut self, buffer: &Entity<Buffer>, cx: &App) {
+ let buffer = buffer.read(cx);
+ if let Some(file) = buffer.file() {
+ if let Some(local_file) = file.as_local() {
+ let abs_path = local_file.abs_path(cx);
+ self.file_read_times.remove(&abs_path);
+ }
+ }
+ }
+
fn track_buffer_internal(
&mut self,
buffer: Entity<Buffer>,
@@ -496,16 +541,70 @@ impl ActionLog {
/// Track a buffer as read by agent, so we can notify the model about user edits.
pub fn buffer_read(&mut self, buffer: Entity<Buffer>, cx: &mut Context<Self>) {
+ self.buffer_read_impl(buffer, true, cx);
+ }
+
+ fn buffer_read_impl(
+ &mut self,
+ buffer: Entity<Buffer>,
+ record_file_read_time: bool,
+ cx: &mut Context<Self>,
+ ) {
+ if let Some(linked_action_log) = &self.linked_action_log {
+ // We don't want to share read times since the other agent hasn't read it necessarily
+ linked_action_log.update(cx, |log, cx| {
+ log.buffer_read_impl(buffer.clone(), false, cx);
+ });
+ }
+ if record_file_read_time {
+ self.update_file_read_time(&buffer, cx);
+ }
self.track_buffer_internal(buffer, false, cx);
}
/// Mark a buffer as created by agent, so we can refresh it in the context
pub fn buffer_created(&mut self, buffer: Entity<Buffer>, cx: &mut Context<Self>) {
+ self.buffer_created_impl(buffer, true, cx);
+ }
+
+ fn buffer_created_impl(
+ &mut self,
+ buffer: Entity<Buffer>,
+ record_file_read_time: bool,
+ cx: &mut Context<Self>,
+ ) {
+ if let Some(linked_action_log) = &self.linked_action_log {
+ // We don't want to share read times since the other agent hasn't read it necessarily
+ linked_action_log.update(cx, |log, cx| {
+ log.buffer_created_impl(buffer.clone(), false, cx);
+ });
+ }
+ if record_file_read_time {
+ self.update_file_read_time(&buffer, cx);
+ }
self.track_buffer_internal(buffer, true, cx);
}
/// Mark a buffer as edited by agent, so we can refresh it in the context
pub fn buffer_edited(&mut self, buffer: Entity<Buffer>, cx: &mut Context<Self>) {
+ self.buffer_edited_impl(buffer, true, cx);
+ }
+
+ fn buffer_edited_impl(
+ &mut self,
+ buffer: Entity<Buffer>,
+ record_file_read_time: bool,
+ cx: &mut Context<Self>,
+ ) {
+ if let Some(linked_action_log) = &self.linked_action_log {
+ // We don't want to share read times since the other agent hasn't read it necessarily
+ linked_action_log.update(cx, |log, cx| {
+ log.buffer_edited_impl(buffer.clone(), false, cx);
+ });
+ }
+ if record_file_read_time {
+ self.update_file_read_time(&buffer, cx);
+ }
let new_version = buffer.read(cx).version();
let tracked_buffer = self.track_buffer_internal(buffer, false, cx);
if let TrackedBufferStatus::Deleted = tracked_buffer.status {
@@ -517,6 +616,9 @@ impl ActionLog {
}
pub fn will_delete_buffer(&mut self, buffer: Entity<Buffer>, cx: &mut Context<Self>) {
+ // Ok to propagate file read time removal to linked action log
+ self.remove_file_read_time(&buffer, cx);
+ let has_linked_action_log = self.linked_action_log.is_some();
let tracked_buffer = self.track_buffer_internal(buffer.clone(), false, cx);
match tracked_buffer.status {
TrackedBufferStatus::Created { .. } => {
@@ -524,12 +626,24 @@ impl ActionLog {
cx.notify();
}
TrackedBufferStatus::Modified => {
- buffer.update(cx, |buffer, cx| buffer.set_text("", cx));
tracked_buffer.status = TrackedBufferStatus::Deleted;
- tracked_buffer.schedule_diff_update(ChangeAuthor::Agent, cx);
+ if !has_linked_action_log {
+ buffer.update(cx, |buffer, cx| buffer.set_text("", cx));
+ tracked_buffer.schedule_diff_update(ChangeAuthor::Agent, cx);
+ }
}
+
TrackedBufferStatus::Deleted => {}
}
+
+ if let Some(linked_action_log) = &mut self.linked_action_log {
+ linked_action_log.update(cx, |log, cx| log.will_delete_buffer(buffer.clone(), cx));
+ }
+
+ if has_linked_action_log && let Some(tracked_buffer) = self.tracked_buffers.get(&buffer) {
+ tracked_buffer.schedule_diff_update(ChangeAuthor::Agent, cx);
+ }
+
cx.notify();
}
@@ -914,15 +1028,6 @@ impl ActionLog {
.collect()
}
- /// Returns all tracked buffers for debugging purposes
- #[cfg(any(test, feature = "test-support"))]
- pub fn tracked_buffers_for_debug(
- &self,
- _cx: &App,
- ) -> impl Iterator<Item = (&Entity<Buffer>, &TrackedBuffer)> {
- self.tracked_buffers.iter()
- }
-
/// Iterate over buffers changed since last read or edited by the model
pub fn stale_buffers<'a>(&'a self, cx: &'a App) -> impl Iterator<Item = &'a Entity<Buffer>> {
- > {
self.tracked_buffers
@@ -2634,6 +2739,515 @@ mod tests {
assert!(!action_log.read_with(cx, |log, _| log.has_pending_undo()));
}
+ #[gpui::test]
+ async fn test_linked_action_log_buffer_read(cx: &mut TestAppContext) {
+ init_test(cx);
+
+ let fs = FakeFs::new(cx.executor());
+ fs.insert_tree(path!("/dir"), json!({"file": "hello world"}))
+ .await;
+ let project = Project::test(fs.clone(), [path!("/dir").as_ref()], cx).await;
+ let parent_log = cx.new(|_| ActionLog::new(project.clone()));
+ let child_log =
+ cx.new(|_| ActionLog::new(project.clone()).with_linked_action_log(parent_log.clone()));
+
+ let file_path = project
+ .read_with(cx, |project, cx| project.find_project_path("dir/file", cx))
+ .unwrap();
+ let buffer = project
+ .update(cx, |project, cx| project.open_buffer(file_path, cx))
+ .await
+ .unwrap();
+
+ cx.update(|cx| {
+ child_log.update(cx, |log, cx| log.buffer_read(buffer.clone(), cx));
+ });
+
+ // Neither log considers the buffer stale immediately after reading it.
+ let child_stale = cx.read(|cx| {
+ child_log
+ .read(cx)
+ .stale_buffers(cx)
+ .cloned()
+ .collect::<Vec<_>>()
+ });
+ let parent_stale = cx.read(|cx| {
+ parent_log
+ .read(cx)
+ .stale_buffers(cx)
+ .cloned()
+ .collect::<Vec<_>>()
+ });
+ assert!(child_stale.is_empty());
+ assert!(parent_stale.is_empty());
+
+ // Simulate a user edit after the agent read the file.
+ cx.update(|cx| {
+ buffer.update(cx, |buffer, cx| {
+ buffer.edit([(0..5, "goodbye")], None, cx).unwrap();
+ });
+ });
+ cx.run_until_parked();
+
+ // Both child and parent should see the buffer as stale because both tracked
+ // it at the pre-edit version via buffer_read forwarding.
+ let child_stale = cx.read(|cx| {
+ child_log
+ .read(cx)
+ .stale_buffers(cx)
+ .cloned()
+ .collect::<Vec<_>>()
+ });
+ let parent_stale = cx.read(|cx| {
+ parent_log
+ .read(cx)
+ .stale_buffers(cx)
+ .cloned()
+ .collect::<Vec<_>>()
+ });
+ assert_eq!(child_stale, vec![buffer.clone()]);
+ assert_eq!(parent_stale, vec![buffer]);
+ }
+
+ #[gpui::test]
+ async fn test_linked_action_log_buffer_edited(cx: &mut TestAppContext) {
+ init_test(cx);
+
+ let fs = FakeFs::new(cx.executor());
+ fs.insert_tree(path!("/dir"), json!({"file": "abc\ndef\nghi"}))
+ .await;
+ let project = Project::test(fs.clone(), [path!("/dir").as_ref()], cx).await;
+ let parent_log = cx.new(|_| ActionLog::new(project.clone()));
+ let child_log =
+ cx.new(|_| ActionLog::new(project.clone()).with_linked_action_log(parent_log.clone()));
+
+ let file_path = project
+ .read_with(cx, |project, cx| project.find_project_path("dir/file", cx))
+ .unwrap();
+ let buffer = project
+ .update(cx, |project, cx| project.open_buffer(file_path, cx))
+ .await
+ .unwrap();
+
+ cx.update(|cx| {
+ child_log.update(cx, |log, cx| log.buffer_read(buffer.clone(), cx));
+ buffer.update(cx, |buffer, cx| {
+ buffer
+ .edit([(Point::new(1, 0)..Point::new(1, 3), "DEF")], None, cx)
+ .unwrap();
+ });
+ child_log.update(cx, |log, cx| log.buffer_edited(buffer.clone(), cx));
+ });
+ cx.run_until_parked();
+
+ let expected_hunks = vec![(
+ buffer,
+ vec![HunkStatus {
+ range: Point::new(1, 0)..Point::new(2, 0),
+ diff_status: DiffHunkStatusKind::Modified,
+ old_text: "def\n".into(),
+ }],
+ )];
+ assert_eq!(
+ unreviewed_hunks(&child_log, cx),
+ expected_hunks,
+ "child should track the agent edit"
+ );
+ assert_eq!(
+ unreviewed_hunks(&parent_log, cx),
+ expected_hunks,
+ "parent should also track the agent edit via linked log forwarding"
+ );
+ }
+
+ #[gpui::test]
+ async fn test_linked_action_log_buffer_created(cx: &mut TestAppContext) {
+ init_test(cx);
+
+ let fs = FakeFs::new(cx.executor());
+ fs.insert_tree(path!("/dir"), json!({})).await;
+ let project = Project::test(fs.clone(), [path!("/dir").as_ref()], cx).await;
+ let parent_log = cx.new(|_| ActionLog::new(project.clone()));
+ let child_log =
+ cx.new(|_| ActionLog::new(project.clone()).with_linked_action_log(parent_log.clone()));
+
+ let file_path = project
+ .read_with(cx, |project, cx| {
+ project.find_project_path("dir/new_file", cx)
+ })
+ .unwrap();
+ let buffer = project
+ .update(cx, |project, cx| project.open_buffer(file_path, cx))
+ .await
+ .unwrap();
+
+ cx.update(|cx| {
+ child_log.update(cx, |log, cx| log.buffer_created(buffer.clone(), cx));
+ buffer.update(cx, |buffer, cx| buffer.set_text("hello", cx));
+ child_log.update(cx, |log, cx| log.buffer_edited(buffer.clone(), cx));
+ });
+ project
+ .update(cx, |project, cx| project.save_buffer(buffer.clone(), cx))
+ .await
+ .unwrap();
+ cx.run_until_parked();
+
+ let expected_hunks = vec![(
+ buffer.clone(),
+ vec![HunkStatus {
+ range: Point::new(0, 0)..Point::new(0, 5),
+ diff_status: DiffHunkStatusKind::Added,
+ old_text: "".into(),
+ }],
+ )];
+ assert_eq!(
+ unreviewed_hunks(&child_log, cx),
+ expected_hunks,
+ "child should track the created file"
+ );
+ assert_eq!(
+ unreviewed_hunks(&parent_log, cx),
+ expected_hunks,
+ "parent should also track the created file via linked log forwarding"
+ );
+ }
+
+ #[gpui::test]
+ async fn test_linked_action_log_will_delete_buffer(cx: &mut TestAppContext) {
+ init_test(cx);
+
+ let fs = FakeFs::new(cx.executor());
+ fs.insert_tree(path!("/dir"), json!({"file": "hello\n"}))
+ .await;
+ let project = Project::test(fs.clone(), [path!("/dir").as_ref()], cx).await;
+ let parent_log = cx.new(|_| ActionLog::new(project.clone()));
+ let child_log =
+ cx.new(|_| ActionLog::new(project.clone()).with_linked_action_log(parent_log.clone()));
+
+ let file_path = project
+ .read_with(cx, |project, cx| project.find_project_path("dir/file", cx))
+ .unwrap();
+ let buffer = project
+ .update(cx, |project, cx| project.open_buffer(file_path.clone(), cx))
+ .await
+ .unwrap();
+
+ cx.update(|cx| {
+ child_log.update(cx, |log, cx| log.will_delete_buffer(buffer.clone(), cx));
+ });
+ project
+ .update(cx, |project, cx| project.delete_file(file_path, false, cx))
+ .unwrap()
+ .await
+ .unwrap();
+ cx.run_until_parked();
+
+ let expected_hunks = vec![(
+ buffer.clone(),
+ vec![HunkStatus {
+ range: Point::new(0, 0)..Point::new(0, 0),
+ diff_status: DiffHunkStatusKind::Deleted,
+ old_text: "hello\n".into(),
+ }],
+ )];
+ assert_eq!(
+ unreviewed_hunks(&child_log, cx),
+ expected_hunks,
+ "child should track the deleted file"
+ );
+ assert_eq!(
+ unreviewed_hunks(&parent_log, cx),
+ expected_hunks,
+ "parent should also track the deleted file via linked log forwarding"
+ );
+ }
+
+ /// Simulates the subagent scenario: two child logs linked to the same parent, each
+ /// editing a different file. The parent accumulates all edits while each child
+ /// only sees its own.
+ #[gpui::test]
+ async fn test_linked_action_log_independent_tracking(cx: &mut TestAppContext) {
+ init_test(cx);
+
+ let fs = FakeFs::new(cx.executor());
+ fs.insert_tree(
+ path!("/dir"),
+ json!({
+ "file_a": "content of a",
+ "file_b": "content of b",
+ }),
+ )
+ .await;
+ let project = Project::test(fs.clone(), [path!("/dir").as_ref()], cx).await;
+ let parent_log = cx.new(|_| ActionLog::new(project.clone()));
+ let child_log_1 =
+ cx.new(|_| ActionLog::new(project.clone()).with_linked_action_log(parent_log.clone()));
+ let child_log_2 =
+ cx.new(|_| ActionLog::new(project.clone()).with_linked_action_log(parent_log.clone()));
+
+ let file_a_path = project
+ .read_with(cx, |project, cx| {
+ project.find_project_path("dir/file_a", cx)
+ })
+ .unwrap();
+ let file_b_path = project
+ .read_with(cx, |project, cx| {
+ project.find_project_path("dir/file_b", cx)
+ })
+ .unwrap();
+ let buffer_a = project
+ .update(cx, |project, cx| project.open_buffer(file_a_path, cx))
+ .await
+ .unwrap();
+ let buffer_b = project
+ .update(cx, |project, cx| project.open_buffer(file_b_path, cx))
+ .await
+ .unwrap();
+
+ cx.update(|cx| {
+ child_log_1.update(cx, |log, cx| log.buffer_read(buffer_a.clone(), cx));
+ buffer_a.update(cx, |buffer, cx| {
+ buffer.edit([(0..0, "MODIFIED: ")], None, cx).unwrap();
+ });
+ child_log_1.update(cx, |log, cx| log.buffer_edited(buffer_a.clone(), cx));
+
+ child_log_2.update(cx, |log, cx| log.buffer_read(buffer_b.clone(), cx));
+ buffer_b.update(cx, |buffer, cx| {
+ buffer.edit([(0..0, "MODIFIED: ")], None, cx).unwrap();
+ });
+ child_log_2.update(cx, |log, cx| log.buffer_edited(buffer_b.clone(), cx));
+ });
+ cx.run_until_parked();
+
+ let child_1_changed: Vec<_> = cx.read(|cx| {
+ child_log_1
+ .read(cx)
+ .changed_buffers(cx)
+ .into_keys()
+ .collect()
+ });
+ let child_2_changed: Vec<_> = cx.read(|cx| {
+ child_log_2
+ .read(cx)
+ .changed_buffers(cx)
+ .into_keys()
+ .collect()
+ });
+ let parent_changed: Vec<_> = cx.read(|cx| {
+ parent_log
+ .read(cx)
+ .changed_buffers(cx)
+ .into_keys()
+ .collect()
+ });
+
+ assert_eq!(
+ child_1_changed,
+ vec![buffer_a.clone()],
+ "child 1 should only track file_a"
+ );
+ assert_eq!(
+ child_2_changed,
+ vec![buffer_b.clone()],
+ "child 2 should only track file_b"
+ );
+ assert_eq!(parent_changed.len(), 2, "parent should track both files");
+ assert!(
+ parent_changed.contains(&buffer_a) && parent_changed.contains(&buffer_b),
+ "parent should contain both buffer_a and buffer_b"
+ );
+ }
+
+ #[gpui::test]
+ async fn test_file_read_time_recorded_on_buffer_read(cx: &mut TestAppContext) {
+ init_test(cx);
+
+ let fs = FakeFs::new(cx.executor());
+ fs.insert_tree(path!("/dir"), json!({"file": "hello world"}))
+ .await;
+ let project = Project::test(fs.clone(), [path!("/dir").as_ref()], cx).await;
+ let action_log = cx.new(|_| ActionLog::new(project.clone()));
+
+ let file_path = project
+ .read_with(cx, |project, cx| project.find_project_path("dir/file", cx))
+ .unwrap();
+ let buffer = project
+ .update(cx, |project, cx| project.open_buffer(file_path, cx))
+ .await
+ .unwrap();
+
+ let abs_path = PathBuf::from(path!("/dir/file"));
+ assert!(
+ action_log.read_with(cx, |log, _| log.file_read_time(&abs_path).is_none()),
+ "file_read_time should be None before buffer_read"
+ );
+
+ cx.update(|cx| {
+ action_log.update(cx, |log, cx| log.buffer_read(buffer.clone(), cx));
+ });
+
+ assert!(
+ action_log.read_with(cx, |log, _| log.file_read_time(&abs_path).is_some()),
+ "file_read_time should be recorded after buffer_read"
+ );
+ }
+
+ #[gpui::test]
+ async fn test_file_read_time_recorded_on_buffer_edited(cx: &mut TestAppContext) {
+ init_test(cx);
+
+ let fs = FakeFs::new(cx.executor());
+ fs.insert_tree(path!("/dir"), json!({"file": "hello world"}))
+ .await;
+ let project = Project::test(fs.clone(), [path!("/dir").as_ref()], cx).await;
+ let action_log = cx.new(|_| ActionLog::new(project.clone()));
+
+ let file_path = project
+ .read_with(cx, |project, cx| project.find_project_path("dir/file", cx))
+ .unwrap();
+ let buffer = project
+ .update(cx, |project, cx| project.open_buffer(file_path, cx))
+ .await
+ .unwrap();
+
+ let abs_path = PathBuf::from(path!("/dir/file"));
+ assert!(
+ action_log.read_with(cx, |log, _| log.file_read_time(&abs_path).is_none()),
+ "file_read_time should be None before buffer_edited"
+ );
+
+ cx.update(|cx| {
+ action_log.update(cx, |log, cx| log.buffer_edited(buffer.clone(), cx));
+ });
+
+ assert!(
+ action_log.read_with(cx, |log, _| log.file_read_time(&abs_path).is_some()),
+ "file_read_time should be recorded after buffer_edited"
+ );
+ }
+
+ #[gpui::test]
+ async fn test_file_read_time_recorded_on_buffer_created(cx: &mut TestAppContext) {
+ init_test(cx);
+
+ let fs = FakeFs::new(cx.executor());
+ fs.insert_tree(path!("/dir"), json!({"file": "existing content"}))
+ .await;
+ let project = Project::test(fs.clone(), [path!("/dir").as_ref()], cx).await;
+ let action_log = cx.new(|_| ActionLog::new(project.clone()));
+
+ let file_path = project
+ .read_with(cx, |project, cx| project.find_project_path("dir/file", cx))
+ .unwrap();
+ let buffer = project
+ .update(cx, |project, cx| project.open_buffer(file_path, cx))
+ .await
+ .unwrap();
+
+ let abs_path = PathBuf::from(path!("/dir/file"));
+ assert!(
+ action_log.read_with(cx, |log, _| log.file_read_time(&abs_path).is_none()),
+ "file_read_time should be None before buffer_created"
+ );
+
+ cx.update(|cx| {
+ action_log.update(cx, |log, cx| log.buffer_created(buffer.clone(), cx));
+ });
+
+ assert!(
+ action_log.read_with(cx, |log, _| log.file_read_time(&abs_path).is_some()),
+ "file_read_time should be recorded after buffer_created"
+ );
+ }
+
+ #[gpui::test]
+ async fn test_file_read_time_removed_on_delete(cx: &mut TestAppContext) {
+ init_test(cx);
+
+ let fs = FakeFs::new(cx.executor());
+ fs.insert_tree(path!("/dir"), json!({"file": "hello world"}))
+ .await;
+ let project = Project::test(fs.clone(), [path!("/dir").as_ref()], cx).await;
+ let action_log = cx.new(|_| ActionLog::new(project.clone()));
+
+ let file_path = project
+ .read_with(cx, |project, cx| project.find_project_path("dir/file", cx))
+ .unwrap();
+ let buffer = project
+ .update(cx, |project, cx| project.open_buffer(file_path, cx))
+ .await
+ .unwrap();
+
+ let abs_path = PathBuf::from(path!("/dir/file"));
+
+ cx.update(|cx| {
+ action_log.update(cx, |log, cx| log.buffer_read(buffer.clone(), cx));
+ });
+ assert!(
+ action_log.read_with(cx, |log, _| log.file_read_time(&abs_path).is_some()),
+ "file_read_time should exist after buffer_read"
+ );
+
+ cx.update(|cx| {
+ action_log.update(cx, |log, cx| log.will_delete_buffer(buffer.clone(), cx));
+ });
+ assert!(
+ action_log.read_with(cx, |log, _| log.file_read_time(&abs_path).is_none()),
+ "file_read_time should be removed after will_delete_buffer"
+ );
+ }
+
+ #[gpui::test]
+ async fn test_file_read_time_not_forwarded_to_linked_action_log(cx: &mut TestAppContext) {
+ init_test(cx);
+
+ let fs = FakeFs::new(cx.executor());
+ fs.insert_tree(path!("/dir"), json!({"file": "hello world"}))
+ .await;
+ let project = Project::test(fs.clone(), [path!("/dir").as_ref()], cx).await;
+ let parent_log = cx.new(|_| ActionLog::new(project.clone()));
+ let child_log =
+ cx.new(|_| ActionLog::new(project.clone()).with_linked_action_log(parent_log.clone()));
+
+ let file_path = project
+ .read_with(cx, |project, cx| project.find_project_path("dir/file", cx))
+ .unwrap();
+ let buffer = project
+ .update(cx, |project, cx| project.open_buffer(file_path, cx))
+ .await
+ .unwrap();
+
+ let abs_path = PathBuf::from(path!("/dir/file"));
+
+ cx.update(|cx| {
+ child_log.update(cx, |log, cx| log.buffer_read(buffer.clone(), cx));
+ });
+ assert!(
+ child_log.read_with(cx, |log, _| log.file_read_time(&abs_path).is_some()),
+ "child should record file_read_time on buffer_read"
+ );
+ assert!(
+ parent_log.read_with(cx, |log, _| log.file_read_time(&abs_path).is_none()),
+ "parent should NOT get file_read_time from child's buffer_read"
+ );
+
+ cx.update(|cx| {
+ child_log.update(cx, |log, cx| log.buffer_edited(buffer.clone(), cx));
+ });
+ assert!(
+ parent_log.read_with(cx, |log, _| log.file_read_time(&abs_path).is_none()),
+ "parent should NOT get file_read_time from child's buffer_edited"
+ );
+
+ cx.update(|cx| {
+ child_log.update(cx, |log, cx| log.buffer_created(buffer.clone(), cx));
+ });
+ assert!(
+ parent_log.read_with(cx, |log, _| log.file_read_time(&abs_path).is_none()),
+ "parent should NOT get file_read_time from child's buffer_created"
+ );
+ }
+
#[derive(Debug, PartialEq)]
struct HunkStatus {
range: Range<Point>,
diff --git a/crates/agent/src/agent.rs b/crates/agent/src/agent.rs
index 759c6e3b9c8c228a6ae6bea5330819b97200b603..a93c2d2062b7472f8ed94a6ea0947a685edd204f 100644
--- a/crates/agent/src/agent.rs
+++ b/crates/agent/src/agent.rs
@@ -14,6 +14,7 @@ mod tools;
use context_server::ContextServerId;
pub use db::*;
+use itertools::Itertools;
pub use native_agent_server::NativeAgentServer;
pub use pattern_extraction::*;
pub use shell_command_parser::extract_commands;
@@ -51,6 +52,7 @@ use std::path::{Path, PathBuf};
use std::rc::Rc;
use std::sync::Arc;
use util::ResultExt;
+use util::path_list::PathList;
use util::rel_path::RelPath;
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
@@ -349,11 +351,14 @@ impl NativeAgent {
let session_id = thread.id().clone();
let parent_session_id = thread.parent_thread_id();
let title = thread.title();
+ let draft_prompt = thread.draft_prompt().map(Vec::from);
+ let scroll_position = thread.ui_scroll_position();
+ let token_usage = thread.latest_token_usage();
let project = thread.project.clone();
let action_log = thread.action_log.clone();
let prompt_capabilities_rx = thread.prompt_capabilities_rx.clone();
let acp_thread = cx.new(|cx| {
- acp_thread::AcpThread::new(
+ let mut acp_thread = acp_thread::AcpThread::new(
parent_session_id,
title,
connection,
@@ -362,18 +367,24 @@ impl NativeAgent {
session_id.clone(),
prompt_capabilities_rx,
cx,
- )
+ );
+ acp_thread.set_draft_prompt(draft_prompt);
+ acp_thread.set_ui_scroll_position(scroll_position);
+ acp_thread.update_token_usage(token_usage, cx);
+ acp_thread
});
let registry = LanguageModelRegistry::read_global(cx);
let summarization_model = registry.thread_summary_model().map(|c| c.model);
let weak = cx.weak_entity();
+ let weak_thread = thread_handle.downgrade();
thread_handle.update(cx, |thread, cx| {
thread.set_summarization_model(summarization_model, cx);
thread.add_default_tools(
Rc::new(NativeThreadEnvironment {
acp_thread: acp_thread.downgrade(),
+ thread: weak_thread,
agent: weak,
}) as _,
cx,
@@ -840,19 +851,36 @@ impl NativeAgent {
return;
}
- let database_future = ThreadsDatabase::connect(cx);
- let (id, db_thread) =
- thread.update(cx, |thread, cx| (thread.id().clone(), thread.to_db(cx)));
+ let id = thread.read(cx).id().clone();
let Some(session) = self.sessions.get_mut(&id) else {
return;
};
+
+ let folder_paths = PathList::new(
+ &self
+ .project
+ .read(cx)
+ .visible_worktrees(cx)
+ .map(|worktree| worktree.read(cx).abs_path().to_path_buf())
+ .collect::<Vec<_>>(),
+ );
+
+ let draft_prompt = session.acp_thread.read(cx).draft_prompt().map(Vec::from);
+ let database_future = ThreadsDatabase::connect(cx);
+ let db_thread = thread.update(cx, |thread, cx| {
+ thread.set_draft_prompt(draft_prompt);
+ thread.to_db(cx)
+ });
let thread_store = self.thread_store.clone();
session.pending_save = cx.spawn(async move |_, cx| {
let Some(database) = database_future.await.map_err(|err| anyhow!(err)).log_err() else {
return;
};
let db_thread = db_thread.await;
- database.save_thread(id, db_thread).await.log_err();
+ database
+ .save_thread(id, db_thread, folder_paths)
+ .await
+ .log_err();
thread_store.update(cx, |store, cx| store.reload(cx));
});
}
@@ -1462,16 +1490,6 @@ impl NativeAgentSessionList {
}
}
- fn to_session_info(entry: DbThreadMetadata) -> AgentSessionInfo {
- AgentSessionInfo {
- session_id: entry.id,
- cwd: None,
- title: Some(entry.title),
- updated_at: Some(entry.updated_at),
- meta: None,
- }
- }
-
pub fn thread_store(&self) -> &Entity<ThreadStore> {
&self.thread_store
}
@@ -1487,7 +1505,7 @@ impl AgentSessionList for NativeAgentSessionList {
.thread_store
.read(cx)
.entries()
- .map(Self::to_session_info)
+ .map(|entry| AgentSessionInfo::from(&entry))
.collect();
Task::ready(Ok(AgentSessionListResponse::new(sessions)))
}
@@ -1576,17 +1594,19 @@ impl acp_thread::AgentSessionSetTitle for NativeAgentSessionSetTitle {
pub struct NativeThreadEnvironment {
agent: WeakEntity<NativeAgent>,
+ thread: WeakEntity<Thread>,
acp_thread: WeakEntity<acp_thread::AcpThread>,
}
impl NativeThreadEnvironment {
pub(crate) fn create_subagent_thread(
- agent: WeakEntity<NativeAgent>,
- parent_thread_entity: Entity<Thread>,
+ &self,
label: String,
- initial_prompt: String,
cx: &mut App,
) -> Result<Rc<dyn SubagentHandle>> {
+ let Some(parent_thread_entity) = self.thread.upgrade() else {
+ anyhow::bail!("Parent thread no longer exists".to_string());
+ };
let parent_thread = parent_thread_entity.read(cx);
let current_depth = parent_thread.depth();
@@ -1605,28 +1625,29 @@ impl NativeThreadEnvironment {
let session_id = subagent_thread.read(cx).id().clone();
- let acp_thread = agent.update(cx, |agent, cx| {
+ let acp_thread = self.agent.update(cx, |agent, cx| {
agent.register_session(subagent_thread.clone(), cx)
})?;
- Self::prompt_subagent(
- session_id,
- subagent_thread,
- acp_thread,
- parent_thread_entity,
- initial_prompt,
- cx,
- )
+ let depth = current_depth + 1;
+
+ telemetry::event!(
+ "Subagent Started",
+ session = parent_thread_entity.read(cx).id().to_string(),
+ subagent_session = session_id.to_string(),
+ depth,
+ is_resumed = false,
+ );
+
+ self.prompt_subagent(session_id, subagent_thread, acp_thread)
}
pub(crate) fn resume_subagent_thread(
- agent: WeakEntity,
- parent_thread_entity: Entity,
+ &self,
session_id: acp::SessionId,
- follow_up_prompt: String,
cx: &mut App,
) -> Result> {
- let (subagent_thread, acp_thread) = agent.update(cx, |agent, _cx| {
+ let (subagent_thread, acp_thread) = self.agent.update(cx, |agent, _cx| {
let session = agent
.sessions
.get(&session_id)
@@ -1634,31 +1655,35 @@ impl NativeThreadEnvironment {
anyhow::Ok((session.thread.clone(), session.acp_thread.clone()))
})??;
- Self::prompt_subagent(
- session_id,
- subagent_thread,
- acp_thread,
- parent_thread_entity,
- follow_up_prompt,
- cx,
- )
+ let depth = subagent_thread.read(cx).depth();
+
+ if let Some(parent_thread_entity) = self.thread.upgrade() {
+ telemetry::event!(
+ "Subagent Started",
+ session = parent_thread_entity.read(cx).id().to_string(),
+ subagent_session = session_id.to_string(),
+ depth,
+ is_resumed = true,
+ );
+ }
+
+ self.prompt_subagent(session_id, subagent_thread, acp_thread)
}
fn prompt_subagent(
+ &self,
session_id: acp::SessionId,
subagent_thread: Entity<Thread>,
acp_thread: Entity<acp_thread::AcpThread>,
- parent_thread_entity: Entity<Thread>,
- prompt: String,
- cx: &mut App,
) -> Result<Rc<dyn SubagentHandle>> {
+ let Some(parent_thread_entity) = self.thread.upgrade() else {
+ anyhow::bail!("Parent thread no longer exists".to_string());
+ };
Ok(Rc::new(NativeSubagentHandle::new(
session_id,
subagent_thread,
acp_thread,
parent_thread_entity,
- prompt,
- cx,
)) as _)
}
}
@@ -1697,36 +1722,16 @@ impl ThreadEnvironment for NativeThreadEnvironment {
})
}
- fn create_subagent(
- &self,
- parent_thread_entity: Entity<Thread>,
- label: String,
- initial_prompt: String,
- cx: &mut App,
- ) -> Result<Rc<dyn SubagentHandle>> {
- Self::create_subagent_thread(
- self.agent.clone(),
- parent_thread_entity,
- label,
- initial_prompt,
- cx,
- )
+ fn create_subagent(&self, label: String, cx: &mut App) -> Result<Rc<dyn SubagentHandle>> {
+ self.create_subagent_thread(label, cx)
}
fn resume_subagent(
&self,
- parent_thread_entity: Entity<Thread>,
session_id: acp::SessionId,
- follow_up_prompt: String,
cx: &mut App,
) -> Result<Rc<dyn SubagentHandle>> {
- Self::resume_subagent_thread(
- self.agent.clone(),
- parent_thread_entity,
- session_id,
- follow_up_prompt,
- cx,
- )
+ self.resume_subagent_thread(session_id, cx)
}
}
@@ -1742,8 +1747,7 @@ pub struct NativeSubagentHandle {
session_id: acp::SessionId,
parent_thread: WeakEntity<Thread>,
subagent_thread: Entity<Thread>,
- wait_for_prompt_to_complete: Shared<Task<SubagentPromptResult>>,
- _subscription: Subscription,
+ acp_thread: Entity<acp_thread::AcpThread>,
}
impl NativeSubagentHandle {
@@ -1752,71 +1756,12 @@ impl NativeSubagentHandle {
subagent_thread: Entity<Thread>,
acp_thread: Entity<acp_thread::AcpThread>,
parent_thread_entity: Entity<Thread>,
- prompt: String,
- cx: &mut App,
) -> Self {
- let ratio_before_prompt = subagent_thread
- .read(cx)
- .latest_token_usage()
- .map(|usage| usage.ratio());
-
- parent_thread_entity.update(cx, |parent_thread, _cx| {
- parent_thread.register_running_subagent(subagent_thread.downgrade())
- });
-
- let task = acp_thread.update(cx, |acp_thread, cx| {
- acp_thread.send(vec![prompt.into()], cx)
- });
-
- let (token_limit_tx, token_limit_rx) = oneshot::channel::<()>();
- let mut token_limit_tx = Some(token_limit_tx);
-
- let subscription = cx.subscribe(
- &subagent_thread,
- move |_thread, event: &TokenUsageUpdated, _cx| {
- if let Some(usage) = &event.0 {
- let old_ratio = ratio_before_prompt
- .clone()
- .unwrap_or(TokenUsageRatio::Normal);
- let new_ratio = usage.ratio();
- if old_ratio == TokenUsageRatio::Normal && new_ratio == TokenUsageRatio::Warning
- {
- if let Some(tx) = token_limit_tx.take() {
- tx.send(()).ok();
- }
- }
- }
- },
- );
-
- let wait_for_prompt_to_complete = cx
- .background_spawn(async move {
- futures::select! {
- response = task.fuse() => match response {
- Ok(Some(response)) =>{
- match response.stop_reason {
- acp::StopReason::Cancelled => SubagentPromptResult::Cancelled,
- acp::StopReason::MaxTokens => SubagentPromptResult::Error("The agent reached the maximum number of tokens.".into()),
- acp::StopReason::MaxTurnRequests => SubagentPromptResult::Error("The agent reached the maximum number of allowed requests between user turns. Try prompting again.".into()),
- acp::StopReason::Refusal => SubagentPromptResult::Error("The agent refused to process that prompt. Try again.".into()),
- acp::StopReason::EndTurn | _ => SubagentPromptResult::Completed,
- }
-
- }
- Ok(None) => SubagentPromptResult::Error("No response from the agent. You can try messaging again.".into()),
- Err(error) => SubagentPromptResult::Error(error.to_string()),
- },
- _ = token_limit_rx.fuse() => SubagentPromptResult::ContextWindowWarning,
- }
- })
- .shared();
-
NativeSubagentHandle {
session_id,
subagent_thread,
parent_thread: parent_thread_entity.downgrade(),
- wait_for_prompt_to_complete,
- _subscription: subscription,
+ acp_thread,
}
}
}
@@ -1826,22 +1771,100 @@ impl SubagentHandle for NativeSubagentHandle {
self.session_id.clone()
}
- fn wait_for_output(&self, cx: &AsyncApp) -> Task<Result<String>> {
- let thread = self.subagent_thread.clone();
- let wait_for_prompt = self.wait_for_prompt_to_complete.clone();
+ fn num_entries(&self, cx: &App) -> usize {
+ self.acp_thread.read(cx).entries().len()
+ }
+ fn send(&self, message: String, cx: &AsyncApp) -> Task<Result<String>> {
+ let thread = self.subagent_thread.clone();
+ let acp_thread = self.acp_thread.clone();
let subagent_session_id = self.session_id.clone();
let parent_thread = self.parent_thread.clone();
cx.spawn(async move |cx| {
- let result = match wait_for_prompt.await {
+ let (task, _subscription) = cx.update(|cx| {
+ let ratio_before_prompt = thread
+ .read(cx)
+ .latest_token_usage()
+ .map(|usage| usage.ratio());
+
+ parent_thread
+ .update(cx, |parent_thread, _cx| {
+ parent_thread.register_running_subagent(thread.downgrade())
+ })
+ .ok();
+
+ let task = acp_thread.update(cx, |acp_thread, cx| {
+ acp_thread.send(vec![message.into()], cx)
+ });
+
+ let (token_limit_tx, token_limit_rx) = oneshot::channel::<()>();
+ let mut token_limit_tx = Some(token_limit_tx);
+
+ let subscription = cx.subscribe(
+ &thread,
+ move |_thread, event: &TokenUsageUpdated, _cx| {
+ if let Some(usage) = &event.0 {
+ let old_ratio = ratio_before_prompt
+ .clone()
+ .unwrap_or(TokenUsageRatio::Normal);
+ let new_ratio = usage.ratio();
+ if old_ratio == TokenUsageRatio::Normal
+ && new_ratio == TokenUsageRatio::Warning
+ {
+ if let Some(tx) = token_limit_tx.take() {
+ tx.send(()).ok();
+ }
+ }
+ }
+ },
+ );
+
+ let wait_for_prompt = cx
+ .background_spawn(async move {
+ futures::select! {
+ response = task.fuse() => match response {
+ Ok(Some(response)) => {
+ match response.stop_reason {
+ acp::StopReason::Cancelled => SubagentPromptResult::Cancelled,
+ acp::StopReason::MaxTokens => SubagentPromptResult::Error("The agent reached the maximum number of tokens.".into()),
+ acp::StopReason::MaxTurnRequests => SubagentPromptResult::Error("The agent reached the maximum number of allowed requests between user turns. Try prompting again.".into()),
+ acp::StopReason::Refusal => SubagentPromptResult::Error("The agent refused to process that prompt. Try again.".into()),
+ acp::StopReason::EndTurn | _ => SubagentPromptResult::Completed,
+ }
+ }
+ Ok(None) => SubagentPromptResult::Error("No response from the agent. You can try messaging again.".into()),
+ Err(error) => SubagentPromptResult::Error(error.to_string()),
+ },
+ _ = token_limit_rx.fuse() => SubagentPromptResult::ContextWindowWarning,
+ }
+ });
+
+ (wait_for_prompt, subscription)
+ });
+
+ let result = match task.await {
SubagentPromptResult::Completed => thread.read_with(cx, |thread, _cx| {
thread
.last_message()
- .map(|m| m.to_markdown())
+ .and_then(|message| {
+ let content = message.as_agent_message()?
+ .content
+ .iter()
+ .filter_map(|c| match c {
+ AgentMessageContent::Text(text) => Some(text.as_str()),
+ _ => None,
+ })
+ .join("\n\n");
+ if content.is_empty() {
+ None
+ } else {
+ Some(content)
+ }
+ })
.context("No response from subagent")
}),
- SubagentPromptResult::Cancelled => Err(anyhow!("User cancelled")),
+ SubagentPromptResult::Cancelled => Err(anyhow!("User canceled")),
SubagentPromptResult::Error(message) => Err(anyhow!("{message}")),
SubagentPromptResult::ContextWindowWarning => {
thread.update(cx, |thread, cx| thread.cancel(cx)).await;
@@ -1910,7 +1933,9 @@ mod internal_tests {
use gpui::TestAppContext;
use indoc::formatdoc;
use language_model::fake_provider::{FakeLanguageModel, FakeLanguageModelProvider};
- use language_model::{LanguageModelProviderId, LanguageModelProviderName};
+ use language_model::{
+ LanguageModelCompletionEvent, LanguageModelProviderId, LanguageModelProviderName,
+ };
use serde_json::json;
use settings::SettingsStore;
use util::{path, rel_path::rel_path};
@@ -2542,6 +2567,13 @@ mod internal_tests {
cx.run_until_parked();
model.send_last_completion_stream_text_chunk("Lorem.");
+ model.send_last_completion_stream_event(LanguageModelCompletionEvent::UsageUpdate(
+ language_model::TokenUsage {
+ input_tokens: 150,
+ output_tokens: 75,
+ ..Default::default()
+ },
+ ));
model.end_last_completion_stream();
cx.run_until_parked();
summary_model
@@ -2571,6 +2603,24 @@ mod internal_tests {
cx.run_until_parked();
+ // Set a draft prompt with rich content blocks before saving.
+ let draft_blocks = vec![
+ acp::ContentBlock::Text(acp::TextContent::new("Check out ")),
+ acp::ContentBlock::ResourceLink(acp::ResourceLink::new("b.md", uri.to_string())),
+ acp::ContentBlock::Text(acp::TextContent::new(" please")),
+ ];
+ acp_thread.update(cx, |thread, _cx| {
+ thread.set_draft_prompt(Some(draft_blocks.clone()));
+ });
+ thread.update(cx, |thread, _cx| {
+ thread.set_ui_scroll_position(Some(gpui::ListOffset {
+ item_ix: 5,
+ offset_in_item: gpui::px(12.5),
+ }));
+ });
+ thread.update(cx, |_thread, cx| cx.notify());
+ cx.run_until_parked();
+
// Close the session so it can be reloaded from disk.
cx.update(|cx| connection.clone().close_session(&session_id, cx))
.await
@@ -2608,6 +2658,29 @@ mod internal_tests {
"}
)
});
+
+ // Ensure the draft prompt with rich content blocks survived the round-trip.
+ acp_thread.read_with(cx, |thread, _| {
+ assert_eq!(thread.draft_prompt(), Some(draft_blocks.as_slice()));
+ });
+
+ // Ensure token usage survived the round-trip.
+ acp_thread.read_with(cx, |thread, _| {
+ let usage = thread
+ .token_usage()
+ .expect("token usage should be restored after reload");
+ assert_eq!(usage.input_tokens, 150);
+ assert_eq!(usage.output_tokens, 75);
+ });
+
+ // Ensure scroll position survived the round-trip.
+ acp_thread.read_with(cx, |thread, _| {
+ let scroll = thread
+ .ui_scroll_position()
+ .expect("scroll position should be restored after reload");
+ assert_eq!(scroll.item_ix, 5);
+ assert_eq!(scroll.offset_in_item, gpui::px(12.5));
+ });
}
fn thread_entries(
diff --git a/crates/agent/src/db.rs b/crates/agent/src/db.rs
index 14ec9bb9af92c2f9720af5714c7344b986f5f7b5..2c9b33e4efc4f22059e2914589ca6c635b51c0e5 100644
--- a/crates/agent/src/db.rs
+++ b/crates/agent/src/db.rs
@@ -8,6 +8,7 @@ use collections::{HashMap, IndexMap};
use futures::{FutureExt, future::Shared};
use gpui::{BackgroundExecutor, Global, Task};
use indoc::indoc;
+use language_model::Speed;
use parking_lot::Mutex;
use serde::{Deserialize, Serialize};
use sqlez::{
@@ -17,23 +18,13 @@ use sqlez::{
};
use std::sync::Arc;
use ui::{App, SharedString};
+use util::path_list::PathList;
use zed_env_vars::ZED_STATELESS;
pub type DbMessage = crate::Message;
pub type DbSummary = crate::legacy_thread::DetailedSummaryState;
pub type DbLanguageModel = crate::legacy_thread::SerializedLanguageModel;
-/// Metadata about the git worktree associated with an agent thread.
-#[derive(Debug, Clone, Serialize, Deserialize)]
-pub struct AgentGitWorktreeInfo {
- /// The branch name in the git worktree.
- pub branch: String,
- /// Absolute path to the git worktree on disk.
- pub worktree_path: std::path::PathBuf,
- /// The base branch/commit the worktree was created from.
- pub base_ref: String,
-}
-
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct DbThreadMetadata {
pub id: acp::SessionId,
@@ -41,10 +32,22 @@ pub struct DbThreadMetadata {
#[serde(alias = "summary")]
pub title: SharedString,
pub updated_at: DateTime<Utc>,
- /// Denormalized from `DbThread::git_worktree_info.branch` for efficient
- /// listing without decompressing thread data. The blob is the source of
- /// truth; this column is populated on save for query convenience.
- pub worktree_branch: Option<String>,
+ pub created_at: Option<DateTime<Utc>>,
+ /// The workspace folder paths this thread was created against, sorted
+ /// lexicographically. Used for grouping threads by project in the sidebar.
+ pub folder_paths: PathList,
+}
+
+impl From<&DbThreadMetadata> for acp_thread::AgentSessionInfo {
+ fn from(meta: &DbThreadMetadata) -> Self {
+ Self {
+ session_id: meta.id.clone(),
+ cwd: None,
+ title: Some(meta.title.clone()),
+ updated_at: Some(meta.updated_at),
+ meta: None,
+ }
+ }
}
#[derive(Debug, Serialize, Deserialize)]
@@ -69,7 +72,21 @@ pub struct DbThread {
#[serde(default)]
pub subagent_context: Option<SubagentContext>,
#[serde(default)]
- pub git_worktree_info: Option<AgentGitWorktreeInfo>,
+ pub speed: Option<Speed>,
+ #[serde(default)]
+ pub thinking_enabled: bool,
+ #[serde(default)]
+ pub thinking_effort: Option<ThinkingEffort>,
+ #[serde(default)]
+ pub draft_prompt: Option<Vec<acp::ContentBlock>>,
+ #[serde(default)]
+ pub ui_scroll_position: Option<SerializedScrollPosition>,
+}
+
+#[derive(Debug, Clone, Copy, PartialEq, Serialize, Deserialize)]
+pub struct SerializedScrollPosition {
+ pub item_ix: usize,
+ pub offset_in_item: f32,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
@@ -108,7 +125,11 @@ impl SharedThread {
profile: None,
imported: true,
subagent_context: None,
- git_worktree_info: None,
+ speed: None,
+ thinking_enabled: false,
+ thinking_effort: None,
+ draft_prompt: None,
+ ui_scroll_position: None,
}
}
@@ -283,7 +304,11 @@ impl DbThread {
profile: thread.profile,
imported: false,
subagent_context: None,
- git_worktree_info: None,
+ speed: None,
+ thinking_enabled: false,
+ thinking_effort: None,
+ draft_prompt: None,
+ ui_scroll_position: None,
})
}
}
@@ -389,12 +414,24 @@ impl ThreadsDatabase {
}
if let Ok(mut s) = connection.exec(indoc! {"
- ALTER TABLE threads ADD COLUMN worktree_branch TEXT
+ ALTER TABLE threads ADD COLUMN folder_paths TEXT;
+ ALTER TABLE threads ADD COLUMN folder_paths_order TEXT;
"})
{
s().ok();
}
+ if let Ok(mut s) = connection.exec(indoc! {"
+ ALTER TABLE threads ADD COLUMN created_at TEXT;
+ "})
+ {
+ if s().is_ok() {
+ connection.exec(indoc! {"
+ UPDATE threads SET created_at = updated_at WHERE created_at IS NULL
+ "})?()?;
+ }
+ }
+
let db = Self {
executor,
connection: Arc::new(Mutex::new(connection)),
@@ -407,6 +444,7 @@ impl ThreadsDatabase {
connection: &Arc<Mutex<Connection>>,
id: acp::SessionId,
thread: DbThread,
+ folder_paths: &PathList,
) -> Result<()> {
const COMPRESSION_LEVEL: i32 = 3;
@@ -423,10 +461,16 @@ impl ThreadsDatabase {
.subagent_context
.as_ref()
.map(|ctx| ctx.parent_thread_id.0.clone());
- let worktree_branch = thread
- .git_worktree_info
- .as_ref()
- .map(|info| info.branch.clone());
+ let serialized_folder_paths = folder_paths.serialize();
+ let (folder_paths_str, folder_paths_order_str): (Option<String>, Option<String>) =
+ if folder_paths.is_empty() {
+ (None, None)
+ } else {
+ (
+ Some(serialized_folder_paths.paths),
+ Some(serialized_folder_paths.order),
+ )
+ };
let json_data = serde_json::to_string(&SerializedThread {
thread,
version: DbThread::VERSION,
@@ -438,18 +482,31 @@ impl ThreadsDatabase {
let data_type = DataType::Zstd;
let data = compressed;
- let mut insert = connection.exec_bound::<(Arc<str>, Option<Arc<str>>, Option<String>, String, String, DataType, Vec<u8>)>(indoc! {"
- INSERT OR REPLACE INTO threads (id, parent_id, worktree_branch, summary, updated_at, data_type, data) VALUES (?, ?, ?, ?, ?, ?, ?)
+ let created_at = Utc::now().to_rfc3339();
+
+ let mut insert = connection.exec_bound::<(Arc<str>, Option<Arc<str>>, Option<String>, Option<String>, String, String, DataType, Vec<u8>, String)>(indoc! {"
+ INSERT INTO threads (id, parent_id, folder_paths, folder_paths_order, summary, updated_at, data_type, data, created_at)
+ VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9)
+ ON CONFLICT(id) DO UPDATE SET
+ parent_id = excluded.parent_id,
+ folder_paths = excluded.folder_paths,
+ folder_paths_order = excluded.folder_paths_order,
+ summary = excluded.summary,
+ updated_at = excluded.updated_at,
+ data_type = excluded.data_type,
+ data = excluded.data
"})?;
insert((
id.0,
parent_id,
- worktree_branch,
+ folder_paths_str,
+ folder_paths_order_str,
title,
updated_at,
data_type,
data,
+ created_at,
))?;
Ok(())
@@ -462,20 +519,35 @@ impl ThreadsDatabase {
let connection = connection.lock();
let mut select = connection
- .select_bound::<(), (Arc<str>, Option<Arc<str>>, Option<String>, String, String)>(indoc! {"
- SELECT id, parent_id, worktree_branch, summary, updated_at FROM threads ORDER BY updated_at DESC
+ .select_bound::<(), (Arc<str>, Option<Arc<str>>, Option<String>, Option<String>, String, String, Option<String>)>(indoc! {"
+ SELECT id, parent_id, folder_paths, folder_paths_order, summary, updated_at, created_at FROM threads ORDER BY updated_at DESC, created_at DESC
"})?;
let rows = select(())?;
let mut threads = Vec::new();
- for (id, parent_id, worktree_branch, summary, updated_at) in rows {
+ for (id, parent_id, folder_paths, folder_paths_order, summary, updated_at, created_at) in rows {
+ let folder_paths = folder_paths
+ .map(|paths| {
+ PathList::deserialize(&util::path_list::SerializedPathList {
+ paths,
+ order: folder_paths_order.unwrap_or_default(),
+ })
+ })
+ .unwrap_or_default();
+ let created_at = created_at
+ .as_deref()
+ .map(DateTime::parse_from_rfc3339)
+ .transpose()?
+ .map(|dt| dt.with_timezone(&Utc));
+
threads.push(DbThreadMetadata {
id: acp::SessionId::new(id),
parent_session_id: parent_id.map(acp::SessionId::new),
title: summary.into(),
updated_at: DateTime::parse_from_rfc3339(&updated_at)?.with_timezone(&Utc),
- worktree_branch,
+ created_at,
+ folder_paths,
});
}
@@ -509,11 +581,16 @@ impl ThreadsDatabase {
})
}
- pub fn save_thread(&self, id: acp::SessionId, thread: DbThread) -> Task<Result<()>> {
+ pub fn save_thread(
+ &self,
+ id: acp::SessionId,
+ thread: DbThread,
+ folder_paths: PathList,
+ ) -> Task<Result<()>> {
let connection = self.connection.clone();
self.executor
- .spawn(async move { Self::save_thread_sync(&connection, id, thread) })
+ .spawn(async move { Self::save_thread_sync(&connection, id, thread, &folder_paths) })
}
pub fn delete_thread(&self, id: acp::SessionId) -> Task<Result<()>> {
@@ -609,12 +686,16 @@ mod tests {
profile: None,
imported: false,
subagent_context: None,
- git_worktree_info: None,
+ speed: None,
+ thinking_enabled: false,
+ thinking_effort: None,
+ draft_prompt: None,
+ ui_scroll_position: None,
}
}
#[gpui::test]
- async fn test_list_threads_orders_by_updated_at(cx: &mut TestAppContext) {
+ async fn test_list_threads_orders_by_created_at(cx: &mut TestAppContext) {
let database = ThreadsDatabase::new(cx.executor()).unwrap();
let older_id = session_id("thread-a");
@@ -630,11 +711,11 @@ mod tests {
);
database
- .save_thread(older_id.clone(), older_thread)
+ .save_thread(older_id.clone(), older_thread, PathList::default())
.await
.unwrap();
database
- .save_thread(newer_id.clone(), newer_thread)
+ .save_thread(newer_id.clone(), newer_thread, PathList::default())
.await
.unwrap();
@@ -659,11 +740,11 @@ mod tests {
);
database
- .save_thread(thread_id.clone(), original_thread)
+ .save_thread(thread_id.clone(), original_thread, PathList::default())
.await
.unwrap();
database
- .save_thread(thread_id.clone(), updated_thread)
+ .save_thread(thread_id.clone(), updated_thread, PathList::default())
.await
.unwrap();
@@ -675,6 +756,10 @@ mod tests {
entries[0].updated_at,
Utc.with_ymd_and_hms(2024, 1, 2, 0, 0, 0).unwrap()
);
+ assert!(
+ entries[0].created_at.is_some(),
+ "created_at should be populated"
+ );
}
#[test]
@@ -693,6 +778,22 @@ mod tests {
);
}
+ #[test]
+ fn test_draft_prompt_defaults_to_none() {
+ let json = r#"{
+ "title": "Old Thread",
+ "messages": [],
+ "updated_at": "2024-01-01T00:00:00Z"
+ }"#;
+
+ let db_thread: DbThread = serde_json::from_str(json).expect("Failed to deserialize");
+
+ assert!(
+ db_thread.draft_prompt.is_none(),
+ "Legacy threads without draft_prompt field should default to None"
+ );
+ }
+
#[gpui::test]
async fn test_subagent_context_roundtrips_through_save_load(cx: &mut TestAppContext) {
let database = ThreadsDatabase::new(cx.executor()).unwrap();
@@ -710,7 +811,7 @@ mod tests {
});
database
- .save_thread(child_id.clone(), child_thread)
+ .save_thread(child_id.clone(), child_thread, PathList::default())
.await
.unwrap();
@@ -738,7 +839,7 @@ mod tests {
);
database
- .save_thread(thread_id.clone(), thread)
+ .save_thread(thread_id.clone(), thread, PathList::default())
.await
.unwrap();
@@ -755,92 +856,96 @@ mod tests {
}
#[gpui::test]
- async fn test_git_worktree_info_roundtrip(cx: &mut TestAppContext) {
+ async fn test_folder_paths_roundtrip(cx: &mut TestAppContext) {
let database = ThreadsDatabase::new(cx.executor()).unwrap();
- let thread_id = session_id("worktree-thread");
- let mut thread = make_thread(
- "Worktree Thread",
+ let thread_id = session_id("folder-thread");
+ let thread = make_thread(
+ "Folder Thread",
Utc.with_ymd_and_hms(2024, 6, 15, 12, 0, 0).unwrap(),
);
- thread.git_worktree_info = Some(AgentGitWorktreeInfo {
- branch: "zed/agent/a4Xiu".to_string(),
- worktree_path: std::path::PathBuf::from("/repo/worktrees/zed/agent/a4Xiu"),
- base_ref: "main".to_string(),
- });
+
+ let folder_paths = PathList::new(&[
+ std::path::PathBuf::from("/home/user/project-a"),
+ std::path::PathBuf::from("/home/user/project-b"),
+ ]);
database
- .save_thread(thread_id.clone(), thread)
+ .save_thread(thread_id.clone(), thread, folder_paths.clone())
.await
.unwrap();
- let loaded = database
- .load_thread(thread_id)
- .await
- .unwrap()
- .expect("thread should exist");
-
- let info = loaded
- .git_worktree_info
- .expect("git_worktree_info should be restored");
- assert_eq!(info.branch, "zed/agent/a4Xiu");
- assert_eq!(
- info.worktree_path,
- std::path::PathBuf::from("/repo/worktrees/zed/agent/a4Xiu")
- );
- assert_eq!(info.base_ref, "main");
+ let threads = database.list_threads().await.unwrap();
+ assert_eq!(threads.len(), 1);
+ assert_eq!(threads[0].folder_paths, folder_paths);
}
#[gpui::test]
- async fn test_session_list_includes_worktree_meta(cx: &mut TestAppContext) {
+ async fn test_folder_paths_empty_when_not_set(cx: &mut TestAppContext) {
let database = ThreadsDatabase::new(cx.executor()).unwrap();
- // Save a thread with worktree info
- let worktree_id = session_id("wt-thread");
- let mut worktree_thread = make_thread(
- "With Worktree",
+ let thread_id = session_id("no-folder-thread");
+ let thread = make_thread(
+ "No Folder Thread",
Utc.with_ymd_and_hms(2024, 6, 15, 12, 0, 0).unwrap(),
);
- worktree_thread.git_worktree_info = Some(AgentGitWorktreeInfo {
- branch: "zed/agent/bR9kz".to_string(),
- worktree_path: std::path::PathBuf::from("/repo/worktrees/zed/agent/bR9kz"),
- base_ref: "develop".to_string(),
- });
database
- .save_thread(worktree_id.clone(), worktree_thread)
+ .save_thread(thread_id.clone(), thread, PathList::default())
.await
.unwrap();
- // Save a thread without worktree info
- let plain_id = session_id("plain-thread");
- let plain_thread = make_thread(
- "Without Worktree",
- Utc.with_ymd_and_hms(2024, 6, 15, 11, 0, 0).unwrap(),
+ let threads = database.list_threads().await.unwrap();
+ assert_eq!(threads.len(), 1);
+ assert!(threads[0].folder_paths.is_empty());
+ }
+
+ #[test]
+ fn test_scroll_position_defaults_to_none() {
+ let json = r#"{
+ "title": "Old Thread",
+ "messages": [],
+ "updated_at": "2024-01-01T00:00:00Z"
+ }"#;
+
+ let db_thread: DbThread = serde_json::from_str(json).expect("Failed to deserialize");
+
+ assert!(
+ db_thread.ui_scroll_position.is_none(),
+ "Legacy threads without scroll_position field should default to None"
+ );
+ }
+
+ #[gpui::test]
+ async fn test_scroll_position_roundtrips_through_save_load(cx: &mut TestAppContext) {
+ let database = ThreadsDatabase::new(cx.executor()).unwrap();
+
+ let thread_id = session_id("thread-with-scroll");
+
+ let mut thread = make_thread(
+ "Thread With Scroll",
+ Utc.with_ymd_and_hms(2024, 1, 1, 0, 0, 0).unwrap(),
);
+ thread.ui_scroll_position = Some(SerializedScrollPosition {
+ item_ix: 42,
+ offset_in_item: 13.5,
+ });
database
- .save_thread(plain_id.clone(), plain_thread)
+ .save_thread(thread_id.clone(), thread, PathList::default())
.await
.unwrap();
- // List threads and verify worktree_branch is populated correctly
- let threads = database.list_threads().await.unwrap();
- assert_eq!(threads.len(), 2);
-
- let wt_entry = threads
- .iter()
- .find(|t| t.id == worktree_id)
- .expect("should find worktree thread");
- assert_eq!(wt_entry.worktree_branch.as_deref(), Some("zed/agent/bR9kz"));
-
- let plain_entry = threads
- .iter()
- .find(|t| t.id == plain_id)
- .expect("should find plain thread");
- assert!(
- plain_entry.worktree_branch.is_none(),
- "plain thread should have no worktree_branch"
- );
+ let loaded = database
+ .load_thread(thread_id)
+ .await
+ .unwrap()
+ .expect("thread should exist");
+
+ let scroll = loaded
+ .ui_scroll_position
+ .expect("scroll_position should be restored");
+ assert_eq!(scroll.item_ix, 42);
+ assert!((scroll.offset_in_item - 13.5).abs() < f32::EPSILON);
}
}
diff --git a/crates/agent/src/edit_agent.rs b/crates/agent/src/edit_agent.rs
index 3e67cba1b63f4136a03b88c3007aee99489a6e80..e122d6b2884a593daa819457835d3d00690f5a7d 100644
--- a/crates/agent/src/edit_agent.rs
+++ b/crates/agent/src/edit_agent.rs
@@ -2,6 +2,7 @@ mod create_file_parser;
mod edit_parser;
#[cfg(test)]
mod evals;
+pub mod reindent;
pub mod streaming_fuzzy_matcher;
use crate::{Template, Templates};
@@ -24,9 +25,10 @@ use language_model::{
LanguageModelToolChoice, MessageContent, Role,
};
use project::{AgentLocation, Project};
+use reindent::{IndentDelta, Reindenter};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
-use std::{cmp, iter, mem, ops::Range, pin::Pin, sync::Arc, task::Poll};
+use std::{mem, ops::Range, pin::Pin, sync::Arc, task::Poll};
use streaming_diff::{CharOperation, StreamingDiff};
use streaming_fuzzy_matcher::StreamingFuzzyMatcher;
@@ -82,6 +84,7 @@ pub struct EditAgent {
templates: Arc,
edit_format: EditFormat,
thinking_allowed: bool,
+ update_agent_location: bool,
}
impl EditAgent {
@@ -92,6 +95,7 @@ impl EditAgent {
templates: Arc<Templates>,
edit_format: EditFormat,
allow_thinking: bool,
+ update_agent_location: bool,
) -> Self {
EditAgent {
model,
@@ -100,6 +104,7 @@ impl EditAgent {
templates,
edit_format,
thinking_allowed: allow_thinking,
+ update_agent_location,
}
}
@@ -166,56 +171,73 @@ impl EditAgent {
output_events_tx: mpsc::UnboundedSender<EditAgentOutputEvent>,
cx: &mut AsyncApp,
) -> Result<()> {
- cx.update(|cx| {
- buffer.update(cx, |buffer, cx| buffer.set_text("", cx));
- self.action_log.update(cx, |log, cx| {
- log.buffer_edited(buffer.clone(), cx);
- });
- self.project.update(cx, |project, cx| {
- project.set_agent_location(
- Some(AgentLocation {
- buffer: buffer.downgrade(),
- position: language::Anchor::max_for_buffer(buffer.read(cx).remote_id()),
- }),
- cx,
- )
- });
+ let buffer_id = cx.update(|cx| {
+ let buffer_id = buffer.read(cx).remote_id();
+ if self.update_agent_location {
+ self.project.update(cx, |project, cx| {
+ project.set_agent_location(
+ Some(AgentLocation {
+ buffer: buffer.downgrade(),
+ position: language::Anchor::min_for_buffer(buffer_id),
+ }),
+ cx,
+ )
+ });
+ }
+ buffer_id
+ });
+
+ let send_edit_event = || {
output_events_tx
.unbounded_send(EditAgentOutputEvent::Edited(
- Anchor::min_max_range_for_buffer(buffer.read(cx).remote_id()),
+ Anchor::min_max_range_for_buffer(buffer_id),
))
- .ok();
- });
-
+ .ok()
+ };
+ let set_agent_location = |cx: &mut _| {
+ if self.update_agent_location {
+ self.project.update(cx, |project, cx| {
+ project.set_agent_location(
+ Some(AgentLocation {
+ buffer: buffer.downgrade(),
+ position: language::Anchor::max_for_buffer(buffer_id),
+ }),
+ cx,
+ )
+ })
+ }
+ };
+ let mut first_chunk = true;
while let Some(event) = parse_rx.next().await {
match event? {
CreateFileParserEvent::NewTextChunk { chunk } => {
- let buffer_id = cx.update(|cx| {
- buffer.update(cx, |buffer, cx| buffer.append(chunk, cx));
+ cx.update(|cx| {
+ buffer.update(cx, |buffer, cx| {
+ if mem::take(&mut first_chunk) {
+ buffer.set_text(chunk, cx)
+ } else {
+ buffer.append(chunk, cx)
+ }
+ });
self.action_log
.update(cx, |log, cx| log.buffer_edited(buffer.clone(), cx));
- self.project.update(cx, |project, cx| {
- project.set_agent_location(
- Some(AgentLocation {
- buffer: buffer.downgrade(),
- position: language::Anchor::max_for_buffer(
- buffer.read(cx).remote_id(),
- ),
- }),
- cx,
- )
- });
- buffer.read(cx).remote_id()
+ set_agent_location(cx);
});
- output_events_tx
- .unbounded_send(EditAgentOutputEvent::Edited(
- Anchor::min_max_range_for_buffer(buffer_id),
- ))
- .ok();
+ send_edit_event();
}
}
}
+ if first_chunk {
+ cx.update(|cx| {
+ buffer.update(cx, |buffer, cx| buffer.set_text("", cx));
+ self.action_log
+ .update(cx, |log, cx| log.buffer_edited(buffer.clone(), cx));
+ set_agent_location(cx);
+ });
+ send_edit_event();
+ }
+
Ok(())
}
@@ -287,15 +309,17 @@ impl EditAgent {
if let Some(old_range) = old_range {
let old_range = snapshot.anchor_before(old_range.start)
..snapshot.anchor_before(old_range.end);
- self.project.update(cx, |project, cx| {
- project.set_agent_location(
- Some(AgentLocation {
- buffer: buffer.downgrade(),
- position: old_range.end,
- }),
- cx,
- );
- });
+ if self.update_agent_location {
+ self.project.update(cx, |project, cx| {
+ project.set_agent_location(
+ Some(AgentLocation {
+ buffer: buffer.downgrade(),
+ position: old_range.end,
+ }),
+ cx,
+ );
+ });
+ }
output_events
.unbounded_send(EditAgentOutputEvent::ResolvingEditRange(old_range))
.ok();
@@ -368,15 +392,17 @@ impl EditAgent {
});
self.action_log
.update(cx, |log, cx| log.buffer_edited(buffer.clone(), cx));
- self.project.update(cx, |project, cx| {
- project.set_agent_location(
- Some(AgentLocation {
- buffer: buffer.downgrade(),
- position: max_edit_end,
- }),
- cx,
- );
- });
+ if self.update_agent_location {
+ self.project.update(cx, |project, cx| {
+ project.set_agent_location(
+ Some(AgentLocation {
+ buffer: buffer.downgrade(),
+ position: max_edit_end,
+ }),
+ cx,
+ );
+ });
+ }
(min_edit_start, max_edit_end)
});
output_events
@@ -540,15 +566,8 @@ impl EditAgent {
let compute_edits = cx.background_spawn(async move {
let buffer_start_indent = snapshot
.line_indent_for_row(snapshot.offset_to_point(resolved_old_text.range.start).row);
- let indent_delta = if buffer_start_indent.tabs > 0 {
- IndentDelta::Tabs(
- buffer_start_indent.tabs as isize - resolved_old_text.indent.tabs as isize,
- )
- } else {
- IndentDelta::Spaces(
- buffer_start_indent.spaces as isize - resolved_old_text.indent.spaces as isize,
- )
- };
+ let indent_delta =
+ reindent::compute_indent_delta(buffer_start_indent, resolved_old_text.indent);
let old_text = snapshot
.text_for_range(resolved_old_text.range.clone())
@@ -595,8 +614,7 @@ impl EditAgent {
delta: IndentDelta,
mut stream: impl Unpin + Stream<Item = Result<String, LanguageModelCompletionError>>,
) -> impl Stream<Item = Result<String, LanguageModelCompletionError>> {
- let mut buffer = String::new();
- let mut in_leading_whitespace = true;
+ let mut reindenter = Reindenter::new(delta);
let mut done = false;
futures::stream::poll_fn(move |cx| {
while !done {
@@ -609,55 +627,10 @@ impl EditAgent {
_ => return Poll::Ready(None),
};
- buffer.push_str(&chunk);
-
- let mut indented_new_text = String::new();
- let mut start_ix = 0;
- let mut newlines = buffer.match_indices('\n').peekable();
- loop {
- let (line_end, is_pending_line) = match newlines.next() {
- Some((ix, _)) => (ix, false),
- None => (buffer.len(), true),
- };
- let line = &buffer[start_ix..line_end];
-
- if in_leading_whitespace {
- if let Some(non_whitespace_ix) = line.find(|c| delta.character() != c) {
- // We found a non-whitespace character, adjust
- // indentation based on the delta.
- let new_indent_len =
- cmp::max(0, non_whitespace_ix as isize + delta.len()) as usize;
- indented_new_text
- .extend(iter::repeat(delta.character()).take(new_indent_len));
- indented_new_text.push_str(&line[non_whitespace_ix..]);
- in_leading_whitespace = false;
- } else if is_pending_line {
- // We're still in leading whitespace and this line is incomplete.
- // Stop processing until we receive more input.
- break;
- } else {
- // This line is entirely whitespace. Push it without indentation.
- indented_new_text.push_str(line);
- }
- } else {
- indented_new_text.push_str(line);
- }
-
- if is_pending_line {
- start_ix = line_end;
- break;
- } else {
- in_leading_whitespace = true;
- indented_new_text.push('\n');
- start_ix = line_end + 1;
- }
- }
- buffer.replace_range(..start_ix, "");
-
+ let mut indented_new_text = reindenter.push(&chunk);
// This was the last chunk, push all the buffered content as-is.
if is_last_chunk {
- indented_new_text.push_str(&buffer);
- buffer.clear();
+ indented_new_text.push_str(&reindenter.finish());
done = true;
}
@@ -736,6 +709,7 @@ impl EditAgent {
temperature: None,
thinking_allowed: self.thinking_allowed,
thinking_effort: None,
+ speed: None,
};
Ok(self.model.stream_completion_text(request, cx).await?.stream)
@@ -747,28 +721,6 @@ struct ResolvedOldText {
indent: LineIndent,
}
-#[derive(Copy, Clone, Debug)]
-enum IndentDelta {
- Spaces(isize),
- Tabs(isize),
-}
-
-impl IndentDelta {
- fn character(&self) -> char {
- match self {
- IndentDelta::Spaces(_) => ' ',
- IndentDelta::Tabs(_) => '\t',
- }
- }
-
- fn len(&self) -> isize {
- match self {
- IndentDelta::Spaces(n) => *n,
- IndentDelta::Tabs(n) => *n,
- }
- }
-}
-
#[cfg(test)]
mod tests {
use super::*;
@@ -1194,19 +1146,16 @@ mod tests {
);
cx.run_until_parked();
- assert_matches!(
- drain_events(&mut events).as_slice(),
- [EditAgentOutputEvent::Edited(_)]
- );
+ assert_eq!(drain_events(&mut events).as_slice(), []);
assert_eq!(
buffer.read_with(cx, |buffer, _| buffer.snapshot().text()),
- ""
+ "abc\ndef\nghi"
);
assert_eq!(
project.read_with(cx, |project, _| project.agent_location()),
Some(AgentLocation {
buffer: buffer.downgrade(),
- position: language::Anchor::max_for_buffer(
+ position: language::Anchor::min_for_buffer(
cx.update(|cx| buffer.read(cx).remote_id())
),
})
@@ -1290,6 +1239,32 @@ mod tests {
);
}
+ #[gpui::test]
+ async fn test_overwrite_no_content(cx: &mut TestAppContext) {
+ let agent = init_test(cx).await;
+ let buffer = cx.new(|cx| Buffer::local("abc\ndef\nghi", cx));
+ let (chunks_tx, chunks_rx) = mpsc::unbounded::<&str>();
+ let (apply, mut events) = agent.overwrite_with_chunks(
+ buffer.clone(),
+ chunks_rx.map(|chunk| Ok(chunk.to_string())),
+ &mut cx.to_async(),
+ );
+
+ drop(chunks_tx);
+ cx.run_until_parked();
+
+ let result = apply.await;
+ assert!(result.is_ok());
+ assert_matches!(
+ drain_events(&mut events).as_slice(),
+ [EditAgentOutputEvent::Edited { .. }]
+ );
+ assert_eq!(
+ buffer.read_with(cx, |buffer, _| buffer.snapshot().text()),
+ ""
+ );
+ }
+
#[gpui::test(iterations = 100)]
async fn test_indent_new_text_chunks(mut rng: StdRng) {
let chunks = to_random_chunks(&mut rng, " abc\n def\n ghi");
@@ -1426,6 +1401,7 @@ mod tests {
Templates::new(),
EditFormat::XmlTags,
thinking_allowed,
+ true,
)
}
diff --git a/crates/agent/src/edit_agent/evals.rs b/crates/agent/src/edit_agent/evals.rs
index cdf6c1c0b3f6440e4827c8b74b47a32d997b092f..2e8818b101995b374cf8172547c45b55c27c6f26 100644
--- a/crates/agent/src/edit_agent/evals.rs
+++ b/crates/agent/src/edit_agent/evals.rs
@@ -1469,6 +1469,7 @@ impl EditAgentTest {
Templates::new(),
edit_format,
true,
+ true,
),
project,
judge_model,
diff --git a/crates/agent/src/edit_agent/reindent.rs b/crates/agent/src/edit_agent/reindent.rs
new file mode 100644
index 0000000000000000000000000000000000000000..7f08749e475f6acfcf63013abd9139574112e4b5
--- /dev/null
+++ b/crates/agent/src/edit_agent/reindent.rs
@@ -0,0 +1,214 @@
+use language::LineIndent;
+use std::{cmp, iter};
+
+#[derive(Copy, Clone, Debug)]
+pub enum IndentDelta {
+ Spaces(isize),
+ Tabs(isize),
+}
+
+impl IndentDelta {
+ pub fn character(&self) -> char {
+ match self {
+ IndentDelta::Spaces(_) => ' ',
+ IndentDelta::Tabs(_) => '\t',
+ }
+ }
+
+ pub fn len(&self) -> isize {
+ match self {
+ IndentDelta::Spaces(n) => *n,
+ IndentDelta::Tabs(n) => *n,
+ }
+ }
+}
+
+pub fn compute_indent_delta(buffer_indent: LineIndent, query_indent: LineIndent) -> IndentDelta {
+ if buffer_indent.tabs > 0 {
+ IndentDelta::Tabs(buffer_indent.tabs as isize - query_indent.tabs as isize)
+ } else {
+ IndentDelta::Spaces(buffer_indent.spaces as isize - query_indent.spaces as isize)
+ }
+}
+
+/// Synchronous re-indentation adapter. Buffers incomplete lines and applies
+/// an `IndentDelta` to each line's leading whitespace before emitting it.
+pub struct Reindenter {
+ delta: IndentDelta,
+ buffer: String,
+ in_leading_whitespace: bool,
+}
+
+impl Reindenter {
+ pub fn new(delta: IndentDelta) -> Self {
+ Self {
+ delta,
+ buffer: String::new(),
+ in_leading_whitespace: true,
+ }
+ }
+
+ /// Feed a chunk of text and return the re-indented portion that is
+ /// ready to emit. Incomplete trailing lines are buffered internally.
+ pub fn push(&mut self, chunk: &str) -> String {
+ self.buffer.push_str(chunk);
+ self.drain(false)
+ }
+
+ /// Flush any remaining buffered content (call when the stream is done).
+ pub fn finish(&mut self) -> String {
+ self.drain(true)
+ }
+
+ fn drain(&mut self, is_final: bool) -> String {
+ let mut indented = String::new();
+ let mut start_ix = 0;
+ let mut newlines = self.buffer.match_indices('\n');
+ loop {
+ let (line_end, is_pending_line) = match newlines.next() {
+ Some((ix, _)) => (ix, false),
+ None => (self.buffer.len(), true),
+ };
+ let line = &self.buffer[start_ix..line_end];
+
+ if self.in_leading_whitespace {
+ if let Some(non_whitespace_ix) = line.find(|c| self.delta.character() != c) {
+ // We found a non-whitespace character, adjust indentation
+ // based on the delta.
+ let new_indent_len =
+ cmp::max(0, non_whitespace_ix as isize + self.delta.len()) as usize;
+ indented.extend(iter::repeat(self.delta.character()).take(new_indent_len));
+ indented.push_str(&line[non_whitespace_ix..]);
+ self.in_leading_whitespace = false;
+ } else if is_pending_line && !is_final {
+ // We're still in leading whitespace and this line is incomplete.
+ // Stop processing until we receive more input.
+ break;
+ } else {
+ // This line is entirely whitespace. Push it without indentation.
+ indented.push_str(line);
+ }
+ } else {
+ indented.push_str(line);
+ }
+
+ if is_pending_line {
+ start_ix = line_end;
+ break;
+ } else {
+ self.in_leading_whitespace = true;
+ indented.push('\n');
+ start_ix = line_end + 1;
+ }
+ }
+ self.buffer.replace_range(..start_ix, "");
+ if is_final {
+ indented.push_str(&self.buffer);
+ self.buffer.clear();
+ }
+ indented
+ }
+}
+
+#[cfg(test)]
+mod tests {
+ use super::*;
+
+ #[test]
+ fn test_indent_single_chunk() {
+ let mut r = Reindenter::new(IndentDelta::Spaces(2));
+ let out = r.push(" abc\n def\n ghi");
+ // All three lines are emitted: "ghi" starts with spaces but
+ // contains non-whitespace, so it's processed immediately.
+ assert_eq!(out, " abc\n def\n ghi");
+ let out = r.finish();
+ assert_eq!(out, "");
+ }
+
+ #[test]
+ fn test_outdent_tabs() {
+ let mut r = Reindenter::new(IndentDelta::Tabs(-2));
+ let out = r.push("\t\t\t\tabc\n\t\tdef\n\t\t\t\t\t\tghi");
+ assert_eq!(out, "\t\tabc\ndef\n\t\t\t\tghi");
+ let out = r.finish();
+ assert_eq!(out, "");
+ }
+
+ #[test]
+ fn test_incremental_chunks() {
+ let mut r = Reindenter::new(IndentDelta::Spaces(2));
+ // Feed " ab" — the `a` is non-whitespace, so the line is
+ // processed immediately even without a trailing newline.
+ let out = r.push(" ab");
+ assert_eq!(out, " ab");
+ // Feed "c\n" — appended to the already-processed line (no longer
+ // in leading whitespace).
+ let out = r.push("c\n");
+ assert_eq!(out, "c\n");
+ let out = r.finish();
+ assert_eq!(out, "");
+ }
+
+ #[test]
+ fn test_zero_delta() {
+ let mut r = Reindenter::new(IndentDelta::Spaces(0));
+ let out = r.push(" hello\n world\n");
+ assert_eq!(out, " hello\n world\n");
+ let out = r.finish();
+ assert_eq!(out, "");
+ }
+
+ #[test]
+ fn test_clamp_negative_indent() {
+ let mut r = Reindenter::new(IndentDelta::Spaces(-10));
+ let out = r.push(" abc\n");
+ // max(0, 2 - 10) = 0, so no leading spaces.
+ assert_eq!(out, "abc\n");
+ let out = r.finish();
+ assert_eq!(out, "");
+ }
+
+ #[test]
+ fn test_whitespace_only_lines() {
+ let mut r = Reindenter::new(IndentDelta::Spaces(2));
+ let out = r.push(" \n code\n");
+ // First line is all whitespace — emitted verbatim. Second line is indented.
+ assert_eq!(out, " \n code\n");
+ let out = r.finish();
+ assert_eq!(out, "");
+ }
+
+ #[test]
+ fn test_compute_indent_delta_spaces() {
+ let buffer = LineIndent {
+ tabs: 0,
+ spaces: 8,
+ line_blank: false,
+ };
+ let query = LineIndent {
+ tabs: 0,
+ spaces: 4,
+ line_blank: false,
+ };
+ let delta = compute_indent_delta(buffer, query);
+ assert_eq!(delta.len(), 4);
+ assert_eq!(delta.character(), ' ');
+ }
+
+ #[test]
+ fn test_compute_indent_delta_tabs() {
+ let buffer = LineIndent {
+ tabs: 2,
+ spaces: 0,
+ line_blank: false,
+ };
+ let query = LineIndent {
+ tabs: 3,
+ spaces: 0,
+ line_blank: false,
+ };
+ let delta = compute_indent_delta(buffer, query);
+ assert_eq!(delta.len(), -1);
+ assert_eq!(delta.character(), '\t');
+ }
+}
diff --git a/crates/agent/src/native_agent_server.rs b/crates/agent/src/native_agent_server.rs
index 4d8bdaf698cb6bc50f6080c9b029954242a56f14..18c41670ac4b4ba3146fb207992a7020a44fbd5f 100644
--- a/crates/agent/src/native_agent_server.rs
+++ b/crates/agent/src/native_agent_server.rs
@@ -1,4 +1,4 @@
-use std::{any::Any, path::Path, rc::Rc, sync::Arc};
+use std::{any::Any, rc::Rc, sync::Arc};
use agent_client_protocol as acp;
use agent_servers::{AgentServer, AgentServerDelegate};
@@ -35,19 +35,10 @@ impl AgentServer for NativeAgentServer {
fn connect(
&self,
- _root_dir: Option<&Path>,
delegate: AgentServerDelegate,
cx: &mut App,
- ) -> Task<
- Result<(
- Rc<dyn AgentConnection>,
- Option<task::SpawnInTerminal>,
- )>,
- > {
- log::debug!(
- "NativeAgentServer::connect called for path: {:?}",
- _root_dir
- );
+ ) -> Task<Result<Rc<dyn AgentConnection>>> {
+ log::debug!("NativeAgentServer::connect");
let project = delegate.project().clone();
let fs = self.fs.clone();
let thread_store = self.thread_store.clone();
@@ -66,10 +57,7 @@ impl AgentServer for NativeAgentServer {
let connection = NativeAgentConnection(agent);
log::debug!("NativeAgentServer connection established successfully");
- Ok((
- Rc::new(connection) as Rc<dyn AgentConnection>,
- None,
- ))
+ Ok(Rc::new(connection) as Rc<dyn AgentConnection>)
})
}
diff --git a/crates/agent/src/tests/edit_file_thread_test.rs b/crates/agent/src/tests/edit_file_thread_test.rs
index 069bf0349299e6f4952f673cbf7607e52d48d9c5..3beb5cb0d51abc55fbf3cf0849ced248a9d1fa5c 100644
--- a/crates/agent/src/tests/edit_file_thread_test.rs
+++ b/crates/agent/src/tests/edit_file_thread_test.rs
@@ -50,9 +50,9 @@ async fn test_edit_file_tool_in_thread_context(cx: &mut TestAppContext) {
// Add just the tools we need for this test
let language_registry = project.read(cx).languages().clone();
thread.add_tool(crate::ReadFileTool::new(
- cx.weak_entity(),
project.clone(),
thread.action_log().clone(),
+ true,
));
thread.add_tool(crate::EditFileTool::new(
project.clone(),
diff --git a/crates/agent/src/tests/mod.rs b/crates/agent/src/tests/mod.rs
index 139242fdee9da968986b3fc9537bf9e5292b7dc5..0993b43a13ced62000692bf2b0b35d3ab7fb68e7 100644
--- a/crates/agent/src/tests/mod.rs
+++ b/crates/agent/src/tests/mod.rs
@@ -159,7 +159,7 @@ impl crate::TerminalHandle for FakeTerminalHandle {
struct FakeSubagentHandle {
session_id: acp::SessionId,
- wait_for_summary_task: Shared<Task<String>>,
+ send_task: Shared<Task<String>>,
}
impl SubagentHandle for FakeSubagentHandle {
@@ -167,8 +167,12 @@ impl SubagentHandle for FakeSubagentHandle {
self.session_id.clone()
}
- fn wait_for_output(&self, cx: &AsyncApp) -> Task<Result<String>> {
- let task = self.wait_for_summary_task.clone();
+ fn num_entries(&self, _cx: &App) -> usize {
+ unimplemented!()
+ }
+
+ fn send(&self, _message: String, cx: &AsyncApp) -> Task<Result<String>> {
+ let task = self.send_task.clone();
cx.background_spawn(async move { Ok(task.await) })
}
}
@@ -203,13 +207,7 @@ impl crate::ThreadEnvironment for FakeThreadEnvironment {
Task::ready(Ok(handle as Rc<dyn crate::TerminalHandle>))
}
- fn create_subagent(
- &self,
- _parent_thread: Entity<Thread>,
- _label: String,
- _initial_prompt: String,
- _cx: &mut App,
- ) -> Result<Rc<dyn SubagentHandle>> {
+ fn create_subagent(&self, _label: String, _cx: &mut App) -> Result<Rc<dyn SubagentHandle>> {
Ok(self
.subagent_handle
.clone()
@@ -248,13 +246,7 @@ impl crate::ThreadEnvironment for MultiTerminalEnvironment {
Task::ready(Ok(handle as Rc<dyn crate::TerminalHandle>))
}
- fn create_subagent(
- &self,
- _parent_thread: Entity<Thread>,
- _label: String,
- _initial_prompt: String,
- _cx: &mut App,
- ) -> Result<Rc<dyn SubagentHandle>> {
+ fn create_subagent(&self, _label: String, _cx: &mut App) -> Result<Rc<dyn SubagentHandle>> {
unimplemented!()
}
}
@@ -285,8 +277,17 @@ async fn test_echo(cx: &mut TestAppContext) {
let events = events.collect().await;
thread.update(cx, |thread, _cx| {
- assert_eq!(thread.last_message().unwrap().role(), Role::Assistant);
- assert_eq!(thread.last_message().unwrap().to_markdown(), "Hello\n")
+ assert_eq!(
+ thread.last_received_or_pending_message().unwrap().role(),
+ Role::Assistant
+ );
+ assert_eq!(
+ thread
+ .last_received_or_pending_message()
+ .unwrap()
+ .to_markdown(),
+ "Hello\n"
+ )
});
assert_eq!(stop_events(events), vec![acp::StopReason::EndTurn]);
}
@@ -310,11 +311,11 @@ async fn test_terminal_tool_timeout_kills_handle(cx: &mut TestAppContext) {
let task = cx.update(|cx| {
tool.run(
- crate::TerminalToolInput {
+ ToolInput::resolved(crate::TerminalToolInput {
command: "sleep 1000".to_string(),
cd: ".".to_string(),
timeout_ms: Some(5),
- },
+ }),
event_stream,
cx,
)
@@ -377,11 +378,11 @@ async fn test_terminal_tool_without_timeout_does_not_kill_handle(cx: &mut TestAp
let _task = cx.update(|cx| {
tool.run(
- crate::TerminalToolInput {
+ ToolInput::resolved(crate::TerminalToolInput {
command: "sleep 1000".to_string(),
cd: ".".to_string(),
timeout_ms: None,
- },
+ }),
event_stream,
cx,
)
@@ -438,9 +439,15 @@ async fn test_thinking(cx: &mut TestAppContext) {
let events = events.collect().await;
thread.update(cx, |thread, _cx| {
- assert_eq!(thread.last_message().unwrap().role(), Role::Assistant);
assert_eq!(
- thread.last_message().unwrap().to_markdown(),
+ thread.last_received_or_pending_message().unwrap().role(),
+ Role::Assistant
+ );
+ assert_eq!(
+ thread
+ .last_received_or_pending_message()
+ .unwrap()
+ .to_markdown(),
indoc! {"
Think
Hello
@@ -718,7 +725,7 @@ async fn test_basic_tool_calls(cx: &mut TestAppContext) {
thread.update(cx, |thread, _cx| {
assert!(
thread
- .last_message()
+ .last_received_or_pending_message()
.unwrap()
.as_agent_message()
.unwrap()
@@ -755,7 +762,7 @@ async fn test_streaming_tool_calls(cx: &mut TestAppContext) {
if let Ok(ThreadEvent::ToolCall(tool_call)) = event {
thread.update(cx, |thread, _cx| {
// Look for a tool use in the thread's last message
- let message = thread.last_message().unwrap();
+ let message = thread.last_received_or_pending_message().unwrap();
let agent_message = message.as_agent_message().unwrap();
let last_content = agent_message.content.last().unwrap();
if let AgentMessageContent::ToolUse(last_tool_use) = last_content {
@@ -1225,7 +1232,7 @@ async fn test_concurrent_tool_calls(cx: &mut TestAppContext) {
assert_eq!(stop_reasons, vec![acp::StopReason::EndTurn]);
thread.update(cx, |thread, _cx| {
- let last_message = thread.last_message().unwrap();
+ let last_message = thread.last_received_or_pending_message().unwrap();
let agent_message = last_message.as_agent_message().unwrap();
let text = agent_message
.content
@@ -1931,7 +1938,7 @@ async fn test_cancellation(cx: &mut TestAppContext) {
.collect::<Vec<_>>()
.await;
thread.update(cx, |thread, _cx| {
- let message = thread.last_message().unwrap();
+ let message = thread.last_received_or_pending_message().unwrap();
let agent_message = message.as_agent_message().unwrap();
assert_eq!(
agent_message.content,
@@ -2000,7 +2007,7 @@ async fn test_terminal_tool_cancellation_captures_output(cx: &mut TestAppContext
// Verify the tool result contains the terminal output, not just "Tool canceled by user"
thread.update(cx, |thread, _cx| {
- let message = thread.last_message().unwrap();
+ let message = thread.last_received_or_pending_message().unwrap();
let agent_message = message.as_agent_message().unwrap();
let tool_use = agent_message
@@ -2156,7 +2163,7 @@ async fn verify_thread_recovery(
let events = events.collect::>().await;
thread.update(cx, |thread, _cx| {
- let message = thread.last_message().unwrap();
+ let message = thread.last_received_or_pending_message().unwrap();
let agent_message = message.as_agent_message().unwrap();
assert_eq!(
agent_message.content,
@@ -2465,7 +2472,7 @@ async fn test_terminal_tool_stopped_via_terminal_card_button(cx: &mut TestAppCon
// Verify the tool result indicates user stopped
thread.update(cx, |thread, _cx| {
- let message = thread.last_message().unwrap();
+ let message = thread.last_received_or_pending_message().unwrap();
let agent_message = message.as_agent_message().unwrap();
let tool_use = agent_message
@@ -2560,7 +2567,7 @@ async fn test_terminal_tool_timeout_expires(cx: &mut TestAppContext) {
// Verify the tool result indicates timeout, not user stopped
thread.update(cx, |thread, _cx| {
- let message = thread.last_message().unwrap();
+ let message = thread.last_received_or_pending_message().unwrap();
let agent_message = message.as_agent_message().unwrap();
let tool_use = agent_message
@@ -2624,6 +2631,84 @@ async fn test_in_progress_send_canceled_by_next_send(cx: &mut TestAppContext) {
assert_eq!(stop_events(events_2), vec![acp::StopReason::EndTurn]);
}
+#[gpui::test]
+async fn test_retry_cancelled_promptly_on_new_send(cx: &mut TestAppContext) {
+ // Regression test: when a completion fails with a retryable error (e.g. upstream 500),
+ // the retry loop waits on a timer. If the user switches models and sends a new message
+ // during that delay, the old turn should exit immediately instead of retrying with the
+ // stale model.
+ let ThreadTest { model, thread, .. } = setup(cx, TestModel::Fake).await;
+ let model_a = model.as_fake();
+
+ // Start a turn with model_a.
+ let events_1 = thread
+ .update(cx, |thread, cx| {
+ thread.send(UserMessageId::new(), ["Hello"], cx)
+ })
+ .unwrap();
+ cx.run_until_parked();
+ assert_eq!(model_a.completion_count(), 1);
+
+ // Model returns a retryable upstream 500. The turn enters the retry delay.
+ model_a.send_last_completion_stream_error(
+ LanguageModelCompletionError::UpstreamProviderError {
+ message: "Internal server error".to_string(),
+ status: http_client::StatusCode::INTERNAL_SERVER_ERROR,
+ retry_after: None,
+ },
+ );
+ model_a.end_last_completion_stream();
+ cx.run_until_parked();
+
+ // The old completion was consumed; model_a has no pending requests yet because the
+ // retry timer hasn't fired.
+ assert_eq!(model_a.completion_count(), 0);
+
+ // Switch to model_b and send a new message. This cancels the old turn.
+ let model_b = Arc::new(FakeLanguageModel::with_id_and_thinking(
+ "fake", "model-b", "Model B", false,
+ ));
+ thread.update(cx, |thread, cx| {
+ thread.set_model(model_b.clone(), cx);
+ });
+ let events_2 = thread
+ .update(cx, |thread, cx| {
+ thread.send(UserMessageId::new(), ["Continue"], cx)
+ })
+ .unwrap();
+ cx.run_until_parked();
+
+ // model_b should have received its completion request.
+ assert_eq!(model_b.as_fake().completion_count(), 1);
+
+ // Advance the clock well past the retry delay (BASE_RETRY_DELAY = 5s).
+ cx.executor().advance_clock(Duration::from_secs(10));
+ cx.run_until_parked();
+
+ // model_a must NOT have received another completion request — the cancelled turn
+ // should have exited during the retry delay rather than retrying with the old model.
+ assert_eq!(
+ model_a.completion_count(),
+ 0,
+ "old model should not receive a retry request after cancellation"
+ );
+
+ // Complete model_b's turn.
+ model_b
+ .as_fake()
+ .send_last_completion_stream_text_chunk("Done!");
+ model_b
+ .as_fake()
+ .send_last_completion_stream_event(LanguageModelCompletionEvent::Stop(StopReason::EndTurn));
+ model_b.as_fake().end_last_completion_stream();
+
+ let events_1 = events_1.collect::<Vec<_>>().await;
+ assert_eq!(stop_events(events_1), vec![acp::StopReason::Cancelled]);
+
+ let events_2 = events_2.collect::<Vec<_>>().await;
+ assert_eq!(stop_events(events_2), vec![acp::StopReason::EndTurn]);
+}
+
#[gpui::test]
async fn test_subsequent_successful_sends_dont_cancel(cx: &mut TestAppContext) {
let ThreadTest { model, thread, .. } = setup(cx, TestModel::Fake).await;
@@ -3456,7 +3541,7 @@ async fn test_send_retry_finishes_tool_calls_on_error(cx: &mut TestAppContext) {
events.collect::<Vec<_>>().await;
thread.read_with(cx, |thread, _cx| {
assert_eq!(
- thread.last_message(),
+ thread.last_received_or_pending_message(),
Some(Message::Agent(AgentMessage {
content: vec![AgentMessageContent::Text("Done".into())],
tool_results: IndexMap::default(),
@@ -3520,6 +3605,113 @@ async fn test_send_max_retries_exceeded(cx: &mut TestAppContext) {
));
}
+#[gpui::test]
+async fn test_streaming_tool_completes_when_llm_stream_ends_without_final_input(
+ cx: &mut TestAppContext,
+) {
+ init_test(cx);
+ always_allow_tools(cx);
+
+ let ThreadTest { model, thread, .. } = setup(cx, TestModel::Fake).await;
+ let fake_model = model.as_fake();
+
+ thread.update(cx, |thread, _cx| {
+ thread.add_tool(StreamingEchoTool);
+ });
+
+ let _events = thread
+ .update(cx, |thread, cx| {
+ thread.send(UserMessageId::new(), ["Use the streaming_echo tool"], cx)
+ })
+ .unwrap();
+ cx.run_until_parked();
+
+ // Send a partial tool use (is_input_complete = false), simulating the LLM
+ // streaming input for a tool.
+ let tool_use = LanguageModelToolUse {
+ id: "tool_1".into(),
+ name: "streaming_echo".into(),
+ raw_input: r#"{"text": "partial"}"#.into(),
+ input: json!({"text": "partial"}),
+ is_input_complete: false,
+ thought_signature: None,
+ };
+ fake_model
+ .send_last_completion_stream_event(LanguageModelCompletionEvent::ToolUse(tool_use.clone()));
+ cx.run_until_parked();
+
+ // Send a stream error WITHOUT ever sending is_input_complete = true.
+ // Before the fix, this would deadlock: the tool waits for more partials
+ // (or cancellation), run_turn_internal waits for the tool, and the sender
+ // keeping the channel open lives inside RunningTurn.
+ fake_model.send_last_completion_stream_error(
+ LanguageModelCompletionError::UpstreamProviderError {
+ message: "Internal server error".to_string(),
+ status: http_client::StatusCode::INTERNAL_SERVER_ERROR,
+ retry_after: None,
+ },
+ );
+ fake_model.end_last_completion_stream();
+
+ // Advance past the retry delay so run_turn_internal retries.
+ cx.executor().advance_clock(Duration::from_secs(5));
+ cx.run_until_parked();
+
+ // The retry request should contain the streaming tool's error result,
+ // proving the tool terminated and its result was forwarded.
+ let completion = fake_model
+ .pending_completions()
+ .pop()
+ .expect("No running turn");
+ assert_eq!(
+ completion.messages[1..],
+ vec![
+ LanguageModelRequestMessage {
+ role: Role::User,
+ content: vec!["Use the streaming_echo tool".into()],
+ cache: false,
+ reasoning_details: None,
+ },
+ LanguageModelRequestMessage {
+ role: Role::Assistant,
+ content: vec![language_model::MessageContent::ToolUse(tool_use.clone())],
+ cache: false,
+ reasoning_details: None,
+ },
+ LanguageModelRequestMessage {
+ role: Role::User,
+ content: vec![language_model::MessageContent::ToolResult(
+ LanguageModelToolResult {
+ tool_use_id: tool_use.id.clone(),
+ tool_name: tool_use.name,
+ is_error: true,
+ content: "Failed to receive tool input: tool input was not fully received"
+ .into(),
+ output: Some(
+ "Failed to receive tool input: tool input was not fully received"
+ .into()
+ ),
+ }
+ )],
+ cache: true,
+ reasoning_details: None,
+ },
+ ]
+ );
+
+ // Finish the retry round so the turn completes cleanly.
+ fake_model.send_last_completion_stream_text_chunk("Done");
+ fake_model.end_last_completion_stream();
+ cx.run_until_parked();
+
+ thread.read_with(cx, |thread, _cx| {
+ assert!(
+ thread.is_turn_complete(),
+ "Thread should not be stuck; the turn should have completed",
+ );
+ });
+}
+
/// Filters out the stop events for asserting against in tests
fn stop_events(result_events: Vec<Result<ThreadEvent>>) -> Vec<acp::StopReason> {
result_events
@@ -3575,6 +3767,7 @@ async fn setup(cx: &mut TestAppContext, model: TestModel) -> ThreadTest {
ToolRequiringPermission::NAME: true,
InfiniteTool::NAME: true,
CancellationAwareTool::NAME: true,
+ StreamingEchoTool::NAME: true,
(TerminalTool::NAME): true,
}
}
@@ -3991,11 +4184,11 @@ async fn test_terminal_tool_permission_rules(cx: &mut TestAppContext) {
let task = cx.update(|cx| {
tool.run(
- crate::TerminalToolInput {
+ ToolInput::resolved(crate::TerminalToolInput {
command: "rm -rf /".to_string(),
cd: ".".to_string(),
timeout_ms: None,
- },
+ }),
event_stream,
cx,
)
@@ -4043,11 +4236,11 @@ async fn test_terminal_tool_permission_rules(cx: &mut TestAppContext) {
let task = cx.update(|cx| {
tool.run(
- crate::TerminalToolInput {
+ ToolInput::resolved(crate::TerminalToolInput {
command: "echo hello".to_string(),
cd: ".".to_string(),
timeout_ms: None,
- },
+ }),
event_stream,
cx,
)
@@ -4101,11 +4294,11 @@ async fn test_terminal_tool_permission_rules(cx: &mut TestAppContext) {
let _task = cx.update(|cx| {
tool.run(
- crate::TerminalToolInput {
+ ToolInput::resolved(crate::TerminalToolInput {
command: "sudo rm file".to_string(),
cd: ".".to_string(),
timeout_ms: None,
- },
+ }),
event_stream,
cx,
)
@@ -4148,11 +4341,11 @@ async fn test_terminal_tool_permission_rules(cx: &mut TestAppContext) {
let task = cx.update(|cx| {
tool.run(
- crate::TerminalToolInput {
+ ToolInput::resolved(crate::TerminalToolInput {
command: "echo hello".to_string(),
cd: ".".to_string(),
timeout_ms: None,
- },
+ }),
event_stream,
cx,
)
@@ -4306,6 +4499,160 @@ async fn test_subagent_tool_call_end_to_end(cx: &mut TestAppContext) {
subagent task response
+ ## Assistant
+
+ Response
+
+ "#},
+ );
+}
+
+#[gpui::test]
+async fn test_subagent_tool_output_does_not_include_thinking(cx: &mut TestAppContext) {
+ init_test(cx);
+ cx.update(|cx| {
+ LanguageModelRegistry::test(cx);
+ });
+ cx.update(|cx| {
+ cx.update_flags(true, vec!["subagents".to_string()]);
+ });
+
+ let fs = FakeFs::new(cx.executor());
+ fs.insert_tree(
+ "/",
+ json!({
+ "a": {
+ "b.md": "Lorem"
+ }
+ }),
+ )
+ .await;
+ let project = Project::test(fs.clone(), [path!("/a").as_ref()], cx).await;
+ let thread_store = cx.new(|cx| ThreadStore::new(cx));
+ let agent = NativeAgent::new(
+ project.clone(),
+ thread_store.clone(),
+ Templates::new(),
+ None,
+ fs.clone(),
+ &mut cx.to_async(),
+ )
+ .await
+ .unwrap();
+ let connection = Rc::new(NativeAgentConnection(agent.clone()));
+
+ let acp_thread = cx
+ .update(|cx| {
+ connection
+ .clone()
+ .new_session(project.clone(), Path::new(""), cx)
+ })
+ .await
+ .unwrap();
+ let session_id = acp_thread.read_with(cx, |thread, _| thread.session_id().clone());
+ let thread = agent.read_with(cx, |agent, _| {
+ agent.sessions.get(&session_id).unwrap().thread.clone()
+ });
+ let model = Arc::new(FakeLanguageModel::default());
+
+ // Configure the thread's model before sending.
+ thread.update(cx, |thread, cx| {
+ thread.set_model(model.clone(), cx);
+ });
+ cx.run_until_parked();
+
+ let send = acp_thread.update(cx, |thread, cx| thread.send_raw("Prompt", cx));
+ cx.run_until_parked();
+ model.send_last_completion_stream_text_chunk("spawning subagent");
+ let subagent_tool_input = SpawnAgentToolInput {
+ label: "label".to_string(),
+ message: "subagent task prompt".to_string(),
+ session_id: None,
+ };
+ let subagent_tool_use = LanguageModelToolUse {
+ id: "subagent_1".into(),
+ name: SpawnAgentTool::NAME.into(),
+ raw_input: serde_json::to_string(&subagent_tool_input).unwrap(),
+ input: serde_json::to_value(&subagent_tool_input).unwrap(),
+ is_input_complete: true,
+ thought_signature: None,
+ };
+ model.send_last_completion_stream_event(LanguageModelCompletionEvent::ToolUse(
+ subagent_tool_use,
+ ));
+ model.end_last_completion_stream();
+
+ cx.run_until_parked();
+
+ let subagent_session_id = thread.read_with(cx, |thread, cx| {
+ thread
+ .running_subagent_ids(cx)
+ .get(0)
+ .expect("subagent thread should be running")
+ .clone()
+ });
+
+ let subagent_thread = agent.read_with(cx, |agent, _cx| {
+ agent
+ .sessions
+ .get(&subagent_session_id)
+ .expect("subagent session should exist")
+ .acp_thread
+ .clone()
+ });
+
+ model.send_last_completion_stream_text_chunk("subagent task response 1");
+ model.send_last_completion_stream_event(LanguageModelCompletionEvent::Thinking {
+ text: "thinking more about the subagent task".into(),
+ signature: None,
+ });
+ model.send_last_completion_stream_text_chunk("subagent task response 2");
+ model.end_last_completion_stream();
+
+ cx.run_until_parked();
+
+ assert_eq!(
+ subagent_thread.read_with(cx, |thread, cx| thread.to_markdown(cx)),
+ indoc! {"
+ ## User
+
+ subagent task prompt
+
+ ## Assistant
+
+ subagent task response 1
+
+
+ thinking more about the subagent task
+
+
+ subagent task response 2
+
+ "}
+ );
+
+ model.send_last_completion_stream_text_chunk("Response");
+ model.end_last_completion_stream();
+
+ send.await.unwrap();
+
+ assert_eq!(
+ acp_thread.read_with(cx, |thread, cx| thread.to_markdown(cx)),
+ indoc! {r#"
+ ## User
+
+ Prompt
+
+ ## Assistant
+
+ spawning subagent
+
+ **Tool Call: label**
+ Status: Completed
+
+ subagent task response 1
+
+ subagent task response 2
## Assistant
@@ -5309,11 +5656,11 @@ async fn test_edit_file_tool_deny_rule_blocks_edit(cx: &mut TestAppContext) {
let task = cx.update(|cx| {
tool.run(
- crate::EditFileToolInput {
+ ToolInput::resolved(crate::EditFileToolInput {
display_description: "Edit sensitive file".to_string(),
path: "root/sensitive_config.txt".into(),
mode: crate::EditFileMode::Edit,
- },
+ }),
event_stream,
cx,
)
@@ -5359,9 +5706,9 @@ async fn test_delete_path_tool_deny_rule_blocks_deletion(cx: &mut TestAppContext
let task = cx.update(|cx| {
tool.run(
- crate::DeletePathToolInput {
+ ToolInput::resolved(crate::DeletePathToolInput {
path: "root/important_data.txt".to_string(),
- },
+ }),
event_stream,
cx,
)
@@ -5411,10 +5758,10 @@ async fn test_move_path_tool_denies_if_destination_denied(cx: &mut TestAppContex
let task = cx.update(|cx| {
tool.run(
- crate::MovePathToolInput {
+ ToolInput::resolved(crate::MovePathToolInput {
source_path: "root/safe.txt".to_string(),
destination_path: "root/protected/safe.txt".to_string(),
- },
+ }),
event_stream,
cx,
)
@@ -5467,10 +5814,10 @@ async fn test_move_path_tool_denies_if_source_denied(cx: &mut TestAppContext) {
let task = cx.update(|cx| {
tool.run(
- crate::MovePathToolInput {
+ ToolInput::resolved(crate::MovePathToolInput {
source_path: "root/secret.txt".to_string(),
destination_path: "root/public/not_secret.txt".to_string(),
- },
+ }),
event_stream,
cx,
)
@@ -5525,10 +5872,10 @@ async fn test_copy_path_tool_deny_rule_blocks_copy(cx: &mut TestAppContext) {
let task = cx.update(|cx| {
tool.run(
- crate::CopyPathToolInput {
+ ToolInput::resolved(crate::CopyPathToolInput {
source_path: "root/confidential.txt".to_string(),
destination_path: "root/dest/copy.txt".to_string(),
- },
+ }),
event_stream,
cx,
)
@@ -5580,12 +5927,12 @@ async fn test_save_file_tool_denies_if_any_path_denied(cx: &mut TestAppContext)
let task = cx.update(|cx| {
tool.run(
- crate::SaveFileToolInput {
+ ToolInput::resolved(crate::SaveFileToolInput {
paths: vec![
std::path::PathBuf::from("root/normal.txt"),
std::path::PathBuf::from("root/readonly/config.txt"),
],
- },
+ }),
event_stream,
cx,
)
@@ -5632,9 +5979,9 @@ async fn test_save_file_tool_respects_deny_rules(cx: &mut TestAppContext) {
let task = cx.update(|cx| {
tool.run(
- crate::SaveFileToolInput {
+ ToolInput::resolved(crate::SaveFileToolInput {
paths: vec![std::path::PathBuf::from("root/config.secret")],
- },
+ }),
event_stream,
cx,
)
@@ -5676,7 +6023,7 @@ async fn test_web_search_tool_deny_rule_blocks_search(cx: &mut TestAppContext) {
let input: crate::WebSearchToolInput =
serde_json::from_value(json!({"query": "internal.company.com secrets"})).unwrap();
- let task = cx.update(|cx| tool.run(input, event_stream, cx));
+ let task = cx.update(|cx| tool.run(ToolInput::resolved(input), event_stream, cx));
let result = task.await;
assert!(result.is_err(), "expected search to be blocked");
@@ -5741,11 +6088,11 @@ async fn test_edit_file_tool_allow_rule_skips_confirmation(cx: &mut TestAppConte
let _task = cx.update(|cx| {
tool.run(
- crate::EditFileToolInput {
+ ToolInput::resolved(crate::EditFileToolInput {
display_description: "Edit README".to_string(),
path: "root/README.md".into(),
mode: crate::EditFileMode::Edit,
- },
+ }),
event_stream,
cx,
)
@@ -5811,11 +6158,11 @@ async fn test_edit_file_tool_allow_still_prompts_for_local_settings(cx: &mut Tes
let (event_stream, mut rx) = crate::ToolCallEventStream::test();
let _task = cx.update(|cx| {
tool.run(
- crate::EditFileToolInput {
+ ToolInput::resolved(crate::EditFileToolInput {
display_description: "Edit local settings".to_string(),
path: "root/.zed/settings.json".into(),
mode: crate::EditFileMode::Edit,
- },
+ }),
event_stream,
cx,
)
@@ -5855,7 +6202,7 @@ async fn test_fetch_tool_deny_rule_blocks_url(cx: &mut TestAppContext) {
let input: crate::FetchToolInput =
serde_json::from_value(json!({"url": "https://internal.company.com/api"})).unwrap();
- let task = cx.update(|cx| tool.run(input, event_stream, cx));
+ let task = cx.update(|cx| tool.run(ToolInput::resolved(input), event_stream, cx));
let result = task.await;
assert!(result.is_err(), "expected fetch to be blocked");
@@ -5893,7 +6240,7 @@ async fn test_fetch_tool_allow_rule_skips_confirmation(cx: &mut TestAppContext)
let input: crate::FetchToolInput =
serde_json::from_value(json!({"url": "https://docs.rs/some-crate"})).unwrap();
- let _task = cx.update(|cx| tool.run(input, event_stream, cx));
+ let _task = cx.update(|cx| tool.run(ToolInput::resolved(input), event_stream, cx));
cx.run_until_parked();
diff --git a/crates/agent/src/tests/test_tools.rs b/crates/agent/src/tests/test_tools.rs
index 0ed2eef90271538c575cc84b56a28df106e4bd41..ac179c590a93824813afa338d9deed16b4d00ebd 100644
--- a/crates/agent/src/tests/test_tools.rs
+++ b/crates/agent/src/tests/test_tools.rs
@@ -3,6 +3,57 @@ use agent_settings::AgentSettings;
use gpui::{App, SharedString, Task};
use std::future;
use std::sync::atomic::{AtomicBool, Ordering};
+use std::time::Duration;
+
+/// A streaming tool that echoes its input, used to test streaming tool
+/// lifecycle (e.g. partial delivery and cleanup when the LLM stream ends
+/// before `is_input_complete`).
+#[derive(JsonSchema, Serialize, Deserialize)]
+pub struct StreamingEchoToolInput {
+ /// The text to echo.
+ pub text: String,
+}
+
+pub struct StreamingEchoTool;
+
+impl AgentTool for StreamingEchoTool {
+ type Input = StreamingEchoToolInput;
+ type Output = String;
+
+ const NAME: &'static str = "streaming_echo";
+
+ fn supports_input_streaming() -> bool {
+ true
+ }
+
+ fn kind() -> acp::ToolKind {
+ acp::ToolKind::Other
+ }
+
+ fn initial_title(
+ &self,
+ _input: Result<Self::Input, serde_json::Value>,
+ _cx: &mut App,
+ ) -> SharedString {
+ "Streaming Echo".into()
+ }
+
+ fn run(
+ self: Arc,
+ mut input: ToolInput<Self::Input>,
+ _event_stream: ToolCallEventStream,
+ cx: &mut App,
+ ) -> Task<Result<String, String>> {
+ cx.spawn(async move |_cx| {
+ while input.recv_partial().await.is_some() {}
+ let input = input
+ .recv()
+ .await
+ .map_err(|e| format!("Failed to receive tool input: {e}"))?;
+ Ok(input.text)
+ })
+ }
+}
/// A tool that echoes its input
#[derive(JsonSchema, Serialize, Deserialize)]
@@ -33,11 +84,17 @@ impl AgentTool for EchoTool {
fn run(
self: Arc,
- input: Self::Input,
+ input: ToolInput<Self::Input>,
_event_stream: ToolCallEventStream,
- _cx: &mut App,
+ cx: &mut App,
) -> Task<Result<String, String>> {
- Task::ready(Ok(input.text))
+ cx.spawn(async move |_cx| {
+ let input = input
+ .recv()
+ .await
+ .map_err(|e| format!("Failed to receive tool input: {e}"))?;
+ Ok(input.text)
+ })
}
}
@@ -74,7 +131,7 @@ impl AgentTool for DelayTool {
fn run(
self: Arc,
- input: Self::Input,
+ input: ToolInput,
_event_stream: ToolCallEventStream,
cx: &mut App,
) -> Task<Result<String, String>>
@@ -83,6 +140,10 @@ impl AgentTool for DelayTool {
{
let executor = cx.background_executor().clone();
cx.foreground_executor().spawn(async move {
+ let input = input
+ .recv()
+ .await
+ .map_err(|e| format!("Failed to receive tool input: {e}"))?;
executor.timer(Duration::from_millis(input.ms)).await;
Ok("Ding".to_string())
})
@@ -114,28 +175,38 @@ impl AgentTool for ToolRequiringPermission {
fn run(
self: Arc,
- _input: Self::Input,
+ input: ToolInput<Self::Input>,
event_stream: ToolCallEventStream,
cx: &mut App,
) -> Task<Result<String, String>> {
- let settings = AgentSettings::get_global(cx);
- let decision = decide_permission_from_settings(Self::NAME, &[String::new()], settings);
-
- let authorize = match decision {
- ToolPermissionDecision::Allow => None,
- ToolPermissionDecision::Deny(reason) => {
- return Task::ready(Err(reason));
- }
- ToolPermissionDecision::Confirm => {
- let context = crate::ToolPermissionContext::new(
- "tool_requiring_permission",
- vec![String::new()],
- );
- Some(event_stream.authorize("Authorize?", context, cx))
- }
- };
+ cx.spawn(async move |cx| {
+ let _input = input
+ .recv()
+ .await
+ .map_err(|e| format!("Failed to receive tool input: {e}"))?;
+
+ let decision = cx.update(|cx| {
+ decide_permission_from_settings(
+ Self::NAME,
+ &[String::new()],
+ AgentSettings::get_global(cx),
+ )
+ }).map_err(|e| e.to_string())?;
+
+ let authorize = match decision {
+ ToolPermissionDecision::Allow => None,
+ ToolPermissionDecision::Deny(reason) => {
+ return Err(reason);
+ }
+ ToolPermissionDecision::Confirm => Some(cx.update(|cx| {
+ let context = crate::ToolPermissionContext::new(
+ "tool_requiring_permission",
+ vec![String::new()],
+ );
+ event_stream.authorize("Authorize?", context, cx)
+ }).map_err(|e| e.to_string())?),
+ };
- cx.foreground_executor().spawn(async move {
if let Some(authorize) = authorize {
authorize.await.map_err(|e| e.to_string())?;
}
@@ -169,11 +240,15 @@ impl AgentTool for InfiniteTool {
fn run(
self: Arc,
- _input: Self::Input,
+ input: ToolInput<Self::Input>,
_event_stream: ToolCallEventStream,
cx: &mut App,
) -> Task<Result<String, String>> {
cx.foreground_executor().spawn(async move {
+ let _input = input
+ .recv()
+ .await
+ .map_err(|e| format!("Failed to receive tool input: {e}"))?;
future::pending::<()>().await;
unreachable!()
})
@@ -221,11 +296,15 @@ impl AgentTool for CancellationAwareTool {
fn run(
self: Arc,
- _input: Self::Input,
+ input: ToolInput<Self::Input>,
event_stream: ToolCallEventStream,
cx: &mut App,
) -> Task<Result<String, String>> {
cx.foreground_executor().spawn(async move {
+ let _input = input
+ .recv()
+ .await
+ .map_err(|e| format!("Failed to receive tool input: {e}"))?;
// Wait for cancellation - this tool does nothing but wait to be cancelled
event_stream.cancelled_by_user().await;
self.was_cancelled.store(true, Ordering::SeqCst);
@@ -276,10 +355,16 @@ impl AgentTool for WordListTool {
fn run(
self: Arc,
- _input: Self::Input,
+ input: ToolInput<Self::Input>,
_event_stream: ToolCallEventStream,
- _cx: &mut App,
+ cx: &mut App,
) -> Task<Result<String, String>> {
- Task::ready(Ok("ok".to_string()))
+ cx.spawn(async move |_cx| {
+ let _input = input
+ .recv()
+ .await
+ .map_err(|e| format!("Failed to receive tool input: {e}"))?;
+ Ok("ok".to_string())
+ })
}
}
diff --git a/crates/agent/src/thread.rs b/crates/agent/src/thread.rs
index 5d4de36cb69335de7a77eb7ad7a15f75b8e2b0b7..73102929ac58caaf96b06e6ab74ded698cbe86e3 100644
--- a/crates/agent/src/thread.rs
+++ b/crates/agent/src/thread.rs
@@ -1,16 +1,14 @@
use crate::{
- AgentGitWorktreeInfo, ContextServerRegistry, CopyPathTool, CreateDirectoryTool,
- DbLanguageModel, DbThread, DeletePathTool, DiagnosticsTool, EditFileTool, FetchTool,
- FindPathTool, GrepTool, ListDirectoryTool, MovePathTool, NowTool, OpenTool, ProjectSnapshot,
- ReadFileTool, RestoreFileFromDiskTool, SaveFileTool, SpawnAgentTool, StreamingEditFileTool,
+ ContextServerRegistry, CopyPathTool, CreateDirectoryTool, DbLanguageModel, DbThread,
+ DeletePathTool, DiagnosticsTool, EditFileTool, FetchTool, FindPathTool, GrepTool,
+ ListDirectoryTool, MovePathTool, NowTool, OpenTool, ProjectSnapshot, ReadFileTool,
+ RestoreFileFromDiskTool, SaveFileTool, SpawnAgentTool, StreamingEditFileTool,
SystemPromptTemplate, Template, Templates, TerminalTool, ToolPermissionDecision, WebSearchTool,
decide_permission_from_settings,
};
use acp_thread::{MentionUri, UserMessageId};
use action_log::ActionLog;
-use feature_flags::{
- FeatureFlagAppExt as _, StreamingEditFileToolFeatureFlag, SubagentsFeatureFlag,
-};
+use feature_flags::{FeatureFlagAppExt as _, StreamingEditFileToolFeatureFlag};
use agent_client_protocol as acp;
use agent_settings::{
@@ -40,16 +38,19 @@ use language_model::{
LanguageModelImage, LanguageModelProviderId, LanguageModelRegistry, LanguageModelRequest,
LanguageModelRequestMessage, LanguageModelRequestTool, LanguageModelToolResult,
LanguageModelToolResultContent, LanguageModelToolSchemaFormat, LanguageModelToolUse,
- LanguageModelToolUseId, Role, SelectedModel, StopReason, TokenUsage, ZED_CLOUD_PROVIDER_ID,
+ LanguageModelToolUseId, Role, SelectedModel, Speed, StopReason, TokenUsage,
+ ZED_CLOUD_PROVIDER_ID,
};
use project::Project;
use prompt_store::ProjectContext;
use schemars::{JsonSchema, Schema};
+use serde::de::DeserializeOwned;
use serde::{Deserialize, Serialize};
use settings::{LanguageModelSelection, Settings, ToolPermissionMode, update_settings_file};
use smol::stream::StreamExt;
use std::{
collections::BTreeMap,
+ marker::PhantomData,
ops::RangeInclusive,
path::Path,
rc::Rc,
@@ -602,8 +603,13 @@ pub trait TerminalHandle {
}
pub trait SubagentHandle {
+ /// The session ID of this subagent thread
fn id(&self) -> acp::SessionId;
- fn wait_for_output(&self, cx: &AsyncApp) -> Task<Result<String>>;
+ /// The current number of entries in the thread.
+ /// Useful for knowing where the next turn will begin
+ fn num_entries(&self, cx: &App) -> usize;
+ /// Runs a turn for a given message and returns both the response and the index of that output message.
+ fn send(&self, message: String, cx: &AsyncApp) -> Task<Result<(String, usize)>>;
}
pub trait ThreadEnvironment {
@@ -615,19 +621,11 @@ pub trait ThreadEnvironment {
cx: &mut AsyncApp,
) -> Task<Result<Rc<dyn TerminalHandle>>>;
- fn create_subagent(
- &self,
- parent_thread: Entity<Thread>,
- label: String,
- initial_prompt: String,
- cx: &mut App,
- ) -> Result<Rc<dyn SubagentHandle>>;
+ fn create_subagent(&self, label: String, cx: &mut App) -> Result<Rc<dyn SubagentHandle>>;
fn resume_subagent(
&self,
- _parent_thread: Entity<Thread>,
_session_id: acp::SessionId,
- _follow_up_prompt: String,
_cx: &mut App,
) -> Result<Rc<dyn SubagentHandle>> {
Err(anyhow::anyhow!(
@@ -890,20 +888,20 @@ pub struct Thread {
summarization_model: Option<Arc<dyn LanguageModel>>,
thinking_enabled: bool,
thinking_effort: Option,
+ speed: Option<Speed>,
prompt_capabilities_tx: watch::Sender<acp::PromptCapabilities>,
pub(crate) prompt_capabilities_rx: watch::Receiver<acp::PromptCapabilities>,
pub(crate) project: Entity<Project>,
pub(crate) action_log: Entity<ActionLog>,
- /// Tracks the last time files were read by the agent, to detect external modifications
- pub(crate) file_read_times: HashMap,
/// True if this thread was imported from a shared thread and can be synced.
imported: bool,
/// If this is a subagent thread, contains context about the parent
subagent_context: Option<SubagentContext>,
+ /// The user's unsent prompt text, persisted so it can be restored when reloading the thread.
+ draft_prompt: Option<Vec<acp::ContentBlock>>,
+ ui_scroll_position: Option<gpui::ListOffset>,
/// Weak references to running subagent threads for cancellation propagation
running_subagents: Vec<WeakEntity<Thread>>,
- /// Git worktree info if this thread is running in an agent worktree.
- git_worktree_info: Option<AgentGitWorktreeInfo>,
}
impl Thread {
@@ -920,12 +918,16 @@ impl Thread {
let context_server_registry = parent_thread.read(cx).context_server_registry.clone();
let templates = parent_thread.read(cx).templates.clone();
let model = parent_thread.read(cx).model().cloned();
- let mut thread = Self::new(
+ let parent_action_log = parent_thread.read(cx).action_log().clone();
+ let action_log =
+ cx.new(|_cx| ActionLog::new(project.clone()).with_linked_action_log(parent_action_log));
+ let mut thread = Self::new_internal(
project,
project_context,
context_server_registry,
templates,
model,
+ action_log,
cx,
);
thread.subagent_context = Some(SubagentContext {
@@ -942,6 +944,26 @@ impl Thread {
templates: Arc<Templates>,
model: Option<Arc<dyn LanguageModel>>,
cx: &mut Context<Self>,
+ ) -> Self {
+ Self::new_internal(
+ project.clone(),
+ project_context,
+ context_server_registry,
+ templates,
+ model,
+ cx.new(|_cx| ActionLog::new(project)),
+ cx,
+ )
+ }
+
+ fn new_internal(
+ project: Entity<Project>,
+ project_context: Entity<ProjectContext>,
+ context_server_registry: Entity<ContextServerRegistry>,
+ templates: Arc<Templates>,
+ model: Option<Arc<dyn LanguageModel>>,
+ action_log: Entity<ActionLog>,
+ cx: &mut Context<Self>,
) -> Self {
let settings = AgentSettings::get_global(cx);
let profile_id = settings.default_profile.clone();
@@ -953,7 +975,6 @@ impl Thread {
.default_model
.as_ref()
.and_then(|model| model.effort.clone());
- let action_log = cx.new(|_cx| ActionLog::new(project.clone()));
let (prompt_capabilities_tx, prompt_capabilities_rx) =
watch::channel(Self::prompt_capabilities(model.as_deref()));
Self {
@@ -985,16 +1006,17 @@ impl Thread {
model,
summarization_model: None,
thinking_enabled: enable_thinking,
+ speed: None,
thinking_effort,
prompt_capabilities_tx,
prompt_capabilities_rx,
project,
action_log,
- file_read_times: HashMap::default(),
imported: false,
subagent_context: None,
+ draft_prompt: None,
+ ui_scroll_position: None,
running_subagents: Vec::new(),
- git_worktree_info: None,
}
}
@@ -1143,10 +1165,6 @@ impl Thread {
let profile_id = db_thread
.profile
.unwrap_or_else(|| settings.default_profile.clone());
- let thinking_effort = settings
- .default_model
- .as_ref()
- .and_then(|model| model.effort.clone());
let mut model = LanguageModelRegistry::global(cx).update(cx, |registry, cx| {
db_thread
@@ -1175,12 +1193,6 @@ impl Thread {
watch::channel(Self::prompt_capabilities(model.as_deref()));
let action_log = cx.new(|_| ActionLog::new(project.clone()));
- // TODO: We should serialize the user's configured thinking parameter on `DbThread`
- // rather than deriving it from the model's capability. A user may have explicitly
- // toggled thinking off for a model that supports it, and we'd lose that preference here.
- let enable_thinking = model
- .as_deref()
- .is_some_and(|model| model.supports_thinking());
Self {
id,
@@ -1208,18 +1220,22 @@ impl Thread {
templates,
model,
summarization_model: None,
- thinking_enabled: enable_thinking,
- thinking_effort,
+ thinking_enabled: db_thread.thinking_enabled,
+ thinking_effort: db_thread.thinking_effort,
+ speed: db_thread.speed,
project,
action_log,
updated_at: db_thread.updated_at,
prompt_capabilities_tx,
prompt_capabilities_rx,
- file_read_times: HashMap::default(),
imported: db_thread.imported,
subagent_context: db_thread.subagent_context,
+ draft_prompt: db_thread.draft_prompt,
+ ui_scroll_position: db_thread.ui_scroll_position.map(|sp| gpui::ListOffset {
+ item_ix: sp.item_ix,
+ offset_in_item: gpui::px(sp.offset_in_item),
+ }),
running_subagents: Vec::new(),
- git_worktree_info: db_thread.git_worktree_info,
}
}
@@ -1240,7 +1256,16 @@ impl Thread {
profile: Some(self.profile_id.clone()),
imported: self.imported,
subagent_context: self.subagent_context.clone(),
- git_worktree_info: self.git_worktree_info.clone(),
+ speed: self.speed,
+ thinking_enabled: self.thinking_enabled,
+ thinking_effort: self.thinking_effort.clone(),
+ draft_prompt: self.draft_prompt.clone(),
+ ui_scroll_position: self.ui_scroll_position.map(|lo| {
+ crate::db::SerializedScrollPosition {
+ item_ix: lo.item_ix,
+ offset_in_item: lo.offset_in_item.as_f32(),
+ }
+ }),
};
cx.background_spawn(async move {
@@ -1282,19 +1307,42 @@ impl Thread {
self.messages.is_empty() && self.title.is_none()
}
+ pub fn draft_prompt(&self) -> Option<&[acp::ContentBlock]> {
+ self.draft_prompt.as_deref()
+ }
+
+ pub fn set_draft_prompt(&mut self, prompt: Option<Vec<acp::ContentBlock>>) {
+ self.draft_prompt = prompt;
+ }
+
+ pub fn ui_scroll_position(&self) -> Option<gpui::ListOffset> {
+ self.ui_scroll_position
+ }
+
+ pub fn set_ui_scroll_position(&mut self, position: Option<gpui::ListOffset>) {
+ self.ui_scroll_position = position;
+ }
+
pub fn model(&self) -> Option<&Arc<dyn LanguageModel>> {
self.model.as_ref()
}
pub fn set_model(&mut self, model: Arc<dyn LanguageModel>, cx: &mut Context<Self>) {
let old_usage = self.latest_token_usage();
- self.model = Some(model);
+ self.model = Some(model.clone());
let new_caps = Self::prompt_capabilities(self.model.as_deref());
let new_usage = self.latest_token_usage();
if old_usage != new_usage {
cx.emit(TokenUsageUpdated(new_usage));
}
self.prompt_capabilities_tx.send(new_caps).log_err();
+
+ for subagent in &self.running_subagents {
+ subagent
+ .update(cx, |thread, cx| thread.set_model(model.clone(), cx))
+ .ok();
+ }
+
cx.notify()
}
@@ -1307,7 +1355,15 @@ impl Thread {
model: Option<Arc<dyn LanguageModel>>,
cx: &mut Context<Self>,
) {
- self.summarization_model = model;
+ self.summarization_model = model.clone();
+
+ for subagent in &self.running_subagents {
+ subagent
+ .update(cx, |thread, cx| {
+ thread.set_summarization_model(model.clone(), cx)
+ })
+ .ok();
+ }
cx.notify()
}
@@ -1317,6 +1373,12 @@ impl Thread {
pub fn set_thinking_enabled(&mut self, enabled: bool, cx: &mut Context<Self>) {
self.thinking_enabled = enabled;
+
+ for subagent in &self.running_subagents {
+ subagent
+ .update(cx, |thread, cx| thread.set_thinking_enabled(enabled, cx))
+ .ok();
+ }
cx.notify();
}
@@ -1325,11 +1387,39 @@ impl Thread {
}
pub fn set_thinking_effort(&mut self, effort: Option, cx: &mut Context) {
- self.thinking_effort = effort;
+ self.thinking_effort = effort.clone();
+
+ for subagent in &self.running_subagents {
+ subagent
+ .update(cx, |thread, cx| {
+ thread.set_thinking_effort(effort.clone(), cx)
+ })
+ .ok();
+ }
cx.notify();
}
- pub fn last_message(&self) -> Option {
+ pub fn speed(&self) -> Option<Speed> {
+ self.speed
+ }
+
+ pub fn set_speed(&mut self, speed: Speed, cx: &mut Context<Self>) {
+ self.speed = Some(speed);
+
+ for subagent in &self.running_subagents {
+ subagent
+ .update(cx, |thread, cx| thread.set_speed(speed, cx))
+ .ok();
+ }
+ cx.notify();
+ }
+
+ pub fn last_message(&self) -> Option<&Message> {
+ self.messages.last()
+ }
+
+ #[cfg(any(test, feature = "test-support"))]
+ pub fn last_received_or_pending_message(&self) -> Option<Message> {
if let Some(message) = self.pending_message.clone() {
Some(Message::Agent(message))
} else {
@@ -1342,6 +1432,9 @@ impl Thread {
environment: Rc,
cx: &mut Context<Self>,
) {
+ // Only update the agent location for the root thread, not for subagents.
+ let update_agent_location = self.parent_thread_id().is_none();
+
let language_registry = self.project.read(cx).languages().clone();
self.add_tool(CopyPathTool::new(self.project.clone()));
self.add_tool(CreateDirectoryTool::new(self.project.clone()));
@@ -1359,8 +1452,8 @@ impl Thread {
self.add_tool(StreamingEditFileTool::new(
self.project.clone(),
cx.weak_entity(),
+ self.action_log.clone(),
language_registry,
- Templates::new(),
));
self.add_tool(FetchTool::new(self.project.read(cx).client().http_client()));
self.add_tool(FindPathTool::new(self.project.clone()));
@@ -1370,17 +1463,17 @@ impl Thread {
self.add_tool(NowTool);
self.add_tool(OpenTool::new(self.project.clone()));
self.add_tool(ReadFileTool::new(
- cx.weak_entity(),
self.project.clone(),
self.action_log.clone(),
+ update_agent_location,
));
self.add_tool(SaveFileTool::new(self.project.clone()));
self.add_tool(RestoreFileFromDiskTool::new(self.project.clone()));
self.add_tool(TerminalTool::new(self.project.clone(), environment.clone()));
self.add_tool(WebSearchTool);
- if cx.has_flag::() && self.depth() < MAX_SUBAGENT_DEPTH {
- self.add_tool(SpawnAgentTool::new(cx.weak_entity(), environment));
+ if self.depth() < MAX_SUBAGENT_DEPTH {
+ self.add_tool(SpawnAgentTool::new(environment));
}
}
@@ -1393,6 +1486,7 @@ impl Thread {
self.tools.insert(T::NAME.into(), tool.erase());
}
+ #[cfg(any(test, feature = "test-support"))]
pub fn remove_tool(&mut self, name: &str) -> bool {
self.tools.remove(name).is_some()
}
@@ -1406,12 +1500,18 @@ impl Thread {
return;
}
- self.profile_id = profile_id;
+ self.profile_id = profile_id.clone();
// Swap to the profile's preferred model when available.
if let Some(model) = Self::resolve_profile_model(&self.profile_id, cx) {
self.set_model(model, cx);
}
+
+ for subagent in &self.running_subagents {
+ subagent
+ .update(cx, |thread, cx| thread.set_profile(profile_id.clone(), cx))
+ .ok();
+ }
}
pub fn cancel(&mut self, cx: &mut Context<Self>) -> Task<()> {
@@ -1664,6 +1764,7 @@ impl Thread {
event_stream: event_stream.clone(),
tools: self.enabled_tools(profile, &model, cx),
cancellation_tx,
+ streaming_tool_inputs: HashMap::default(),
_task: cx.spawn(async move |this, cx| {
log::debug!("Starting agent turn execution");
@@ -1730,6 +1831,9 @@ impl Thread {
telemetry::event!(
"Agent Thread Completion",
thread_id = this.read_with(cx, |this, _| this.id.to_string())?,
+ parent_thread_id = this.read_with(cx, |this, _| this
+ .parent_thread_id()
+ .map(|id| id.to_string()))?,
prompt_id = this.read_with(cx, |this, _| this.prompt_id.to_string())?,
model = model.telemetry_id(),
model_provider = model.provider_id().to_string(),
@@ -1814,6 +1918,19 @@ impl Thread {
// that need their own permits.
drop(events);
+ // Drop streaming tool input senders that never received their final input.
+ // This prevents deadlock when the LLM stream ends (e.g. because of an error)
+ // before sending a tool use with `is_input_complete: true`.
+ this.update(cx, |this, _cx| {
+ if let Some(running_turn) = this.running_turn.as_mut() {
+ if running_turn.streaming_tool_inputs.is_empty() {
+ return;
+ }
+ log::warn!("Dropping partial tool inputs because the stream ended");
+ running_turn.streaming_tool_inputs.drain();
+ }
+ })?;
+
let end_turn = tool_results.is_empty();
while let Some(tool_result) = tool_results.next().await {
log::debug!("Tool finished {:?}", tool_result);
@@ -1856,7 +1973,15 @@ impl Thread {
})??;
let timer = cx.background_executor().timer(retry.duration);
event_stream.send_retry(retry);
- timer.await;
+ futures::select! {
+ _ = timer.fuse() => {}
+ _ = cancellation_rx.changed().fuse() => {
+ if *cancellation_rx.borrow() {
+ log::debug!("Turn cancelled during retry delay, exiting");
+ return Ok(());
+ }
+ }
+ }
this.update(cx, |this, _cx| {
if let Some(Message::Agent(message)) = this.messages.last() {
if message.tool_results.is_empty() {
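The hunk above races the retry timer against the cancellation watch channel with `futures::select!`, so a user cancel no longer has to wait out the full retry delay. A standalone, std-only analogue of that cancellable delay (names here are illustrative, not from this diff; `recv_timeout` stands in for the `select!` race):

```rust
use std::sync::mpsc;
use std::time::Duration;

// Wait for up to `delay`, but return early if a cancellation message arrives.
// Returns true when cancelled during the delay, false when the delay elapsed.
fn wait_retry_or_cancel(cancel_rx: &mpsc::Receiver<()>, delay: Duration) -> bool {
    match cancel_rx.recv_timeout(delay) {
        Ok(()) => true,  // cancellation arrived during the retry delay
        Err(_) => false, // full delay elapsed (or sender dropped); proceed with retry
    }
}

fn main() {
    let (cancel_tx, cancel_rx) = mpsc::channel();
    cancel_tx.send(()).unwrap();
    // A pending cancellation short-circuits the delay.
    println!("cancelled = {}", wait_retry_or_cancel(&cancel_rx, Duration::from_millis(100)));
}
```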
@@ -1988,6 +2113,7 @@ impl Thread {
telemetry::event!(
"Agent Thread Completion Usage Updated",
thread_id = self.id.to_string(),
+ parent_thread_id = self.parent_thread_id().map(|id| id.to_string()),
prompt_id = self.prompt_id.to_string(),
model = self.model.as_ref().map(|m| m.telemetry_id()),
model_provider = self.model.as_ref().map(|m| m.provider_id().to_string()),
@@ -2068,10 +2194,6 @@ impl Thread {
self.send_or_update_tool_use(&tool_use, title, kind, event_stream);
- if !tool_use.is_input_complete {
- return None;
- }
-
let Some(tool) = tool else {
let content = format!("No tool named {} exists", tool_use.name);
return Some(Task::ready(LanguageModelToolResult {
@@ -2083,9 +2205,72 @@ impl Thread {
}));
};
+ if !tool_use.is_input_complete {
+ if tool.supports_input_streaming() {
+ let running_turn = self.running_turn.as_mut()?;
+ if let Some(sender) = running_turn.streaming_tool_inputs.get(&tool_use.id) {
+ sender.send_partial(tool_use.input);
+ return None;
+ }
+
+ let (sender, tool_input) = ToolInputSender::channel();
+ sender.send_partial(tool_use.input);
+ running_turn
+ .streaming_tool_inputs
+ .insert(tool_use.id.clone(), sender);
+
+ let tool = tool.clone();
+ log::debug!("Running streaming tool {}", tool_use.name);
+ return Some(self.run_tool(
+ tool,
+ tool_input,
+ tool_use.id,
+ tool_use.name,
+ event_stream,
+ cancellation_rx,
+ cx,
+ ));
+ } else {
+ return None;
+ }
+ }
+
+ if let Some(sender) = self
+ .running_turn
+ .as_mut()?
+ .streaming_tool_inputs
+ .remove(&tool_use.id)
+ {
+ sender.send_final(tool_use.input);
+ return None;
+ }
+
+ log::debug!("Running tool {}", tool_use.name);
+ let tool_input = ToolInput::ready(tool_use.input);
+ Some(self.run_tool(
+ tool,
+ tool_input,
+ tool_use.id,
+ tool_use.name,
+ event_stream,
+ cancellation_rx,
+ cx,
+ ))
+ }
+
+ fn run_tool(
+ &self,
+ tool: Arc,
+ tool_input: ToolInput<serde_json::Value>,
+ tool_use_id: LanguageModelToolUseId,
+ tool_name: Arc<str>,
+ event_stream: &ThreadEventStream,
+ cancellation_rx: watch::Receiver<bool>,
+ cx: &mut Context<Self>,
+ ) -> Task<LanguageModelToolResult> {
let fs = self.project.read(cx).fs().clone();
let tool_event_stream = ToolCallEventStream::new(
- tool_use.id.clone(),
+ tool_use_id.clone(),
event_stream.clone(),
Some(fs),
cancellation_rx,
@@ -2094,9 +2279,8 @@ impl Thread {
acp::ToolCallUpdateFields::new().status(acp::ToolCallStatus::InProgress),
);
let supports_images = self.model().is_some_and(|model| model.supports_images());
- let tool_result = tool.run(tool_use.input, tool_event_stream, cx);
- log::debug!("Running tool {}", tool_use.name);
- Some(cx.foreground_executor().spawn(async move {
+ let tool_result = tool.run(tool_input, tool_event_stream, cx);
+ cx.foreground_executor().spawn(async move {
let (is_error, output) = match tool_result.await {
Ok(mut output) => {
if let LanguageModelToolResultContent::Image(_) = &output.llm_output
@@ -2114,13 +2298,13 @@ impl Thread {
};
LanguageModelToolResult {
- tool_use_id: tool_use.id,
- tool_name: tool_use.name,
+ tool_use_id,
+ tool_name,
is_error,
content: output.llm_output,
output: Some(output.raw_output),
}
- }))
+ })
}
fn handle_tool_use_json_parse_error_event(
@@ -2165,20 +2349,18 @@ impl Thread {
) {
// Ensure the last message ends in the current tool use
let last_message = self.pending_message();
- let push_new_tool_use = last_message.content.last_mut().is_none_or(|content| {
+
+ let has_tool_use = last_message.content.iter_mut().rev().any(|content| {
if let AgentMessageContent::ToolUse(last_tool_use) = content {
if last_tool_use.id == tool_use.id {
*last_tool_use = tool_use.clone();
- false
- } else {
- true
+ return true;
}
- } else {
- true
}
+ false
});
- if push_new_tool_use {
+ if !has_tool_use {
event_stream.send_tool_call(
&tool_use.id,
&tool_use.name,
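The rewritten `send_or_update_tool_use` above scans the message content in reverse for an existing tool use with the same id, updating it in place, and only pushes a new one when no match exists. A simplified standalone sketch of that upsert logic (types and names are illustrative, not the diff's `AgentMessageContent`):

```rust
#[derive(Clone, Debug, PartialEq)]
struct ToolUseSketch {
    id: u32,
    input: String,
}

// Replace the entry with a matching id if one exists (scanning from the end,
// like the diff's `iter_mut().rev()`), otherwise append a new one.
// Returns true when a new tool use was pushed.
fn upsert_tool_use(content: &mut Vec<ToolUseSketch>, tool_use: ToolUseSketch) -> bool {
    let has_tool_use = content.iter_mut().rev().any(|existing| {
        if existing.id == tool_use.id {
            *existing = tool_use.clone();
            return true;
        }
        false
    });
    if !has_tool_use {
        content.push(tool_use);
    }
    !has_tool_use
}

fn main() {
    let mut content = vec![ToolUseSketch { id: 1, input: "a".into() }];
    upsert_tool_use(&mut content, ToolUseSketch { id: 1, input: "b".into() });
    println!("{:?}", content);
}
```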
@@ -2321,7 +2503,12 @@ impl Thread {
anyhow::Ok(())
};
- if generate.await.context("failed to generate title").is_ok() {
+ if generate
+ .await
+ .context("failed to generate thread title")
+ .log_err()
+ .is_some()
+ {
_ = this.update(cx, |this, cx| this.set_title(title.into(), cx));
}
_ = this.update(cx, |this, _| this.pending_title_generation = None);
@@ -2406,6 +2593,7 @@ impl Thread {
name: tool_name.to_string(),
description: tool.description().to_string(),
input_schema: tool.input_schema(model.tool_input_format()).log_err()?,
+ use_input_streaming: tool.supports_input_streaming(),
})
})
.collect::<Vec<_>>()
@@ -2437,6 +2625,7 @@ impl Thread {
temperature: AgentSettings::temperature_for_model(model, cx),
thinking_allowed: self.thinking_enabled,
thinking_effort: self.thinking_effort.clone(),
+ speed: self.speed(),
};
log::debug!("Completion request built successfully");
@@ -2459,7 +2648,8 @@ impl Thread {
}
}
- let use_streaming_edit_tool = cx.has_flag::();
+ let use_streaming_edit_tool =
+ cx.has_flag::() && model.supports_streaming_tools();
let mut tools = self
.tools
@@ -2776,6 +2966,9 @@ struct RunningTurn {
/// Sender to signal tool cancellation. When cancel is called, this is
/// set to true so all tools can detect user-initiated cancellation.
cancellation_tx: watch::Sender<bool>,
+ /// Senders for tools that support input streaming and have already been
+ /// started but are still receiving input from the LLM.
+ streaming_tool_inputs: HashMap<LanguageModelToolUseId, ToolInputSender>,
}
impl RunningTurn {
@@ -2795,6 +2988,103 @@ pub struct TitleUpdated;
impl EventEmitter for Thread {}
+/// A channel-based wrapper that delivers tool input to a running tool.
+///
+/// For non-streaming tools, created via `ToolInput::ready()` so `.recv()` resolves immediately.
+/// For streaming tools, partial JSON snapshots arrive via `.recv_partial()` as the LLM streams
+/// them, followed by the final complete input available through `.recv()`.
+pub struct ToolInput<T> {
+ partial_rx: mpsc::UnboundedReceiver<serde_json::Value>,
+ final_rx: oneshot::Receiver<serde_json::Value>,
+ _phantom: PhantomData<T>,
+}
+
+impl<T: DeserializeOwned> ToolInput<T> {
+ #[cfg(any(test, feature = "test-support"))]
+ pub fn resolved(input: impl Serialize) -> Self {
+ let value = serde_json::to_value(input).expect("failed to serialize tool input");
+ Self::ready(value)
+ }
+
+ pub fn ready(value: serde_json::Value) -> Self {
+ let (partial_tx, partial_rx) = mpsc::unbounded();
+ drop(partial_tx);
+ let (final_tx, final_rx) = oneshot::channel();
+ final_tx.send(value).ok();
+ Self {
+ partial_rx,
+ final_rx,
+ _phantom: PhantomData,
+ }
+ }
+
+ #[cfg(any(test, feature = "test-support"))]
+ pub fn test() -> (ToolInputSender, Self) {
+ let (sender, input) = ToolInputSender::channel();
+ (sender, input.cast())
+ }
+
+ /// Wait for the final deserialized input, ignoring all partial updates.
+ /// Non-streaming tools can use this to wait until the whole input is available.
+ pub async fn recv(mut self) -> Result<T> {
+ // Drain any remaining partials
+ while self.partial_rx.next().await.is_some() {}
+ let value = self
+ .final_rx
+ .await
+ .map_err(|_| anyhow!("tool input was not fully received"))?;
+ serde_json::from_value(value).map_err(Into::into)
+ }
+
+ /// Returns the next partial JSON snapshot, or `None` when input is complete.
+ /// Once this returns `None`, call `recv()` to get the final input.
+ pub async fn recv_partial(&mut self) -> Option<serde_json::Value> {
+ self.partial_rx.next().await
+ }
+
+ fn cast<U>(self) -> ToolInput<U> {
+ ToolInput {
+ partial_rx: self.partial_rx,
+ final_rx: self.final_rx,
+ _phantom: PhantomData,
+ }
+ }
+}
+
+pub struct ToolInputSender {
+ partial_tx: mpsc::UnboundedSender<serde_json::Value>,
+ final_tx: Option<oneshot::Sender<serde_json::Value>>,
+}
+
+impl ToolInputSender {
+ pub(crate) fn channel() -> (Self, ToolInput<serde_json::Value>) {
+ let (partial_tx, partial_rx) = mpsc::unbounded();
+ let (final_tx, final_rx) = oneshot::channel();
+ let sender = Self {
+ partial_tx,
+ final_tx: Some(final_tx),
+ };
+ let input = ToolInput {
+ partial_rx,
+ final_rx,
+ _phantom: PhantomData,
+ };
+ (sender, input)
+ }
+
+ pub(crate) fn send_partial(&self, value: serde_json::Value) {
+ self.partial_tx.unbounded_send(value).ok();
+ }
+
+ pub(crate) fn send_final(mut self, value: serde_json::Value) {
+ // Close the partial channel so recv_partial() returns None
+ self.partial_tx.close_channel();
+ if let Some(final_tx) = self.final_tx.take() {
+ final_tx.send(value).ok();
+ }
+ }
+}
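The `ToolInput`/`ToolInputSender` pair above delivers partial JSON snapshots over an unbounded channel and the final value over a oneshot, with `send_final` closing the partial channel so `recv_partial()` returns `None`. A simplified, synchronous, std-only sketch of that partial/final protocol (the real types are async and carry `serde_json::Value`; everything here is illustrative):

```rust
use std::sync::mpsc;

struct ToolInputSketch {
    partial_rx: mpsc::Receiver<String>,
    final_rx: mpsc::Receiver<String>,
}

struct ToolInputSenderSketch {
    partial_tx: mpsc::Sender<String>,
    final_tx: mpsc::Sender<String>,
}

fn channel() -> (ToolInputSenderSketch, ToolInputSketch) {
    let (partial_tx, partial_rx) = mpsc::channel();
    let (final_tx, final_rx) = mpsc::channel();
    (
        ToolInputSenderSketch { partial_tx, final_tx },
        ToolInputSketch { partial_rx, final_rx },
    )
}

impl ToolInputSenderSketch {
    fn send_partial(&self, v: &str) {
        self.partial_tx.send(v.to_string()).ok();
    }
    fn send_final(self, v: &str) {
        // Dropping the partial sender closes the channel, so the receiver's
        // recv_partial() returns None once buffered snapshots are drained.
        drop(self.partial_tx);
        self.final_tx.send(v.to_string()).ok();
    }
}

impl ToolInputSketch {
    fn recv_partial(&mut self) -> Option<String> {
        self.partial_rx.recv().ok()
    }
    fn recv(self) -> Option<String> {
        // Drain any remaining partials, then take the final value.
        while self.partial_rx.recv().is_ok() {}
        self.final_rx.recv().ok()
    }
}

fn demo() -> (Vec<String>, Option<String>) {
    let (sender, mut input) = channel();
    sender.send_partial("{\"path\"");
    sender.send_partial("{\"path\":\"src\"");
    sender.send_final("{\"path\":\"src\"}");
    let mut partials = Vec::new();
    while let Some(p) = input.recv_partial() {
        partials.push(p);
    }
    (partials, input.recv())
}

fn main() {
    let (partials, final_input) = demo();
    println!("{} partials, final = {:?}", partials.len(), final_input);
}
```

A streaming tool would loop on `recv_partial()` to act on snapshots as they arrive, then call `recv()` for the complete input; a non-streaming tool skips straight to `recv()`.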
+
pub trait AgentTool
where
Self: 'static + Sized,
@@ -2828,6 +3118,11 @@ where
language_model::tool_schema::root_schema_for::<Self::Input>(format)
}
+ /// Returns whether the tool supports streaming of tool use parameters.
+ fn supports_input_streaming() -> bool {
+ false
+ }
+
/// Some tools rely on a provider for the underlying billing or other reasons.
/// Allow the tool to check if they are compatible, or should be filtered out.
fn supports_provider(_provider: &LanguageModelProviderId) -> bool {
@@ -2843,7 +3138,7 @@ where
/// still signaling whether the invocation succeeded or failed.
fn run(
self: Arc,
- input: Self::Input,
+ input: ToolInput<Self::Input>,
event_stream: ToolCallEventStream,
cx: &mut App,
) -> Task