# Language Model Provider Extensions - Implementation Guide

## Purpose

This document provides a detailed guide for completing the implementation of Language Model Provider Extensions in Zed. It explains what has been done, what remains, and how to complete the work.

For the full design and rationale, see [language_model_provider_extensions_plan.md](./language_model_provider_extensions_plan.md).

## Core Design Principle

**Extensions handle ALL provider-specific logic.** This means:
- Thought signatures (Anthropic)
- Reasoning effort parameters (OpenAI o-series)
- Cache control markers
- Parallel tool calls
- SSE/streaming format parsing
- Any other provider-specific features

Zed's core should have **zero knowledge** of these details. The extension API must be generic enough that extensions can implement any provider without Zed changes.

---

## Current Status: STREAMING API COMPLETE ✅

The core plumbing and streaming API are now complete. Extensions can:
1. Declare LLM providers in their manifest
2. Be queried for providers and models at load time
3. Have their providers registered with the `LanguageModelRegistry`
4. Have their providers unregistered when the extension is unloaded
5. Stream completions using the new polling-based API

**What's NOT done yet:**
- Credential UI prompt support (`llm_request_credential` returns false)
- Model refresh mechanism
- A working test extension that demonstrates the feature (requires a WASM build)
- End-to-end testing with a real extension

---

## What Has Been Completed

### 1. WIT Interface Definition ✅

**Location:** `crates/extension_api/wit/since_v0.7.0/`

Created all WIT files for v0.7.0:
- `llm-provider.wit` - Core LLM types (ProviderInfo, ModelInfo, CompletionRequest, CompletionEvent, etc.)
- `extension.wit` - Updated with LLM exports/imports

Key types in `llm-provider.wit`:
```wit
record provider-info {
    id: string,
    name: string,
    icon: option<string>,
}

record model-info {
    id: string,
    name: string,
    max-token-count: u64,
    max-output-tokens: option<u64>,
    capabilities: model-capabilities,
    is-default: bool,
    is-default-fast: bool,
}

variant completion-event {
    started,
    text(string),
    thinking(thinking-content),
    redacted-thinking(string),
    tool-use(tool-use),
    tool-use-json-parse-error(tool-use-json-parse-error),
    stop(stop-reason),
    usage(token-usage),
    reasoning-details(string),
}
```

Key exports in `extension.wit`:
```wit
export llm-providers: func() -> list<provider-info>;
export llm-provider-models: func(provider-id: string) -> result<list<model-info>, string>;
export llm-provider-is-authenticated: func(provider-id: string) -> bool;
export llm-provider-authenticate: func(provider-id: string) -> result<_, string>;
export llm-stream-completion-start: func(provider-id: string, model-id: string, request: completion-request) -> result<string, string>;
export llm-stream-completion-next: func(stream-id: string) -> result<option<completion-event>, string>;
export llm-stream-completion-close: func(stream-id: string);
```

Note: The streaming API uses a polling-based approach with explicit stream IDs instead of a resource handle.
This avoids complexity with cross-boundary resource ownership in the WASM component model.

Key imports in `extension.wit`:
```wit
import llm-get-credential: func(provider-id: string) -> option<string>;
import llm-store-credential: func(provider-id: string, value: string) -> result<_, string>;
import llm-delete-credential: func(provider-id: string) -> result<_, string>;
import llm-get-env-var: func(name: string) -> option<string>;
```
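
On the guest side, these imports are what an extension uses to resolve credentials before reporting itself as authenticated. The sketch below shows one plausible lookup order; it assumes the generated bindings expose the imports under names matching their WIT counterparts (`llm_get_env_var`, `llm_get_credential`), and `EXAMPLE_API_KEY` is a placeholder - check the actual `zed_extension_api` re-exports.

```rust
use zed_extension_api::{llm_get_credential, llm_get_env_var};

/// Hypothetical helper for `llm-provider-is-authenticated`: prefer an
/// environment variable (as declared in the manifest's auth config), then
/// fall back to the credential store.
fn resolve_api_key(provider_id: &str) -> Option<String> {
    llm_get_env_var("EXAMPLE_API_KEY").or_else(|| llm_get_credential(provider_id))
}
```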

### 2. Extension Manifest Changes ✅

**Location:** `crates/extension/src/extension_manifest.rs`

Added these types:
```rust
pub struct LanguageModelProviderManifestEntry {
    pub name: String,
    pub icon: Option<String>,
    pub models: Vec<LanguageModelManifestEntry>,
    pub auth: Option<LanguageModelAuthConfig>,
}

pub struct LanguageModelManifestEntry {
    pub id: String,
    pub name: String,
    pub max_token_count: u64,
    pub max_output_tokens: Option<u64>,
    pub supports_images: bool,
    pub supports_tools: bool,
    pub supports_thinking: bool,
}

pub struct LanguageModelAuthConfig {
    pub env_var: Option<String>,
    pub credential_label: Option<String>,
}
```

Added to `ExtensionManifest`:
```rust
pub language_model_providers: BTreeMap<Arc<str>, LanguageModelProviderManifestEntry>,
```

### 3. Host-Side Provider/Model Structs ✅

**Location:** `crates/extension_host/src/wasm_host/llm_provider.rs`

Created `ExtensionLanguageModelProvider` implementing `LanguageModelProvider`:
- Wraps a `WasmExtension` and `LlmProviderInfo`
- Delegates to extension calls for authentication, model listing, etc.
- Returns `ExtensionLanguageModel` instances
- Implements `LanguageModelProviderState` for UI observation

Created `ExtensionLanguageModel` implementing `LanguageModel`:
- Wraps extension + model info
- Implements `stream_completion` by calling the extension's `llm-stream-completion-start`/`-next`/`-close` functions
- Converts between Zed's `LanguageModelRequest` and WIT's `CompletionRequest`
- Handles streaming via a polling-based approach with explicit stream IDs

**Key implementation details:**
- The `stream_completion` method uses a polling loop that calls `llm_stream_completion_start`, then repeatedly calls `llm_stream_completion_next` until the stream is complete, and finally calls `llm_stream_completion_close` to clean up
- Credential storage uses gpui's `cx.read_credentials()`, `cx.write_credentials()`, and `cx.delete_credentials()` APIs
- The `new()` method now accepts a `models: Vec<LlmModelInfo>` parameter to populate available models at registration time
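
To expose the polled events as the `BoxStream` that `LanguageModel::stream_completion` returns, the host has to adapt a pull-based source into a `futures::Stream`. The snippet below is a minimal, illustrative sketch of that adaptation rather than the actual `llm_provider.rs` code: `poll_next` stands in for a `llm_stream_completion_next` round trip, and the close call and event conversion are omitted.

```rust
use futures::{stream, Stream};

/// Turn a "give me the next event or `None`" poller into a `Stream`.
/// An `Err` from the poller is yielded once, then the stream ends.
fn polling_stream<T, F, Fut>(poll_next: F) -> impl Stream<Item = Result<T, String>>
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = Result<Option<T>, String>>,
{
    stream::unfold(Some(poll_next), |state| async move {
        let mut poll_next = state?;
        match poll_next().await {
            Ok(Some(event)) => Some((Ok(event), Some(poll_next))),
            Ok(None) => None,                   // stream finished cleanly
            Err(err) => Some((Err(err), None)), // surface the error, then stop
        }
    })
}
```

In the real implementation each poll goes through `WasmExtension::call`, and `llm_stream_completion_close` must still be invoked once the loop ends or the stream is dropped.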

### 4. Extension Host Proxy ✅

**Location:** `crates/extension/src/extension_host_proxy.rs`

Added `ExtensionLanguageModelProviderProxy` trait:
```rust
pub type LanguageModelProviderRegistration = Box<dyn FnOnce(&mut App) + Send + Sync + 'static>;

pub trait ExtensionLanguageModelProviderProxy: Send + Sync + 'static {
    fn register_language_model_provider(
        &self,
        provider_id: Arc<str>,
        register_fn: LanguageModelProviderRegistration,
        cx: &mut App,
    );

    fn unregister_language_model_provider(&self, provider_id: Arc<str>, cx: &mut App);
}
```

The proxy uses a boxed closure pattern. This allows `extension_host` to create the `ExtensionLanguageModelProvider` (which requires `WasmExtension`), while letting `language_models` handle the actual registry registration.

### 5. Proxy Implementation ✅

**Location:** `crates/language_models/src/extension.rs`

```rust
pub struct ExtensionLanguageModelProxy {
    registry: Entity<LanguageModelRegistry>,
}

impl ExtensionLanguageModelProviderProxy for ExtensionLanguageModelProxy {
    fn register_language_model_provider(
        &self,
        _provider_id: Arc<str>,
        register_fn: LanguageModelProviderRegistration,
        cx: &mut App,
    ) {
        register_fn(cx);
    }

    fn unregister_language_model_provider(&self, provider_id: Arc<str>, cx: &mut App) {
        self.registry.update(cx, |registry, cx| {
            registry.unregister_provider(LanguageModelProviderId::from(provider_id), cx);
        });
    }
}
```

The proxy is registered during `language_models::init()`.

### 6. Extension Loading Wiring ✅

**Location:** `crates/extension_host/src/extension_host.rs`

In `extensions_updated()`:

**Unloading (around line 1217):**
```rust
for provider_id in extension.manifest.language_model_providers.keys() {
    let full_provider_id: Arc<str> = format!("{}:{}", extension_id, provider_id).into();
    self.proxy.unregister_language_model_provider(full_provider_id, cx);
}
```

**Loading (around line 1383):**
After loading a wasm extension, we query for LLM providers and models:
```rust
if !extension.manifest.language_model_providers.is_empty() {
    let providers_result = wasm_extension
        .call(|ext, store| {
            async move { ext.call_llm_providers(store).await }.boxed()
        })
        .await;

    if let Ok(Ok(providers)) = providers_result {
        for provider_info in providers {
            // Query for models...
            let models_result = wasm_extension.call(...).await;
            // Store provider_info and models for registration
        }
    }
}
```

Then during registration (around line 1511):
```rust
for (provider_info, models) in llm_providers_with_models {
    let provider_id: Arc<str> = format!("{}:{}", manifest.id, provider_info.id).into();
    this.proxy.register_language_model_provider(
        provider_id,
        Box::new(move |cx: &mut App| {
            let provider = Arc::new(ExtensionLanguageModelProvider::new(
                wasm_ext, pinfo, mods, cx,
            ));
            language_model::LanguageModelRegistry::global(cx).update(
                cx,
                |registry, cx| {
                    registry.register_provider(provider, cx);
                },
            );
        }),
        cx,
    );
}
```

### 7. Extension API Updates ✅

**Location:** `crates/extension_api/src/extension_api.rs`

- Updated `wit_bindgen::generate!` to use `./wit/since_v0.7.0`
- Added LLM type re-exports (prefixed with `Llm` for clarity)
- Added LLM methods to `Extension` trait with default implementations
- Added `wit::Guest` implementations for LLM functions

The default implementations ensure backward compatibility:
```rust
fn llm_providers(&self) -> Vec<LlmProviderInfo> {
    Vec::new() // Extensions without LLM providers return empty
}

fn llm_provider_models(&self, _provider_id: &str) -> Result<Vec<LlmModelInfo>, String> {
    Ok(Vec::new())
}

fn llm_stream_completion_start(
    &mut self,
    _provider_id: &str,
    _model_id: &str,
    _request: &LlmCompletionRequest,
) -> Result<String, String> {
    Err("`llm_stream_completion_start` not implemented".to_string())
}

fn llm_stream_completion_next(
    &mut self,
    _stream_id: &str,
) -> Result<Option<LlmCompletionEvent>, String> {
    Err("`llm_stream_completion_next` not implemented".to_string())
}

fn llm_stream_completion_close(&mut self, _stream_id: &str) {
    // Nothing to clean up by default.
}
```

### 8. Test Files Updated ✅

Added `language_model_providers: BTreeMap::default()` to all test manifests:
- `crates/extension/src/extension_manifest.rs` (test module)
- `crates/extension_host/src/extension_store_test.rs`
- `crates/extension_host/src/capability_granter.rs` (test module)
- `crates/extension_host/benches/extension_compilation_benchmark.rs`

---

## What Remains To Be Done

### Task 1: Test the Streaming Completion Flow (HIGH PRIORITY) - ARCHITECTURE UPDATED ✅

The streaming API has been updated to use a polling-based approach instead of a resource handle pattern.
This was necessary because the original design had a fundamental issue: the `completion-stream` resource
was defined in an imported interface but returned from an exported function, creating ownership ambiguity.

**New API:**
- `llm-stream-completion-start` - Returns a stream ID (string)
- `llm-stream-completion-next` - Poll for the next event using the stream ID
- `llm-stream-completion-close` - Clean up the stream when done

**Still needs testing:**
1. Create a test extension that implements a simple LLM provider
2. Verify the polling-based streaming works correctly through the WASM boundary
3. Test error handling and edge cases

**Location to test:** `crates/extension_host/src/wasm_host/llm_provider.rs` - the `stream_completion` method on `ExtensionLanguageModel`.

### Task 2: Credential UI Prompt Support (MEDIUM PRIORITY)

**Location:** `crates/extension_host/src/wasm_host/wit/since_v0_7_0.rs`

The `llm_request_credential` host function currently returns `Ok(Ok(false))`:
```rust
async fn llm_request_credential(
    &mut self,
    _provider_id: String,
    _credential_type: llm_provider::CredentialType,
    _label: String,
    _placeholder: String,
) -> wasmtime::Result<Result<bool, String>> {
    // TODO: Implement actual UI prompting
    Ok(Ok(false))
}
```

**What needs to happen:**
1. Show a dialog to the user asking for the credential
2. Wait for user input
3. Return `true` if provided, `false` if cancelled
4. The extension can then use `llm_store_credential` to save it

This requires UI work and async coordination with gpui windows.
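
One plausible shape for that coordination, sketched below purely as an illustration: the host function forwards a prompt request to whatever UI entity owns credential dialogs and awaits the user's answer on a oneshot channel. Every name here (`CredentialPromptRequest`, `ui_tx`, `request_credential_via_ui`) is hypothetical; the real work is the gpui modal that answers the request.

```rust
use futures::channel::{mpsc, oneshot};

/// Hypothetical message describing one credential prompt. The UI side resolves
/// `respond` with `true` if the user supplied a value (which it stores via the
/// credential APIs), or `false` if the prompt was cancelled.
struct CredentialPromptRequest {
    provider_id: String,
    label: String,
    placeholder: String,
    respond: oneshot::Sender<bool>,
}

/// Possible shape for the host function body: send the prompt, await the answer.
async fn request_credential_via_ui(
    ui_tx: &mpsc::UnboundedSender<CredentialPromptRequest>,
    provider_id: String,
    label: String,
    placeholder: String,
) -> Result<bool, String> {
    let (respond, response) = oneshot::channel();
    ui_tx
        .unbounded_send(CredentialPromptRequest { provider_id, label, placeholder, respond })
        .map_err(|_| "credential prompt UI is unavailable".to_string())?;
    // Treat a dropped responder (e.g. the window closed) as a cancellation.
    Ok(response.await.unwrap_or(false))
}
```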

### Task 3: Handle Model Refresh (LOW PRIORITY - can be follow-up)

Currently models are only queried once at registration time. Options for improvement:

1. Add a refresh mechanism that re-queries `call_llm_provider_models`
2. Add a notification mechanism where extensions can signal that models have changed
3. Automatic refresh on authentication

**Recommendation:** Start with refresh-on-authentication as a fast-follow.
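
As a rough sketch of the recommended option: after a successful `authenticate`, re-run the model query and swap the result into whatever state backs `provided_models()`, then notify observers. The types below are illustrative placeholders, not fields of the real `ExtensionLanguageModelProvider`.

```rust
use std::sync::{Arc, RwLock};

/// Hypothetical cache standing in for the provider's model list.
struct ModelCache<M> {
    models: Arc<RwLock<Vec<M>>>,
}

impl<M> ModelCache<M> {
    /// Re-query the extension once authentication succeeds. `fetch` stands in
    /// for a `call_llm_provider_models` round trip through the WASM host.
    async fn refresh_after_authentication<F, Fut>(&self, fetch: F) -> Result<(), String>
    where
        F: FnOnce() -> Fut,
        Fut: std::future::Future<Output = Result<Vec<M>, String>>,
    {
        let fresh = fetch().await?;
        *self.models.write().unwrap() = fresh;
        // The real provider would also notify observers (e.g. `cx.notify()`)
        // so the model picker refreshes.
        Ok(())
    }
}
```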

### Task 4: Create a Test Extension (LOW PRIORITY - but very useful)

**Note:** Creating a working test extension requires building a WASM component, which needs:
1. The `wasm32-wasip1` Rust target: `rustup target add wasm32-wasip1`
2. Building with: `cargo build --target wasm32-wasip1 --release`
3. The resulting `.wasm` file must be placed in the extension directory

The existing `extensions/test-extension` has a pre-built WASM file checked in. To test LLM
provider functionality, either:
- Rebuild the test-extension WASM with LLM provider code
- Create a new extension and build it locally

Example test extension that demonstrates the LLM provider API:

```
extensions/test-llm-provider/
├── extension.toml
├── Cargo.toml
└── src/
    └── lib.rs
```

**extension.toml:**
```toml
id = "test-llm-provider"
name = "Test LLM Provider"
version = "0.1.0"
schema_version = 1

[language_model_providers.test-provider]
name = "Test Provider"
```

**src/lib.rs:**
```rust
use zed_extension_api::{self as zed, *};

use std::collections::HashMap;
use std::sync::Mutex;

struct TestExtension {
    streams: Mutex<HashMap<String, Vec<LlmCompletionEvent>>>,
    next_stream_id: Mutex<u64>,
}

impl zed::Extension for TestExtension {
    fn new() -> Self {
        Self {
            streams: Mutex::new(HashMap::new()),
            next_stream_id: Mutex::new(0),
        }
    }

    fn llm_providers(&self) -> Vec<LlmProviderInfo> {
        vec![LlmProviderInfo {
            id: "test-provider".into(),
            name: "Test Provider".into(),
            icon: None,
        }]
    }

    fn llm_provider_models(&self, _provider_id: &str) -> Result<Vec<LlmModelInfo>, String> {
        Ok(vec![LlmModelInfo {
            id: "test-model".into(),
            name: "Test Model".into(),
            max_token_count: 4096,
            max_output_tokens: Some(1024),
            capabilities: LlmModelCapabilities {
                supports_images: false,
                supports_tools: false,
                supports_tool_choice_auto: false,
                supports_tool_choice_any: false,
                supports_tool_choice_none: false,
                supports_thinking: false,
                tool_input_format: LlmToolInputFormat::JsonSchema,
            },
            is_default: true,
            is_default_fast: true,
        }])
    }

    fn llm_stream_completion_start(
        &mut self,
        _provider_id: &str,
        _model_id: &str,
        _request: &LlmCompletionRequest,
    ) -> Result<String, String> {
        // Create a simple response with test events
        let events = vec![
            LlmCompletionEvent::Started,
            LlmCompletionEvent::Text("Hello, ".into()),
            LlmCompletionEvent::Text("world!".into()),
            LlmCompletionEvent::Stop(LlmStopReason::EndTurn),
        ];

        let mut id = self.next_stream_id.lock().unwrap();
        let stream_id = format!("stream-{}", *id);
        *id += 1;

        self.streams.lock().unwrap().insert(stream_id.clone(), events);
        Ok(stream_id)
    }

    fn llm_stream_completion_next(
        &mut self,
        stream_id: &str,
    ) -> Result<Option<LlmCompletionEvent>, String> {
        let mut streams = self.streams.lock().unwrap();
        if let Some(events) = streams.get_mut(stream_id) {
            if events.is_empty() {
                Ok(None)
            } else {
                Ok(Some(events.remove(0)))
            }
        } else {
            Err(format!("Unknown stream: {}", stream_id))
        }
    }

    fn llm_stream_completion_close(&mut self, stream_id: &str) {
        self.streams.lock().unwrap().remove(stream_id);
    }
}

zed::register_extension!(TestExtension);
```

---

## File-by-File Checklist

### Completed ✅

- [x] `crates/extension_api/wit/since_v0.7.0/llm-provider.wit` - LLM types defined
- [x] `crates/extension_api/wit/since_v0.7.0/extension.wit` - LLM exports/imports added
- [x] `crates/extension_api/src/extension_api.rs` - Extension trait + Guest impl updated for v0.7.0
- [x] `crates/extension/src/extension_manifest.rs` - Manifest types added
- [x] `crates/extension/src/extension_host_proxy.rs` - Proxy trait added
- [x] `crates/extension_host/src/wasm_host/llm_provider.rs` - Provider/Model structs created
- [x] `crates/extension_host/src/wasm_host/wit.rs` - LLM types exported, Extension enum updated
- [x] `crates/extension_host/src/wasm_host/wit/since_v0_7_0.rs` - Host trait implementations
- [x] `crates/extension_host/src/wasm_host/wit/since_v0_6_0.rs` - Rewritten to use latest types
- [x] `crates/extension_host/src/extension_host.rs` - Wired up LLM provider registration/unregistration
- [x] `crates/extension_host/Cargo.toml` - Dependencies added
- [x] `crates/language_models/src/extension.rs` - Proxy implementation
- [x] `crates/language_models/src/language_models.rs` - Proxy registration
- [x] `crates/language_models/Cargo.toml` - Extension dependency added

### Should Implement (Follow-up PRs)

- [ ] `llm_request_credential` UI implementation
- [ ] Model refresh mechanism
- [ ] Test extension for validation
- [ ] Documentation for extension authors

---

## Architecture Overview

```
┌─────────────────────────────────────────────────────────────────────┐
│                           Extension Host                            │
│  ┌───────────────────────────────────────────────────────────────┐  │
│  │ extensions_updated()                                          │  │
│  │                                                               │  │
│  │ 1. Load WasmExtension                                         │  │
│  │ 2. Query llm_providers() and llm_provider_models()            │  │
│  │ 3. Call proxy.register_language_model_provider()              │  │
│  └───────────────────────────┬───────────────────────────────────┘  │
│                              │                                      │
│  ┌───────────────────────────▼───────────────────────────────────┐  │
│  │ ExtensionLanguageModelProvider                                │  │
│  │ - Wraps WasmExtension                                         │  │
│  │ - Implements LanguageModelProvider                            │  │
│  │ - Creates ExtensionLanguageModel instances                    │  │
│  └───────────────────────────┬───────────────────────────────────┘  │
│                              │                                      │
│  ┌───────────────────────────▼───────────────────────────────────┐  │
│  │ ExtensionLanguageModel                                        │  │
│  │ - Implements LanguageModel                                    │  │
│  │ - stream_completion() calls extension via WASM                │  │
│  └───────────────────────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────────────────────┘
                               │
                               │ Proxy (boxed closure)
                               ▼
┌─────────────────────────────────────────────────────────────────────┐
│                        Language Models Crate                        │
│  ┌───────────────────────────────────────────────────────────────┐  │
│  │ ExtensionLanguageModelProxy                                   │  │
│  │ - Implements ExtensionLanguageModelProviderProxy              │  │
│  │ - Calls register_fn closure                                   │  │
│  │ - Unregisters from LanguageModelRegistry                      │  │
│  └───────────────────────────┬───────────────────────────────────┘  │
│                              │                                      │
│  ┌───────────────────────────▼───────────────────────────────────┐  │
│  │ LanguageModelRegistry                                         │  │
│  │ - Stores all providers (built-in + extension)                 │  │
│  │ - Provides models to UI                                       │  │
│  └───────────────────────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────────────────────┘
```

---

## Key Code Patterns

### 1. Provider ID Format

Provider IDs are formatted as `{extension_id}:{provider_id}` to ensure uniqueness:

```rust
let provider_id: Arc<str> = format!("{}:{}", manifest.id, provider_info.id).into();
```

### 2. Triple-Nested Result Handling

When calling extension methods, results are nested:
- Outer `Result`: from channel operations (anyhow error)
- Middle `Result`: from the WASM call (anyhow error)
- Inner `Result<T, String>`: from extension logic

```rust
let models_result = wasm_extension.call(...).await;

let models: Vec<LlmModelInfo> = match models_result {
    Ok(Ok(Ok(models))) => models,
    Ok(Ok(Err(_err))) => Vec::new(), // extension returned an error; log and fall back
    Ok(Err(_err)) => Vec::new(),     // WASM call failed
    Err(_err) => Vec::new(),         // channel operation failed
};
```

### 3. Polling-Based Streaming Pattern

The streaming API uses explicit stream IDs with polling instead of resource handles:

```rust
// Start the stream and get an ID
let stream_id = ext.call_llm_stream_completion_start(store, provider_id, model_id, request).await?;

// Poll for events in a loop
loop {
    match ext.call_llm_stream_completion_next(store, &stream_id).await? {
        Ok(Some(event)) => { /* process event */ }
        Ok(None) => break, // Stream complete
        Err(e) => { /* handle error */ }
    }
}

// Clean up
ext.call_llm_stream_completion_close(store, &stream_id).await;
```

This pattern avoids the complexity of cross-boundary resource ownership in the WASM component model.

### 4. Default Trait Implementations

All LLM methods in the `Extension` trait have defaults so existing extensions continue to work:

```rust
fn llm_providers(&self) -> Vec<LlmProviderInfo> {
    Vec::new() // No providers by default
}
```

---

## Common Pitfalls

1. **Type confusion:** WIT bindgen creates NEW types for each version. `Completion` from the v0.6.0 bindgen is not the same type as the one from v0.7.0. This is why we map older interfaces to `latest::`.

2. **Import paths:** After `pub use self::zed::extension::*;`, types are available without a prefix. Types in sub-interfaces (like `lsp::CompletionKind`) need explicit imports.

3. **Async closures:** Extension calls use the `extension.call(|ext, store| async move { ... }.boxed())` pattern. The closure must be `'static + Send`.

4. **Stream ID management:** Extensions must track their active streams using the stream IDs returned from `llm_stream_completion_start`. The host will call `llm_stream_completion_close` when done.

5. **Result nesting:** `extension.call(...)` wraps the closure's return type in `Result<T>`, so if the closure returns `Result<Result<X, String>>`, you get `Result<Result<Result<X, String>>>`. Unwrap carefully!

6. **Proxy type boundaries:** The `extension` crate shouldn't depend on `extension_host`. The proxy trait uses a boxed closure to pass the registration logic without needing to share types.

7. **Resource ownership in WIT:** Be careful when defining resources in imported interfaces but returning them from exported functions. This creates ownership ambiguity. The streaming API was changed to use polling to avoid this issue.

---

## Testing

All existing tests pass:
```bash
cargo test -p extension_host --lib
# 3 tests pass

./script/clippy
# No warnings
```

To test the full flow manually:
1. Create a test extension with an LLM provider
2. Build and install it
3. Check that it appears in the model selector
4. Try making a completion request

---

## Relevant Files for Reference

### How providers are registered
- `crates/language_model/src/registry.rs` - `LanguageModelRegistry::register_provider`

### How other extension proxies work
- `crates/extension/src/extension_host_proxy.rs` - the proxy pattern
- `crates/project/src/context_server_store/extension.rs` - context server proxy implementation

### How extensions are loaded
- `crates/extension_host/src/extension_host.rs` - `extensions_updated` method

### WasmExtension call pattern
- `crates/extension_host/src/wasm_host.rs` - `WasmExtension::call` method

---

## Questions for Follow-up

1. **Where should configuration UI live?** The current implementation uses an empty config view. Should extension providers have configurable settings?

2. **How to handle extension reload?** Currently, in-flight completions will fail if the extension is unloaded. Should we add graceful handling?

3. **Should there be rate limiting?** If an extension's provider misbehaves, should Zed throttle or disable it?

4. **Icon support:** The `provider_info.icon` field exists but `icon()` on the provider returns `ui::IconName::ZedAssistant`. Should we add custom icon support?