# Thought Signatures Implementation for Gemini 3 Models
## Problem Statement

Gemini 3 models (such as `gemini-3-pro-preview`) fail when using tool calls through OpenRouter and Copilot with the error:

```
Unable to submit request because function call `default_api:list_directory` in the 2. content block is missing a `thought_signature`.
```

The error occurs AFTER the first tool call is executed, when we send the tool results back with the conversation history.
## Background

### What are Thought Signatures?

Thought signatures are a validation mechanism used by Gemini reasoning models. When the model performs "thinking" (reasoning) before making a tool call, it generates a cryptographic signature of that reasoning. The signature must be preserved and sent back in subsequent requests so the provider can verify the integrity of the conversation flow.
### API Formats Involved

There are three different API formats in play:

- **Google AI Native API** - Uses `Part` objects, including `FunctionCallPart` with a `thought_signature` field
- **OpenRouter/Copilot Chat Completions API** - OpenAI-compatible format with a `tool_calls` array
- **Copilot Responses API** - A separate format with streaming `reasoning_details`
## Current Architecture

### Data Flow

1. **Model Response** → Contains tool calls with reasoning
2. **Zed Event Stream** → Emits `LanguageModelCompletionEvent::ToolUse` events
3. **Agent** → Collects events and constructs `LanguageModelRequestMessage` objects
4. **Provider** → Converts messages back to the provider-specific format
5. **API Request** → Sent back to the provider with conversation history
### Key Data Structures

```rust
// Core message structure
pub struct LanguageModelRequestMessage {
    pub role: Role,
    pub content: Vec<MessageContent>,
    pub cache: bool,
    pub reasoning_details: Option<serde_json::Value>, // Added for thought signatures
}

// Tool use structure
pub struct LanguageModelToolUse {
    pub id: LanguageModelToolUseId,
    pub name: Arc<str>,
    pub raw_input: String,
    pub input: serde_json::Value,
    pub is_input_complete: bool,
    pub thought_signature: Option<String>, // NOT USED - wrong approach
}
```
## What We Tried (That Didn't Work)

### Attempt 1: `thought_signature` as a field on `ToolCall`

We added `thought_signature` as a field on the `ToolCall` structure itself.

**Result:** 400 Bad Request - OpenRouter/Copilot don't support this field at the `ToolCall` level.

### Attempt 2: `thought_signature` inside the `function` object

We moved `thought_signature` inside the `function` object of the tool call:
```json
{
  "function": {
    "name": "...",
    "arguments": "...",
    "thought_signature": "..."
  }
}
```
**Result:** 400 Bad Request - still rejected.

### Attempt 3: Using camelCase `thoughtSignature`

We tried both snake_case and camelCase variants.

**Result:** No difference, still rejected.
## The Correct Approach (From OpenRouter Documentation)

### Key Insight: `reasoning_details` Is a Message-Level Array

According to OpenRouter's documentation, the thought signature is NOT a property of individual tool calls. Instead, it's part of a `reasoning_details` array that belongs to the entire assistant message:
```json
{
  "role": "assistant",
  "content": null,
  "tool_calls": [
    {
      "id": "call_123",
      "type": "function",
      "function": {
        "name": "list_directory",
        "arguments": "{...}"
      }
    }
  ],
  "reasoning_details": [
    {
      "type": "reasoning.text",
      "text": "Let me think through this step by step...",
      "signature": "sha256:abc123...",
      "id": "reasoning-text-1",
      "format": "anthropic-claude-v1",
      "index": 0
    }
  ]
}
```
### `reasoning_details` Structure

The array can contain three types of objects:

- `reasoning.summary` - High-level summary of the reasoning
- `reasoning.encrypted` - Encrypted/redacted reasoning data
- `reasoning.text` - Raw text reasoning with an optional signature

Each object has:

- `type`: One of the three types above
- `id`: Unique identifier
- `format`: Format version (e.g., "anthropic-claude-v1", "openai-responses-v1")
- `index`: Sequential index
- `signature`: (for `reasoning.text`) The cryptographic signature we need to preserve
## What We've Implemented So Far

### 1. Added `reasoning_details` field to core structures

- ✅ `LanguageModelRequestMessage` now has `reasoning_details: Option<serde_json::Value>`

### 2. Added `reasoning_details` to OpenRouter structs

- ✅ `RequestMessage::Assistant` has a `reasoning_details` field
- ✅ `ResponseMessageDelta` has a `reasoning_details` field

### 3. Updated `into_open_router` to send `reasoning_details`

- ✅ When building requests, we now attach `reasoning_details` from the message to the Assistant message

### 4. Added mapper to capture `reasoning_details` from responses

- ✅ `OpenRouterEventMapper` now has a `reasoning_details` field
- ✅ We capture it from `choice.delta.reasoning_details`

### 5. Added debugging

- ✅ `eprintln!` statements in both OpenRouter and Copilot to log requests and responses
## What's Still Missing

### The Critical Gap: Event → Message Flow

The problem is in how events become messages. Our current flow:

- ✅ We capture `reasoning_details` from the API response
- ❌ We store it in `OpenRouterEventMapper` but never emit it
- ❌ The agent constructs messages from events, but has no way to get the `reasoning_details`
- ❌ When sending the next request, `message.reasoning_details` is `None`
## What We Need to Do

### Option A: Add a new event type

Add a `LanguageModelCompletionEvent::ReasoningDetails(serde_json::Value)` event that gets emitted when we receive reasoning details. The agent would need to:

- Collect this event along with tool use events
- When constructing the assistant message, attach the `reasoning_details` to it
### Option B: Store `reasoning_details` with tool use events

Modify the flow so that when we emit tool use events, we somehow associate the `reasoning_details` with them. This is tricky because:

- `reasoning_details` is per-message, not per-tool
- Multiple tools can be in one message
- We emit events one at a time
### Option C: Store at a higher level

Have the agent or provider layer handle this separately from the event stream. For example:

- The provider keeps track of `reasoning_details` for the messages it processes
- When building the next request, it looks up the `reasoning_details` for assistant messages that had tool calls
## Current Status

### What Works

- ✅ Code compiles
- ✅ `reasoning_details` field exists throughout the stack
- ✅ We capture `reasoning_details` from responses
- ✅ We send `reasoning_details` in requests (if present)

### What Doesn't Work

- ❌ `reasoning_details` never makes it from the response to the request
- ❌ The error still occurs because we're sending `null` for `reasoning_details`
## Evidence from Error Message

The error says:

```
function call `default_api:list_directory` in the 2. content block is missing a `thought_signature`
```

This means:

1. We're successfully making the first request (works)
2. The model responds with tool calls including `reasoning_details` (works)
3. We execute the tools (works)
4. We send back the conversation history (works)
5. BUT the assistant message in that history is missing the `reasoning_details` (broken)
6. Google/Vertex validates the message and rejects it (error)
## Next Steps

1. **Choose an approach** - Decide between Option A, B, or C above
2. **Implement the data flow** - Ensure `reasoning_details` flows from response → events → message → request
3. **Test with debugging** - Use the `eprintln!` statements to verify:
   - That we receive `reasoning_details` in the response
   - That we include it in the next request
4. **Apply to Copilot** - Once working for OpenRouter, apply the same pattern to Copilot
5. **Handle edge cases:**
   - What if there are multiple tool calls in one message?
   - What if `reasoning_details` is empty/null?
   - What about other providers (Anthropic, etc.)?
## Files Modified

- `crates/language_model/src/request.rs` - Added `reasoning_details` to `LanguageModelRequestMessage`
- `crates/open_router/src/open_router.rs` - Added `reasoning_details` to request/response structs
- `crates/language_models/src/provider/open_router.rs` - Added capture and send logic
- `crates/copilot/src/copilot_responses.rs` - Already had `thought_signature` support
- Various test files - Added `reasoning_details: None` to fix compilation
## SOLUTION: Copilot Chat Completions API Implementation

### Discovery: Gemini 3 Uses the Chat Completions API, Not the Responses API

The initial plan assumed that routing Gemini 3 to the Responses API would work, but testing revealed:

- Gemini 3 models do NOT support the Responses API through Copilot
- Error: `{"error":{"message":"model gemini-3-pro-preview is not supported via Responses API.","code":"unsupported_api_for_model"}}`
- Gemini 3 ONLY supports the Chat Completions API
### Key Finding: `reasoning_opaque` Location in JSON

Through detailed logging and JSON inspection, we discovered how Copilot sends thought signatures in the Chat Completions API:

- Field name: `reasoning_opaque` (not `thought_signature`)
- Location: at the `delta` level, NOT at the `tool_calls` level!

JSON structure from a Copilot response:
```json
{
  "choices": [{
    "delta": {
      "role": "assistant",
      "tool_calls": [{
        "function": {"arguments": "...", "name": "list_directory"},
        "id": "call_...",
        "index": 0,
        "type": "function"
      }],
      "reasoning_opaque": "sPsUMpfe1YZXLkbc0TNW/mJLT..." // <-- HERE!
    }
  }]
}
```
## Implementation Status

### ✅ Completed Changes

1. **Added `reasoning_opaque` field to `ResponseDelta`** (`crates/copilot/src/copilot_chat.rs`)

   ```rust
   pub struct ResponseDelta {
       pub content: Option<String>,
       pub role: Option<Role>,
       pub tool_calls: Vec<ToolCallChunk>,
       pub reasoning_opaque: Option<String>, // Added this
   }
   ```

2. **Added `thought_signature` fields to Chat Completions structures** (`crates/copilot/src/copilot_chat.rs`)
   - `FunctionContent` now has `thought_signature: Option<String>`
   - `FunctionChunk` now has `thought_signature: Option<String>`

3. **Updated mapper to capture `reasoning_opaque` from the delta** (`crates/language_models/src/provider/copilot_chat.rs`)
   - Captures `reasoning_opaque` from `delta.reasoning_opaque`
   - Applies it to all tool calls in that delta
   - Stores it in the `thought_signature` field of the accumulated tool call

4. **Verified the thought signature is being sent back**
   - Logs show: `📤 Chat Completions: Sending tool call list_directory with thought_signature: Some("sPsUMpfe...")`
   - The signature is being included in subsequent requests
### ❌ Current Issue: Still Getting 400 Error

Despite successfully capturing and sending back the thought signature, Copilot still returns:

```
400 Bad Request {"error":{"message":"invalid request body","code":"invalid_request_body"}}
```

This happens on the SECOND request (after tool execution), when sending the conversation history back.

### Debug Logging Added

Current logging shows the full flow:

- `📥 Chat Completions: Received reasoning_opaque (length: XXX)` - Successfully captured
- `🔍 Tool call chunk: index=..., id=..., has_function=...` - Delta processing
- `📤 Chat Completions: Emitting ToolUse for ... with thought_signature: Some(...)` - Event emission
- `📤 Chat Completions: Sending tool call ... with thought_signature: Some(...)` - Sending back
- `📤 Chat Completions Request JSON: {...}` - Full request being sent
- `📥 Chat Completions Response Event: {...}` - Full response received
### Potential Issues to Investigate

1. **Field name mismatch on send:** We're sending `thought_signature`, but should we send `reasoning_opaque`?
   - We added `thought_signature` to `FunctionContent`
   - But Copilot might expect `reasoning_opaque` in the request, just as it sends it in the response

2. **Serialization issue:** Check whether serde is properly serializing the field
   - We added `#[serde(skip_serializing_if = "Option::is_none")]` - might it be skipping the field?
   - Should verify the field appears in the actual JSON being sent

3. **Location issue:** Even when sending it back, should `reasoning_opaque` be at the delta level?
   - Currently putting it in `function.thought_signature`
   - Might need to be at a different level in the request structure

4. **Format validation:** The signature is a base64-encoded string of ~1464 characters
   - Copilot might be validating the signature's format/content
   - It could be rejecting the request if the signature is malformed or in the wrong structure
### Next Steps to Debug

1. **Check the actual JSON being sent:** Look at the `📤 Chat Completions Request JSON` logs
   - Search for `thought_signature` in the JSON
   - Verify it's actually in the serialized output (not skipped)
   - Check its exact location in the JSON structure

2. **Try renaming the field:** Change `thought_signature` to `reasoning_opaque` in the request structures
   - In the `FunctionContent` struct
   - In the `FunctionChunk` struct
   - See if Copilot expects the same field name in both directions

3. **Compare the request format to the response format:**
   - The response has `reasoning_opaque` at the delta level
   - The request might need it at the function level OR the delta level
   - May need to restructure where we put it

4. **Test with the tool choice parameter:** Some APIs are sensitive to request structure
   - Try with/without the `tool_choice` parameter
   - Try with minimal conversation history

5. **Check Copilot API documentation:**
   - Search for official docs on `reasoning_opaque` handling
   - Look for examples of tool calls with reasoning/thinking in the Copilot API
## Files Modified

- ✅ `crates/copilot/src/copilot_chat.rs` - Added `reasoning_opaque` to `ResponseDelta`, `thought_signature` to function structs
- ✅ `crates/language_models/src/provider/copilot_chat.rs` - Capture and send logic with debug logging
- ⏳ Still need to verify serialization and field naming
## References

- OpenRouter Reasoning Tokens Documentation
- Google Thought Signatures Documentation
- Original Issue #43024
## ✅ FINAL FIX (2025-01-21)

### The Critical Issues Found

After testing, we discovered TWO problems:

1. **Wrong location:** We were sending `thought_signature` inside the `function` object, but Copilot expects `reasoning_opaque` at the message level
2. **Wrong content format:** We were sending `"content": []` (an empty array), but Copilot expects `"content": null` when there are tool calls
### The Solution

#### Issue 1: Message-Level Field

- Added `reasoning_opaque: Option<String>` to `ChatMessage::Assistant`
- Removed `thought_signature` from `FunctionContent` (it doesn't belong there)
- Updated the request builder to collect the signature from the first tool use and pass it at the message level

#### Issue 2: Null vs Empty Array

- Changed the `content` field type from `ChatMessageContent` to `Option<ChatMessageContent>`
- Set `content: None` when we have tool calls and no text (serializes to `null`)
- Set `content: Some(text)` when we have text content
### Correct Request Format

```json
{
  "role": "assistant",
  "content": null,  // ✅ Explicit null, not []
  "tool_calls": [{
    "id": "call_...",
    "type": "function",
    "function": {
      "name": "list_directory",
      "arguments": "{\"path\":\"deleteme\"}"
      // NO thought_signature here!
    }
  }],
  "reasoning_opaque": "XLn4be0..."  // ✅ At the message level!
}
```
### Files Modified in Final Fix

- `zed/crates/copilot/src/copilot_chat.rs`:
  - Added `reasoning_opaque` to `ChatMessage::Assistant`
  - Changed `content` to `Option<ChatMessageContent>`
  - Fixed the vision detection pattern match
- `zed/crates/language_models/src/provider/copilot_chat.rs`:
  - Collect `reasoning_opaque` from the first tool use
  - Pass it to the Assistant message, not the function
  - Set `content: None` for tool-only messages
  - Removed function-level `thought_signature` handling
### Compilation Status

✅ All packages compile successfully. Ready for testing!