open_ai: Send `prompt_cache_key` to improve caching (#36065)

Authored by Oleksiy Syvokon and Michael Sloan

Release Notes:

- N/A

Co-authored-by: Michael Sloan <mgsloan@gmail.com>

Change summary

crates/language_models/src/provider/open_ai.rs | 1 +
crates/open_ai/src/open_ai.rs                  | 2 ++
2 files changed, 3 insertions(+)

Detailed changes

crates/open_ai/src/open_ai.rs

@@ -244,6 +244,8 @@ pub struct Request {
     pub parallel_tool_calls: Option<bool>,
     #[serde(default, skip_serializing_if = "Vec::is_empty")]
     pub tools: Vec<ToolDefinition>,
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub prompt_cache_key: Option<String>,
 }
 
 #[derive(Debug, Serialize, Deserialize)]
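
The `#[serde(skip_serializing_if = "Option::is_none")]` attribute on the new field means `prompt_cache_key` is only included in the request JSON when a key is actually set. A minimal std-only sketch of that omission behavior (the `to_json` helper and field values are hypothetical, standing in for serde's derived serializer):

```rust
// Hypothetical stripped-down Request: only the fields needed to
// illustrate the conditional serialization of `prompt_cache_key`.
struct Request {
    model: String,
    prompt_cache_key: Option<String>,
}

// Mimics serde's `skip_serializing_if = "Option::is_none"`:
// the field is emitted only when it holds a value.
fn to_json(req: &Request) -> String {
    let mut fields = vec![format!("\"model\":\"{}\"", req.model)];
    if let Some(key) = &req.prompt_cache_key {
        fields.push(format!("\"prompt_cache_key\":\"{}\"", key));
    }
    format!("{{{}}}", fields.join(","))
}

fn main() {
    let with_key = Request {
        model: "gpt-4o".into(),
        prompt_cache_key: Some("thread-123".into()),
    };
    let without_key = Request {
        model: "gpt-4o".into(),
        prompt_cache_key: None,
    };
    // With a key, the field is present; without one, it is omitted
    // entirely rather than serialized as `null`.
    println!("{}", to_json(&with_key));
    println!("{}", to_json(&without_key));
}
```

Omitting the field when unset keeps requests to providers that do not recognize `prompt_cache_key` byte-identical to before this change.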