---
version: 2
interactions:
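# go-vcr-style cassette (version 2) recording a two-step tool-calling round
# trip against an Azure OpenAI chat-completions endpoint (model grok-3-mini).
# Interaction 0: the client sends the user question plus the "weather" tool
# definition; the model replies with a tool call instead of text.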
- id: 0
  request:
    proto: HTTP/1.1
    proto_major: 1
    proto_minor: 1
    content_length: 462
    host: ""
    body: '{"messages":[{"content":"You are a helpful assistant","role":"system"},{"content":"What''s the weather in Florence, Italy?","role":"user"}],"model":"grok-3-mini","reasoning_effort":"high","tool_choice":"auto","tools":[{"function":{"name":"weather","strict":false,"description":"Get weather information for a location","parameters":{"properties":{"location":{"description":"the city","type":"string"}},"required":["location"],"type":"object"}},"type":"function"}]}'
    headers:
      Accept:
      - application/json
      Content-Type:
      - application/json
      User-Agent:
      - OpenAI/Go 2.7.1
    url: https://fantasy-playground-resource.openai.azure.com/openai/v1/chat/completions
    method: POST
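  # Response 0: finish_reason is "tool_calls"; the assistant message carries
  # reasoning_content plus a single weather({"location":"Florence, Italy"}) call.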
  response:
    proto: HTTP/2.0
    proto_major: 2
    proto_minor: 0
    content_length: -1
    uncompressed: true
    body: |
      {"choices":[{"content_filter_results":{},"finish_reason":"tool_calls","index":0,"logprobs":null,"message":{"content":"","reasoning_content":"First, the user is asking about the weather in Florence, Italy. I have an available function called \"weather\" that can get weather information for a location.\n\nThe function description is: \"Get weather information for a location\". The parameters require a \"location\" which is a string describing the city.\n\nIn this query, the user has provided \"Florence, Italy\", which clearly specifies the city and country. This should be sufficient for the \"location\" parameter.\n\nAll required parameters are provided and inferable. The location is given as \"Florence, Italy\", so I can use that directly.\n\nI should call the function if it can advance the user's request. Here, calling the \"weather\" function with the location will directly provide the weather information, fulfilling the user's query.\n\nThe response format for function calls is specific: I need to use \u003cfunction_call\u003e tags with JSON format, like \u003cfunction_call\u003e{\"action\": \"function_name\", \"action_input\": {\"parameter\": \"argument\"}}\u003c/function_call\u003e.\n\nSo, for this, the action is \"weather\", and the action_input should have \"location\" set to \"Florence, Italy\".\n\nAfter calling the function, in a multi-turn conversation, I might need to handle the response, but the system prompt says to continue calling functions until the request is fulfilled. However, once I call this function, it should return the weather data, and I can then respond with that information. But the function call itself will handle retrieving the data.\n\nThe prompt says: \"You can call functions to retrieve information or perform actions.\" And \"trigger a function call in JSON format within \u003cfunction_call\u003e\u003c/function_call\u003e tags.\"\n\nI should not make my response verbose; keep it clear.\n\nSince this is the logical next step, I should just make the function call without additional text, unless needed. The prompt says: \"Keep your response to user clear; please do not make your response verbose!\" And \"If there is anything unclear, ask the user for clarification.\"\n\nIn this case, everything is clear, so no need to ask for clarification.\n\nMy response should only contain the function call in the specified format.\n\nFinally, format it as: \u003cfunction_call\u003e{\"action\": \"weather\", \"action_input\": {\"location\": \"Florence, Italy\"}}\u003c/function_call\u003e\n\nThat seems correct.","refusal":null,"role":"assistant","tool_calls":[{"function":{"arguments":"{\"location\":\"Florence, Italy\"}","call_id":null,"name":"weather"},"id":"call_78100819","type":"function"}]},"stop_reason":null}],"created":1761843020,"id":"26c2d70b-1783-4e27-bd1c-20e7eea8d924_sidecar","model":"grok-3-mini-high","object":"chat.completion","prompt_filter_results":[{"prompt_index":0,"content_filter_results":{"hate":{"filtered":false,"severity":"safe"},"jailbreak":{"filtered":false,"detected":false},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}}}],"usage":{"audio_prompt_tokens":0,"completion_tokens":27,"completion_tokens_details":{"accepted_prediction_tokens":0,"audio_tokens":0,"reasoning_tokens":464,"rejected_prediction_tokens":0},"num_sources_used":0,"prompt_tokens":288,"prompt_tokens_details":{"audio_tokens":0,"cached_tokens":0,"image_tokens":0,"text_tokens":288},"total_tokens":779}}
    headers:
      Content-Type:
      - application/json
    status: 200 OK
    code: 200
    duration: 3.311565s
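# Interaction 1: the client replays the conversation with the tool result
# ("40 C") appended as a tool message, asking the model to finish the turn.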
- id: 1
  request:
    proto: HTTP/1.1
    proto_major: 1
    proto_minor: 1
    content_length: 682
    host: ""
    body: '{"messages":[{"content":"You are a helpful assistant","role":"system"},{"content":"What''s the weather in Florence, Italy?","role":"user"},{"tool_calls":[{"id":"call_78100819","function":{"arguments":"{\"location\":\"Florence, Italy\"}","name":"weather"},"type":"function"}],"role":"assistant"},{"content":"40 C","tool_call_id":"call_78100819","role":"tool"}],"model":"grok-3-mini","reasoning_effort":"high","tool_choice":"auto","tools":[{"function":{"name":"weather","strict":false,"description":"Get weather information for a location","parameters":{"properties":{"location":{"description":"the city","type":"string"}},"required":["location"],"type":"object"}},"type":"function"}]}'
    headers:
      Accept:
      - application/json
      Content-Type:
      - application/json
      User-Agent:
      - OpenAI/Go 2.7.1
    url: https://fantasy-playground-resource.openai.azure.com/openai/v1/chat/completions
    method: POST
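  # Response 1: finish_reason is "stop"; the model folds the tool output into
  # its final answer ("The weather in Florence, Italy is 40°C.").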
  response:
    proto: HTTP/2.0
    proto_major: 2
    proto_minor: 0
    content_length: -1
    uncompressed: true
    body: |
      {"choices":[{"content_filter_results":{"hate":{"filtered":false,"severity":"safe"},"protected_material_code":{"filtered":false,"detected":false},"protected_material_text":{"filtered":false,"detected":false},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}},"finish_reason":"stop","index":0,"logprobs":null,"message":{"content":"The weather in Florence, Italy is 40°C.","reasoning_content":"First, the user asked: \"What's the weather in Florence, Italy?\" This directly matches the available function \"weather\", which requires a \"location\" parameter.\n\nI have already decided to call the function in my initial response, as shown in the conversation history. Now, the function call was made, and the response from the function is \"40 C\". I need to handle this in my response.\n\nAs a helpful assistant, after a function call, I should provide the information to the user in a clear and non-verbose way. The function call returned \"40 C\", which I assume means 40 degrees Celsius. I should present this information directly.\n\nThe system prompt says: \"Keep your response to user clear; please do not make your response verbose!\" So, I need to be concise.\n\nAlso, the prompt indicates that in multi-turn conversations, I should continue calling relevant functions if needed, but in this case, the function has been called and I have the result, so I can now respond with the information.\n\nThere's no need for another function call here because the query is fulfilled with the weather information.\n\nMy response should simply state the weather based on the function's output.\n\nFinal response structure: Since the function call has been executed and I have the result, I should output the weather information clearly.","refusal":null,"role":"assistant","tool_calls":null},"stop_reason":null}],"created":1761843024,"id":"3b2bc93e-a7e0-4852-924b-9482b44ec310_sidecar","model":"grok-3-mini-high","object":"chat.completion","prompt_filter_results":[{"prompt_index":0,"content_filter_results":{"hate":{"filtered":false,"severity":"safe"},"jailbreak":{"filtered":false,"detected":false},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}}}],"usage":{"audio_prompt_tokens":0,"completion_tokens":11,"completion_tokens_details":{"accepted_prediction_tokens":0,"audio_tokens":0,"reasoning_tokens":262,"rejected_prediction_tokens":0},"num_sources_used":0,"prompt_tokens":332,"prompt_tokens_details":{"audio_tokens":0,"cached_tokens":0,"image_tokens":0,"text_tokens":332},"total_tokens":605}}
    headers:
      Content-Type:
      - application/json
    status: 200 OK
    code: 200
    duration: 1.801076291s