---
version: 2
interactions:
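# Recorded HTTP cassette (go-vcr-style fixture) capturing a two-step tool-calling
# exchange with the x.ai chat completions endpoint using the grok-3-mini model.
# Interaction 0: the user asks for the weather in Florence, Italy and the model
# responds with a `weather` tool call.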
- id: 0
  request:
    proto: HTTP/1.1
    proto_major: 1
    proto_minor: 1
    content_length: 462
    host: ""
    body: '{"messages":[{"content":"You are a helpful assistant","role":"system"},{"content":"What''s the weather in Florence, Italy?","role":"user"}],"model":"grok-3-mini","reasoning_effort":"high","tool_choice":"auto","tools":[{"function":{"name":"weather","strict":false,"description":"Get weather information for a location","parameters":{"properties":{"location":{"description":"the city","type":"string"}},"required":["location"],"type":"object"}},"type":"function"}]}'
    headers:
      Accept:
      - application/json
      Content-Type:
      - application/json
      User-Agent:
      - OpenAI/Go 2.7.1
    url: https://api.x.ai/v1/chat/completions
    method: POST
  response:
    proto: HTTP/2.0
    proto_major: 2
    proto_minor: 0
    content_length: -1
    uncompressed: true
    body: '{"id":"4d46c468-07c9-43d6-ec35-526a4c35aaa2","object":"chat.completion","created":1762855019,"model":"grok-3-mini-high","choices":[{"index":0,"message":{"role":"assistant","content":"","reasoning_content":"First, the user is asking for the weather in Florence, Italy. I have an available function called \"weather\" that can get weather information for a location. The function requires a \"location\" parameter, which is a string describing the city.\n\nThe location provided in the query is \"Florence, Italy\". This seems clear and matches the required parameter. I can infer that \"Florence, Italy\" is the city name, so I don''t need more information.\n\nThe function description says the location is \"the city\", so passing \"Florence, Italy\" should work as it specifies the city and country, which might help in accuracy.\n\nI should call the function if it can address the query. Here, it directly can, so I need to use the <function_call> tag in JSON format.\n\nThe format should be: <function_call>{\"action\": \"weather\", \"action_input\": {\"location\": \"Florence, Italy\"}}</function_call>\n\nAfter calling the function, I might need to handle the response in subsequent turns, but for now, this is the logical next step.\n\nMy response should not be verbose; keep it clear. Since I''m calling a function, I shouldn''t add extra text unless necessary. The instruction says to use the <function_call> tag only when deciding to call a function, and not to add other text if not needed. But in this case, since the response is just the function call, I can output that directly.\n\nFinally, confirm: no missing parameters, location is provided. Function matches the query.","tool_calls":[{"id":"call_67567016","function":{"name":"weather","arguments":"{\"location\":\"Florence, Italy\"}"},"type":"function"}],"refusal":null},"finish_reason":"tool_calls"}],"usage":{"prompt_tokens":288,"completion_tokens":27,"total_tokens":621,"prompt_tokens_details":{"text_tokens":288,"audio_tokens":0,"image_tokens":0,"cached_tokens":5},"completion_tokens_details":{"reasoning_tokens":306,"audio_tokens":0,"accepted_prediction_tokens":0,"rejected_prediction_tokens":0},"num_sources_used":0},"system_fingerprint":"fp_a6fbae1a97"}'
    headers:
      Content-Type:
      - application/json
    status: 200 OK
    code: 200
    duration: 3.652396875s
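# Interaction 1: the same conversation is replayed with the tool result ("40 C")
# appended as a tool message; the model replies with the final plain-text answer.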
- id: 1
  request:
    proto: HTTP/1.1
    proto_major: 1
    proto_minor: 1
    content_length: 2222
    host: ""
    body: '{"messages":[{"content":"You are a helpful assistant","role":"system"},{"content":"What''s the weather in Florence, Italy?","role":"user"},{"tool_calls":[{"id":"call_67567016","function":{"arguments":"{\"location\":\"Florence, Italy\"}","name":"weather"},"type":"function"}],"role":"assistant","reasoning_content":"First, the user is asking for the weather in Florence, Italy. I have an available function called \"weather\" that can get weather information for a location. The function requires a \"location\" parameter, which is a string describing the city.\n\nThe location provided in the query is \"Florence, Italy\". This seems clear and matches the required parameter. I can infer that \"Florence, Italy\" is the city name, so I don''t need more information.\n\nThe function description says the location is \"the city\", so passing \"Florence, Italy\" should work as it specifies the city and country, which might help in accuracy.\n\nI should call the function if it can address the query. Here, it directly can, so I need to use the \u003cfunction_call\u003e tag in JSON format.\n\nThe format should be: \u003cfunction_call\u003e{\"action\": \"weather\", \"action_input\": {\"location\": \"Florence, Italy\"}}\u003c/function_call\u003e\n\nAfter calling the function, I might need to handle the response in subsequent turns, but for now, this is the logical next step.\n\nMy response should not be verbose; keep it clear. Since I''m calling a function, I shouldn''t add extra text unless necessary. The instruction says to use the \u003cfunction_call\u003e tag only when deciding to call a function, and not to add other text if not needed. But in this case, since the response is just the function call, I can output that directly.\n\nFinally, confirm: no missing parameters, location is provided. Function matches the query."},{"content":"40 C","tool_call_id":"call_67567016","role":"tool"}],"model":"grok-3-mini","reasoning_effort":"high","tool_choice":"auto","tools":[{"function":{"name":"weather","strict":false,"description":"Get weather information for a location","parameters":{"properties":{"location":{"description":"the city","type":"string"}},"required":["location"],"type":"object"}},"type":"function"}]}'
    headers:
      Accept:
      - application/json
      Content-Type:
      - application/json
      User-Agent:
      - OpenAI/Go 2.7.1
    url: https://api.x.ai/v1/chat/completions
    method: POST
  response:
    proto: HTTP/2.0
    proto_major: 2
    proto_minor: 0
    content_length: -1
    uncompressed: true
    body: '{"id":"8f7e3075-4206-df84-6f7a-689e2b8deb0f","object":"chat.completion","created":1762855023,"model":"grok-3-mini-high","choices":[{"index":0,"message":{"role":"assistant","content":"The weather in Florence, Italy is 40°C.","reasoning_content":"First, the user asked: \"What''s the weather in Florence, Italy?\" This directly matches the available function \"weather\", which retrieves weather information for a location.\n\nThe function call I made was: <function_call>{\"action\": \"weather\", \"action_input\": {\"location\": \"Florence, Italy\"}}</function_call>. That was correct because the location is provided and inferable.\n\nNow, the response from the function is: \"40 C\". This seems to be the weather information, likely meaning 40 degrees Celsius.\n\nMy role is to be a helpful assistant. After calling a function, I need to provide a clear response to the user based on the function''s output. The instructions say: \"Keep your response to user clear; please do not make your response verbose!\" So, I should be concise.\n\nThe conversation is multi-turn, but in this case, the function call has been executed, and I have the result. I should now present this information to the user.\n\nThe function output is just \"40 C\", which is minimal. I can infer this means the temperature is 40 degrees Celsius, but I should confirm if there''s more to it. However, the function description is \"Get weather information\", and it returned \"40 C\", so I''ll assume that''s the key information provided.\n\nI need to decide if I should call another function or respond. The user''s request is fulfilled with this weather information, so no need for further function calls.\n\nResponse structure: I should not use <function_call> tags unless I''m calling a function. Now, I''m responding with the result.\n\nClear and concise response: \"The weather in Florence, Italy is 40 degrees Celsius.\"\n\nTo make it even better, I could say: \"The current weather in Florence, Italy is 40°C.\"\n\nBut the function output is \"40 C\", so I''ll stick close to that.\n\nFinally, ensure I''m not verbose.","refusal":null},"finish_reason":"stop"}],"usage":{"prompt_tokens":331,"completion_tokens":11,"total_tokens":722,"prompt_tokens_details":{"text_tokens":331,"audio_tokens":0,"image_tokens":0,"cached_tokens":283},"completion_tokens_details":{"reasoning_tokens":380,"audio_tokens":0,"accepted_prediction_tokens":0,"rejected_prediction_tokens":0},"num_sources_used":0},"system_fingerprint":"fp_a6fbae1a97"}'
    headers:
      Content-Type:
      - application/json
    status: 200 OK
    code: 200
    duration: 3.91244875s