Delegate model capabilities instead of hardcoding them
Richard Feldman
The into_open_ai_response call hardcoded
supports_parallel_tool_calls=true and supports_prompt_cache_key=false
instead of asking the model. As a result, reasoning models such as the
GPT-5 Codex variants would incorrectly send parallel_tool_calls=true,
which could cause API errors. This change adds the missing methods to
ChatGptModel and delegates to them, the same way the standard OpenAI
provider does.
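A minimal sketch of the delegation pattern, with illustrative names: `ChatGptModel` and `into_open_ai_response` come from the description above, but the enum variants, the request struct, and its field names are assumptions for the sake of the example, not the actual codebase types.

```rust
// Hypothetical, simplified model enum; variant names are illustrative.
#[derive(Clone, Copy)]
enum ChatGptModel {
    Gpt4o,
    Gpt5Codex,
}

impl ChatGptModel {
    // Reasoning/Codex variants do not support parallel tool calls,
    // so the capability must come from the model, not a hardcoded `true`.
    fn supports_parallel_tool_calls(&self) -> bool {
        match self {
            ChatGptModel::Gpt5Codex => false,
            _ => true,
        }
    }

    fn supports_prompt_cache_key(&self) -> bool {
        true
    }
}

// Stand-in for the outgoing OpenAI request payload.
#[derive(Debug)]
struct OpenAiRequest {
    parallel_tool_calls: Option<bool>,
    prompt_cache_key: Option<String>,
}

fn into_open_ai_response(model: ChatGptModel, cache_key: String) -> OpenAiRequest {
    OpenAiRequest {
        // Delegate to the model instead of hardcoding `Some(true)`:
        // omit the field entirely when the model lacks the capability.
        parallel_tool_calls: model.supports_parallel_tool_calls().then_some(true),
        prompt_cache_key: model.supports_prompt_cache_key().then(|| cache_key),
    }
}

fn main() {
    // Codex variant: the flag is omitted rather than sent as `true`.
    let req = into_open_ai_response(ChatGptModel::Gpt5Codex, "key".into());
    assert_eq!(req.parallel_tool_calls, None);

    // Standard model: the flag is sent as before.
    let req = into_open_ai_response(ChatGptModel::Gpt4o, "key".into());
    assert_eq!(req.parallel_tool_calls, Some(true));
}
```

Routing every capability query through the model type keeps the provider-specific request builder free of per-model special cases, which is the same design the standard OpenAI provider follows.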