Add reasoning_effort field to OpenAI compatible model configuration (#50582)
Created by Vimsucks
Some models, such as glm-5 and kimi-k2.5, support reasoning but require a
`reasoning_effort` parameter.
This PR adds support for setting `reasoning_effort` on OpenAI-compatible
models.
Tested using the following config:
```json
{
  "language_models": {
    "openai_compatible": {
      "My LiteLLM": {
        "available_models": [
          {
            "name": "glm-5",
            "display_name": "glm-5",
            "max_tokens": 73728,
            "reasoning_effort": "low"
          },
          {
            "name": "kimi-k2.5",
            "display_name": "kimi-k2.5",
            "max_tokens": 262144,
            "reasoning_effort": "low"
          }
        ]
      }
    }
  }
}
```
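For reference, the configured value corresponds to the standard `reasoning_effort` field of an OpenAI-style chat completions request body. A minimal sketch of the resulting payload, assuming the per-model setting is forwarded verbatim (the model name and message are illustrative):

```python
import json

# Illustrative request body for an OpenAI-compatible /chat/completions call.
# The per-model "reasoning_effort" value from the settings above is assumed
# to be passed through as the standard OpenAI `reasoning_effort` field.
payload = {
    "model": "glm-5",
    "messages": [{"role": "user", "content": "Hello"}],
    "reasoning_effort": "low",  # typically "low" | "medium" | "high"
}

print(json.dumps(payload, indent=2))
```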
Release Notes:
- Added a setting to control `reasoning_effort` in custom
OpenAI-compatible models