assistant: Use GPT-4 tokenizer for `o3-mini` (#24068)

Roshan Padaki created

Sorry to dump an unsolicited PR for a hot feature! I'm sure someone else
was taking a look at this.

I noticed that token counting was disabled for the new model and that errors
of the form `[2025-01-31T22:59:01-05:00 ERROR assistant_context_editor]
No tokenizer found for model o3-mini` were being logged. This PR fixes the
issue by registering the `gpt-4` tokenizer for `o3-mini`.
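For context, here is a minimal standalone sketch of the fallback idea, not Zed's actual code: the `count_tokens` helper and the hard-coded model list are illustrative, and it assumes the `tiktoken_rs` and `anyhow` crates. When `tiktoken_rs` has no entry for a model id such as `o3-mini`, it looks up the `gpt-4` encoder instead and counts with that.

```rust
use anyhow::Result;

/// Illustrative sketch: map model ids that tiktoken_rs (at the time of this
/// PR) did not know about, such as `o3-mini`, onto the `gpt-4` tokenizer
/// before counting tokens.
fn count_tokens(model_id: &str, text: &str) -> Result<usize> {
    let tokenizer_id = match model_id {
        // No dedicated tokenizer entry for these ids; fall back to gpt-4.
        "o1" | "o1-mini" | "o3-mini" => "gpt-4",
        other => other,
    };
    let bpe = tiktoken_rs::get_bpe_from_model(tokenizer_id)?;
    Ok(bpe.encode_with_special_tokens(text).len())
}

fn main() -> Result<()> {
    // Without the fallback, "o3-mini" would fail with "No tokenizer found".
    let n = count_tokens("o3-mini", "Hello from the assistant panel")?;
    println!("{n} tokens");
    Ok(())
}
```

The actual change (see the diff below) takes the same approach inside `count_open_ai_tokens`, adding `O3Mini` to the match arm that already falls back to the `gpt-4` tokenizer for `O1` and `O1Mini`.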

Release Notes:

- openai: Fixed Assistant token counts for the `o3-mini` model

Change summary

crates/language_models/src/provider/open_ai.rs | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

Detailed changes

crates/language_models/src/provider/open_ai.rs

@@ -361,7 +361,10 @@ pub fn count_open_ai_tokens(
                 .collect::<Vec<_>>();
 
             match model {
-                open_ai::Model::Custom { .. } | open_ai::Model::O1Mini | open_ai::Model::O1 => {
+                open_ai::Model::Custom { .. }
+                | open_ai::Model::O1Mini
+                | open_ai::Model::O1
+                | open_ai::Model::O3Mini => {
                     tiktoken_rs::num_tokens_from_messages("gpt-4", &messages)
                 }
                 _ => tiktoken_rs::num_tokens_from_messages(model.id(), &messages),