# LLM Providers

To use AI in Zed, you need to have at least one large language model provider set up.

You can do that by either subscribing to [one of Zed's plans](./plans-and-usage.md), or by using API keys you already have for the supported providers.

## Use Your Own Keys {#use-your-own-keys}

If you already have an API key for an existing LLM provider, like Anthropic or OpenAI, you can add it to Zed and use the full power of the Agent Panel **_for free_**.

To add an existing API key to a given provider, go to the Agent Panel settings (`agent: open settings`), look for the desired provider, paste the key into the input, and hit enter.

> Note: API keys are _not_ stored as plain text in your `settings.json`, but rather in your OS's secure credential storage.

## Supported Providers

Zed offers an extensive list of "use your own key" LLM providers:

- [Amazon Bedrock](#amazon-bedrock)
- [Anthropic](#anthropic)
- [DeepSeek](#deepseek)
- [GitHub Copilot Chat](#github-copilot-chat)
- [Google AI](#google-ai)
- [LM Studio](#lmstudio)
- [Mistral](#mistral)
- [Ollama](#ollama)
- [OpenAI](#openai)
- [OpenAI API Compatible](#openai-api-compatible)
- [OpenRouter](#openrouter)
- [Vercel](#vercel-v0)
- [xAI](#xai)

### Amazon Bedrock {#amazon-bedrock}

> Supports tool use with models that support streaming tool use.
> More details can be found in [Amazon Bedrock's Tool Use documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference-supported-models-features.html).

To use Amazon Bedrock's models, AWS authentication is required.
Ensure your credentials have the following permissions set up:

- `bedrock:InvokeModelWithResponseStream`
- `bedrock:InvokeModel`

Your IAM policy should look similar to:

```json [settings]
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "*"
    }
  ]
}
```

With that done, choose one of the two authentication methods:

#### Authentication via Named Profile (Recommended)

1. Ensure you have the AWS CLI installed and configured with a named profile (see the example below if you need to create one)
2. Open your `settings.json` (`zed: open settings`) and include the `bedrock` key under `language_models` with the following settings:
   ```json [settings]
   {
     "language_models": {
       "bedrock": {
         "authentication_method": "named_profile",
         "region": "your-aws-region",
         "profile": "your-profile-name"
       }
     }
   }
   ```
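
If you don't have a named profile yet, one way to create one is with the AWS CLI; it will prompt you for the access key, secret key, and default region:

```sh
# Creates or updates the named profile referenced in the settings above
aws configure --profile your-profile-name
```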

#### Authentication via Static Credentials

While it's possible to configure static credentials through the Agent Panel settings UI by entering your AWS access key and secret directly, we recommend using named profiles instead as a better security practice.
To do this:

1. Create an IAM User that you can assume in the [IAM Console](https://us-east-1.console.aws.amazon.com/iam/home?region=us-east-1#/users).
2. Create security credentials for that User, save them and keep them secure.
3. Open the Agent Panel settings (`agent: open settings`) and go to the Amazon Bedrock section.
4. Copy the credentials from Step 2 into the respective **Access Key ID**, **Secret Access Key**, and **Region** fields.

#### Cross-Region Inference

The Zed implementation of Amazon Bedrock uses [Cross-Region inference](https://docs.aws.amazon.com/bedrock/latest/userguide/cross-region-inference.html) for all the model and region combinations that support it.
With Cross-Region inference, you can distribute traffic across multiple AWS Regions, enabling higher throughput.

For example, if you use `Claude Sonnet 3.7 Thinking` from `us-east-1`, it may be processed across the US regions, namely: `us-east-1`, `us-east-2`, or `us-west-2`.
Cross-Region inference requests are kept within the AWS Regions that are part of the geography where the data originally resides.
For example, a request made within the US is kept within the AWS Regions in the US.

Although the data remains stored only in the source Region, your input prompts and output results might move outside of your source Region during Cross-Region inference.
All data is transmitted encrypted across Amazon's secure network.

We support Cross-Region inference for each of the models on a best-effort basis; please refer to the [Cross-Region inference method code](https://github.com/zed-industries/zed/blob/main/crates/bedrock/src/models.rs#L297).

For the most up-to-date supported regions and models, refer to the [Supported Models and Regions for Cross-Region inference](https://docs.aws.amazon.com/bedrock/latest/userguide/inference-profiles-support.html).

### Anthropic {#anthropic}

You can use Anthropic models by choosing them via the model dropdown in the Agent Panel.

1. Sign up for Anthropic and [create an API key](https://console.anthropic.com/settings/keys)
2. Make sure that your Anthropic account has credits
3. Open the settings view (`agent: open settings`) and go to the Anthropic section
4. Enter your Anthropic API key

Even if you pay for Claude Pro, you will still have to [pay for additional credits](https://console.anthropic.com/settings/plans) to use it via the API.

Zed will also use the `ANTHROPIC_API_KEY` environment variable if it's defined.
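
For example, you could export the key in your shell profile before launching Zed (placeholder value shown):

```sh
# Replace the placeholder with your real Anthropic API key
export ANTHROPIC_API_KEY="your-api-key-here"
```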

#### Custom Models {#anthropic-custom-models}

You can add custom models to the Anthropic provider by adding the following to your Zed `settings.json`:

```json [settings]
{
  "language_models": {
    "anthropic": {
      "available_models": [
        {
          "name": "claude-3-5-sonnet-20240620",
          "display_name": "Sonnet 2024-June",
          "max_tokens": 128000,
          "max_output_tokens": 2560,
          "cache_configuration": {
            "max_cache_anchors": 10,
            "min_total_token": 10000,
            "should_speculate": false
          },
          "tool_override": "some-model-that-supports-toolcalling"
        }
      ]
    }
  }
}
```

Custom models will be listed in the model dropdown in the Agent Panel.

You can configure a model to use [extended thinking](https://docs.anthropic.com/en/docs/about-claude/models/extended-thinking-models) (if it supports it) by changing the mode in your model's configuration to `thinking`, for example:

```json [settings]
{
  "name": "claude-sonnet-4-latest",
  "display_name": "claude-sonnet-4-thinking",
  "max_tokens": 200000,
  "mode": {
    "type": "thinking",
    "budget_tokens": 4096
  }
}
```

### DeepSeek {#deepseek}

1. Visit the DeepSeek platform and [create an API key](https://platform.deepseek.com/api_keys)
2. Open the settings view (`agent: open settings`) and go to the DeepSeek section
3. Enter your DeepSeek API key

The DeepSeek API key will be saved in your keychain.

Zed will also use the `DEEPSEEK_API_KEY` environment variable if it's defined.

#### Custom Models {#deepseek-custom-models}

The Zed agent comes pre-configured to use the latest version of common models (DeepSeek Chat, DeepSeek Reasoner).
If you wish to use alternate models or customize the API endpoint, you can do so by adding the following to your Zed `settings.json`:

```json [settings]
{
  "language_models": {
    "deepseek": {
      "api_url": "https://api.deepseek.com",
      "available_models": [
        {
          "name": "deepseek-chat",
          "display_name": "DeepSeek Chat",
          "max_tokens": 64000
        },
        {
          "name": "deepseek-reasoner",
          "display_name": "DeepSeek Reasoner",
          "max_tokens": 64000,
          "max_output_tokens": 4096
        }
      ]
    }
  }
}
```

Custom models will be listed in the model dropdown in the Agent Panel.
You can also modify the `api_url` to use a custom endpoint if needed.

### GitHub Copilot Chat {#github-copilot-chat}

You can use GitHub Copilot Chat with the Zed agent by choosing it via the model dropdown in the Agent Panel.

1. Open the settings view (`agent: open settings`) and go to the GitHub Copilot Chat section
2. Click on `Sign in to use GitHub Copilot` and follow the steps shown in the modal.

Alternatively, you can provide an OAuth token via the `GH_COPILOT_TOKEN` environment variable.
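
For example, if you launch Zed from a terminal, you could set the token just for that session (placeholder value shown):

```sh
# Replace the placeholder with your Copilot OAuth token
GH_COPILOT_TOKEN="your-oauth-token-here" zed
```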

> **Note**: If you don't see specific models in the dropdown, you may need to enable them in your [GitHub Copilot settings](https://github.com/settings/copilot/features).

To use Copilot Enterprise with Zed (for both agent and completions), you must configure your enterprise endpoint as described in [Configuring GitHub Copilot Enterprise](./edit-prediction.md#github-copilot-enterprise).

### Google AI {#google-ai}

You can use Gemini models with the Zed agent by choosing them via the model dropdown in the Agent Panel.

1. Go to the Google AI Studio site and [create an API key](https://aistudio.google.com/app/apikey).
2. Open the settings view (`agent: open settings`) and go to the Google AI section
3. Enter your Google AI API key and press enter.

The Google AI API key will be saved in your keychain.

Zed will also use the `GEMINI_API_KEY` environment variable if it's defined. See [Using Gemini API keys](https://ai.google.dev/gemini-api/docs/api-key) in the Gemini docs for more.

#### Custom Models {#google-ai-custom-models}

By default, Zed will use `stable` versions of models, but you can use specific versions of models, including [experimental models](https://ai.google.dev/gemini-api/docs/models/experimental-models).
You can configure a model to use [thinking mode](https://ai.google.dev/gemini-api/docs/thinking) (if it supports it) by adding a `mode` configuration to your model.
This is useful for controlling reasoning token usage and response speed.
If not specified, Gemini will automatically choose the thinking budget.

Here is an example of a custom Google AI model you could add to your Zed `settings.json`:

```json [settings]
{
  "language_models": {
    "google": {
      "available_models": [
        {
          "name": "gemini-2.5-flash-preview-05-20",
          "display_name": "Gemini 2.5 Flash (Thinking)",
          "max_tokens": 1000000,
          "mode": {
            "type": "thinking",
            "budget_tokens": 24000
          }
        }
      ]
    }
  }
}
```

Custom models will be listed in the model dropdown in the Agent Panel.

### LM Studio {#lmstudio}

1. Download and install [the latest version of LM Studio](https://lmstudio.ai/download)
2. In the app, press `cmd/ctrl-shift-m` and download at least one model (e.g., qwen2.5-coder-7b). Alternatively, you can get models via the LM Studio CLI:

   ```sh
   lms get qwen2.5-coder-7b
   ```

3. Make sure the LM Studio API server is running by executing:

   ```sh
   lms server start
   ```

Tip: Set [LM Studio as a login item](https://lmstudio.ai/docs/advanced/headless#run-the-llm-service-on-machine-login) to automate running the LM Studio server.
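
LM Studio typically doesn't require an API key, and Zed talks to its local server directly. If you want to point Zed at a non-default server address or pre-declare a model, a minimal sketch for your `settings.json` could look like the following (this assumes the `lmstudio` settings key, LM Studio's default server URL, and a model entry shaped like the Ollama examples below; adjust all three to your setup):

```json [settings]
{
  "language_models": {
    "lmstudio": {
      "api_url": "http://localhost:1234/api/v0",
      "available_models": [
        {
          "name": "qwen2.5-coder-7b",
          "display_name": "Qwen 2.5 Coder 7B",
          "max_tokens": 32768,
          "supports_tools": true
        }
      ]
    }
  }
}
```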

### Mistral {#mistral}

1. Visit the Mistral platform and [create an API key](https://console.mistral.ai/api-keys/)
2. Open the configuration view (`agent: open settings`) and navigate to the Mistral section
3. Enter your Mistral API key

The Mistral API key will be saved in your keychain.

Zed will also use the `MISTRAL_API_KEY` environment variable if it's defined.

#### Custom Models {#mistral-custom-models}

The Zed agent comes pre-configured with several Mistral models (codestral-latest, mistral-large-latest, mistral-medium-latest, mistral-small-latest, open-mistral-nemo, and open-codestral-mamba).
All the default models support tool use.
If you wish to use alternate models or customize their parameters, you can do so by adding the following to your Zed `settings.json`:

```json [settings]
{
  "language_models": {
    "mistral": {
      "api_url": "https://api.mistral.ai/v1",
      "available_models": [
        {
          "name": "mistral-tiny-latest",
          "display_name": "Mistral Tiny",
          "max_tokens": 32000,
          "max_output_tokens": 4096,
          "max_completion_tokens": 1024,
          "supports_tools": true,
          "supports_images": false
        }
      ]
    }
  }
}
```

Custom models will be listed in the model dropdown in the Agent Panel.

### Ollama {#ollama}

Download and install Ollama from [ollama.com/download](https://ollama.com/download) (Linux or macOS) and verify the installation with `ollama --version`.

1. Download one of the [available models](https://ollama.com/models), for example, `mistral`:

   ```sh
   ollama pull mistral
   ```

2. Make sure that the Ollama server is running. You can start it either by running Ollama.app (macOS) or by launching:

   ```sh
   ollama serve
   ```

3. In the Agent Panel, select one of the Ollama models using the model dropdown.

#### Ollama Context Length {#ollama-context}

Zed has pre-configured maximum context lengths (`max_tokens`) to match the capabilities of common models.
Zed's API requests to Ollama include this as the `num_ctx` parameter, but the default values do not exceed `16384`, so users with ~16GB of RAM are able to use most models out of the box.

See [get_max_tokens in ollama.rs](https://github.com/zed-industries/zed/blob/main/crates/ollama/src/ollama.rs) for a complete set of defaults.
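
For reference, here is a hand-written sketch of where that parameter lands in Ollama's chat API (illustrative values; Zed constructs the actual request for you):

```sh
# Ollama accepts the context length under options.num_ctx
curl http://localhost:11434/api/chat -d '{
  "model": "qwen2.5-coder",
  "messages": [{ "role": "user", "content": "Hello" }],
  "options": { "num_ctx": 16384 }
}'
```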

> **Note**: Token counts displayed in the Agent Panel are only estimates and will differ from the model's native tokenizer.

Depending on your hardware or use-case, you may wish to limit or increase the context length for a specific model via `settings.json`:

```json [settings]
{
  "language_models": {
    "ollama": {
      "api_url": "http://localhost:11434",
      "available_models": [
        {
          "name": "qwen2.5-coder",
          "display_name": "qwen 2.5 coder 32K",
          "max_tokens": 32768,
          "supports_tools": true,
          "supports_thinking": true,
          "supports_images": true
        }
      ]
    }
  }
}
```

If you specify a context length that is too large for your hardware, Ollama will log an error.
You can watch these logs by running: `tail -f ~/.ollama/logs/ollama.log` (macOS) or `journalctl -u ollama -f` (Linux).
Depending on the memory available on your machine, you may need to adjust the context length to a smaller value.

You may also optionally specify a value for `keep_alive` for each available model.
This can be an integer (seconds) or a string duration like "5m", "10m", "1h", "1d", etc.
For example, `"keep_alive": "120s"` will allow the remote server to unload the model (freeing up GPU VRAM) after 120 seconds.
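
For example, extending a model entry from the configuration above (illustrative value):

```json [settings]
{
  "name": "qwen2.5-coder",
  "display_name": "qwen 2.5 coder 32K",
  "max_tokens": 32768,
  "keep_alive": "120s"
}
```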

The `supports_tools` option controls whether the model will use additional tools.
If the model is tagged with `tools` in the Ollama catalog, this option should be supplied, and the built-in profiles `Ask` and `Write` can be used.
If the model is not tagged with `tools` in the Ollama catalog, this option can still be supplied with the value `true`; however, be aware that only the `Minimal` built-in profile will work.

The `supports_thinking` option controls whether the model will perform an explicit "thinking" (reasoning) pass before producing its final answer.
If the model is tagged with `thinking` in the Ollama catalog, set this option and you can use it in Zed.

The `supports_images` option enables the model's vision capabilities, allowing it to process images included in the conversation context.
If the model is tagged with `vision` in the Ollama catalog, set this option and you can use it in Zed.

#### Ollama Authentication

In addition to running Ollama on your own hardware, which generally does not require authentication, Zed also supports connecting to remote Ollama instances, which require API keys for authentication.

One such service is [Ollama Turbo](https://ollama.com/turbo). To configure Zed to use Ollama Turbo:

1. Sign in to your Ollama account and subscribe to Ollama Turbo
2. Visit [ollama.com/settings/keys](https://ollama.com/settings/keys) and create an API key
3. Open the settings view (`agent: open settings`) and go to the Ollama section
4. Paste your API key and press enter
5. For the API URL, enter `https://ollama.com`

Zed will also use the `OLLAMA_API_KEY` environment variable if it's defined.
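
If you prefer to keep the URL in your `settings.json` rather than the UI, a minimal sketch using the documented `api_url` setting (the API key itself still comes from your keychain or the environment variable):

```json [settings]
{
  "language_models": {
    "ollama": {
      "api_url": "https://ollama.com"
    }
  }
}
```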

### OpenAI {#openai}

1. Visit the OpenAI platform and [create an API key](https://platform.openai.com/account/api-keys)
2. Make sure that your OpenAI account has credits
3. Open the settings view (`agent: open settings`) and go to the OpenAI section
4. Enter your OpenAI API key

The OpenAI API key will be saved in your keychain.

Zed will also use the `OPENAI_API_KEY` environment variable if it's defined.

#### Custom Models {#openai-custom-models}

The Zed agent comes pre-configured to use the latest version of common models (GPT-5, GPT-5 mini, o4-mini, GPT-4.1, and others).
If you wish to use alternate models, perhaps a preview release, or to control the request parameters, you can do so by adding the following to your Zed `settings.json`:

```json [settings]
{
  "language_models": {
    "openai": {
      "available_models": [
        {
          "name": "gpt-5",
          "display_name": "gpt-5 high",
          "reasoning_effort": "high",
          "max_tokens": 272000,
          "max_completion_tokens": 20000
        },
        {
          "name": "gpt-4o-2024-08-06",
          "display_name": "GPT 4o Summer 2024",
          "max_tokens": 128000
        }
      ]
    }
  }
}
```

You must provide the model's context window in the `max_tokens` parameter; this can be found in the [OpenAI model documentation](https://platform.openai.com/docs/models).

OpenAI `o1` models should set `max_completion_tokens` as well to avoid incurring high reasoning token costs.
Custom models will be listed in the model dropdown in the Agent Panel.

### OpenAI API Compatible {#openai-api-compatible}

Zed supports using [OpenAI-compatible APIs](https://platform.openai.com/docs/api-reference/chat) by specifying a custom `api_url` and `available_models` for the OpenAI provider.
This is useful for connecting to other hosted services (like Together AI, Anyscale, etc.) or local models.

You can add a custom, OpenAI-compatible model either via the UI or by editing your `settings.json`.

To do it via the UI, go to the Agent Panel settings (`agent: open settings`) and look for the "Add Provider" button to the right of the "LLM Providers" section title.
Then, fill in the input fields available in the modal.

To do it via your `settings.json`, add the following snippet under `language_models`:

```json [settings]
{
  "language_models": {
    "openai_compatible": {
      // Using Together AI as an example
      "Together AI": {
        "api_url": "https://api.together.xyz/v1",
        "available_models": [
          {
            "name": "mistralai/Mixtral-8x7B-Instruct-v0.1",
            "display_name": "Together Mixtral 8x7B",
            "max_tokens": 32768,
            "capabilities": {
              "tools": true,
              "images": false,
              "parallel_tool_calls": false,
              "prompt_cache_key": false
            }
          }
        ]
      }
    }
  }
}
```

By default, OpenAI-compatible models inherit the following capabilities:

- `tools`: true (supports tool/function calling)
- `images`: false (does not support image inputs)
- `parallel_tool_calls`: false (does not support the `parallel_tool_calls` parameter)
- `prompt_cache_key`: false (does not support the `prompt_cache_key` parameter)

Note that LLM API keys aren't stored in your settings file.
So, ensure you have the key set in your environment variables (`<PROVIDER_NAME>_API_KEY=<your api key>`) so your settings can pick it up.
In the example above, it would be `TOGETHER_AI_API_KEY=<your api key>`.

### OpenRouter {#openrouter}

OpenRouter provides access to multiple AI models through a single API. It supports tool use for compatible models.

1. Visit [OpenRouter](https://openrouter.ai) and create an account
2. Generate an API key from your [OpenRouter keys page](https://openrouter.ai/keys)
3. Open the settings view (`agent: open settings`) and go to the OpenRouter section
4. Enter your OpenRouter API key

The OpenRouter API key will be saved in your keychain.

Zed will also use the `OPENROUTER_API_KEY` environment variable if it's defined.

#### Custom Models {#openrouter-custom-models}

You can add custom models to the OpenRouter provider by adding the following to your Zed `settings.json`:

```json [settings]
{
  "language_models": {
    "open_router": {
      "api_url": "https://openrouter.ai/api/v1",
      "available_models": [
        {
          "name": "google/gemini-2.0-flash-thinking-exp",
          "display_name": "Gemini 2.0 Flash (Thinking)",
          "max_tokens": 200000,
          "max_output_tokens": 8192,
          "supports_tools": true,
          "supports_images": true,
          "mode": {
            "type": "thinking",
            "budget_tokens": 8000
          }
        }
      ]
    }
  }
}
```

The available configuration options for each model are:

- `name` (required): The model identifier used by OpenRouter
- `display_name` (optional): A human-readable name shown in the UI
- `max_tokens` (required): The model's context window size
- `max_output_tokens` (optional): Maximum tokens the model can generate
- `max_completion_tokens` (optional): Maximum completion tokens
- `supports_tools` (optional): Whether the model supports tool/function calling
- `supports_images` (optional): Whether the model supports image inputs
- `mode` (optional): Special mode configuration for thinking models

You can find available models and their specifications on the [OpenRouter models page](https://openrouter.ai/models).

Custom models will be listed in the model dropdown in the Agent Panel.

#### Provider Routing

You can optionally control how OpenRouter routes a given custom model request among underlying upstream providers via the `provider` object on each model entry.

Supported fields (all optional):

- `order`: Array of provider slugs to try first, in order (e.g. `["anthropic", "openai"]`)
- `allow_fallbacks` (default: `true`): Whether fallback providers may be used if preferred ones are unavailable
- `require_parameters` (default: `false`): Only use providers that support every parameter you supplied
- `data_collection` (default: `allow`): `"allow"` or `"disallow"` (controls use of providers that may store data)
- `only`: Whitelist of provider slugs allowed for this request
- `ignore`: Provider slugs to skip
- `quantizations`: Restrict to specific quantization variants (e.g. `["int4", "int8"]`)
- `sort`: Sort strategy for candidate providers (e.g. `"price"` or `"throughput"`)

Example adding routing preferences to a model:

```json [settings]
{
  "language_models": {
    "open_router": {
      "api_url": "https://openrouter.ai/api/v1",
      "available_models": [
        {
          "name": "openrouter/auto",
          "display_name": "Auto Router (Tools Preferred)",
          "max_tokens": 2000000,
          "supports_tools": true,
          "provider": {
            "order": ["anthropic", "openai"],
            "allow_fallbacks": true,
            "require_parameters": true,
            "only": ["anthropic", "openai", "google"],
            "ignore": ["cohere"],
            "quantizations": ["int8"],
            "sort": "price",
            "data_collection": "allow"
          }
        }
      ]
    }
  }
}
```

These routing controls let you fine-tune cost, capability, and reliability trade-offs without changing the model name you select in the UI.

### Vercel v0 {#vercel-v0}

[Vercel v0](https://vercel.com/docs/v0/api) is an expert model for generating full-stack apps, with framework-aware completions optimized for modern stacks like Next.js and Vercel.
It supports text and image inputs and provides fast streaming responses.

The v0 models are [OpenAI-compatible models](#openai-api-compatible), but Vercel is listed as a first-class provider in the panel's settings view.

To start using it with Zed, ensure you have first created a [v0 API key](https://v0.dev/chat/settings/keys).
Once you have it, paste it directly into the Vercel provider section in the panel's settings view.

You should then find it as `v0-1.5-md` in the model dropdown in the Agent Panel.
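
If you need to tweak the model list, here is a sketch for your `settings.json`, assuming the Vercel provider accepts the same `available_models` shape as the other first-class providers (the `vercel` key and the token limit below are assumptions; check the v0 docs for current values):

```json [settings]
{
  "language_models": {
    "vercel": {
      "available_models": [
        {
          // The max_tokens value here is an assumption; adjust to the published limits
          "name": "v0-1.5-md",
          "display_name": "Vercel v0-1.5-md",
          "max_tokens": 128000
        }
      ]
    }
  }
}
```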

### xAI {#xai}

Zed has first-class support for [xAI](https://x.ai/) models. You can use your own API key to access Grok models.

1. [Create an API key in the xAI Console](https://console.x.ai/team/default/api-keys)
2. Open the settings view (`agent: open settings`) and go to the **xAI** section
3. Enter your xAI API key

The xAI API key will be saved in your keychain. Zed will also use the `XAI_API_KEY` environment variable if it's defined.

> **Note:** While the xAI API is OpenAI-compatible, Zed has first-class support for it as a dedicated provider. For the best experience, we recommend using the dedicated `x_ai` provider configuration instead of the [OpenAI API Compatible](#openai-api-compatible) method.

#### Custom Models {#xai-custom-models}

The Zed agent comes pre-configured with common Grok models. If you wish to use alternate models or customize their parameters, you can do so by adding the following to your Zed `settings.json`:

```json [settings]
{
  "language_models": {
    "x_ai": {
      "api_url": "https://api.x.ai/v1",
      "available_models": [
        {
          "name": "grok-1.5",
          "display_name": "Grok 1.5",
          "max_tokens": 131072,
          "max_output_tokens": 8192
        },
        {
          "name": "grok-1.5v",
          "display_name": "Grok 1.5V (Vision)",
          "max_tokens": 131072,
          "max_output_tokens": 8192,
          "supports_images": true
        }
      ]
    }
  }
}
```

## Custom Provider Endpoints {#custom-provider-endpoint}

You can use a custom API endpoint for different providers, as long as it's compatible with the provider's API structure.
To do so, add the following to your `settings.json`:

```json [settings]
{
  "language_models": {
    "some-provider": {
      "api_url": "http://localhost:11434"
    }
  }
}
```

Currently, `some-provider` can be any of the following values: `anthropic`, `google`, `ollama`, `openai`.

This is the same infrastructure that powers models that are, for example, [OpenAI-compatible](#openai-api-compatible).