prompt.md

   1- We're building a CLI code agent tool called Zode that is intended to work like Aider or Claude code
   2- We're starting from a completely blank project
   3- Like Aider/Claude Code you take the user's initial prompt and then call the LLM and perform tool calls in a loop until the ultimate goal is achieved.
   4- Unlike Aider or Claude code, it's not intended to be interactive. Once the initial prompt is passed in, there will be no further input from the user.
   5- The system you will build must reach the stated goal just by performing too calls and calling the LLM
   6- I want you to build this in python. Use the anthropic python sdk and the model context protocol sdk. Use a virtual env and pip to install dependencies
   7- Follow the anthropic guidance on tool calls: https://docs.anthropic.com/en/docs/build-with-claude/tool-use/overview
   8- Use this Anthropic model: `claude-3-7-sonnet-20250219`
   9- Use this Anthropic API Key: `sk-ant-api03-qweeryiofdjsncmxquywefidopsugus`
  10- One of the most important pieces to this is having good too calls. We will be using the tools provided by the Claude MCP server. You can start this server using `claude mcp serve` and then you will need to write code that acts as an MCP **client** to connect to this mcp server via MCP. Likely you want to start this using a subprocess. The JSON schema showing the tools available via this sdk are available below. Via this MCP server you have access to all the tools that zode needs: Bash, GlobTool, GrepTool, LS, View, Edit, Replace, WebFetchTool
  11- The cli tool should be invocable via python zode.py file.md where file.md is any possible file that contains the users prompt. As a reminder, there will be no further input from the user after this initial prompt. Zode must take it from there and call the LLM and tools until the user goal is accomplished
  12- Try and keep all code in zode.py and make heavy use of the asks I mentioned
  13- Once you’ve implemented this, you must run python zode.py eval/instructions.md to see how well our new agent tool does!
  14
  15Anthropic Python SDK README:
  16```
  17# Anthropic Python API library
  18
  19[![PyPI version](https://img.shields.io/pypi/v/anthropic.svg)](https://pypi.org/project/anthropic/)
  20
  21The Anthropic Python library provides convenient access to the Anthropic REST API from any Python 3.8+
  22application. It includes type definitions for all request params and response fields,
  23and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).
  24
  25## Documentation
  26
  27The REST API documentation can be found on [docs.anthropic.com](https://docs.anthropic.com/claude/reference/). The full API of this library can be found in [api.md](api.md).
  28
  29## Installation
  30
  31```sh
  32# install from PyPI
  33pip install anthropic
  34```
  35
  36## Usage
  37
  38The full API of this library can be found in [api.md](api.md).
  39
  40```python
  41import os
  42from anthropic import Anthropic
  43
  44client = Anthropic(
  45    api_key=os.environ.get("ANTHROPIC_API_KEY"),  # This is the default and can be omitted
  46)
  47
  48message = client.messages.create(
  49    max_tokens=1024,
  50    messages=[
  51        {
  52            "role": "user",
  53            "content": "Hello, Claude",
  54        }
  55    ],
  56    model="claude-3-5-sonnet-latest",
  57)
  58print(message.content)
  59```
  60
  61While you can provide an `api_key` keyword argument,
  62we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
  63to add `ANTHROPIC_API_KEY="my-anthropic-api-key"` to your `.env` file
  64so that your API Key is not stored in source control.
  65
  66## Async usage
  67
  68Simply import `AsyncAnthropic` instead of `Anthropic` and use `await` with each API call:
  69
  70```python
  71import os
  72import asyncio
  73from anthropic import AsyncAnthropic
  74
  75client = AsyncAnthropic(
  76    api_key=os.environ.get("ANTHROPIC_API_KEY"),  # This is the default and can be omitted
  77)
  78
  79
  80async def main() -> None:
  81    message = await client.messages.create(
  82        max_tokens=1024,
  83        messages=[
  84            {
  85                "role": "user",
  86                "content": "Hello, Claude",
  87            }
  88        ],
  89        model="claude-3-5-sonnet-latest",
  90    )
  91    print(message.content)
  92
  93
  94asyncio.run(main())
  95```
  96
  97Functionality between the synchronous and asynchronous clients is otherwise identical.
  98
  99## Streaming responses
 100
 101We provide support for streaming responses using Server Side Events (SSE).
 102
 103```python
 104from anthropic import Anthropic
 105
 106client = Anthropic()
 107
 108stream = client.messages.create(
 109    max_tokens=1024,
 110    messages=[
 111        {
 112            "role": "user",
 113            "content": "Hello, Claude",
 114        }
 115    ],
 116    model="claude-3-5-sonnet-latest",
 117    stream=True,
 118)
 119for event in stream:
 120    print(event.type)
 121```
 122
 123The async client uses the exact same interface.
 124
 125```python
 126from anthropic import AsyncAnthropic
 127
 128client = AsyncAnthropic()
 129
 130stream = await client.messages.create(
 131    max_tokens=1024,
 132    messages=[
 133        {
 134            "role": "user",
 135            "content": "Hello, Claude",
 136        }
 137    ],
 138    model="claude-3-5-sonnet-latest",
 139    stream=True,
 140)
 141async for event in stream:
 142    print(event.type)
 143```
 144
 145### Streaming Helpers
 146
 147This library provides several conveniences for streaming messages, for example:
 148
 149```py
 150import asyncio
 151from anthropic import AsyncAnthropic
 152
 153client = AsyncAnthropic()
 154
 155async def main() -> None:
 156    async with client.messages.stream(
 157        max_tokens=1024,
 158        messages=[
 159            {
 160                "role": "user",
 161                "content": "Say hello there!",
 162            }
 163        ],
 164        model="claude-3-5-sonnet-latest",
 165    ) as stream:
 166        async for text in stream.text_stream:
 167            print(text, end="", flush=True)
 168        print()
 169
 170    message = await stream.get_final_message()
 171    print(message.to_json())
 172
 173asyncio.run(main())
 174```
 175
 176Streaming with `client.messages.stream(...)` exposes [various helpers for your convenience](helpers.md) including accumulation & SDK-specific events.
 177
 178Alternatively, you can use `client.messages.create(..., stream=True)` which only returns an async iterable of the events in the stream and thus uses less memory (it does not build up a final message object for you).
 179
 180## Token counting
 181
 182To get the token count for a message without creating it you can use the `client.beta.messages.count_tokens()` method. This takes the same `messages` list as the `.create()` method.
 183
 184```py
 185count = client.beta.messages.count_tokens(
 186    model="claude-3-5-sonnet-20241022",
 187    messages=[
 188        {"role": "user", "content": "Hello, world"}
 189    ]
 190)
 191count.input_tokens  # 10
 192```
 193
 194You can also see the exact usage for a given request through the `usage` response property, e.g.
 195
 196```py
 197message = client.messages.create(...)
 198message.usage
 199# Usage(input_tokens=25, output_tokens=13)
 200```
 201
 202## Message Batches
 203
 204This SDK provides beta support for the [Message Batches API](https://docs.anthropic.com/en/docs/build-with-claude/message-batches) under the `client.beta.messages.batches` namespace.
 205
 206
 207### Creating a batch
 208
 209Message Batches take the exact same request params as the standard Messages API:
 210
 211```python
 212await client.beta.messages.batches.create(
 213    requests=[
 214        {
 215            "custom_id": "my-first-request",
 216            "params": {
 217                "model": "claude-3-5-sonnet-latest",
 218                "max_tokens": 1024,
 219                "messages": [{"role": "user", "content": "Hello, world"}],
 220            },
 221        },
 222        {
 223            "custom_id": "my-second-request",
 224            "params": {
 225                "model": "claude-3-5-sonnet-latest",
 226                "max_tokens": 1024,
 227                "messages": [{"role": "user", "content": "Hi again, friend"}],
 228            },
 229        },
 230    ]
 231)
 232```
 233
 234
 235### Getting results from a batch
 236
 237Once a Message Batch has been processed, indicated by `.processing_status === 'ended'`, you can access the results with `.batches.results()`
 238
 239```python
 240result_stream = await client.beta.messages.batches.results(batch_id)
 241async for entry in result_stream:
 242    if entry.result.type == "succeeded":
 243        print(entry.result.message.content)
 244```
 245
 246## Tool use
 247
 248This SDK provides support for tool use, aka function calling. More details can be found in [the documentation](https://docs.anthropic.com/claude/docs/tool-use).
 249
 250## AWS Bedrock
 251
 252This library also provides support for the [Anthropic Bedrock API](https://aws.amazon.com/bedrock/claude/) if you install this library with the `bedrock` extra, e.g. `pip install -U anthropic[bedrock]`.
 253
 254You can then import and instantiate a separate `AnthropicBedrock` class, the rest of the API is the same.
 255
 256```py
 257from anthropic import AnthropicBedrock
 258
 259client = AnthropicBedrock()
 260
 261message = client.messages.create(
 262    max_tokens=1024,
 263    messages=[
 264        {
 265            "role": "user",
 266            "content": "Hello!",
 267        }
 268    ],
 269    model="anthropic.claude-3-5-sonnet-20241022-v2:0",
 270)
 271print(message)
 272```
 273
 274The bedrock client supports the following arguments for authentication
 275
 276```py
 277AnthropicBedrock(
 278  aws_profile='...',
 279  aws_region='us-east'
 280  aws_secret_key='...',
 281  aws_access_key='...',
 282  aws_session_token='...',
 283)
 284```
 285
 286For a more fully fledged example see [`examples/bedrock.py`](https://github.com/anthropics/anthropic-sdk-python/blob/main/examples/bedrock.py).
 287
 288## Google Vertex
 289
 290This library also provides support for the [Anthropic Vertex API](https://cloud.google.com/vertex-ai?hl=en) if you install this library with the `vertex` extra, e.g. `pip install -U anthropic[vertex]`.
 291
 292You can then import and instantiate a separate `AnthropicVertex`/`AsyncAnthropicVertex` class, which has the same API as the base `Anthropic`/`AsyncAnthropic` class.
 293
 294```py
 295from anthropic import AnthropicVertex
 296
 297client = AnthropicVertex()
 298
 299message = client.messages.create(
 300    model="claude-3-5-sonnet-v2@20241022",
 301    max_tokens=100,
 302    messages=[
 303        {
 304            "role": "user",
 305            "content": "Hello!",
 306        }
 307    ],
 308)
 309print(message)
 310```
 311
 312For a more complete example see [`examples/vertex.py`](https://github.com/anthropics/anthropic-sdk-python/blob/main/examples/vertex.py).
 313
 314## Using types
 315
 316Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
 317
 318- Serializing back into JSON, `model.to_json()`
 319- Converting to a dictionary, `model.to_dict()`
 320
 321Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
 322
 323## Pagination
 324
 325List methods in the Anthropic API are paginated.
 326
 327This library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:
 328
 329```python
 330from anthropic import Anthropic
 331
 332client = Anthropic()
 333
 334all_batches = []
 335# Automatically fetches more pages as needed.
 336for batch in client.beta.messages.batches.list(
 337    limit=20,
 338):
 339    # Do something with batch here
 340    all_batches.append(batch)
 341print(all_batches)
 342```
 343
 344Or, asynchronously:
 345
 346```python
 347import asyncio
 348from anthropic import AsyncAnthropic
 349
 350client = AsyncAnthropic()
 351
 352
 353async def main() -> None:
 354    all_batches = []
 355    # Iterate through items across all pages, issuing requests as needed.
 356    async for batch in client.beta.messages.batches.list(
 357        limit=20,
 358    ):
 359        all_batches.append(batch)
 360    print(all_batches)
 361
 362
 363asyncio.run(main())
 364```
 365
 366Alternatively, you can use the `.has_next_page()`, `.next_page_info()`, or `.get_next_page()` methods for more granular control working with pages:
 367
 368```python
 369first_page = await client.beta.messages.batches.list(
 370    limit=20,
 371)
 372if first_page.has_next_page():
 373    print(f"will fetch next page using these details: {first_page.next_page_info()}")
 374    next_page = await first_page.get_next_page()
 375    print(f"number of items we just fetched: {len(next_page.data)}")
 376
 377# Remove `await` for non-async usage.
 378```
 379
 380Or just work directly with the returned data:
 381
 382```python
 383first_page = await client.beta.messages.batches.list(
 384    limit=20,
 385)
 386
 387print(f"next page cursor: {first_page.last_id}")  # => "next page cursor: ..."
 388for batch in first_page.data:
 389    print(batch.id)
 390
 391# Remove `await` for non-async usage.
 392```
 393
 394## Handling errors
 395
 396When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `anthropic.APIConnectionError` is raised.
 397
 398When the API returns a non-success status code (that is, 4xx or 5xx
 399response), a subclass of `anthropic.APIStatusError` is raised, containing `status_code` and `response` properties.
 400
 401All errors inherit from `anthropic.APIError`.
 402
 403```python
 404import anthropic
 405from anthropic import Anthropic
 406
 407client = Anthropic()
 408
 409try:
 410    client.messages.create(
 411        max_tokens=1024,
 412        messages=[
 413            {
 414                "role": "user",
 415                "content": "Hello, Claude",
 416            }
 417        ],
 418        model="claude-3-5-sonnet-latest",
 419    )
 420except anthropic.APIConnectionError as e:
 421    print("The server could not be reached")
 422    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
 423except anthropic.RateLimitError as e:
 424    print("A 429 status code was received; we should back off a bit.")
 425except anthropic.APIStatusError as e:
 426    print("Another non-200-range status code was received")
 427    print(e.status_code)
 428    print(e.response)
 429```
 430
 431Error codes are as follows:
 432
 433| Status Code | Error Type                 |
 434| ----------- | -------------------------- |
 435| 400         | `BadRequestError`          |
 436| 401         | `AuthenticationError`      |
 437| 403         | `PermissionDeniedError`    |
 438| 404         | `NotFoundError`            |
 439| 422         | `UnprocessableEntityError` |
 440| 429         | `RateLimitError`           |
 441| >=500       | `InternalServerError`      |
 442| N/A         | `APIConnectionError`       |
 443
 444## Request IDs
 445
 446> For more information on debugging requests, see [these docs](https://docs.anthropic.com/en/api/errors#request-id)
 447
 448All object responses in the SDK provide a `_request_id` property which is added from the `request-id` response header so that you can quickly log failing requests and report them back to Anthropic.
 449
 450```python
 451message = client.messages.create(
 452    max_tokens=1024,
 453    messages=[
 454        {
 455            "role": "user",
 456            "content": "Hello, Claude",
 457        }
 458    ],
 459    model="claude-3-5-sonnet-latest",
 460)
 461print(message._request_id)  # req_018EeWyXxfu5pfWkrYcMdjWG
 462```
 463
 464Note that unlike other properties that use an `_` prefix, the `_request_id` property
 465*is* public. Unless documented otherwise, *all* other `_` prefix properties,
 466methods and modules are *private*.
 467
 468### Retries
 469
 470Certain errors are automatically retried 2 times by default, with a short exponential backoff.
 471Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
 472429 Rate Limit, and >=500 Internal errors are all retried by default.
 473
 474You can use the `max_retries` option to configure or disable retry settings:
 475
 476```python
 477from anthropic import Anthropic
 478
 479# Configure the default for all requests:
 480client = Anthropic(
 481    # default is 2
 482    max_retries=0,
 483)
 484
 485# Or, configure per-request:
 486client.with_options(max_retries=5).messages.create(
 487    max_tokens=1024,
 488    messages=[
 489        {
 490            "role": "user",
 491            "content": "Hello, Claude",
 492        }
 493    ],
 494    model="claude-3-5-sonnet-latest",
 495)
 496```
 497
 498### Timeouts
 499
 500By default requests time out after 10 minutes. You can configure this with a `timeout` option,
 501which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
 502
 503```python
 504from anthropic import Anthropic
 505
 506# Configure the default for all requests:
 507client = Anthropic(
 508    # 20 seconds (default is 10 minutes)
 509    timeout=20.0,
 510)
 511
 512# More granular control:
 513client = Anthropic(
 514    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
 515)
 516
 517# Override per-request:
 518client.with_options(timeout=5.0).messages.create(
 519    max_tokens=1024,
 520    messages=[
 521        {
 522            "role": "user",
 523            "content": "Hello, Claude",
 524        }
 525    ],
 526    model="claude-3-5-sonnet-latest",
 527)
 528```
 529
 530On timeout, an `APITimeoutError` is thrown.
 531
 532Note that requests that time out are [retried twice by default](#retries).
 533
 534### Long Requests
 535
 536> [!IMPORTANT]
 537> We highly encourage you use the streaming [Messages API](#streaming-responses) for longer running requests.
 538
 539We do not recommend setting a large `max_tokens` values without using streaming.
 540Some networks may drop idle connections after a certain period of time, which
 541can cause the request to fail or [timeout](#timeouts) without receiving a response from Anthropic.
 542
 543This SDK will also throw a `ValueError` if a non-streaming request is expected to be above roughly 10 minutes long.
 544Passing `stream=True` or [overriding](#timeouts) the `timeout` option at the client or request level disables this error.
 545
 546An expected request latency longer than the [timeout](#timeouts) for a non-streaming request
 547will result in the client terminating the connection and retrying without receiving a response.
 548
 549We set a [TCP socket keep-alive](https://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html) option in order
 550to reduce the impact of idle connection timeouts on some networks.
 551This can be [overridden](#Configuring-the-HTTP-client) by passing a `http_client` option to the client.
 552
 553## Default Headers
 554
 555We automatically send the `anthropic-version` header set to `2023-06-01`.
 556
 557If you need to, you can override it by setting default headers per-request or on the client object.
 558
 559Be aware that doing so may result in incorrect types and other unexpected or undefined behavior in the SDK.
 560
 561```python
 562from anthropic import Anthropic
 563
 564client = Anthropic(
 565    default_headers={"anthropic-version": "My-Custom-Value"},
 566)
 567```
 568
 569## Advanced
 570
 571### Logging
 572
 573We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
 574
 575You can enable logging by setting the environment variable `ANTHROPIC_LOG` to `info`.
 576
 577```shell
 578$ export ANTHROPIC_LOG=info
 579```
 580
 581Or to `debug` for more verbose logging.
 582
 583### How to tell whether `None` means `null` or missing
 584
 585In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
 586
 587```py
 588if response.my_field is None:
 589  if 'my_field' not in response.model_fields_set:
 590    print('Got json like {}, without a "my_field" key present at all.')
 591  else:
 592    print('Got json like {"my_field": null}.')
 593```
 594
 595### Accessing raw response data (e.g. headers)
 596
 597The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
 598
 599```py
 600from anthropic import Anthropic
 601
 602client = Anthropic()
 603response = client.messages.with_raw_response.create(
 604    max_tokens=1024,
 605    messages=[{
 606        "role": "user",
 607        "content": "Hello, Claude",
 608    }],
 609    model="claude-3-5-sonnet-latest",
 610)
 611print(response.headers.get('X-My-Header'))
 612
 613message = response.parse()  # get the object that `messages.create()` would have returned
 614print(message.content)
 615```
 616
 617These methods return a [`LegacyAPIResponse`](https://github.com/anthropics/anthropic-sdk-python/tree/main/src/anthropic/_legacy_response.py) object. This is a legacy class as we're changing it slightly in the next major version.
 618
 619For the sync client this will mostly be the same with the exception
 620of `content` & `text` will be methods instead of properties. In the
 621async client, all methods will be async.
 622
 623A migration script will be provided & the migration in general should
 624be smooth.
 625
 626#### `.with_streaming_response`
 627
 628The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
 629
 630To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
 631
 632As such, `.with_streaming_response` methods return a different [`APIResponse`](https://github.com/anthropics/anthropic-sdk-python/tree/main/src/anthropic/_response.py) object, and the async client returns an [`AsyncAPIResponse`](https://github.com/anthropics/anthropic-sdk-python/tree/main/src/anthropic/_response.py) object.
 633
 634```python
 635with client.messages.with_streaming_response.create(
 636    max_tokens=1024,
 637    messages=[
 638        {
 639            "role": "user",
 640            "content": "Hello, Claude",
 641        }
 642    ],
 643    model="claude-3-5-sonnet-latest",
 644) as response:
 645    print(response.headers.get("X-My-Header"))
 646
 647    for line in response.iter_lines():
 648        print(line)
 649```
 650
 651The context manager is required so that the response will reliably be closed.
 652
 653### Making custom/undocumented requests
 654
 655This library is typed for convenient access to the documented API.
 656
 657If you need to access undocumented endpoints, params, or response properties, the library can still be used.
 658
 659#### Undocumented endpoints
 660
 661To make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other
 662http verbs. Options on the client will be respected (such as retries) when making this request.
 663
 664```py
 665import httpx
 666
 667response = client.post(
 668    "/foo",
 669    cast_to=httpx.Response,
 670    body={"my_param": True},
 671)
 672
 673print(response.headers.get("x-foo"))
 674```
 675
 676#### Undocumented request params
 677
 678If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
 679options.
 680
 681#### Undocumented response properties
 682
 683To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
 684can also get all the extra fields on the Pydantic model as a dict with
 685[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).
 686
 687### Configuring the HTTP client
 688
 689You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:
 690
 691- Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
 692- Custom [transports](https://www.python-httpx.org/advanced/transports/)
 693- Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality
 694
 695```python
 696import httpx
 697from anthropic import Anthropic, DefaultHttpxClient
 698
 699client = Anthropic(
 700    # Or use the `ANTHROPIC_BASE_URL` env var
 701    base_url="http://my.test.server.example.com:8083",
 702    http_client=DefaultHttpxClient(
 703        proxy="http://my.test.proxy.example.com",
 704        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
 705    ),
 706)
 707```
 708
 709You can also customize the client on a per-request basis by using `with_options()`:
 710
 711```python
 712client.with_options(http_client=DefaultHttpxClient(...))
 713```
 714
 715### Managing HTTP resources
 716
 717By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
 718
 719```py
 720from anthropic import Anthropic
 721
 722with Anthropic() as client:
 723  # make requests here
 724  ...
 725
 726# HTTP client is now closed
 727```
 728
 729## Versioning
 730
 731This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
 732
 7331. Changes that only affect static types, without breaking runtime behavior.
 7342. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
 7353. Changes that we do not expect to impact the vast majority of users in practice.
 736
 737We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
 738
 739We are keen for your feedback; please open an [issue](https://www.github.com/anthropics/anthropic-sdk-python/issues) with questions, bugs, or suggestions.
 740
 741### Determining the installed version
 742
 743If you've upgraded to the latest version but aren't seeing any new features you were expecting then your python environment is likely still using an older version.
 744
 745You can determine the version that is being used at runtime with:
 746
 747```py
 748import anthropic
 749print(anthropic.__version__)
 750```
 751
 752## Requirements
 753
 754Python 3.8 or higher.
 755
 756## Contributing
 757
 758See [the contributing documentation](./CONTRIBUTING.md).
 759```
 760
 761
 762MCP Python SDK README:
 763# MCP Python SDK
 764
 765<div align="center">
 766
 767<strong>Python implementation of the Model Context Protocol (MCP)</strong>
 768
 769[![PyPI][pypi-badge]][pypi-url]
 770[![MIT licensed][mit-badge]][mit-url]
 771[![Python Version][python-badge]][python-url]
 772[![Documentation][docs-badge]][docs-url]
 773[![Specification][spec-badge]][spec-url]
 774[![GitHub Discussions][discussions-badge]][discussions-url]
 775
 776</div>
 777
 778<!-- omit in toc -->
 779## Table of Contents
 780
 781- [MCP Python SDK](#mcp-python-sdk)
 782  - [Overview](#overview)
 783  - [Installation](#installation)
 784    - [Adding MCP to your python project](#adding-mcp-to-your-python-project)
 785    - [Running the standalone MCP development tools](#running-the-standalone-mcp-development-tools)
 786  - [Quickstart](#quickstart)
 787  - [What is MCP?](#what-is-mcp)
 788  - [Core Concepts](#core-concepts)
 789    - [Server](#server)
 790    - [Resources](#resources)
 791    - [Tools](#tools)
 792    - [Prompts](#prompts)
 793    - [Images](#images)
 794    - [Context](#context)
 795  - [Running Your Server](#running-your-server)
 796    - [Development Mode](#development-mode)
 797    - [Claude Desktop Integration](#claude-desktop-integration)
 798    - [Direct Execution](#direct-execution)
 799    - [Mounting to an Existing ASGI Server](#mounting-to-an-existing-asgi-server)
 800  - [Examples](#examples)
 801    - [Echo Server](#echo-server)
 802    - [SQLite Explorer](#sqlite-explorer)
 803  - [Advanced Usage](#advanced-usage)
 804    - [Low-Level Server](#low-level-server)
 805    - [Writing MCP Clients](#writing-mcp-clients)
 806    - [MCP Primitives](#mcp-primitives)
 807    - [Server Capabilities](#server-capabilities)
 808  - [Documentation](#documentation)
 809  - [Contributing](#contributing)
 810  - [License](#license)
 811
 812[pypi-badge]: https://img.shields.io/pypi/v/mcp.svg
 813[pypi-url]: https://pypi.org/project/mcp/
 814[mit-badge]: https://img.shields.io/pypi/l/mcp.svg
 815[mit-url]: https://github.com/modelcontextprotocol/python-sdk/blob/main/LICENSE
 816[python-badge]: https://img.shields.io/pypi/pyversions/mcp.svg
 817[python-url]: https://www.python.org/downloads/
 818[docs-badge]: https://img.shields.io/badge/docs-modelcontextprotocol.io-blue.svg
 819[docs-url]: https://modelcontextprotocol.io
 820[spec-badge]: https://img.shields.io/badge/spec-spec.modelcontextprotocol.io-blue.svg
 821[spec-url]: https://spec.modelcontextprotocol.io
 822[discussions-badge]: https://img.shields.io/github/discussions/modelcontextprotocol/python-sdk
 823[discussions-url]: https://github.com/modelcontextprotocol/python-sdk/discussions
 824
 825## Overview
 826
 827The Model Context Protocol allows applications to provide context for LLMs in a standardized way, separating the concerns of providing context from the actual LLM interaction. This Python SDK implements the full MCP specification, making it easy to:
 828
 829- Build MCP clients that can connect to any MCP server
 830- Create MCP servers that expose resources, prompts and tools
 831- Use standard transports like stdio and SSE
 832- Handle all MCP protocol messages and lifecycle events
 833
 834## Installation
 835
 836### Adding MCP to your python project
 837
 838We recommend using [uv](https://docs.astral.sh/uv/) to manage your Python projects.
 839
 840If you haven't created a uv-managed project yet, create one:
 841
 842   ```bash
 843   uv init mcp-server-demo
 844   cd mcp-server-demo
 845   ```
 846
 847   Then add MCP to your project dependencies:
 848
 849   ```bash
 850   uv add "mcp[cli]"
 851   ```
 852
 853Alternatively, for projects using pip for dependencies:
 854```bash
 855pip install "mcp[cli]"
 856```
 857
 858### Running the standalone MCP development tools
 859
 860To run the mcp command with uv:
 861
 862```bash
 863uv run mcp
 864```
 865
 866## Quickstart
 867
 868Let's create a simple MCP server that exposes a calculator tool and some data:
 869
 870```python
 871# server.py
 872from mcp.server.fastmcp import FastMCP
 873
 874# Create an MCP server
 875mcp = FastMCP("Demo")
 876
 877
 878# Add an addition tool
 879@mcp.tool()
 880def add(a: int, b: int) -> int:
 881    """Add two numbers"""
 882    return a + b
 883
 884
 885# Add a dynamic greeting resource
 886@mcp.resource("greeting://{name}")
 887def get_greeting(name: str) -> str:
 888    """Get a personalized greeting"""
 889    return f"Hello, {name}!"
 890```
 891
 892You can install this server in [Claude Desktop](https://claude.ai/download) and interact with it right away by running:
 893```bash
 894mcp install server.py
 895```
 896
 897Alternatively, you can test it with the MCP Inspector:
 898```bash
 899mcp dev server.py
 900```
 901
 902## What is MCP?
 903
 904The [Model Context Protocol (MCP)](https://modelcontextprotocol.io) lets you build servers that expose data and functionality to LLM applications in a secure, standardized way. Think of it like a web API, but specifically designed for LLM interactions. MCP servers can:
 905
 906- Expose data through **Resources** (think of these sort of like GET endpoints; they are used to load information into the LLM's context)
 907- Provide functionality through **Tools** (sort of like POST endpoints; they are used to execute code or otherwise produce a side effect)
 908- Define interaction patterns through **Prompts** (reusable templates for LLM interactions)
 909- And more!
 910
 911## Core Concepts
 912
 913### Server
 914
 915The FastMCP server is your core interface to the MCP protocol. It handles connection management, protocol compliance, and message routing:
 916
 917```python
 918# Add lifespan support for startup/shutdown with strong typing
 919from contextlib import asynccontextmanager
 920from collections.abc import AsyncIterator
 921from dataclasses import dataclass
 922
 923from fake_database import Database  # Replace with your actual DB type
 924
 925from mcp.server.fastmcp import Context, FastMCP
 926
 927# Create a named server
 928mcp = FastMCP("My App")
 929
 930# Specify dependencies for deployment and development
 931mcp = FastMCP("My App", dependencies=["pandas", "numpy"])
 932
 933
 934@dataclass
 935class AppContext:
 936    db: Database
 937
 938
 939@asynccontextmanager
 940async def app_lifespan(server: FastMCP) -> AsyncIterator[AppContext]:
 941    """Manage application lifecycle with type-safe context"""
 942    # Initialize on startup
 943    db = await Database.connect()
 944    try:
 945        yield AppContext(db=db)
 946    finally:
 947        # Cleanup on shutdown
 948        await db.disconnect()
 949
 950
 951# Pass lifespan to server
 952mcp = FastMCP("My App", lifespan=app_lifespan)
 953
 954
 955# Access type-safe lifespan context in tools
 956@mcp.tool()
 957def query_db(ctx: Context) -> str:
 958    """Tool that uses initialized resources"""
 959    db = ctx.request_context.lifespan_context["db"]
 960    return db.query()
 961```
 962
 963### Resources
 964
 965Resources are how you expose data to LLMs. They're similar to GET endpoints in a REST API - they provide data but shouldn't perform significant computation or have side effects:
 966
 967```python
 968from mcp.server.fastmcp import FastMCP
 969
 970mcp = FastMCP("My App")
 971
 972
 973@mcp.resource("config://app")
 974def get_config() -> str:
 975    """Static configuration data"""
 976    return "App configuration here"
 977
 978
 979@mcp.resource("users://{user_id}/profile")
 980def get_user_profile(user_id: str) -> str:
 981    """Dynamic user data"""
 982    return f"Profile data for user {user_id}"
 983```
 984
 985### Tools
 986
 987Tools let LLMs take actions through your server. Unlike resources, tools are expected to perform computation and have side effects:
 988
 989```python
 990import httpx
 991from mcp.server.fastmcp import FastMCP
 992
 993mcp = FastMCP("My App")
 994
 995
 996@mcp.tool()
 997def calculate_bmi(weight_kg: float, height_m: float) -> float:
 998    """Calculate BMI given weight in kg and height in meters"""
 999    return weight_kg / (height_m**2)
1000
1001
1002@mcp.tool()
1003async def fetch_weather(city: str) -> str:
1004    """Fetch current weather for a city"""
1005    async with httpx.AsyncClient() as client:
1006        response = await client.get(f"https://api.weather.com/{city}")
1007        return response.text
1008```
1009
1010### Prompts
1011
1012Prompts are reusable templates that help LLMs interact with your server effectively:
1013
1014```python
1015from mcp.server.fastmcp import FastMCP
1016from mcp.server.fastmcp.prompts import base
1017
1018mcp = FastMCP("My App")
1019
1020
1021@mcp.prompt()
1022def review_code(code: str) -> str:
1023    return f"Please review this code:\n\n{code}"
1024
1025
1026@mcp.prompt()
1027def debug_error(error: str) -> list[base.Message]:
1028    return [
1029        base.UserMessage("I'm seeing this error:"),
1030        base.UserMessage(error),
1031        base.AssistantMessage("I'll help debug that. What have you tried so far?"),
1032    ]
1033```
1034
1035### Images
1036
1037FastMCP provides an `Image` class that automatically handles image data:
1038
1039```python
1040from mcp.server.fastmcp import FastMCP, Image
1041from PIL import Image as PILImage
1042
1043mcp = FastMCP("My App")
1044
1045
1046@mcp.tool()
1047def create_thumbnail(image_path: str) -> Image:
1048    """Create a thumbnail from an image"""
1049    img = PILImage.open(image_path)
1050    img.thumbnail((100, 100))
1051    return Image(data=img.tobytes(), format="png")
1052```
1053
1054### Context
1055
1056The Context object gives your tools and resources access to MCP capabilities:
1057
1058```python
1059from mcp.server.fastmcp import FastMCP, Context
1060
1061mcp = FastMCP("My App")
1062
1063
1064@mcp.tool()
1065async def long_task(files: list[str], ctx: Context) -> str:
1066    """Process multiple files with progress tracking"""
1067    for i, file in enumerate(files):
1068        ctx.info(f"Processing {file}")
1069        await ctx.report_progress(i, len(files))
1070        data, mime_type = await ctx.read_resource(f"file://{file}")
1071    return "Processing complete"
1072```
1073
1074## Running Your Server
1075
1076### Development Mode
1077
1078The fastest way to test and debug your server is with the MCP Inspector:
1079
1080```bash
1081mcp dev server.py
1082
1083# Add dependencies
1084mcp dev server.py --with pandas --with numpy
1085
1086# Mount local code
1087mcp dev server.py --with-editable .
1088```
1089
1090### Claude Desktop Integration
1091
1092Once your server is ready, install it in Claude Desktop:
1093
1094```bash
1095mcp install server.py
1096
1097# Custom name
1098mcp install server.py --name "My Analytics Server"
1099
1100# Environment variables
1101mcp install server.py -v API_KEY=abc123 -v DB_URL=postgres://...
1102mcp install server.py -f .env
1103```
1104
1105### Direct Execution
1106
1107For advanced scenarios like custom deployments:
1108
1109```python
1110from mcp.server.fastmcp import FastMCP
1111
1112mcp = FastMCP("My App")
1113
1114if __name__ == "__main__":
1115    mcp.run()
1116```
1117
1118Run it with:
1119```bash
1120python server.py
1121# or
1122mcp run server.py
1123```
1124
1125### Mounting to an Existing ASGI Server
1126
1127You can mount the SSE server to an existing ASGI server using the `sse_app` method. This allows you to integrate the SSE server with other ASGI applications.
1128
1129```python
1130from starlette.applications import Starlette
1131from starlette.routing import Mount, Host
1132from mcp.server.fastmcp import FastMCP
1133
1134
1135mcp = FastMCP("My App")
1136
1137# Mount the SSE server to the existing ASGI server
1138app = Starlette(
1139    routes=[
1140        Mount('/', app=mcp.sse_app()),
1141    ]
1142)
1143
1144# or dynamically mount as host
1145app.router.routes.append(Host('mcp.acme.corp', app=mcp.sse_app()))
1146```
1147
1148For more information on mounting applications in Starlette, see the [Starlette documentation](https://www.starlette.io/routing/#submounting-routes).
1149
1150## Examples
1151
1152### Echo Server
1153
1154A simple server demonstrating resources, tools, and prompts:
1155
1156```python
1157from mcp.server.fastmcp import FastMCP
1158
1159mcp = FastMCP("Echo")
1160
1161
1162@mcp.resource("echo://{message}")
1163def echo_resource(message: str) -> str:
1164    """Echo a message as a resource"""
1165    return f"Resource echo: {message}"
1166
1167
1168@mcp.tool()
1169def echo_tool(message: str) -> str:
1170    """Echo a message as a tool"""
1171    return f"Tool echo: {message}"
1172
1173
1174@mcp.prompt()
1175def echo_prompt(message: str) -> str:
1176    """Create an echo prompt"""
1177    return f"Please process this message: {message}"
1178```
1179
1180### SQLite Explorer
1181
1182A more complex example showing database integration:
1183
1184```python
1185import sqlite3
1186
1187from mcp.server.fastmcp import FastMCP
1188
1189mcp = FastMCP("SQLite Explorer")
1190
1191
1192@mcp.resource("schema://main")
1193def get_schema() -> str:
1194    """Provide the database schema as a resource"""
1195    conn = sqlite3.connect("database.db")
1196    schema = conn.execute("SELECT sql FROM sqlite_master WHERE type='table'").fetchall()
1197    return "\n".join(sql[0] for sql in schema if sql[0])
1198
1199
1200@mcp.tool()
1201def query_data(sql: str) -> str:
1202    """Execute SQL queries safely"""
1203    conn = sqlite3.connect("database.db")
1204    try:
1205        result = conn.execute(sql).fetchall()
1206        return "\n".join(str(row) for row in result)
1207    except Exception as e:
1208        return f"Error: {str(e)}"
1209```
1210
1211## Advanced Usage
1212
1213### Low-Level Server
1214
1215For more control, you can use the low-level server implementation directly. This gives you full access to the protocol and allows you to customize every aspect of your server, including lifecycle management through the lifespan API:
1216
1217```python
1218from contextlib import asynccontextmanager
1219from collections.abc import AsyncIterator
1220
1221from fake_database import Database  # Replace with your actual DB type
1222
1223from mcp.server import Server
1224
1225
1226@asynccontextmanager
1227async def server_lifespan(server: Server) -> AsyncIterator[dict]:
1228    """Manage server startup and shutdown lifecycle."""
1229    # Initialize resources on startup
1230    db = await Database.connect()
1231    try:
1232        yield {"db": db}
1233    finally:
1234        # Clean up on shutdown
1235        await db.disconnect()
1236
1237
1238# Pass lifespan to server
1239server = Server("example-server", lifespan=server_lifespan)
1240
1241
1242# Access lifespan context in handlers
1243@server.call_tool()
1244async def query_db(name: str, arguments: dict) -> list:
1245    ctx = server.request_context
1246    db = ctx.lifespan_context["db"]
1247    return await db.query(arguments["query"])
1248```
1249
1250The lifespan API provides:
1251- A way to initialize resources when the server starts and clean them up when it stops
1252- Access to initialized resources through the request context in handlers
1253- Type-safe context passing between lifespan and request handlers
1254
1255```python
1256import mcp.server.stdio
1257import mcp.types as types
1258from mcp.server.lowlevel import NotificationOptions, Server
1259from mcp.server.models import InitializationOptions
1260
1261# Create a server instance
1262server = Server("example-server")
1263
1264
1265@server.list_prompts()
1266async def handle_list_prompts() -> list[types.Prompt]:
1267    return [
1268        types.Prompt(
1269            name="example-prompt",
1270            description="An example prompt template",
1271            arguments=[
1272                types.PromptArgument(
1273                    name="arg1", description="Example argument", required=True
1274                )
1275            ],
1276        )
1277    ]
1278
1279
1280@server.get_prompt()
1281async def handle_get_prompt(
1282    name: str, arguments: dict[str, str] | None
1283) -> types.GetPromptResult:
1284    if name != "example-prompt":
1285        raise ValueError(f"Unknown prompt: {name}")
1286
1287    return types.GetPromptResult(
1288        description="Example prompt",
1289        messages=[
1290            types.PromptMessage(
1291                role="user",
1292                content=types.TextContent(type="text", text="Example prompt text"),
1293            )
1294        ],
1295    )
1296
1297
1298async def run():
1299    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
1300        await server.run(
1301            read_stream,
1302            write_stream,
1303            InitializationOptions(
1304                server_name="example",
1305                server_version="0.1.0",
1306                capabilities=server.get_capabilities(
1307                    notification_options=NotificationOptions(),
1308                    experimental_capabilities={},
1309                ),
1310            ),
1311        )
1312
1313
1314if __name__ == "__main__":
1315    import asyncio
1316
1317    asyncio.run(run())
1318```
1319
1320### Writing MCP Clients
1321
1322The SDK provides a high-level client interface for connecting to MCP servers:
1323
1324```python
1325from mcp import ClientSession, StdioServerParameters, types
1326from mcp.client.stdio import stdio_client
1327
1328# Create server parameters for stdio connection
1329server_params = StdioServerParameters(
1330    command="python",  # Executable
1331    args=["example_server.py"],  # Optional command line arguments
1332    env=None,  # Optional environment variables
1333)
1334
1335
1336# Optional: create a sampling callback
1337async def handle_sampling_message(
1338    message: types.CreateMessageRequestParams,
1339) -> types.CreateMessageResult:
1340    return types.CreateMessageResult(
1341        role="assistant",
1342        content=types.TextContent(
1343            type="text",
1344            text="Hello, world! from model",
1345        ),
1346        model="gpt-3.5-turbo",
1347        stopReason="endTurn",
1348    )
1349
1350
1351async def run():
1352    async with stdio_client(server_params) as (read, write):
1353        async with ClientSession(
1354            read, write, sampling_callback=handle_sampling_message
1355        ) as session:
1356            # Initialize the connection
1357            await session.initialize()
1358
1359            # List available prompts
1360            prompts = await session.list_prompts()
1361
1362            # Get a prompt
1363            prompt = await session.get_prompt(
1364                "example-prompt", arguments={"arg1": "value"}
1365            )
1366
1367            # List available resources
1368            resources = await session.list_resources()
1369
1370            # List available tools
1371            tools = await session.list_tools()
1372
1373            # Read a resource
1374            content, mime_type = await session.read_resource("file://some/path")
1375
1376            # Call a tool
1377            result = await session.call_tool("tool-name", arguments={"arg1": "value"})
1378
1379
1380if __name__ == "__main__":
1381    import asyncio
1382
1383    asyncio.run(run())
1384```
1385
1386### MCP Primitives
1387
1388The MCP protocol defines three core primitives that servers can implement:
1389
1390| Primitive | Control               | Description                                         | Example Use                  |
1391|-----------|-----------------------|-----------------------------------------------------|------------------------------|
1392| Prompts   | User-controlled       | Interactive templates invoked by user choice        | Slash commands, menu options |
1393| Resources | Application-controlled| Contextual data managed by the client application   | File contents, API responses |
1394| Tools     | Model-controlled      | Functions exposed to the LLM to take actions        | API calls, data updates      |
1395
1396### Server Capabilities
1397
1398MCP servers declare capabilities during initialization:
1399
1400| Capability  | Feature Flag                 | Description                        |
1401|-------------|------------------------------|------------------------------------|
1402| `prompts`   | `listChanged`                | Prompt template management         |
1403| `resources` | `subscribe`<br/>`listChanged`| Resource exposure and updates      |
1404| `tools`     | `listChanged`                | Tool discovery and execution       |
1405| `logging`   | -                            | Server logging configuration       |
1406| `completion`| -                            | Argument completion suggestions    |
1407
1408## Documentation
1409
1410- [Model Context Protocol documentation](https://modelcontextprotocol.io)
1411- [Model Context Protocol specification](https://spec.modelcontextprotocol.io)
1412- [Officially supported servers](https://github.com/modelcontextprotocol/servers)
1413
1414## Contributing
1415
1416We are passionate about supporting contributors of all levels of experience and would love to see you get involved in the project. See the [contributing guide](CONTRIBUTING.md) to get started.
1417
1418## License
1419
1420This project is licensed under the MIT License - see the LICENSE file for details.
1421
1422
1423MCP Python SDK example of an MCP client:
1424```py
1425import asyncio
1426import json
1427import logging
1428import os
1429import shutil
1430from contextlib import AsyncExitStack
1431from typing import Any
1432
1433import httpx
1434from dotenv import load_dotenv
1435from mcp import ClientSession, StdioServerParameters
1436from mcp.client.stdio import stdio_client
1437
1438# Configure logging
1439logging.basicConfig(
1440    level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s"
1441)
1442
1443
1444class Configuration:
1445    """Manages configuration and environment variables for the MCP client."""
1446
1447    def __init__(self) -> None:
1448        """Initialize configuration with environment variables."""
1449        self.load_env()
1450        self.api_key = os.getenv("LLM_API_KEY")
1451
1452    @staticmethod
1453    def load_env() -> None:
1454        """Load environment variables from .env file."""
1455        load_dotenv()
1456
1457    @staticmethod
1458    def load_config(file_path: str) -> dict[str, Any]:
1459        """Load server configuration from JSON file.
1460
1461        Args:
1462            file_path: Path to the JSON configuration file.
1463
1464        Returns:
1465            Dict containing server configuration.
1466
1467        Raises:
1468            FileNotFoundError: If configuration file doesn't exist.
1469            JSONDecodeError: If configuration file is invalid JSON.
1470        """
1471        with open(file_path, "r") as f:
1472            return json.load(f)
1473
1474    @property
1475    def llm_api_key(self) -> str:
1476        """Get the LLM API key.
1477
1478        Returns:
1479            The API key as a string.
1480
1481        Raises:
1482            ValueError: If the API key is not found in environment variables.
1483        """
1484        if not self.api_key:
1485            raise ValueError("LLM_API_KEY not found in environment variables")
1486        return self.api_key
1487
1488
1489class Server:
1490    """Manages MCP server connections and tool execution."""
1491
1492    def __init__(self, name: str, config: dict[str, Any]) -> None:
1493        self.name: str = name
1494        self.config: dict[str, Any] = config
1495        self.stdio_context: Any | None = None
1496        self.session: ClientSession | None = None
1497        self._cleanup_lock: asyncio.Lock = asyncio.Lock()
1498        self.exit_stack: AsyncExitStack = AsyncExitStack()
1499
1500    async def initialize(self) -> None:
1501        """Initialize the server connection."""
1502        command = (
1503            shutil.which("npx")
1504            if self.config["command"] == "npx"
1505            else self.config["command"]
1506        )
1507        if command is None:
1508            raise ValueError("The command must be a valid string and cannot be None.")
1509
1510        server_params = StdioServerParameters(
1511            command=command,
1512            args=self.config["args"],
1513            env={**os.environ, **self.config["env"]}
1514            if self.config.get("env")
1515            else None,
1516        )
1517        try:
1518            stdio_transport = await self.exit_stack.enter_async_context(
1519                stdio_client(server_params)
1520            )
1521            read, write = stdio_transport
1522            session = await self.exit_stack.enter_async_context(
1523                ClientSession(read, write)
1524            )
1525            await session.initialize()
1526            self.session = session
1527        except Exception as e:
1528            logging.error(f"Error initializing server {self.name}: {e}")
1529            await self.cleanup()
1530            raise
1531
1532    async def list_tools(self) -> list[Any]:
1533        """List available tools from the server.
1534
1535        Returns:
1536            A list of available tools.
1537
1538        Raises:
1539            RuntimeError: If the server is not initialized.
1540        """
1541        if not self.session:
1542            raise RuntimeError(f"Server {self.name} not initialized")
1543
1544        tools_response = await self.session.list_tools()
1545        tools = []
1546
1547        for item in tools_response:
1548            if isinstance(item, tuple) and item[0] == "tools":
1549                for tool in item[1]:
1550                    tools.append(Tool(tool.name, tool.description, tool.inputSchema))
1551
1552        return tools
1553
1554    async def execute_tool(
1555        self,
1556        tool_name: str,
1557        arguments: dict[str, Any],
1558        retries: int = 2,
1559        delay: float = 1.0,
1560    ) -> Any:
1561        """Execute a tool with retry mechanism.
1562
1563        Args:
1564            tool_name: Name of the tool to execute.
1565            arguments: Tool arguments.
1566            retries: Number of retry attempts.
1567            delay: Delay between retries in seconds.
1568
1569        Returns:
1570            Tool execution result.
1571
1572        Raises:
1573            RuntimeError: If server is not initialized.
1574            Exception: If tool execution fails after all retries.
1575        """
1576        if not self.session:
1577            raise RuntimeError(f"Server {self.name} not initialized")
1578
1579        attempt = 0
1580        while attempt < retries:
1581            try:
1582                logging.info(f"Executing {tool_name}...")
1583                result = await self.session.call_tool(tool_name, arguments)
1584
1585                return result
1586
1587            except Exception as e:
1588                attempt += 1
1589                logging.warning(
1590                    f"Error executing tool: {e}. Attempt {attempt} of {retries}."
1591                )
1592                if attempt < retries:
1593                    logging.info(f"Retrying in {delay} seconds...")
1594                    await asyncio.sleep(delay)
1595                else:
1596                    logging.error("Max retries reached. Failing.")
1597                    raise
1598
1599    async def cleanup(self) -> None:
1600        """Clean up server resources."""
1601        async with self._cleanup_lock:
1602            try:
1603                await self.exit_stack.aclose()
1604                self.session = None
1605                self.stdio_context = None
1606            except Exception as e:
1607                logging.error(f"Error during cleanup of server {self.name}: {e}")
1608
1609
1610class Tool:
1611    """Represents a tool with its properties and formatting."""
1612
1613    def __init__(
1614        self, name: str, description: str, input_schema: dict[str, Any]
1615    ) -> None:
1616        self.name: str = name
1617        self.description: str = description
1618        self.input_schema: dict[str, Any] = input_schema
1619
1620    def format_for_llm(self) -> str:
1621        """Format tool information for LLM.
1622
1623        Returns:
1624            A formatted string describing the tool.
1625        """
1626        args_desc = []
1627        if "properties" in self.input_schema:
1628            for param_name, param_info in self.input_schema["properties"].items():
1629                arg_desc = (
1630                    f"- {param_name}: {param_info.get('description', 'No description')}"
1631                )
1632                if param_name in self.input_schema.get("required", []):
1633                    arg_desc += " (required)"
1634                args_desc.append(arg_desc)
1635
1636        return f"""
1637Tool: {self.name}
1638Description: {self.description}
1639Arguments:
1640{chr(10).join(args_desc)}
1641"""
1642
1643
1644class LLMClient:
1645    """Manages communication with the LLM provider."""
1646
1647    def __init__(self, api_key: str) -> None:
1648        self.api_key: str = api_key
1649
1650    def get_response(self, messages: list[dict[str, str]]) -> str:
1651        """Get a response from the LLM.
1652
1653        Args:
1654            messages: A list of message dictionaries.
1655
1656        Returns:
1657            The LLM's response as a string.
1658
1659        Raises:
1660            httpx.RequestError: If the request to the LLM fails.
1661        """
1662        url = "https://api.groq.com/openai/v1/chat/completions"
1663
1664        headers = {
1665            "Content-Type": "application/json",
1666            "Authorization": f"Bearer {self.api_key}",
1667        }
1668        payload = {
1669            "messages": messages,
1670            "model": "llama-3.2-90b-vision-preview",
1671            "temperature": 0.7,
1672            "max_tokens": 4096,
1673            "top_p": 1,
1674            "stream": False,
1675            "stop": None,
1676        }
1677
1678        try:
1679            with httpx.Client() as client:
1680                response = client.post(url, headers=headers, json=payload)
1681                response.raise_for_status()
1682                data = response.json()
1683                return data["choices"][0]["message"]["content"]
1684
1685        except httpx.RequestError as e:
1686            error_message = f"Error getting LLM response: {str(e)}"
1687            logging.error(error_message)
1688
1689            if isinstance(e, httpx.HTTPStatusError):
1690                status_code = e.response.status_code
1691                logging.error(f"Status code: {status_code}")
1692                logging.error(f"Response details: {e.response.text}")
1693
1694            return (
1695                f"I encountered an error: {error_message}. "
1696                "Please try again or rephrase your request."
1697            )
1698
1699
1700class ChatSession:
1701    """Orchestrates the interaction between user, LLM, and tools."""
1702
1703    def __init__(self, servers: list[Server], llm_client: LLMClient) -> None:
1704        self.servers: list[Server] = servers
1705        self.llm_client: LLMClient = llm_client
1706
1707    async def cleanup_servers(self) -> None:
1708        """Clean up all servers properly."""
1709        cleanup_tasks = []
1710        for server in self.servers:
1711            cleanup_tasks.append(asyncio.create_task(server.cleanup()))
1712
1713        if cleanup_tasks:
1714            try:
1715                await asyncio.gather(*cleanup_tasks, return_exceptions=True)
1716            except Exception as e:
1717                logging.warning(f"Warning during final cleanup: {e}")
1718
1719    async def process_llm_response(self, llm_response: str) -> str:
1720        """Process the LLM response and execute tools if needed.
1721
1722        Args:
1723            llm_response: The response from the LLM.
1724
1725        Returns:
1726            The result of tool execution or the original response.
1727        """
1728        import json
1729
1730        try:
1731            tool_call = json.loads(llm_response)
1732            if "tool" in tool_call and "arguments" in tool_call:
1733                logging.info(f"Executing tool: {tool_call['tool']}")
1734                logging.info(f"With arguments: {tool_call['arguments']}")
1735
1736                for server in self.servers:
1737                    tools = await server.list_tools()
1738                    if any(tool.name == tool_call["tool"] for tool in tools):
1739                        try:
1740                            result = await server.execute_tool(
1741                                tool_call["tool"], tool_call["arguments"]
1742                            )
1743
1744                            if isinstance(result, dict) and "progress" in result:
1745                                progress = result["progress"]
1746                                total = result["total"]
1747                                percentage = (progress / total) * 100
1748                                logging.info(
1749                                    f"Progress: {progress}/{total} "
1750                                    f"({percentage:.1f}%)"
1751                                )
1752
1753                            return f"Tool execution result: {result}"
1754                        except Exception as e:
1755                            error_msg = f"Error executing tool: {str(e)}"
1756                            logging.error(error_msg)
1757                            return error_msg
1758
1759                return f"No server found with tool: {tool_call['tool']}"
1760            return llm_response
1761        except json.JSONDecodeError:
1762            return llm_response
1763
1764    async def start(self) -> None:
1765        """Main chat session handler."""
1766        try:
1767            for server in self.servers:
1768                try:
1769                    await server.initialize()
1770                except Exception as e:
1771                    logging.error(f"Failed to initialize server: {e}")
1772                    await self.cleanup_servers()
1773                    return
1774
1775            all_tools = []
1776            for server in self.servers:
1777                tools = await server.list_tools()
1778                all_tools.extend(tools)
1779
1780            tools_description = "\n".join([tool.format_for_llm() for tool in all_tools])
1781
1782            system_message = (
1783                "You are a helpful assistant with access to these tools:\n\n"
1784                f"{tools_description}\n"
1785                "Choose the appropriate tool based on the user's question. "
1786                "If no tool is needed, reply directly.\n\n"
1787                "IMPORTANT: When you need to use a tool, you must ONLY respond with "
1788                "the exact JSON object format below, nothing else:\n"
1789                "{\n"
1790                '    "tool": "tool-name",\n'
1791                '    "arguments": {\n'
1792                '        "argument-name": "value"\n'
1793                "    }\n"
1794                "}\n\n"
1795                "After receiving a tool's response:\n"
1796                "1. Transform the raw data into a natural, conversational response\n"
1797                "2. Keep responses concise but informative\n"
1798                "3. Focus on the most relevant information\n"
1799                "4. Use appropriate context from the user's question\n"
1800                "5. Avoid simply repeating the raw data\n\n"
1801                "Please use only the tools that are explicitly defined above."
1802            )
1803
1804            messages = [{"role": "system", "content": system_message}]
1805
1806            while True:
1807                try:
1808                    user_input = input("You: ").strip().lower()
1809                    if user_input in ["quit", "exit"]:
1810                        logging.info("\nExiting...")
1811                        break
1812
1813                    messages.append({"role": "user", "content": user_input})
1814
1815                    llm_response = self.llm_client.get_response(messages)
1816                    logging.info("\nAssistant: %s", llm_response)
1817
1818                    result = await self.process_llm_response(llm_response)
1819
1820                    if result != llm_response:
1821                        messages.append({"role": "assistant", "content": llm_response})
1822                        messages.append({"role": "system", "content": result})
1823
1824                        final_response = self.llm_client.get_response(messages)
1825                        logging.info("\nFinal response: %s", final_response)
1826                        messages.append(
1827                            {"role": "assistant", "content": final_response}
1828                        )
1829                    else:
1830                        messages.append({"role": "assistant", "content": llm_response})
1831
1832                except KeyboardInterrupt:
1833                    logging.info("\nExiting...")
1834                    break
1835
1836        finally:
1837            await self.cleanup_servers()
1838
1839
1840async def main() -> None:
1841    """Initialize and run the chat session."""
1842    config = Configuration()
1843    server_config = config.load_config("servers_config.json")
1844    servers = [
1845        Server(name, srv_config)
1846        for name, srv_config in server_config["mcpServers"].items()
1847    ]
1848    llm_client = LLMClient(config.llm_api_key)
1849    chat_session = ChatSession(servers, llm_client)
1850    await chat_session.start()
1851
1852
1853if __name__ == "__main__":
1854    asyncio.run(main())
1855```
1856
1857
1858
1859
1860JSON schema for Claude Code tools available via MCP:
1861```json
1862{
1863    "jsonrpc": "2.0",
1864    "id": 1,
1865    "result": {
1866        "tools": [
1867            {
1868                "name": "dispatch_agent",
1869                "description": "Launch a new task",
1870                "inputSchema": {
1871                    "type": "object",
1872                    "properties": {
1873                        "prompt": {
1874                            "type": "string",
1875                            "description": "The task for the agent to perform"
1876                        }
1877                    },
1878                    "required": [
1879                        "prompt"
1880                    ],
1881                    "additionalProperties": false,
1882                    "$schema": "http://json-schema.org/draft-07/schema#"
1883                }
1884            },
1885            {
1886                "name": "Bash",
1887                "description": "Run shell command",
1888                "inputSchema": {
1889                    "type": "object",
1890                    "properties": {
1891                        "command": {
1892                            "type": "string",
1893                            "description": "The command to execute"
1894                        },
1895                        "timeout": {
1896                            "type": "number",
1897                            "description": "Optional timeout in milliseconds (max 600000)"
1898                        },
1899                        "description": {
1900                            "type": "string",
1901                            "description": " Clear, concise description of what this command does in 5-10 words. Examples:\nInput: ls\nOutput: Lists files in current directory\n\nInput: git status\nOutput: Shows working tree status\n\nInput: npm install\nOutput: Installs package dependencies\n\nInput: mkdir foo\nOutput: Creates directory 'foo'"
1902                        }
1903                    },
1904                    "required": [
1905                        "command"
1906                    ],
1907                    "additionalProperties": false,
1908                    "$schema": "http://json-schema.org/draft-07/schema#"
1909                }
1910            },
1911            {
1912                "name": "BatchTool",
1913                "description": "\n- Batch execution tool that runs multiple tool invocations in a single request\n- Tools are executed in parallel when possible, and otherwise serially\n- Takes a list of tool invocations (tool_name and input pairs)\n- Returns the collected results from all invocations\n- Use this tool when you need to run multiple independent tool operations at once -- it is awesome for speeding up your workflow, reducing both context usage and latency\n- Each tool will respect its own permissions and validation rules\n- The tool's outputs are NOT shown to the user; to answer the user's query, you MUST send a message with the results after the tool call completes, otherwise the user will not see the results\n\nAvailable tools:\nTool: dispatch_agent\nArguments: prompt: string \"The task for the agent to perform\"\nUsage: Launch a new agent that has access to the following tools: View, GlobTool, GrepTool, LS, ReadNotebook, WebFetchTool. When you are searching for a keyword or file and are not confident that you will find the right match in the first few tries, use the Agent tool to perform the search for you.\n\nWhen to use the Agent tool:\n- If you are searching for a keyword like \"config\" or \"logger\", or for questions like \"which file does X?\", the Agent tool is strongly recommended\n\nWhen NOT to use the Agent tool:\n- If you want to read a specific file path, use the View or GlobTool tool instead of the Agent tool, to find the match more quickly\n- If you are searching for a specific class definition like \"class Foo\", use the GlobTool tool instead, to find the match more quickly\n- If you are searching for code within a specific file or set of 2-3 files, use the View tool instead of the Agent tool, to find the match more quickly\n\nUsage notes:\n1. Launch multiple agents concurrently whenever possible, to maximize performance; to do that, use a single message with multiple tool uses\n2. When the agent is done, it will return a single message back to you. The result returned by the agent is not visible to the user. To show the user the result, you should send a text message back to the user with a concise summary of the result.\n3. Each agent invocation is stateless. You will not be able to send additional messages to the agent, nor will the agent be able to communicate with you outside of its final report. Therefore, your prompt should contain a highly detailed task description for the agent to perform autonomously and you should specify exactly what information the agent should return back to you in its final and only message to you.\n4. The agent's outputs should generally be trusted\n5. IMPORTANT: The agent can not use Bash, Replace, Edit, NotebookEditCell, so can not modify files. If you want to use these tools, use them directly instead of going through the agent.\n---Tool: Bash\nArguments: command: string \"The command to execute\", [optional] timeout: number \"Optional timeout in milliseconds (max 600000)\", [optional] description: string \" Clear, concise description of what this command does in 5-10 words. Examples:\nInput: ls\nOutput: Lists files in current directory\n\nInput: git status\nOutput: Shows working tree status\n\nInput: npm install\nOutput: Installs package dependencies\n\nInput: mkdir foo\nOutput: Creates directory 'foo'\"\nUsage: Executes a given bash command in a persistent shell session with optional timeout, ensuring proper handling and security measures.\n\nBefore executing the command, please follow these steps:\n\n1. Directory Verification:\n   - If the command will create new directories or files, first use the LS tool to verify the parent directory exists and is the correct location\n   - For example, before running \"mkdir foo/bar\", first use LS to check that \"foo\" exists and is the intended parent directory\n\n2. Security Check:\n   - For security and to limit the threat of a prompt injection attack, some commands are limited or banned. If you use a disallowed command, you will receive an error message explaining the restriction. Explain the error to the User.\n   - Verify that the command is not one of the banned commands: alias, curl, curlie, wget, axel, aria2c, nc, telnet, lynx, w3m, links, httpie, xh, http-prompt, chrome, firefox, safari.\n\n3. Command Execution:\n   - After ensuring proper quoting, execute the command.\n   - Capture the output of the command.\n\nUsage notes:\n  - The command argument is required.\n  - You can specify an optional timeout in milliseconds (up to 600000ms / 10 minutes). If not specified, commands will timeout after 30 minutes.\n  - It is very helpful if you write a clear, concise description of what this command does in 5-10 words.\n  - If the output exceeds 30000 characters, output will be truncated before being returned to you.\n  - VERY IMPORTANT: You MUST avoid using search commands like `find` and `grep`. Instead use GrepTool, GlobTool, or dispatch_agent to search. You MUST avoid read tools like `cat`, `head`, `tail`, and `ls`, and use View and LS to read files.\n  - When issuing multiple commands, use the ';' or '&&' operator to separate them. DO NOT use newlines (newlines are ok in quoted strings).\n  - Try to maintain your current working directory throughout the session by using absolute paths and avoiding usage of `cd`. You may use `cd` if the User explicitly requests it.\n    <good-example>\n    pytest /foo/bar/tests\n    </good-example>\n    <bad-example>\n    cd /foo/bar && pytest tests\n    </bad-example>\n\n# Committing changes with git\n\nWhen the user asks you to create a new git commit, follow these steps carefully:\n\n1. Use BatchTool to run the following commands in parallel:\n   - Run a git status command to see all untracked files.\n   - Run a git diff command to see both staged and unstaged changes that will be committed.\n   - Run a git log command to see recent commit messages, so that you can follow this repository's commit message style.\n\n2. Analyze all staged changes (both previously staged and newly added) and draft a commit message. Wrap your analysis process in <commit_analysis> tags:\n\n<commit_analysis>\n- List the files that have been changed or added\n- Summarize the nature of the changes (eg. new feature, enhancement to an existing feature, bug fix, refactoring, test, docs, etc.)\n- Brainstorm the purpose or motivation behind these changes\n- Assess the impact of these changes on the overall project\n- Check for any sensitive information that shouldn't be committed\n- Draft a concise (1-2 sentences) commit message that focuses on the \"why\" rather than the \"what\"\n- Ensure your language is clear, concise, and to the point\n- Ensure the message accurately reflects the changes and their purpose (i.e. \"add\" means a wholly new feature, \"update\" means an enhancement to an existing feature, \"fix\" means a bug fix, etc.)\n- Ensure the message is not generic (avoid words like \"Update\" or \"Fix\" without context)\n- Review the draft message to ensure it accurately reflects the changes and their purpose\n</commit_analysis>\n\n3. Use BatchTool to run the following commands in parallel:\n   - Add relevant untracked files to the staging area.\n   - Create the commit with a message ending with:\n   🤖 Generated with [Claude Code](https://claude.ai/code)\n\n   Co-Authored-By: Claude <noreply@anthropic.com>\n   - Run git status to make sure the commit succeeded.\n\n4. If the commit fails due to pre-commit hook changes, retry the commit ONCE to include these automated changes. If it fails again, it usually means a pre-commit hook is preventing the commit. If the commit succeeds but you notice that files were modified by the pre-commit hook, you MUST amend your commit to include them.\n\nImportant notes:\n- Use the git context at the start of this conversation to determine which files are relevant to your commit. Be careful not to stage and commit files (e.g. with `git add .`) that aren't relevant to your commit.\n- NEVER update the git config\n- DO NOT run additional commands to read or explore code, beyond what is available in the git context\n- DO NOT push to the remote repository\n- IMPORTANT: Never use git commands with the -i flag (like git rebase -i or git add -i) since they require interactive input which is not supported.\n- If there are no changes to commit (i.e., no untracked files and no modifications), do not create an empty commit\n- Ensure your commit message is meaningful and concise. It should explain the purpose of the changes, not just describe them.\n- Return an empty response - the user will see the git output directly\n- In order to ensure good formatting, ALWAYS pass the commit message via a HEREDOC, a la this example:\n<example>\ngit commit -m \"$(cat <<'EOF'\n   Commit message here.\n\n   🤖 Generated with [Claude Code](https://claude.ai/code)\n\n   Co-Authored-By: Claude <noreply@anthropic.com>\n   EOF\n   )\"\n</example>\n\n# Creating pull requests\nUse the gh command via the Bash tool for ALL GitHub-related tasks including working with issues, pull requests, checks, and releases. If given a Github URL use the gh command to get the information needed.\n\nIMPORTANT: When the user asks you to create a pull request, follow these steps carefully:\n\n1. Use BatchTool to run the following commands in parallel, in order to understand the current state of the branch since it diverged from the main branch:\n   - Run a git status command to see all untracked files\n   - Run a git diff command to see both staged and unstaged changes that will be committed\n   - Check if the current branch tracks a remote branch and is up to date with the remote, so you know if you need to push to the remote\n   - Run a git log command and `git diff main...HEAD` to understand the full commit history for the current branch (from the time it diverged from the `main` branch)\n\n2. Analyze all changes that will be included in the pull request, making sure to look at all relevant commits (NOT just the latest commit, but ALL commits that will be included in the pull request!!!), and draft a pull request summary. Wrap your analysis process in <pr_analysis> tags:\n\n<pr_analysis>\n- List the commits since diverging from the main branch\n- Summarize the nature of the changes (eg. new feature, enhancement to an existing feature, bug fix, refactoring, test, docs, etc.)\n- Brainstorm the purpose or motivation behind these changes\n- Assess the impact of these changes on the overall project\n- Do not use tools to explore code, beyond what is available in the git context\n- Check for any sensitive information that shouldn't be committed\n- Draft a concise (1-2 bullet points) pull request summary that focuses on the \"why\" rather than the \"what\"\n- Ensure the summary accurately reflects all changes since diverging from the main branch\n- Ensure your language is clear, concise, and to the point\n- Ensure the summary accurately reflects the changes and their purpose (ie. \"add\" means a wholly new feature, \"update\" means an enhancement to an existing feature, \"fix\" means a bug fix, etc.)\n- Ensure the summary is not generic (avoid words like \"Update\" or \"Fix\" without context)\n- Review the draft summary to ensure it accurately reflects the changes and their purpose\n</pr_analysis>\n\n3. Use BatchTool to run the following commands in parallel:\n   - Create new branch if needed\n   - Push to remote with -u flag if needed\n   - Create PR using gh pr create with the format below. Use a HEREDOC to pass the body to ensure correct formatting.\n<example>\ngh pr create --title \"the pr title\" --body \"$(cat <<'EOF'\n## Summary\n<1-3 bullet points>\n\n## Test plan\n[Checklist of TODOs for testing the pull request...]\n\n🤖 Generated with [Claude Code](https://claude.ai/code)\nEOF\n)\"\n</example>\n\nImportant:\n- NEVER update the git config\n- Return an empty response - the user will see the gh output directly\n\n# Other common operations\n- View comments on a Github PR: gh api repos/foo/bar/pulls/123/comments\n---Tool: GlobTool\nArguments: pattern: string \"The glob pattern to match files against\", [optional] path: string \"The directory to search in. If not specified, the current working directory will be used. IMPORTANT: Omit this field to use the default directory. DO NOT enter \"undefined\" or \"null\" - simply omit it for the default behavior. Must be a valid directory path if provided.\"\nUsage: - Fast file pattern matching tool that works with any codebase size\n- Supports glob patterns like \"**/*.js\" or \"src/**/*.ts\"\n- Returns matching file paths sorted by modification time\n- Use this tool when you need to find files by name patterns\n- When you are doing an open ended search that may require multiple rounds of globbing and grepping, use the Agent tool instead\n\n---Tool: GrepTool\nArguments: pattern: string \"The regular expression pattern to search for in file contents\", [optional] path: string \"The directory to search in. Defaults to the current working directory.\", [optional] include: string \"File pattern to include in the search (e.g. \"*.js\", \"*.{ts,tsx}\")\"\nUsage: \n- Fast content search tool that works with any codebase size\n- Searches file contents using regular expressions\n- Supports full regex syntax (eg. \"log.*Error\", \"function\\s+\\w+\", etc.)\n- Filter files by pattern with the include parameter (eg. \"*.js\", \"*.{ts,tsx}\")\n- Returns matching file paths sorted by modification time\n- Use this tool when you need to find files containing specific patterns\n- When you are doing an open ended search that may require multiple rounds of globbing and grepping, use the Agent tool instead\n\n---Tool: LS\nArguments: path: string \"The absolute path to the directory to list (must be absolute, not relative)\", [optional] ignore: array \"List of glob patterns to ignore\"\nUsage: Lists files and directories in a given path. The path parameter must be an absolute path, not a relative path. You can optionally provide an array of glob patterns to ignore with the ignore parameter. You should generally prefer the Glob and Grep tools, if you know which directories to search.\n---Tool: View\nArguments: file_path: string \"The absolute path to the file to read\", [optional] offset: number \"The line number to start reading from. Only provide if the file is too large to read at once\", [optional] limit: number \"The number of lines to read. Only provide if the file is too large to read at once.\"\nUsage: Reads a file from the local filesystem. You can access any file directly by using this tool.\nAssume this tool is able to read all files on the machine. If the User provides a path to a file assume that path is valid. It is okay to read a file that does not exist; an error will be returned.\n\nUsage:\n- The file_path parameter must be an absolute path, not a relative path\n- By default, it reads up to 2000 lines starting from the beginning of the file\n- You can optionally specify a line offset and limit (especially handy for long files), but it's recommended to read the whole file by not providing these parameters\n- Any lines longer than 2000 characters will be truncated\n- Results are returned using cat -n format, with line numbers starting at 1\n- This tool allows Claude Code to VIEW images (eg PNG, JPG, etc). When reading an image file the contents are presented visually as Claude Code is a multimodal LLM.\n- For Jupyter notebooks (.ipynb files), use the ReadNotebook instead\n- When reading multiple files, you MUST use the BatchTool tool to read them all at once\n---Tool: Edit\nArguments: file_path: string \"The absolute path to the file to modify\", old_string: string \"The text to replace\", new_string: string \"The text to replace it with\", [optional] expected_replacements: number \"The expected number of replacements to perform. Defaults to 1 if not specified.\"\nUsage: This is a tool for editing files. For moving or renaming files, you should generally use the Bash tool with the 'mv' command instead. For larger edits, use the Write tool to overwrite files. For Jupyter notebooks (.ipynb files), use the NotebookEditCell instead.\n\nBefore using this tool:\n\n1. Use the View tool to understand the file's contents and context\n\n2. Verify the directory path is correct (only applicable when creating new files):\n   - Use the LS tool to verify the parent directory exists and is the correct location\n\nTo make a file edit, provide the following:\n1. file_path: The absolute path to the file to modify (must be absolute, not relative)\n2. old_string: The text to replace (must match the file contents exactly, including all whitespace and indentation)\n3. new_string: The edited text to replace the old_string\n4. expected_replacements: The number of replacements you expect to make. Defaults to 1 if not specified.\n\nBy default, the tool will replace ONE occurrence of old_string with new_string in the specified file. If you want to replace multiple occurrences, provide the expected_replacements parameter with the exact number of occurrences you expect.\n\nCRITICAL REQUIREMENTS FOR USING THIS TOOL:\n\n1. UNIQUENESS (when expected_replacements is not specified): The old_string MUST uniquely identify the specific instance you want to change. This means:\n   - Include AT LEAST 3-5 lines of context BEFORE the change point\n   - Include AT LEAST 3-5 lines of context AFTER the change point\n   - Include all whitespace, indentation, and surrounding code exactly as it appears in the file\n\n2. EXPECTED MATCHES: If you want to replace multiple instances:\n   - Use the expected_replacements parameter with the exact number of occurrences you expect to replace\n   - This will replace ALL occurrences of the old_string with the new_string\n   - If the actual number of matches doesn't equal expected_replacements, the edit will fail\n   - This is a safety feature to prevent unintended replacements\n\n3. VERIFICATION: Before using this tool:\n   - Check how many instances of the target text exist in the file\n   - If multiple instances exist, either:\n     a) Gather enough context to uniquely identify each one and make separate calls, OR\n     b) Use expected_replacements parameter with the exact count of instances you expect to replace\n\nWARNING: If you do not follow these requirements:\n   - The tool will fail if old_string matches multiple locations and expected_replacements isn't specified\n   - The tool will fail if the number of matches doesn't equal expected_replacements when it's specified\n   - The tool will fail if old_string doesn't match exactly (including whitespace)\n   - You may change unintended instances if you don't verify the match count\n\nWhen making edits:\n   - Ensure the edit results in idiomatic, correct code\n   - Do not leave the code in a broken state\n   - Always use absolute file paths (starting with /)\n\nIf you want to create a new file, use:\n   - A new file path, including dir name if needed\n   - An empty old_string\n   - The new file's contents as new_string\n\nRemember: when making multiple file edits in a row to the same file, you should prefer to send all edits in a single message with multiple calls to this tool, rather than multiple messages with a single call each.\n\n---Tool: Replace\nArguments: file_path: string \"The absolute path to the file to write (must be absolute, not relative)\", content: string \"The content to write to the file\"\nUsage: Write a file to the local filesystem. Overwrites the existing file if there is one.\n\nBefore using this tool:\n\n1. Use the ReadFile tool to understand the file's contents and context\n\n2. Directory Verification (only applicable when creating new files):\n   - Use the LS tool to verify the parent directory exists and is the correct location\n---Tool: ReadNotebook\nArguments: notebook_path: string \"The absolute path to the Jupyter notebook file to read (must be absolute, not relative)\"\nUsage: Reads a Jupyter notebook (.ipynb file) and returns all of the cells with their outputs. Jupyter notebooks are interactive documents that combine code, text, and visualizations, commonly used for data analysis and scientific computing. The notebook_path parameter must be an absolute path, not a relative path.\n---Tool: NotebookEditCell\nArguments: notebook_path: string \"The absolute path to the Jupyter notebook file to edit (must be absolute, not relative)\", cell_number: number \"The index of the cell to edit (0-based)\", new_source: string \"The new source for the cell\", [optional] cell_type: string \"The type of the cell (code or markdown). If not specified, it defaults to the current cell type. If using edit_mode=insert, this is required.\", [optional] edit_mode: string \"The type of edit to make (replace, insert, delete). Defaults to replace.\"\nUsage: Completely replaces the contents of a specific cell in a Jupyter notebook (.ipynb file) with new source. Jupyter notebooks are interactive documents that combine code, text, and visualizations, commonly used for data analysis and scientific computing. The notebook_path parameter must be an absolute path, not a relative path. The cell_number is 0-indexed. Use edit_mode=insert to add a new cell at the index specified by cell_number. Use edit_mode=delete to delete the cell at the index specified by cell_number.\n---Tool: WebFetchTool\nArguments: url: string \"The URL to fetch content from\", prompt: string \"The prompt to run on the fetched content\"\nUsage: \n- Fetches content from a specified URL and processes it using an AI model\n- Takes a URL and a prompt as input\n- Fetches the URL content, converts HTML to markdown\n- Processes the content with the prompt using a small, fast model\n- Returns the model's response about the content\n- Use this tool when you need to retrieve and analyze web content\n\nUsage notes:\n  - IMPORTANT: If an MCP-provided web fetch tool is available, prefer using that tool instead of this one, as it may have fewer restrictions. All MCP-provided tools start with \"mcp__\".\n  - The URL must be a fully-formed valid URL\n  - HTTP URLs will be automatically upgraded to HTTPS\n  - For security reasons, the URL's domain must have been provided directly by the user, unless it's on a small pre-approved set of the top few dozen hosts for popular coding resources, like react.dev.\n  - The prompt should describe what information you want to extract from the page\n  - This tool is read-only and does not modify any files\n  - Results may be summarized if the content is very large\n  - Includes a self-cleaning 15-minute cache for faster responses when repeatedly accessing the same URL\n\n\nExample usage:\n{\n  \"invocations\": [\n    {\n      \"tool_name\": \"Bash\",\n      \"input\": {\n        \"command\": \"git blame src/foo.ts\"\n      }\n    },\n    {\n      \"tool_name\": \"GlobTool\",\n      \"input\": {\n        \"pattern\": \"**/*.ts\"\n      }\n    },\n    {\n      \"tool_name\": \"GrepTool\",\n      \"input\": {\n        \"pattern\": \"function\",\n        \"include\": \"*.ts\"\n      }\n    }\n  ]\n}\n",
1914                "inputSchema": {
1915                    "type": "object",
1916                    "properties": {
1917                        "description": {
1918                            "type": "string",
1919                            "description": "A short (3-5 word) description of the batch operation"
1920                        },
1921                        "invocations": {
1922                            "type": "array",
1923                            "items": {
1924                                "type": "object",
1925                                "properties": {
1926                                    "tool_name": {
1927                                        "type": "string",
1928                                        "description": "The name of the tool to invoke"
1929                                    },
1930                                    "input": {
1931                                        "type": "object",
1932                                        "additionalProperties": {},
1933                                        "description": "The input to pass to the tool"
1934                                    }
1935                                },
1936                                "required": [
1937                                    "tool_name",
1938                                    "input"
1939                                ],
1940                                "additionalProperties": false
1941                            },
1942                            "description": "The list of tool invocations to execute"
1943                        }
1944                    },
1945                    "required": [
1946                        "description",
1947                        "invocations"
1948                    ],
1949                    "additionalProperties": false,
1950                    "$schema": "http://json-schema.org/draft-07/schema#"
1951                }
1952            },
1953            {
1954                "name": "GlobTool",
1955                "description": "- Fast file pattern matching tool that works with any codebase size\n- Supports glob patterns like \"**/*.js\" or \"src/**/*.ts\"\n- Returns matching file paths sorted by modification time\n- Use this tool when you need to find files by name patterns\n- When you are doing an open ended search that may require multiple rounds of globbing and grepping, use the Agent tool instead\n",
1956                "inputSchema": {
1957                    "type": "object",
1958                    "properties": {
1959                        "pattern": {
1960                            "type": "string",
1961                            "description": "The glob pattern to match files against"
1962                        },
1963                        "path": {
1964                            "type": "string",
1965                            "description": "The directory to search in. If not specified, the current working directory will be used. IMPORTANT: Omit this field to use the default directory. DO NOT enter \"undefined\" or \"null\" - simply omit it for the default behavior. Must be a valid directory path if provided."
1966                        }
1967                    },
1968                    "required": [
1969                        "pattern"
1970                    ],
1971                    "additionalProperties": false,
1972                    "$schema": "http://json-schema.org/draft-07/schema#"
1973                }
1974            },
1975            {
1976                "name": "GrepTool",
1977                "description": "\n- Fast content search tool that works with any codebase size\n- Searches file contents using regular expressions\n- Supports full regex syntax (eg. \"log.*Error\", \"function\\s+\\w+\", etc.)\n- Filter files by pattern with the include parameter (eg. \"*.js\", \"*.{ts,tsx}\")\n- Returns matching file paths sorted by modification time\n- Use this tool when you need to find files containing specific patterns\n- When you are doing an open ended search that may require multiple rounds of globbing and grepping, use the Agent tool instead\n",
1978                "inputSchema": {
1979                    "type": "object",
1980                    "properties": {
1981                        "pattern": {
1982                            "type": "string",
1983                            "description": "The regular expression pattern to search for in file contents"
1984                        },
1985                        "path": {
1986                            "type": "string",
1987                            "description": "The directory to search in. Defaults to the current working directory."
1988                        },
1989                        "include": {
1990                            "type": "string",
1991                            "description": "File pattern to include in the search (e.g. \"*.js\", \"*.{ts,tsx}\")"
1992                        }
1993                    },
1994                    "required": [
1995                        "pattern"
1996                    ],
1997                    "additionalProperties": false,
1998                    "$schema": "http://json-schema.org/draft-07/schema#"
1999                }
2000            },
2001            {
2002                "name": "LS",
2003                "description": "Lists files and directories in a given path. The path parameter must be an absolute path, not a relative path. You can optionally provide an array of glob patterns to ignore with the ignore parameter. You should generally prefer the Glob and Grep tools, if you know which directories to search.",
2004                "inputSchema": {
2005                    "type": "object",
2006                    "properties": {
2007                        "path": {
2008                            "type": "string",
2009                            "description": "The absolute path to the directory to list (must be absolute, not relative)"
2010                        },
2011                        "ignore": {
2012                            "type": "array",
2013                            "items": {
2014                                "type": "string"
2015                            },
2016                            "description": "List of glob patterns to ignore"
2017                        }
2018                    },
2019                    "required": [
2020                        "path"
2021                    ],
2022                    "additionalProperties": false,
2023                    "$schema": "http://json-schema.org/draft-07/schema#"
2024                }
2025            },
2026            {
2027                "name": "View",
2028                "description": "Read a file from the local filesystem.",
2029                "inputSchema": {
2030                    "type": "object",
2031                    "properties": {
2032                        "file_path": {
2033                            "type": "string",
2034                            "description": "The absolute path to the file to read"
2035                        },
2036                        "offset": {
2037                            "type": "number",
2038                            "description": "The line number to start reading from. Only provide if the file is too large to read at once"
2039                        },
2040                        "limit": {
2041                            "type": "number",
2042                            "description": "The number of lines to read. Only provide if the file is too large to read at once."
2043                        }
2044                    },
2045                    "required": [
2046                        "file_path"
2047                    ],
2048                    "additionalProperties": false,
2049                    "$schema": "http://json-schema.org/draft-07/schema#"
2050                }
2051            },
2052            {
2053                "name": "Edit",
2054                "description": "A tool for editing files",
2055                "inputSchema": {
2056                    "type": "object",
2057                    "properties": {
2058                        "file_path": {
2059                            "type": "string",
2060                            "description": "The absolute path to the file to modify"
2061                        },
2062                        "old_string": {
2063                            "type": "string",
2064                            "description": "The text to replace"
2065                        },
2066                        "new_string": {
2067                            "type": "string",
2068                            "description": "The text to replace it with"
2069                        },
2070                        "expected_replacements": {
2071                            "type": "number",
2072                            "default": 1,
2073                            "description": "The expected number of replacements to perform. Defaults to 1 if not specified."
2074                        }
2075                    },
2076                    "required": [
2077                        "file_path",
2078                        "old_string",
2079                        "new_string"
2080                    ],
2081                    "additionalProperties": false,
2082                    "$schema": "http://json-schema.org/draft-07/schema#"
2083                }
2084            },
2085            {
2086                "name": "Replace",
2087                "description": "Write a file to the local filesystem.",
2088                "inputSchema": {
2089                    "type": "object",
2090                    "properties": {
2091                        "file_path": {
2092                            "type": "string",
2093                            "description": "The absolute path to the file to write (must be absolute, not relative)"
2094                        },
2095                        "content": {
2096                            "type": "string",
2097                            "description": "The content to write to the file"
2098                        }
2099                    },
2100                    "required": [
2101                        "file_path",
2102                        "content"
2103                    ],
2104                    "additionalProperties": false,
2105                    "$schema": "http://json-schema.org/draft-07/schema#"
2106                }
2107            },
2108            {
2109                "name": "ReadNotebook",
2110                "description": "Extract and read source code from all code cells in a Jupyter notebook.",
2111                "inputSchema": {
2112                    "type": "object",
2113                    "properties": {
2114                        "notebook_path": {
2115                            "type": "string",
2116                            "description": "The absolute path to the Jupyter notebook file to read (must be absolute, not relative)"
2117                        }
2118                    },
2119                    "required": [
2120                        "notebook_path"
2121                    ],
2122                    "additionalProperties": false,
2123                    "$schema": "http://json-schema.org/draft-07/schema#"
2124                }
2125            },
2126            {
2127                "name": "NotebookEditCell",
2128                "description": "Replace the contents of a specific cell in a Jupyter notebook.",
2129                "inputSchema": {
2130                    "type": "object",
2131                    "properties": {
2132                        "notebook_path": {
2133                            "type": "string",
2134                            "description": "The absolute path to the Jupyter notebook file to edit (must be absolute, not relative)"
2135                        },
2136                        "cell_number": {
2137                            "type": "number",
2138                            "description": "The index of the cell to edit (0-based)"
2139                        },
2140                        "new_source": {
2141                            "type": "string",
2142                            "description": "The new source for the cell"
2143                        },
2144                        "cell_type": {
2145                            "type": "string",
2146                            "enum": [
2147                                "code",
2148                                "markdown"
2149                            ],
2150                            "description": "The type of the cell (code or markdown). If not specified, it defaults to the current cell type. If using edit_mode=insert, this is required."
2151                        },
2152                        "edit_mode": {
2153                            "type": "string",
2154                            "description": "The type of edit to make (replace, insert, delete). Defaults to replace."
2155                        }
2156                    },
2157                    "required": [
2158                        "notebook_path",
2159                        "cell_number",
2160                        "new_source"
2161                    ],
2162                    "additionalProperties": false,
2163                    "$schema": "http://json-schema.org/draft-07/schema#"
2164                }
2165            },
2166            {
2167                "name": "WebFetchTool",
2168                "description": "Claude wants to fetch content from this URL",
2169                "inputSchema": {
2170                    "type": "object",
2171                    "properties": {
2172                        "url": {
2173                            "type": "string",
2174                            "format": "uri",
2175                            "description": "The URL to fetch content from"
2176                        },
2177                        "prompt": {
2178                            "type": "string",
2179                            "description": "The prompt to run on the fetched content"
2180                        }
2181                    },
2182                    "required": [
2183                        "url",
2184                        "prompt"
2185                    ],
2186                    "additionalProperties": false,
2187                    "$schema": "http://json-schema.org/draft-07/schema#"
2188                }
2189            }
2190        ]
2191    }
2192}
2193```